Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Is Domain-Driven Design Worth the Investment?


Every organization has at least one system that everyone dreads touching. It works. Technically. But nobody can confidently explain what it does or why it does it that way. Making changes takes longer than it should. Estimates are unreliable. New developers stare at the code for weeks before they can contribute.

This doesn’t happen because the original developers were bad at their jobs. It happens because the system was built as a collection of technical components (databases, APIs, services) without a clear model of the business it was meant to support. Over time, the business evolved, but the software’s structure didn’t evolve with it. Business rules got scattered. Terminology drifted. The reasoning behind decisions disappeared as the people who made them moved on.

The result is a system where the cost of change increases with every release. Not because the technology is old, but because the intent is buried. Nobody knows where “the rule” lives. Nobody knows which change is safe. Every modification is an archaeology project.

This is the problem DDD solves, and it’s a problem with real financial consequences.

What Domain-Driven Design Actually Changes

Domain-Driven Design is not a framework or a technology. It’s an approach that puts business understanding at the center of how software is structured. Before anyone writes code, the team develops a shared model of the business domain: the rules, the processes, the language, the boundaries. That model then shapes the software directly.

This changes several things that matter at the organizational level.

1. The business and engineering speak the same language

In most organizations, there’s a translation layer between what the business says and what the code does. Product describes a feature using business terms. Engineering interprets those terms, maps them to technical concepts, and builds something that hopefully matches the intent. When it doesn’t, the gap is discovered late: in QA, in production, or worse, in a customer complaint.

Domain-Driven Design eliminates this translation layer by establishing a ubiquitous language, a shared vocabulary used consistently by business stakeholders, product managers, and developers. When the business says “policy,” the code has a Policy. When the business says “claim is adjudicated,” the code raises a ClaimAdjudicated event. The language in the conference room is the language in the codebase.

The business impact: fewer misunderstandings, fewer features that miss the mark, and faster conversations about what needs to change. When everyone uses the same words to mean the same things, the cost of communication drops and the accuracy of implementation rises.
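To make the idea concrete, here is a minimal Python sketch (mine, not the article's) of a ubiquitous language in code. The Policy and ClaimAdjudicated names come from the article's example; the Claim class, its fields, and the validation rule are illustrative assumptions:

```python
from dataclasses import dataclass, field

# Domain event named exactly as the business says it:
# "the claim is adjudicated" -> ClaimAdjudicated.
@dataclass(frozen=True)
class ClaimAdjudicated:
    claim_id: str
    approved_amount: float

@dataclass
class Claim:
    claim_id: str
    amount: float
    events: list = field(default_factory=list)

    def adjudicate(self, approved_amount: float) -> None:
        # The method name matches the conference-room phrase,
        # so product and engineering read the same words.
        if approved_amount > self.amount:
            raise ValueError("approved amount cannot exceed claimed amount")
        self.events.append(ClaimAdjudicated(self.claim_id, approved_amount))

claim = Claim("C-100", amount=500.0)
claim.adjudicate(450.0)
```

The point is not the mechanics but the vocabulary: a stakeholder reading this code aloud is speaking the same sentences they use in requirements meetings.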

2. Complexity is contained, not spread

As systems grow, complexity tends to spread. A change to pricing logic touches the order system, which touches invoicing, which touches reporting. Everything is connected to everything else, and the blast radius of any change is unpredictable.

DDD addresses this through bounded contexts. These are boundaries that define where specific models and rules apply. The pricing context owns pricing logic. The invoicing context owns invoice generation. They communicate through defined interfaces, not shared databases or implicit dependencies. Each context can evolve independently, with its own model, its own rules, and its own team.

The business impact: changes become safer and more predictable. When pricing needs to change, the pricing team changes the pricing context. They don’t need to coordinate with invoicing, reporting, and three other teams in a two-week planning exercise. This is how organizations scale engineering without scaling coordination overhead proportionally.
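A rough sketch of what "communicate through defined interfaces, not shared databases" can look like — the PricingService, InvoicingService, and volume-discount rule below are all invented for illustration:

```python
from dataclasses import dataclass
from typing import Protocol

# --- Pricing context: owns all pricing logic ---
class PriceQuote(Protocol):
    def quote(self, sku: str, quantity: int) -> float: ...

class PricingService:
    _unit_prices = {"WIDGET": 9.99}

    def quote(self, sku: str, quantity: int) -> float:
        # The volume-discount rule lives here and only here.
        price = self._unit_prices[sku] * quantity
        return price * 0.9 if quantity >= 100 else price

# --- Invoicing context: owns invoice generation ---
@dataclass
class Invoice:
    sku: str
    quantity: int
    total: float

class InvoicingService:
    # Depends only on the PriceQuote interface, not on pricing
    # internals or a shared database table.
    def __init__(self, pricing: PriceQuote) -> None:
        self._pricing = pricing

    def create_invoice(self, sku: str, quantity: int) -> Invoice:
        return Invoice(sku, quantity, self._pricing.quote(sku, quantity))

invoice = InvoicingService(PricingService()).create_invoice("WIDGET", 100)
```

When the discount rule changes, only the pricing context changes; invoicing never sees the difference because it depends on the interface, not the implementation.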

3. Business rules are explicit and locatable

In most codebases, business rules are scattered. Some live in the application layer. Some are encoded in database constraints. Some exist only in the minds of developers who wrote them. Some are duplicated in multiple places, with subtle differences between copies.

When a rule needs to change (and rules always change) the first challenge isn’t implementing the change. It’s finding the rule. And then finding all the other places where a version of that rule might exist. And then figuring out which version is correct.

DDD requires that business rules live in the domain model, in one place. A pricing rule is in the pricing aggregate. A validation rule is in the entity that enforces it. When the business says “we need to change how we calculate late fees,” the developer knows exactly where to look.

The business impact: faster changes, fewer defects from partial updates, and less time spent on “investigation” before actual work begins. When rules have a clear address, the cost of business evolution drops.
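As a hedged sketch of what "rules with a clear address" means in practice, here is a late-fee rule living in exactly one place in the domain model. The InvoiceBalance class and the fee formula are invented for illustration:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class InvoiceBalance:
    amount_due: float
    due_date: date

    # The late-fee rule lives here and nowhere else. When the
    # business says "change how we calculate late fees," this
    # method is the one place to edit.
    def late_fee(self, as_of: date) -> float:
        days_late = (as_of - self.due_date).days
        if days_late <= 0:
            return 0.0
        # Hypothetical rule: 1.5% of the balance per 30 days late.
        return round(self.amount_due * 0.015 * (days_late / 30), 2)

balance = InvoiceBalance(amount_due=1000.0, due_date=date(2025, 1, 1))
fee = balance.late_fee(date(2025, 1, 31))
```

Because the rule is a single method on a single domain object, there is nothing to hunt for and no duplicate copy to fall out of sync.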

4. Systems age well instead of decaying

Most software systems get harder to maintain over time. Not because the technology degrades, but because the conceptual integrity erodes. Early decisions that made sense are overridden by quick fixes. Naming conventions drift. New features are bolted on without revisiting the underlying model. Slowly, the system becomes a patchwork: functional, but fragile and expensive.

DDD-based systems resist this decay because the model is designed to evolve. When business understanding deepens, the model is refined to reflect that understanding. When boundaries shift, contexts are restructured. The design is a living representation of the business, not a frozen snapshot of what someone understood three years ago.

The business impact: lower total cost of ownership. The system in year five is not much more expensive to maintain than the system in year one. Features take roughly the same amount of time to build, without the progressive slowdown that characterizes systems built without intentional modeling.

What Leadership Should Actually Care About

Predictable delivery in complex domains

The hardest part of estimating software work isn’t the coding. It’s the uncertainty about scope, impact, and hidden dependencies. In systems without clear boundaries and rules, every estimate includes a “discovery tax”: time spent understanding the current state before any change can begin.

DDD reduces this tax by making the system’s structure mirror the business’s structure. When a product manager describes a change in business terms, the engineering team can map that change to specific contexts and aggregates. The scope is clearer. The impact is more containable. Estimates become more reliable. Not perfect, but better.

Team autonomy that actually works

Many organizations pursue team autonomy through microservices, hoping that service boundaries will enable independent delivery. But if those boundaries don’t align with business boundaries, teams still end up coordinating constantly. Service A needs a field from Service B. Service C’s deployment breaks Service D’s assumptions. The boundaries are technical, not meaningful.

DDD provides the framework for drawing boundaries that work: boundaries based on business capability, shared language, and domain ownership. When contexts are well-defined, teams genuinely own their domain. They can make decisions, ship changes, and evolve their models without waiting for alignment meetings.

Reduced key-person risk

In systems without models, knowledge concentrates in individuals. The developer who built the billing module understands it because they hold the mental model. When they leave, that understanding leaves with them. The next developer inherits code without context.

DDD captures understanding in the model itself: in the names, the structures, the relationships, and the rules encoded in the domain layer. A new developer joining a DDD-based system can read the domain model and understand what the system does in business terms, not just technical terms. The model is the documentation.

A foundation for AI-assisted development

This is worth calling out. As organizations adopt AI coding tools, the quality of AI output depends on the quality of the codebase it works with. An AI reading a well-modeled domain with clear boundaries, rules, and consistent language produces better results than an AI reading a tangled codebase with scattered logic and inconsistent naming.

DDD doesn’t just make the system better for humans. It makes it better for every tool that reads, generates, or modifies code. The investment in modeling pays dividends across both human and AI-assisted development.

The Reframing That Matters

The conversation about DDD often gets stuck in technical details: aggregates, value objects, event sourcing. These are implementation concepts that matter to developers, but they obscure the real value proposition for leadership.

The real value of Domain-Driven Design is this: it keeps your software aligned with your business as both evolve.

Without intentional modeling, software and business diverge over time. Changes get harder. Communication gets fuzzier. Costs compound. With DDD, the software is structured around business concepts, bounded by business capabilities, and expressed in business language. When the business changes, there’s a clear path to changing the software. When a new team member joins, the codebase tells them what the business does.

This is not a one-time architectural decision. It’s an ongoing practice, like code review or testing, that compounds in value over time.

What Adoption Looks Like

DDD is often perceived as heavyweight. It doesn’t have to be. Adoption is incremental, and even partial adoption delivers value.

Start with language. Before anything else, invest in getting business and engineering aligned on terminology. This costs nothing but meeting time and delivers immediate improvements in communication clarity. If your teams argue about what a “customer” is in different parts of the system, you already have a problem DDD can solve.

Draw boundaries around your biggest pain points. Identify the area of the system where changes are slowest and riskiest. Define a bounded context around it. Give a team clear ownership. Let them model the domain and evolve it independently. Measure the results.

Don’t boil the ocean. DDD doesn’t require rewriting your system. It doesn’t require adopting event sourcing or CQRS or microservices. It starts with understanding the domain, establishing shared language, and structuring code around business concepts. The advanced patterns are options, not prerequisites.

Evaluate on outcomes that matter. The right metrics aren’t DDD-specific. They’re delivery metrics: How long does a business rule change take from request to production? How much coordination is required for a feature that spans multiple teams? How quickly can a new developer contribute? How often do changes in one area break something in another?

The Bottom Line

DDD reduces the cost of change by making business rules locatable. It enables team autonomy by drawing boundaries that align with business capabilities. It reduces key-person risk by capturing understanding in the model, not in people’s heads. It keeps systems maintainable over time by providing a framework for intentional evolution rather than accidental accumulation.

Your software is a model of your business whether you designed it that way or not. The question is whether it’s a model you can reason about, communicate clearly, and change confidently, or one that has become an obstacle to the very business it was meant to support.

DDD is the discipline of making sure it’s the former.

 

This post comes from our software engineering practice, which specializes in refactoring application architecture and optimizing delivery to support modular teams, faster feedback, and continuous value delivery.

Read the whole story
alvinashcraft
just a second ago
reply
Pennsylvania, USA
Share this story
Delete

Live from Replit HQ Part 2

Summary This post recaps part 2 of our "Live from HQ" livestream series covering the Agent 4 launch. Peter engineered Agent 4's parallel task system, enabling AI to resolve merge conflicts automatically 90% of the time. Haya Odeh, co-founder and design lead, built the Infinite Canvas to close the gap between design and engineering. Jacob's collaboration features give builders real-time visibility into who's working on what inside a shared project. Adi made branching feel instant using micro VMs, creating isolated task environments that spin up in seconds.


019 - From Chatbots to Coworkers


The ai.u crew discuss the shift from prompt-response AI chatbots to “AI coworkers,” or computer-use agents that perform multi-step work across apps, highlighting Anthropic’s Claude Cowork ($20–$200/month), Microsoft Copilot Tasks ($30/user/month), and Perplexity Computer ($200/month). They describe the interaction change from asking questions to delegating outcomes, with humans increasingly acting as supervisors who define context, monitor progress, and apply judgment. They note concerns that convenience may erode competence and that many workflows require undocumented institutional knowledge. They debate whether automating tasks is always worth the setup and trust costs, and suggest that processes and software may need redesign. They also examine Anthropic’s qualitative study using an AI interviewer with 81,000 participants, weighing scale and multilingual benefits against lost human connection and empathy.

00:00 Welcome And Topic Shift

01:11 New Coworker Tools Overview

02:36 From Prompts To Delegation

04:41 Agency And Real Examples

08:22 Matt Wants Automation

10:24 Supervisor Mindset And Skills

14:28 Convenience Versus Competence

22:01 Three Lanes Of Coworkers

24:56 Token Spend And Real Debugging

29:26 Autopilot Limits And Hidden Knowledge

32:03 Tools Need Skill

33:08 Prompting Meets Expertise

35:44 Tribal Knowledge Problem

38:11 Is Automation Worth It

38:49 Trust And Context Costs

41:03 New Companies Advantage

42:00 AI As Flourishing Tool

44:31 Claude Interviews Study

48:57 What Humans Add

50:45 Where AI Fits Best

54:11 Human Connection Matters

56:51 Wrap Up And Feedback



This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit aiunprompted.substack.com



Download audio: https://api.substack.com/feed/podcast/191894272/75d2de7bd239ea6a7d4d6f9969af5638.mp3

Priority Processing for Foundry Models

From: MicrosoftAzure
Duration: 1:03
Views: 163

Priority Processing unlocks premium AI performance for your most latency‑sensitive workloads—without the friction of long‑term commitments. Designed for moments where speed matters, Priority Processing delivers SLA‑backed, low‑latency responses on a simple pay‑as‑you‑go model. With the same models and APIs you already use, you can instantly enable Priority Processing to stay within the user attention window, reduce time‑to‑first‑token, and deliver more consistent, predictable experiences at scale. Built‑in telemetry helps you monitor performance and optimize cost, while prioritized capacity ensures reliability when demand spikes. Whether you’re powering real‑time healthcare workflows, financial transactions, or digital‑native applications, Priority Processing keeps users engaged, sessions converting, and experiences fast—exactly when it matters most.

Learn more: https://msft.it/6051QsMR3

#Microsoft #PriorityProcessing #MicrosoftFoundry


Celebrating 30 Years of Microsoft Exchange


It’s hard to believe, but Exchange Server is now 30 years old! A lot has changed since the first release of Exchange Server 4.0 in 1996: protocols, platforms, scale, and even what “email” means in the modern workplace.

To commemorate this milestone anniversary, we want to pause and reflect on how Exchange has shaped enterprise email as we know it today.

The start: email becomes enterprise messaging

Back in the mid-1990s, messaging solutions were fragmented, proprietary, and difficult to manage at scale. Businesses looking for a messaging system basically had two choices: host-based systems, which were costly and didn’t integrate well with PC-based applications, or LAN-based systems, which did integrate with PC-based applications but were less scalable and reliable (although several companies made software that allowed different email systems to communicate).

That changed when, after nearly four years of development, Microsoft Exchange Server 4.0 – “the e-mail server with integrated groupware that makes it easy to communicate” – was released on April 2, 1996. Or, it might have been March 1996. Or maybe June 1996. No one knows for sure because the first public build that was shipped was not the build on the gold master (the signed-off RTM version).

Nonetheless, Exchange Server had ambitions! From the start, it combined email and calendaring as well as an integrated centralized directory. Admin controls and native support of Internet standards like SMTP (via Internet Mail Connector) and X.400 kept it “modern.”

In addition to user productivity through email, Exchange provided admin controls for monitoring, managing, and troubleshooting messaging across an entire organization from a single system – an idea that now seems obvious, but was far from standard in 1996.

Exchange shapes the market

As Exchange evolved through the late 1990s and early 2000s, it kept raising the bar for business email. It was during this time that several major changes occurred:

  • Email and user identity became inseparable. This directly influenced the development of Active Directory (which was the direct descendant of the Exchange Directory Service).
  • Calendaring and scheduling became first-class workflows rather than bolt-on experiences.
  • Reliability, scale, and disaster recovery became built-in.
  • Administrators came to expect the ability to automate admin tasks.

Exchange Server became one of Microsoft’s first truly successful enterprise server products, helping establish us as a serious enterprise platform provider beyond the desktop.

The foundation of Exchange Online

When we set out to build Exchange Online (remember Exchange Labs?), the goal was to operate enterprise email as a global service.

Exchange Online inherited years of lessons from Exchange Server as the product extended into a service. That continuity is one reason our customers were able to move from Exchange Server to Exchange Online confidently, working with already familiar tools. Concepts such as mailboxes, the transport pipeline, policy enforcement, and compliance remained familiar, even as the operational model changed. Exchange quite literally became the backbone of Microsoft 365’s compute, routing, and storage (also known as the Substrate).

Exchange Server still matters in 2026

Three decades later, Exchange Server still matters. Conversations around digital sovereignty, regulatory compliance, and admin control continue. Many organizations like governments, regulated industries, and critical infrastructure providers must make choices about where their data is stored and who operates the infrastructure.

For customers that need it, Exchange Server remains a valuable architectural choice. Continued investment in Exchange Server, including the release of Exchange Subscription Edition (SE), which we are committed to supporting until at least the end of 2035, reflects the reality that enterprise messaging is not one-size-fits-all.

Cloud-first (where innovation is the fastest) does not need to mean cloud-only. Whether you want to run on-prem, hybrid, or cloud, Exchange is there for you.

Through it all, community helped shape Exchange

While this is a bit intangible, we want to acknowledge that feedback from Exchange admins, MVPs, partners, and customers has influenced (and keeps influencing) Exchange in real ways. Feedback via our blog (since the first post, back in 2004), support cases, conferences, and the Feedback portal really matters. Some design changes happened specifically because the community spoke clearly. Our teams staying involved (via, for example, this blog) has been extremely valuable to us. Please keep giving us feedback!

How things are changing

Exchange backward compatibility was both a gift and a burden. For many years, we allowed customers to run three major Exchange versions side by side within the same organization. This helped reduce migration pain, but it also slowed down architectural cleanup and modernization, as every version had to play nice with choices made years earlier. We are looking forward to a future in which we support only a single major version inside an organization – Exchange Subscription Edition (SE) – a requirement we are adding starting with Exchange SE CU2!

Security came into focus over the years. It is still in focus. Early Exchange was built for connectivity and collaboration. The threat model changed, with threat actors going after organizational email. It is more important than ever to stay up to date. We realize that some upcoming security changes mean that admins need to do additional work (for example upcoming hybrid security improvements), but the result will be your organization’s improved security posture.

With all the modes of communication that have become popular in business environments over the last three decades, the “end of email” has been predicted many times. Yet email is still alive. And judging by our inboxes, it’s thriving!

We want to thank the admins, MVPs, partners, and customers who keep Exchange running and who’ve provided unfiltered feedback along the way. We are excited to continue this journey with you!


The Exchange Team


Code Optimizations for Azure App Service Now Available in VS Code


Today we shipped a new feature in the Azure App Service extension for VS Code: Code Optimizations, powered by Application Insights profiler data and GitHub Copilot.

The problem: production performance is a black box

You've deployed your .NET app to Azure App Service. Monitoring shows CPU is elevated, and response times are creeping up. You know something is slow, but reproducing production load patterns locally is nearly impossible. Application Insights can detect these issues, but context-switching between the Azure Portal and your editor to actually fix them adds friction.

What if the issues came to you, right where you write code?

What's new

The Azure App Service extension now adds a Code Optimizations node directly under your .NET web apps in the Azure Resources tree view. This node surfaces performance issues detected by the Application Insights profiler - things like excessive CPU or memory usage caused by specific functions in your code.

Each optimization tells you:

  • Which function is the bottleneck
  • Which parent function is calling it
  • What category of resource usage is affected (CPU, memory, etc.)
  • The impact as a percentage, so you can prioritize what matters

But we didn't stop at surfacing the data. Click Fix with Copilot on any optimization and the extension will:

  1. Locate the problematic code in your workspace by matching function signatures from the profiler stack trace against your local source using VS Code's workspace symbol provider
  2. Open the file and highlight the exact method containing the bottleneck
  3. Launch a Copilot Chat session pre-filled with a detailed prompt that includes the issue description, the recommendation from Application Insights, the full stack trace context, and the source code of the affected method

By including the stack trace, recommendation, impact data, and the actual source code, the prompt gives Copilot enough signal to produce a meaningful, targeted fix rather than generic advice. For example, the profiler might surface a LINQ-heavy data transformation consuming 38% of CPU in OrderService.CalculateTotals, called from CheckoutController.Submit. The extension then prompts Copilot with the problem, and Copilot offers a fix.
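The post doesn't publish the extension's actual prompt format, but the kind of context-rich prompt it describes could be assembled roughly like this. The CodeOptimization fields and the build_copilot_prompt helper are hypothetical; only the function names come from the post's example:

```python
from dataclasses import dataclass

@dataclass
class CodeOptimization:
    # Illustrative fields mirroring what the post says each
    # optimization reports; the real shapes may differ.
    function: str
    caller: str
    category: str
    impact_pct: float
    recommendation: str

def build_copilot_prompt(opt: CodeOptimization, source: str) -> str:
    # Bundle the issue description, recommendation, call context,
    # and the affected source so the fix is targeted, not generic.
    return (
        f"Performance issue ({opt.category}, {opt.impact_pct}% impact): "
        f"{opt.function}, called from {opt.caller}.\n"
        f"Recommendation: {opt.recommendation}\n"
        f"Affected source:\n{source}"
    )

opt = CodeOptimization(
    function="OrderService.CalculateTotals",
    caller="CheckoutController.Submit",
    category="CPU",
    impact_pct=38.0,
    recommendation="Avoid repeated LINQ enumeration; precompute totals.",
)
prompt = build_copilot_prompt(opt, "public decimal CalculateTotals(...) { ... }")
```

The design point is that every field the profiler already captured ends up in the prompt, which is what lets the assistant reason about the specific bottleneck instead of answering generically.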

Prerequisites

  • A .NET web app deployed to Azure App Service
  • Application Insights connected to your app
  • The Application Insights profiler enabled (the extension will prompt you if it's not)

For Windows App Service plans

When creating a new web app through the extension, you'll now see an option to enable the Application Insights profiler. For existing apps, the Code Optimizations node will guide you through enabling profiling if it's not already active.

 

For Linux App Service plans

Profiling on Linux requires a code-level integration rather than a platform toggle. If no issues are found, the extension provides a prompt to help you add profiler support to your application code.

What's next

This is the first step toward bringing production intelligence directly into the inner development loop. We're exploring how to expand this pattern beyond .NET and beyond performance — surfacing reliability issues, exceptions, and other operational insights where developers can act on them immediately.

 

Install the latest Azure App Service extension and expand the Code Optimizations node under any .NET web app to try it out. We'd love your feedback - file issues on the GitHub repo.

 

Happy Coding <3
