Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Desktop web traffic overtakes mobile for the first time in 5 years

Global web traffic has done something unexpected, with desktop devices overtaking smartphones for the first time in five years. Data presented by Jemlit shows desktops claimed a 49.7 percent share of worldwide browsing in October, placing them just ahead of smartphones on 48.98 percent. Mobile phones have held more than 60 percent of global browser traffic for much of the past three years, supported by faster networks and the way people increasingly use their phones for news, shopping, search, and entertainment. Even so, new StatCounter figures show the change began several months earlier, with desktop traffic showing a steady rise… [Continue Reading]

How AI Is Quietly Reshaping the Software Development Lifecycle


Over the last couple of years, AI coding assistants have changed the day-to-day reality of software engineers more than any single tool in the past decade. AI has not only accelerated how quickly developers write code, but it has also fundamentally changed how we build software. Tasks that once consumed hours of engineering time, such as writing unit tests, scaffolding APIs, generating boilerplate, validating integrations, and even debugging, can now be completed in minutes using AI coding assistants.

This efficiency has reshaped expectations across organisations: leaders assume delivery speed should increase, project timelines should shrink, and teams should “do more with less”. Program managers expect developers to deliver more features in the same amount of time, and executives assume engineering velocity should increase proportionally with AI adoption.

Yet despite this massive shift in capability, the underlying software development lifecycle (SDLC) used by most teams hasn’t evolved in decades. Developers are operating with new tools but old processes. As a result, teams often rush into implementation without enough design clarity, accumulating technical debt faster than ever. AI has changed the pace of software delivery, but the process has not caught up—and this mismatch is causing friction for developers, engineering managers, and product teams alike.

This article explores where the traditional SDLC falls short in the age of AI and proposes a more adaptive, realistic lifecycle that reflects how modern teams actually work.


Traditional Software Development Lifecycle

For years, the classic SDLC has been the industry standard—structured, predictable, and intended to reduce risk in complex software projects. The traditional cycle includes:

  1. Planning & Requirement Analysis

    Stakeholders collect business needs, identify constraints, define scope, and estimate effort.

  2. Defining Requirements

    Requirements become detailed functional and non-functional specifications. Everything is documented before design begins.

  3. Design

    Architects and senior engineers map out system components, data models, workflows, and integration points. Historically, this phase determined long-term system quality.

  4. Development

    Engineers implement the design, write code, build modules, and integrate components. This phase used to consume the majority of engineering time.

  5. Testing

    QA and developers validate functionality, performance, and reliability. Automated tests, manual testing, integration testing and user testing all happen here.

  6. Deployment

    Software is released to production environments, accompanied by monitoring, rollback strategies, and operational readiness.

  7. Maintenance

    Teams fix bugs, monitor system health, reduce technical debt, and refine the system over time.


Where This Model Breaks in the Age of AI

While this lifecycle worked for years, AI has fundamentally disrupted two core assumptions:

  • Implementation is no longer the bottleneck: AI can generate large portions of code, tests, and documentation in minutes.
  • Design and requirements now lag behind execution: Developers jump into implementation faster than teams can refine requirements or create thoughtful designs.

Because the classic SDLC treats development as the slowest and most expensive phase, its structure fails when coding becomes fast. The result:

  • Rushed design decisions
  • Intentional shortcuts to meet deadlines
  • Increased technical debt
  • More rework during iteration cycles

Teams are building faster than they can plan.


Problems in Today’s AI-Accelerated SDLC

1. Increased Leadership Expectations

AI has created a perception that engineering should deliver dramatically more with fewer people. Goals have become more aggressive as organisations underestimate the cost of planning, design, and long-term system thinking.

2. Reduced Long-Term Vision

With pressure to deliver quickly, teams focus on the best achievable milestone rather than the long-term product vision. Systems become optimised for next month, not next year.

3. Less Emphasis on Design

Developers jump into implementation before deep design discussions. AI accelerates execution but does not replace architectural thinking. Updating design mid-development is now common and chaotic.

4. Accelerated Technical Debt

Short-term solutions pile up quickly. As teams implement features for immediate milestones, long-term stability becomes an afterthought.


Solutions to Rebalance Development in the AI Era

1. Set Proper Expectations Early

Engineering teams must push back where needed and provide realistic input during planning. Strong leaders factor engineering insights into timelines rather than assuming AI will solve all bottlenecks.

2. Align Ideas With Execution Speed

Execution is faster than ever, but ideation and requirement clarity are not. Product teams must adopt long-term thinking and avoid constantly pivoting mid-cycle, which leads to rework and wasted effort.

3. Update the SDLC Itself

The biggest change needed is structural: the SDLC should reflect how software is actually being built today.

AI has made Proof-of-Concept (POC) development an essential pre-design step. POCs are now necessary to test AI capabilities, validate feasibility, and explore user interactions before committing to long-term architecture.

However, POCs should inform feasibility and not dictate architecture.


The New AI-Era Software Development Lifecycle

AI has introduced two major additions to the SDLC: POC Development and Continuous Iteration.

New SDLC Flow

Planning → Requirement Definition → Minimal Design → POC Development → User Testing → Feedback Review → Iterate

Two Parallel Workstreams Now Exist

1. Productionizing the Working Prototype:

  • Take minimal but functional features
  • Deploy to production
  • Monitor, maintain, stabilise, and optimise

2. Iterating Based on User Feedback:

  • Update requirements
  • Improve design
  • Rebuild or refine implementation
  • Conduct more user testing

This loop continues until the product reaches maturity.

The Real Challenge

The biggest puzzle today is minimising churn: how to iterate quickly without causing back-and-forth chaos across design, development, and product teams.

We're all still figuring out the right balance between speed and stability.


Conclusion

AI has dramatically accelerated software development, but our processes haven’t evolved at the same pace. The traditional SDLC assumes slow implementation and steady planning, an assumption that no longer holds true. By embracing POCs, iterative cycles, and realistic expectations, teams can turn AI from a source of chaos into a catalyst for better engineering.

The industry is still learning how to adapt, and so am I. The pursuit of a smooth, AI-aligned development lifecycle continues.


The Spotlight Failure That Taught a Silent Lesson About Recognition | Scott Smith


Scott Smith: The Spotlight Failure That Taught a Silent Lesson About Recognition

Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

 

"Not everybody enjoys the limelight and being called out, even for great work." - Scott Smith

 

Scott was facilitating a multi-squad showcase with over 100 participants, and everything seemed to be going perfectly. Each squad had their five-minute slot to share achievements from the sprint, and Scott was coordinating the entire event. When one particular team member delivered what Scott considered fantastic work, he couldn't help but publicly recognize them during the introduction. 

It seemed like the perfect moment to celebrate excellence in front of the entire organization. But then his phone rang. The individual he had praised was unhappy—really unhappy. What Scott learned in that moment transformed his approach to recognition forever. The person was quiet, introverted, and conservative by nature. 

Being called out without prior notice or permission in front of 100+ people wasn't a reward—it was uncomfortable and unwelcome. Scott discovered that even positive recognition requires consent and awareness of individual preferences. Some people thrive in the spotlight, while others prefer their contributions to be acknowledged privately. The relationship continued well afterward, but the lesson stuck: check in with individuals before publicly recognizing them, understanding that great coaching means respecting how people want to be celebrated, not just that they should be celebrated.

 

Self-reflection Question: How do you currently recognize team members' achievements, and have you asked each person how they prefer to be acknowledged for their contributions?

 

[The Scrum Master Toolbox Podcast Recommends]

🔥In the ruthless world of fintech, success isn't just about innovation—it's about coaching!🔥

Angela thought she was just there to coach a team. But now, she's caught in the middle of a corporate espionage drama that could make or break the future of digital banking. Can she help the team regain their mojo and outwit their rivals, or will the competition crush their ambitions? As alliances shift and the pressure builds, one thing becomes clear: this isn't just about the product—it's about the people.

 

🚨 Will Angela's coaching be enough? Find out in Shift: From Product to People—the gripping story of high-stakes innovation and corporate intrigue.

 

Buy Now on Amazon

 

[The Scrum Master Toolbox Podcast Recommends]

 

About Scott Smith

 

Scott Smith is a 53-year-old professional based in Perth, Australia. He balances a successful career with a strong focus on health and fitness, currently preparing for bodybuilding competitions in 2026. With a background in leadership and coaching, Scott values growth, discipline, and staying relevant in a rapidly changing world.

 

You can link with Scott Smith on LinkedIn.

 





Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20251201_Scott_Smith_M.mp3?dest-id=246429

If Leaving Your Team Hurts Then You Probably Did It Right


What happens when you've built an amazing team and then have to leave? Bob Galen and Josh Anderson explore the guilt, emotion, and complexity of leadership transitions. Learn why good leaders struggle most with leaving while bad leaders walk away without a second thought. Josh shares his gut-wrenching experience leaving Dude Solutions, Bob discusses how to maintain relationships through transitions, and both hosts reframe departure around legacy and the coaching tree concept. If leaving your team hurts, you probably did it right. Essential listening for any leader facing a career transition.

Stay Connected and Informed with Our Newsletters

Josh Anderson's "Leadership Lighthouse"

Dive deeper into the world of Agile leadership and management with Josh Anderson's "Leadership Lighthouse." This bi-weekly newsletter offers insights, tips, and personal stories to help you navigate the complexities of leadership in today's fast-paced tech environment. Whether you're a new manager or a seasoned leader, you'll find valuable guidance and practical advice to enhance your leadership skills. Subscribe to "Leadership Lighthouse" for the latest articles and exclusive content right to your inbox.

Subscribe here

Bob Galen's "Agile Moose"

Bob Galen's "Agile Moose" is a must-read for anyone interested in Agile practices, team dynamics, and personal growth within the tech industry. The newsletter features in-depth analysis, case studies, and actionable tips to help you excel in your Agile journey. Bob brings his extensive experience and thoughtful perspectives directly to you, covering everything from foundational Agile concepts to advanced techniques. Join a community of Agile enthusiasts and practitioners by subscribing to "Agile Moose."

Subscribe here

Do More Than Listen:

We publish video versions of every episode and post them on our YouTube page.

Help Us Spread The Word: 

Love our content? Help us out by sharing on social media, rating our podcast/episodes on iTunes, or by giving to our Patreon campaign. Every time you give, in any way, you empower our mission of helping as many agilists as possible. Thanks for sharing!





Download audio: https://episodes.captivate.fm/episode/97303323-6147-412a-b667-9995a59d7e27.mp3

A guide to restarting pods in Kubernetes using kubectl


This Member Blog was originally published on the Middleware blog and is republished here with permission.

kubectl is the command-line interface for managing Kubernetes clusters. It allows you to manage pods, deployments, and other resources from the terminal, helping you troubleshoot Kubernetes issues, check pod health, and scale applications easily. Most kubectl commands follow a simple structure.

For example, kubectl get pods lists running pods, and kubectl delete pod <pod-name> removes a pod.

Many users wonder how to restart a Kubernetes pod using kubectl. Contrary to popular belief, there is no direct kubectl restart pod command. Instead, Kubernetes expects you to work with higher-level objects, such as Deployments.

This guide covers the safest and most effective methods for restarting pods, including rollout restarts, deleting pods, scaling replicas, and updating environment variables, helping you manage pod restarts in a predictable and controlled way.

When should you restart a Kubernetes pod?

Knowing when to restart a Kubernetes pod is key to maintaining application stability and performance. Here are the most common scenarios that require a pod restart:

1. Configuration changes

When you update your application’s settings (such as environment variables or resource limits), running pods continue to use the old configuration. Restarting ensures the new settings take effect (see the example after this list).

2. Recover from application failure

If your app crashes but the container stays in a “Running” state, or the pod shows as running but isn’t functioning, a restart forces a clean start to recover the service.

3. Debugging application issues

Restarting the pod helps resolve temporary issues or confirms persistent problems while troubleshooting why the application isn’t behaving as expected.

4. Pod stuck or not responding

A pod may stop responding to traffic while Kubernetes still reports it as healthy. Restarting resolves frozen states or resource leaks and restores responsiveness.
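
As a quick illustration of the first scenario, updating a ConfigMap that a deployment reads through environment variables does not restart its pods on its own, so a restart is needed before the new values take effect. The ConfigMap and deployment names below are placeholders:

# Update a config value; running pods that consume it via env vars keep the old value

kubectl create configmap my-app-config --from-literal=LOG_LEVEL=debug --dry-run=client -o yaml | kubectl apply -f -

# Restart the deployment so new pods pick up the updated configuration

kubectl rollout restart deployment/my-app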

What are the different pod states in Kubernetes?

Understanding the different Kubernetes pod states enables you to monitor your application’s health and take the necessary actions when needed. Here are the key pod states you should know:

1. Pending

Kubernetes has accepted the pod, but it is still waiting to be scheduled and started. This occurs while Kubernetes is downloading container images or still looking for a suitable node to run your pod. A pod that stays in Pending for long typically indicates a configuration issue or insufficient resources.

2. Running

Your pod has at least one active container. The containers are running, but this doesn’t mean everything is functional; your application may still have problems even though the pod shows as Running.

3. Succeeded

You typically see this state with jobs or one-time tasks that are designed to run once and finish. It means all containers in the pod have completed their tasks successfully and won’t restart.

4. Failed

The Failed state means one or more containers in the pod have terminated with an error or were killed by the system. It indicates that something went wrong with your application, or that a container couldn’t restart correctly. Failed pods often need a restart.

5. Unknown

This indicates that the node where your pod should be running has lost contact with Kubernetes. Node failures, network problems, or other infrastructure issues may be the cause of this. It’s actually hard to tell what’s going on with your pod when you see this state.
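
Before deciding to restart anything, a few read-only commands make it easier to see which of these states a pod is in. The pod name below is a placeholder:

# Show the current phase of every pod

kubectl get pods

# Show events and detailed status for a single pod

kubectl describe pod <pod-name>

# List only pods in the Failed phase

kubectl get pods --field-selector=status.phase=Failed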

How to restart pods in Kubernetes using kubectl

When you search for how to restart a Kubernetes pod using kubectl, the first thing that comes to mind is the command:

kubectl restart pod 

However, that command does not exist. Instead, there are several reliable methods to restart Kubernetes pods using kubectl. Below are the most effective and commonly used approaches:

1. Restart pods using Kubectl rollout restart

This is the commonly used method that follows Kubernetes best practices for restarting pods managed by a deployment. It performs a controlled restart without downtime by creating new pods and removing old ones.

Commands to use:

kubectl rollout restart deployment/my-app

This command replaces existing pods with new ones. It removes the old pods only after the new ones have started and become ready, which keeps your app up during the restart.

To restart pods in a deployment within a specific namespace:

kubectl rollout restart deployment/my-app -n your-namespace

To check the status of your restart:

kubectl rollout status deployment/my-app

Consider this strategy when you want minimal downtime and the safest option, and your pods are managed by a deployment.

2. Delete individual pods to force restart

With this method, you delete pods to force Kubernetes to recreate them. It’s simpler than a rollout restart, but you must pay attention to which pods you remove.

If the pod is managed by a deployment, replica set, or equivalent controller, Kubernetes immediately creates a new one when you delete the existing one. However, this may temporarily disrupt service.

Here’s how to go about it:

# List all pods to see what you're working with

kubectl get pods

# To delete a specific pod

kubectl delete pod <pod-name>

# To delete multiple pods at once

kubectl delete pod <pod-1> <pod-2>

# To delete and wait to remove fully

kubectl delete pod <pod-name> --wait=true

# To force delete a stuck pod (use with caution)

kubectl delete pod <pod-name> --grace-period=0 --force

Delete only controller-managed pods. A standalone pod that isn’t managed by anything will never be restored if it is deleted.

3. Scale deployment replicas to restart pods

This strategy works by scaling your deployment down to zero replicas for a short time, which stops all the pods, and then scaling back up to the number you started with. It is a controlled way to turn your application off and back on.

1. Check how many replicas you currently have

kubectl get deployment my-app

2. Scale down to zero

kubectl scale deployment my-app --replicas=0

3. Lastly, scale back up to your original number (creates new pods)

kubectl scale deployment my-app --replicas=3

When you scale down to zero, Kubernetes deletes all the pods in that deployment. When you scale back up, it creates new pods from scratch. This approach is more aggressive than rollout restart, but sometimes necessary when you need a complete fresh start.

4. Update environment variables to trigger a restart

This is another clever method for restarting pods: you modify the deployment’s configuration slightly. Kubernetes interprets a change to a deployment’s environment variables as a configuration change and automatically restarts the pods to apply it.

The key here is that the change doesn’t need to be meaningful: updating a timestamp or adding a dummy variable is enough to trigger the restart.

For instance:

You can update an existing environment variable:

kubectl set env deployment/my-app RESTART_TRIGGER=$(date +%s)

Or

You can also edit the deployment directly:

kubectl edit deployment my-app

Then add or modify any environment variable in the editor.

The benefit of using this approach is that it follows the same safe strategy as the rollout restart, and there’s no downtime during the restart process.

5. Replacing pods manually

This method involves deleting specific pods and then manually creating replacements, using either the same configuration or an updated one. It gives you total control over both the deletion and creation processes.

1. Get the pod configuration and save it

kubectl get pod <pod-name> -o yaml > pod-backup.yaml

2. Delete the existing pod

kubectl delete pod <pod-name>

3. Create a new pod using the saved configuration

kubectl apply -f pod-backup.yaml

This method causes downtime because the old pod is removed before the new one starts. Use it only with standalone pods; don’t do this with pods managed by deployments.

Conclusion

Restarting Kubernetes pods helps in several practical ways, as discussed in this article, chief among them keeping applications healthy. Whichever method you choose, the key is to understand when and why you are restarting.

Monitoring pods also helps you make the right restart decision. Instead of guessing at what’s wrong, proper observability shows you exactly when pods need attention.


Enterprise Patterns for ASP.NET Core Minimal API: Domain Model Pattern – When Your Core Rules Deserve Their Own Gravity


Look at a typical enterprise ASP.NET Core application, and you often see the same pattern:

  • Controllers validating requests, calculating totals, and applying discounts
  • EF Core entities that are little more than property bags
  • Stored procedures that quietly decide which orders are valid

If you need to know how orders work, you do not open a single file. You read controllers, queries, and database scripts until your eyes blur. The truth about the business lives everywhere and nowhere.

The Domain Model is the pattern that reverses this arrangement.

Instead of clever controllers and dumb entities, you move the rules into rich objects. Entities and value objects enforce invariants. The application layer orchestrates use cases by telling those objects what to do.

This post shows what that looks like in C#, and why putting rules next to data changes how your system behaves over time.

What Domain Model Really Is

In Fowler’s terms, a Domain Model:

  • Represents the business domain with rich objects
  • Encapsulates rules and invariants inside those objects
  • Treats the framework, database, and transport as details at the edges

In practical .NET terms:

  • Your Order type knows what a valid order looks like
  • Your Customer type knows whether it is eligible for a specific feature
  • Controllers, message handlers, or background jobs call methods on those types

What it is not:

  • It is not simply having classes called Order and Customer with auto properties
  • It is not pushing every rule into a single God object
  • It is not a diagram alone, while the code keeps all the rules in the controllers

The whole point is to make the rules you care about first-class citizens in your code.

A Concrete Domain Model Slice

Here is a small, but real, Order aggregate with OrderLine in C#.

public class Order
{
    private readonly List<OrderLine> _lines = new();

    private Order(Guid customerId)
    {
        Id = Guid.NewGuid();
        CustomerId = customerId;
        Status = OrderStatus.Draft;
    }

    public Guid Id { get; }
    public Guid CustomerId { get; }
    public OrderStatus Status { get; private set; }
    public IReadOnlyCollection<OrderLine> Lines => _lines.AsReadOnly();
    public decimal TotalAmount => _lines.Sum(l => l.Total);

    public static Order Create(Guid customerId, IEnumerable<OrderLine> lines)
    {
        var order = new Order(customerId);

        foreach (var line in lines)
        {
            order.AddLine(line.ProductId, line.Quantity, line.UnitPrice);
        }

        if (!order._lines.Any())
        {
            throw new InvalidOperationException("Order must have at least one line.");
        }

        return order;
    }

    public void AddLine(Guid productId, int quantity, decimal unitPrice)
    {
        if (Status != OrderStatus.Draft)
        {
            throw new InvalidOperationException("Cannot change a non draft order.");
        }

        if (quantity <= 0)
        {
            throw new ArgumentOutOfRangeException(nameof(quantity));
        }

        if (unitPrice <= 0)
        {
            throw new ArgumentOutOfRangeException(nameof(unitPrice));
        }

        _lines.Add(new OrderLine(productId, quantity, unitPrice));
    }

    public void ApplyDiscount(decimal percent)
    {
        if (percent <= 0 || percent >= 50)
        {
            throw new ArgumentOutOfRangeException(nameof(percent));
        }

        foreach (var line in _lines)
        {
            line.ApplyDiscount(percent);
        }
    }

    public void Submit()
    {
        if (Status != OrderStatus.Draft)
        {
            throw new InvalidOperationException("Only draft orders can be submitted.");
        }

        if (!_lines.Any())
        {
            throw new InvalidOperationException("Cannot submit an empty order.");
        }

        Status = OrderStatus.Submitted;
    }
}

public class OrderLine
{
    public OrderLine(Guid productId, int quantity, decimal unitPrice)
    {
        ArgumentOutOfRangeException.ThrowIfNegativeOrZero(quantity);
        ArgumentOutOfRangeException.ThrowIfNegativeOrZero(unitPrice);

        ProductId = productId;
        Quantity = quantity;
        UnitPrice = unitPrice;
    }

    public Guid ProductId { get; }
    public int Quantity { get; }
    public decimal UnitPrice { get; private set; }
    public decimal Total => Quantity * UnitPrice;

    public void ApplyDiscount(decimal percent)
    {
        UnitPrice = UnitPrice * (1 - percent / 100m);
    }
}

public enum OrderStatus
{
    Draft = 0,
    Submitted = 1,
    Cancelled = 2
}

Notice what is happening here:

  • Creation is controlled through Order.Create, not through new Order() scattered everywhere
  • Order refuses to exist without at least one OrderLine
  • AddLine and ApplyDiscount validate arguments and enforce state transitions
  • Submit enforces that only draft orders are submitted and that empty orders are invalid

The rules live with the data. You no longer need to remember, in every controller, how discounts work or when an order may be modified.

Before Domain Model: Controller As Decision Maker

Most enterprise apps start closer to this shape:

app.MapPost("/orders", async (CreateOrderDto dto, AppDbContext db) =>
{
    if (dto.Lines is null || dto.Lines.Count == 0)
    {
        return Results.BadRequest("Order must have at least one line.");
    }

    var orderEntity = new OrderEntity
    {
        Id = Guid.NewGuid(),
        CustomerId = dto.CustomerId,
        Status = "Draft",
        CreatedAt = DateTime.UtcNow
    };

    foreach (var lineDto in dto.Lines)
    {
        if (lineDto.Quantity <= 0)
        {
            return Results.BadRequest("Quantity must be positive.");
        }

        orderEntity.Lines.Add(new OrderLineEntity
        {
            ProductId = lineDto.ProductId,
            Quantity = lineDto.Quantity,
            UnitPrice = lineDto.UnitPrice
        });
    }

    db.Orders.Add(orderEntity);
    await db.SaveChangesAsync();

    return Results.Created($"/orders/{orderEntity.Id}", new { orderEntity.Id });
});

And later, somewhere else, a discount endpoint:

app.MapPost("/orders/{id:guid}/discounts", async (
    Guid id,
    ApplyDiscountDto dto,
    AppDbContext db) =>
{
    var order = await db.Orders
        .Include(o => o.Lines)
        .SingleOrDefaultAsync(o => o.Id == id);

    if (order == null)
    {
        return Results.NotFound();
    }

    if (order.Status != "Draft")
    {
        return Results.BadRequest("Cannot change a non draft order.");
    }

    if (!order.Lines.Any())
    {
        return Results.BadRequest("Cannot discount an empty order.");
    }

    if (dto.Percent <= 0 || dto.Percent >= 50)
    {
        return Results.BadRequest("Discount percent out of range.");
    }

    foreach (var line in order.Lines)
    {
        line.UnitPrice = line.UnitPrice * (1 - dto.Percent / 100m);
    }

    await db.SaveChangesAsync();

    return Results.Ok(new { order.Id });
});

The same rules are repeated in different forms:

  • Draft status checks
  • Non-empty order checks
  • Discount percent range checks
  • Positive quantity rules

The code works until a new rule arrives and someone updates one endpoint but misses the others.

After Domain Model: Controller As Orchestrator

Now see how the controller changes when you let the domain model handle behavior.

Assume you already use the Order aggregate from earlier and have a repository.

public interface IOrderRepository
{
    Task AddAsync(Order order, CancellationToken cancellationToken = default);
    Task<Order?> GetByIdAsync(Guid id, CancellationToken cancellationToken = default);
}
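
The post does not show the repository implementation, but as a rough sketch it could be backed by EF Core. The AppDbContext, its Orders DbSet, and the mapping of Order’s private _lines field are assumptions here, not code from the article:

public class EfOrderRepository : IOrderRepository
{
    // Assumes an AppDbContext exposing DbSet<Order> Orders, with the private
    // _lines collection mapped via a backing-field configuration.
    private readonly AppDbContext _db;

    public EfOrderRepository(AppDbContext db)
    {
        _db = db;
    }

    public async Task AddAsync(Order order, CancellationToken cancellationToken = default)
    {
        _db.Orders.Add(order);
        await _db.SaveChangesAsync(cancellationToken);
    }

    public async Task<Order?> GetByIdAsync(Guid id, CancellationToken cancellationToken = default)
    {
        // FirstOrDefaultAsync requires using Microsoft.EntityFrameworkCore;
        // how the order lines load depends on how the collection is mapped.
        return await _db.Orders.FirstOrDefaultAsync(o => o.Id == id, cancellationToken);
    }
}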

Creating an Order

app.MapPost("/orders", async (
    CreateOrderDto dto,
    IOrderRepository orders,
    CancellationToken ct) =>
{
    var lines = dto.Lines.Select(l =>
        new OrderLine(l.ProductId, l.Quantity, l.UnitPrice));

    Order order;

    try
    {
        order = Order.Create(dto.CustomerId, lines);
    }
    catch (Exception ex) when (ex is ArgumentOutOfRangeException || ex is InvalidOperationException)
    {
        return Results.BadRequest(ex.Message);
    }

    await orders.AddAsync(order, ct);

    return Results.Created($"/orders/{order.Id}", new { order.Id });
});

public record CreateOrderDto(
    Guid CustomerId,
    List<CreateOrderLineDto> Lines);

public record CreateOrderLineDto(
    Guid ProductId,
    int Quantity,
    decimal UnitPrice);

The endpoint now:

  • Translates input DTOs into domain OrderLine objects
  • Delegates invariant enforcement to Order.Create and the OrderLine constructor
  • Catches domain exceptions and maps them to HTTP responses

The logic that defines a valid order lives inside Order, not inside the endpoint.

Applying a Discount

app.MapPost("/orders/{id:guid}/discounts", async (
    Guid id,
    ApplyDiscountDto dto,
    IOrderRepository orders,
    CancellationToken ct) =>
{
    var order = await orders.GetByIdAsync(id, ct);

    if (order == null)
    {
        return Results.NotFound();
    }

    try
    {
        order.ApplyDiscount(dto.Percent);
    }
    catch (ArgumentOutOfRangeException ex)
    {
        return Results.BadRequest(ex.Message);
    }

    await orders.AddAsync(order, ct); // or SaveChanges via Unit of Work

    return Results.Ok(new { order.Id, order.TotalAmount });
});

public record ApplyDiscountDto(decimal Percent);

The discount rule is expressed once, in the domain model:

public void ApplyDiscount(decimal percent)
{
    if (percent <= 0 || percent >= 50)
    {
        throw new ArgumentOutOfRangeException(nameof(percent));
    }

    foreach (var line in _lines)
    {
        line.ApplyDiscount(percent);
    }
}

Controllers have one job:

  • Load the aggregate
  • Tell it what to do
  • Persist the result
  • Translate domain errors to responses

That is the essence of Domain Model in a web app.

Why Putting Rules Next To Data Matters

Shifting behavior into domain objects does more than make code “cleaner”. It changes several properties of your system.

One Place To Ask “What Is The Rule”

If a product owner asks:

What exactly are the conditions for applying a discount?

You can answer by opening Order.ApplyDiscount and related collaborators. There is no tour of controllers, repositories, and stored procedures.

Transport Independence

Imagine you want a background service that runs a nightly promotion:

  • It reads eligible orders from the database
  • It applies a discount to each
  • It sends confirmation emails

With a Domain Model, this worker calls the same ApplyDiscount method that your HTTP endpoint uses. If you switch to messaging or add a gRPC API, they all reuse the same behavior.
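
A minimal sketch of such a worker, assuming the generic host and dependency injection are in place; GetEligibleOrdersAsync stands in for whatever query selects the orders and is not part of the post:

public class NightlyPromotionWorker : BackgroundService
{
    private readonly IServiceScopeFactory _scopeFactory;

    public NightlyPromotionWorker(IServiceScopeFactory scopeFactory)
    {
        _scopeFactory = scopeFactory;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            using var scope = _scopeFactory.CreateScope();
            var orders = scope.ServiceProvider.GetRequiredService<IOrderRepository>();

            foreach (var order in await GetEligibleOrdersAsync(stoppingToken))
            {
                // Reuses the exact same domain rule as the HTTP endpoint.
                order.ApplyDiscount(10m);
                await orders.AddAsync(order, stoppingToken); // or save via a unit of work
            }

            await Task.Delay(TimeSpan.FromHours(24), stoppingToken);
        }
    }

    // Hypothetical placeholder for the query that selects eligible orders.
    private static Task<IReadOnlyList<Order>> GetEligibleOrdersAsync(CancellationToken ct) =>
        Task.FromResult<IReadOnlyList<Order>>(Array.Empty<Order>());
}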

Stronger, Cheaper Tests

You can write unit tests directly against Order:

[Fact]
public void ApplyDiscount_Throws_WhenPercentOutOfRange()
{
    var order = Order.Create(
        Guid.NewGuid(),
        new[] { new OrderLine(Guid.NewGuid(), 1, 100m) });

    Assert.Throws<ArgumentOutOfRangeException>(() => order.ApplyDiscount(0));
    Assert.Throws<ArgumentOutOfRangeException>(() => order.ApplyDiscount(60));
}

[Fact]
public void Submit_SetsStatusToSubmitted_WhenDraftAndHasLines()
{
    var order = Order.Create(
        Guid.NewGuid(),
        new[] { new OrderLine(Guid.NewGuid(), 1, 100m) });

    order.Submit();

    Assert.Equal(OrderStatus.Submitted, order.Status);
}

No test server, no HTTP, no database. You can exhaustively test the behavior that matters while keeping integration tests focused on wiring.

Integrating Domain Model With Application And Infrastructure

Domain Model does not live alone. It cooperates with:

  • An application layer that coordinates use cases
  • An infrastructure layer that persists aggregates and talks to external systems

A typical setup in .NET:

  • MyApp.Domain
    • Entities, value objects, domain services and interfaces for repositories
  • MyApp.Application
    • Application services that orchestrate commands and queries
  • MyApp.Infrastructure
    • EF Core mappings, repository implementations, unit of work
  • MyApp.Web
    • Controllers or minimal APIs that call application services

Example application service using Order:

public interface IOrderApplicationService
{
    Task<Guid> CreateOrderAsync(CreateOrderCommand command, CancellationToken ct = default);
}

public class OrderApplicationService : IOrderApplicationService
{
    private readonly IOrderRepository _orders;

    public OrderApplicationService(IOrderRepository orders)
    {
        _orders = orders;
    }

    public async Task<Guid> CreateOrderAsync(CreateOrderCommand command, CancellationToken ct = default)
    {
        var lines = command.Lines.Select(l =>
            new OrderLine(l.ProductId, l.Quantity, l.UnitPrice));

        var order = Order.Create(command.CustomerId, lines);

        await _orders.AddAsync(order, ct);

        return order.Id;
    }
}

public record CreateOrderCommand(
    Guid CustomerId,
    IReadOnlyCollection<CreateOrderLineCommand> Lines);

public record CreateOrderLineCommand(
    Guid ProductId,
    int Quantity,
    decimal UnitPrice);

Controllers or endpoints call IOrderApplicationService, not Order directly. That keeps HTTP details and use case orchestration together, while the domain model stays focused on rules.
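
For example, the create-order endpoint from earlier could be rewired to go through the application service; this sketch assumes the same CreateOrderDto shown above:

app.MapPost("/orders", async (
    CreateOrderDto dto,
    IOrderApplicationService orderService,
    CancellationToken ct) =>
{
    var command = new CreateOrderCommand(
        dto.CustomerId,
        dto.Lines
            .Select(l => new CreateOrderLineCommand(l.ProductId, l.Quantity, l.UnitPrice))
            .ToList());

    try
    {
        var id = await orderService.CreateOrderAsync(command, ct);
        return Results.Created($"/orders/{id}", new { id });
    }
    catch (Exception ex) when (ex is ArgumentOutOfRangeException || ex is InvalidOperationException)
    {
        return Results.BadRequest(ex.Message);
    }
});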

Signs You Are Pretending To Have A Domain Model

Many teams say, “We are doing DDD,” while their code tells a different story. Look for these patterns.

  • Entities with only auto properties and no behavior
  • Controllers or handlers performing status transitions and complex validations
  • Stored procedures implementing key rules, such as discount criteria or eligibility
  • Domain types that depend directly on DbContext or HttpContext

If any of those describe your system, you have building blocks for a domain model, not an actual model.

First Steps Toward A Real Domain Model

You do not need a significant rewrite. Start small.

  1. Pick one important concept
    Order, Subscription, Invoice, or any aggregate that matters to the business.
  2. Move a single rule into that entity
    For example, “order must have at least one line” or “cannot modify submitted orders”.
  3. Expose behavior, not just state
    Add methods like AddLine, ApplyDiscount, Submit, instead of letting the outside world mutate collections directly.
  4. Write tests against the entity
    Prove that the rules hold even when no controller or database is involved.
  5. Refactor controllers to call the domain model
    Remove duplicated checks, catch domain exceptions, map them to HTTP responses.

Repeat that in the parts of the system that hurt the most. Over time, the gravity of the domain model grows, and the framework falls into its proper role as plumbing.

If your core rules are worth money, they are worth a real home in your code. Treat them as the main asset, not as an afterthought squeezed into controllers and stored procedures.

The post Enterprise Patterns for ASP.NET Core Minimal API: Domain Model Pattern – When Your Core Rules Deserve Their Own Gravity first appeared on Chris Woody Woodruff | Fractional Architect.
