Over the last couple of years, AI coding assistants have changed the day-to-day reality of software engineers more than any single tool in the past decade. AI has not only accelerated how quickly developers write code, but it has also fundamentally changed how we build software. Tasks that once consumed hours of engineering time, such as writing unit tests, scaffolding APIs, generating boilerplate, validating integrations, and even debugging, can now be completed in minutes using AI coding assistants.
This efficiency has reshaped expectations across organisations: leaders assume delivery speed should increase, project timelines should shrink, and teams should “do more with less”. Program managers expect developers to deliver more features in the same amount of time, and executives assume engineering velocity should increase proportionally with AI adoption.
Yet despite this massive shift in capability, the underlying software development lifecycle (SDLC) used by most teams hasn’t evolved in decades. Developers are operating with new tools but old processes. As a result, teams often rush into implementation without enough design clarity, accumulating technical debt faster than ever. AI has changed the pace of software delivery, but the process has not caught up—and this mismatch is causing friction for developers, engineering managers, and product teams alike.
This article explores where the traditional SDLC falls short in the age of AI and proposes a more adaptive, realistic lifecycle that reflects how modern teams actually work.
For years, the classic SDLC has been the industry standard—structured, predictable, and intended to reduce risk in complex software projects. The traditional cycle includes:
Planning & Requirement Analysis
Stakeholders collect business needs, identify constraints, define scope, and estimate effort.
Defining Requirements
Requirements become detailed functional and non-functional specifications. Everything is documented before design begins.
Design
Architects and senior engineers map out system components, data models, workflows, and integration points. Historically, this phase determined long-term system quality.
Development
Engineers implement the design, write code, build modules, and integrate components. This phase used to consume the majority of engineering time.
Testing
QA and developers validate functionality, performance, and reliability. Automated tests, manual testing, integration testing and user testing all happen here.
Deployment
Software is released to production environments, accompanied by monitoring, rollback strategies, and operational readiness.
Maintenance
Teams fix bugs, monitor system health, reduce technical debt, and refine the system over time.
While this lifecycle worked for years, AI has disrupted its core assumptions. The classic SDLC treats development as the slowest and most expensive phase, so its structure breaks down when coding becomes fast. The result: teams are building faster than they can plan.
AI has created a perception that engineering should deliver dramatically more with fewer people. Goals have become more aggressive as organisations underestimate the cost of planning, design, and long-term system thinking.
With pressure to deliver quickly, teams focus on the best achievable milestone rather than the long-term product vision. Systems become optimised for next month, not next year.
Developers jump into implementation before deep design discussions. AI accelerates execution but does not replace architectural thinking. Updating design mid-development is now common and chaotic.
Short-term solutions pile up quickly. As teams implement features for immediate milestones, long-term stability becomes an afterthought.
Engineering teams must push back where needed and provide realistic input during planning. Strong leaders factor engineering insights into timelines rather than assuming AI will solve all bottlenecks.
Execution is faster than ever, but ideation and requirement clarity are not. Product teams must adopt long-term thinking and avoid constantly pivoting mid-cycle, which leads to rework and wasted effort.
The biggest change needed is structural: the SDLC should reflect how software is actually being built today.
AI has made Proof-of-Concept (POC) development an essential pre-design step. POCs are now necessary to test AI capabilities, validate feasibility, and explore user interactions before committing to long-term architecture.
However, POCs should inform feasibility and not dictate architecture.
AI has introduced two major additions to the SDLC: POC Development and Continuous Iteration.
Planning → Requirement Definition → Minimal Design → POC Development → User Testing → Feedback Review → Iterate
After the POC, the cycle alternates between two activities:
1. Productionizing the working prototype
2. Iterating based on user feedback
This loop continues until the product reaches maturity.
The biggest puzzle today is minimising churn: how to iterate quickly without causing back-and-forth chaos across design, development, and product teams.
We're all still figuring out the right balance between speed and stability.
AI has dramatically accelerated software development, but our processes haven’t evolved at the same pace. The traditional SDLC assumes slow implementation and steady planning - an assumption that no longer holds true. By embracing POCs, iterative cycles, and realistic expectations, teams can turn AI from a source of chaos into a catalyst for better engineering.
The industry is still learning how to adapt, and so am I. The pursuit of a smooth, AI-aligned development lifecycle continues.
Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.
"Not everybody enjoys the limelight and being called out, even for great work." - Scott Smith
Scott was facilitating a multi-squad showcase with over 100 participants, and everything seemed to be going perfectly. Each squad had their five-minute slot to share achievements from the sprint, and Scott was coordinating the entire event. When one particular team member delivered what Scott considered fantastic work, he couldn't help but publicly recognize them during the introduction.
It seemed like the perfect moment to celebrate excellence in front of the entire organization. But then his phone rang. The individual he had praised was unhappy—really unhappy. What Scott learned in that moment transformed his approach to recognition forever. The person was quiet, introverted, and conservative by nature.
Being called out without prior notice or permission in front of 100+ people wasn't a reward—it was uncomfortable and unwelcome. Scott discovered that even positive recognition requires consent and awareness of individual preferences. Some people thrive in the spotlight, while others prefer their contributions to be acknowledged privately. The relationship continued well afterward, but the lesson stuck: check in with individuals before publicly recognizing them, understanding that great coaching means respecting how people want to be celebrated, not just that they should be celebrated.
Self-reflection Question: How do you currently recognize team members' achievements, and have you asked each person how they prefer to be acknowledged for their contributions?
[The Scrum Master Toolbox Podcast Recommends]
Angela thought she was just there to coach a team. But now, she's caught in the middle of a corporate espionage drama that could make or break the future of digital banking. Can she help the team regain their mojo and outwit their rivals, or will the competition crush their ambitions? As alliances shift and the pressure builds, one thing becomes clear: this isn't just about the product—it's about the people.
🚨 Will Angela's coaching be enough? Find out in Shift: From Product to People—the gripping story of high-stakes innovation and corporate intrigue.
[The Scrum Master Toolbox Podcast Recommends]
About Scott Smith
Scott Smith is a 53-year-old professional based in Perth, Australia. He balances a successful career with a strong focus on health and fitness, currently preparing for bodybuilding competitions in 2026. With a background in leadership and coaching, Scott values growth, discipline, and staying relevant in a rapidly changing world.
You can link with Scott Smith on LinkedIn.
What happens when you've built an amazing team and then have to leave? Bob Galen and Josh Anderson explore the guilt, emotion, and complexity of leadership transitions. Learn why good leaders struggle most with leaving while bad leaders walk away without a second thought. Josh shares his gut-wrenching experience leaving Dude Solutions, Bob discusses how to maintain relationships through transitions, and both hosts reframe departure around legacy and the coaching tree concept. If leaving your team hurts, you probably did it right. Essential listening for any leader facing a career transition.
Josh Anderson's "Leadership Lighthouse"
Dive deeper into the world of Agile leadership and management with Josh Anderson's "Leadership Lighthouse." This bi-weekly newsletter offers insights, tips, and personal stories to help you navigate the complexities of leadership in today's fast-paced tech environment. Whether you're a new manager or a seasoned leader, you'll find valuable guidance and practical advice to enhance your leadership skills. Subscribe to "Leadership Lighthouse" for the latest articles and exclusive content right to your inbox.
Bob Galen's "Agile Moose"
Bob Galen's "Agile Moose" is a must-read for anyone interested in Agile practices, team dynamics, and personal growth within the tech industry. The newsletter features in-depth analysis, case studies, and actionable tips to help you excel in your Agile journey. Bob brings his extensive experience and thoughtful perspectives directly to you, covering everything from foundational Agile concepts to advanced techniques. Join a community of Agile enthusiasts and practitioners by subscribing to "Agile Moose."
Do More Than Listen:
We publish video versions of every episode and post them on our YouTube page.
Help Us Spread The Word:
Love our content? Help us out by sharing on social media, rating our podcast/episodes on iTunes, or by giving to our Patreon campaign. Every time you give, in any way, you empower our mission of helping as many agilists as possible. Thanks for sharing!
This Member Blog was originally published on the Middleware blog and is republished here with permission.
kubectl is the command-line interface for managing Kubernetes clusters. It allows you to manage pods, deployments, and other resources from the terminal, helping you troubleshoot Kubernetes issues, check pod health, and scale applications easily. Most kubectl commands follow a simple structure.
For example, kubectl get pods lists running pods, and kubectl delete pod <pod-name> removes a pod.
Many users wonder how to restart a Kubernetes pod using kubectl. Contrary to popular belief, there is no direct kubectl restart pod command. Instead, Kubernetes expects you to work with higher-level objects, such as Deployments.
This guide covers the safest and most effective methods for restarting pods, including rollout restarts, deleting pods, scaling replicas, and updating environment variables, helping you manage pod restarts in a predictable and controlled way.
Knowing when to restart a Kubernetes pod is key to maintaining application stability and performance. Here are the most common scenarios that require a pod restart:
When you update your application’s settings (such as environment variables or resource limits), the pod continues to use the old configurations. Restarting ensures the new settings take effect.
If your app crashes but the container stays in a “Running” state, or the pod shows as running but isn’t functioning, a restart forces a clean start to recover the service.
Restarting the pod helps resolve temporary issues or confirms persistent problems while troubleshooting why the application isn’t behaving as expected.
A pod may stop responding to traffic while Kubernetes still reports it as healthy. Restarting resolves frozen states or resource leaks and restores responsiveness.
Understanding the different Kubernetes pod states enables you to monitor your application’s health and take the necessary actions when needed. Here are the key pod states you should know:
Kubernetes has accepted the pod, but it is still waiting to be scheduled and started. This occurs while Kubernetes is downloading container images or is still looking for a suitable node to run your pod. A prolonged Pending state typically indicates a configuration issue or insufficient resources.
Your pod has at least one active container. The containers are working, but this doesn't mean everything is functional. Your application may still have problems even though the pod is running.
You typically see this state with jobs or one-time tasks that are designed to run once and finish. It means all containers in the pod have completed their tasks successfully and won’t restart.
The failed state means one or more containers in the pod have stopped running, maybe due to an error, or the system terminated the containers. It indicates something went wrong with your application, or the container couldn’t restart correctly. Failed pods often need a restart.
This indicates that the node where your pod should be running has lost contact with Kubernetes. Node failures, network problems, or other infrastructure issues may be the cause of this. It’s actually hard to tell what’s going on with your pod when you see this state.
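If you are not sure which state a pod is in, a couple of standard kubectl commands will show the status and recent events (a quick sketch; substitute your own pod name and namespace):
# List pods and their STATUS column
kubectl get pods
# Show detailed state, conditions, and recent events for one pod
kubectl describe pod <pod-name>
# Include all namespaces if you are unsure where the pod lives
kubectl get pods --all-namespaces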
When you search for how to restart a Kubernetes pod using kubectl, the first thing that comes to mind is the command:
kubectl restart pod
However, that command does not exist. Instead, there are several reliable methods to restart Kubernetes pods using kubectl. Below are the most effective and commonly used approaches:
This is the most commonly used method and follows Kubernetes best practices for restarting pods managed by a deployment. It performs a controlled restart without downtime by creating new pods and removing old ones.
Commands to use:
kubectl rollout restart deployment/my-app
This command replaces existing pods with new ones, removing the old pods only after the new ones have started and become ready. This approach keeps your app up during the restart.
To restart pods in a deployment within a specific namespace:
kubectl rollout restart deployment/my-app -n your-namespace
To check the status of your restart:
kubectl rollout status deployment/my-app
Use this strategy when you want minimal downtime and the safest option, and when your pods are managed by a deployment.
With this method, you delete pods so that Kubernetes recreates them. It's simpler than a rollout restart, but you must be careful about which pods you remove.
If the pod is managed by a deployment, replica set, or equivalent controller, Kubernetes immediately creates a new one when you delete the existing one. However, this may temporarily disrupt service.
Here’s how to go about it:
# List all pods to see what you're working with
kubectl get pods
# To delete a specific pod
kubectl delete pod <pod-name>
# To delete multiple pods at once
kubectl delete pod <pod-1> <pod-2>
# To delete a pod and wait until it is fully removed
kubectl delete pod <pod-name> --wait=true
# To force delete a stuck pod (use with caution)
kubectl delete pod <pod-name> --grace-period=0 --force
Delete only controller-managed pods. A standalone pod that isn't managed by anything will not be recreated once you delete it.
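If in doubt, one way to check whether a pod has a controller before deleting it (a small sketch; the output depends on your workload) is to inspect its owner references:
# Prints the owning controller kind (e.g. ReplicaSet); empty output means a standalone pod
kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences[*].kind}'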
This strategy works by scaling your deployment down to zero replicas for a short time, which stops all the pods, and then scaling back up to the number you started with. In effect, Kubernetes lets you turn your application off and on again in a controlled way.
1. Check how many replicas you currently have
kubectl get deployment my-app
2. Scale down to zero
kubectl scale deployment my-app --replicas=0
3. Lastly, scale back up to your original number (creates new pods)
kubectl scale deployment my-app --replicas=3
When you scale down to zero, Kubernetes deletes all the pods in that deployment. When you scale back up, it creates new pods from scratch. This approach is more aggressive than rollout restart, but sometimes necessary when you need a complete fresh start.
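Before scaling down, it helps to record the current replica count so you can restore it exactly; for example:
# Print the configured replica count for the deployment
kubectl get deployment my-app -o jsonpath='{.spec.replicas}'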
This is another clever method for restarting pods: you modify the deployment's configuration slightly. Kubernetes interprets a change to a deployment's environment variables as a configuration change and automatically restarts the pods to apply it.
The key here is that you don’t even have to alter your environment variables significantly. To initiate the restart, update a timestamp or add a dummy variable.
For instance:
You can set or update an environment variable:
kubectl set env deployment/my-app RESTART_TRIGGER=$(date +%s)
Or
You can also edit the deployment directly:
kubectl edit deployment my-app
Then add or modify any environment variable in the editor.
The benefit of using this approach is that it follows the same safe strategy as the rollout restart, and there’s no downtime during the restart process.
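To confirm the change rolled out as expected, you can reuse the same status check as before and, optionally, list the deployment's environment variables:
# Wait for the rollout triggered by the env change to finish
kubectl rollout status deployment/my-app
# List the environment variables currently set on the deployment
kubectl set env deployment/my-app --list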
This method involves deleting specific pods and then manually creating new ones to replace them, using either the same configuration or an updated version. It gives you total control over the deletion and creation process.
1. Get the pod configuration and save it
kubectl get pod <pod-name> -o yaml > pod-backup.yaml
2. Delete the existing pod
kubectl delete pod <pod-name>
3. Create a new pod using the saved configuration
kubectl apply -f pod-backup.yaml
This method causes downtime because the old pod is removed before the new one starts. Use it only with standalone pods; don't do this with pods managed by deployments.
Restarting Kubernetes pods helps in several practical ways, as discussed in this article, one of which is keeping applications healthy. Whichever method you choose, the key is to understand when and why you are restarting.
Monitoring your pods also helps you make the right restart decision. Instead of guessing at what's wrong, proper observability shows you exactly when pods need attention.
Look at a typical enterprise ASP.NET Core application, and you often see the same pattern: clever controllers, dumb entities, and business rules scattered across queries and database scripts.
If you need to know how orders work, you do not open a single file. You read controllers, queries, and database scripts until your eyes blur. The truth about the business lives everywhere and nowhere.
The Domain Model is the pattern that reverses this arrangement.
Instead of clever controllers and dumb entities, you move the rules into rich objects. Entities and value objects enforce invariants. The application layer orchestrates use cases by telling those objects what to do.
This post shows what that looks like in C#, and why putting rules next to data changes how your system behaves over time.
In Fowler’s terms, a Domain Model is “an object model of the domain that incorporates both behavior and data.”
In practical .NET terms:
Order type knows what a valid order looks like
Customer type knows whether it is eligible for a specific feature
What it is not:
Order and Customer with auto properties
The whole point is to make the rules you care about first-class citizens in your code.
Here is a small, but real, Order aggregate with OrderLine in C#.
public class Order
{
private readonly List<OrderLine> _lines = new();
private Order(Guid customerId)
{
Id = Guid.NewGuid();
CustomerId = customerId;
Status = OrderStatus.Draft;
}
public Guid Id { get; }
public Guid CustomerId { get; }
public OrderStatus Status { get; private set; }
public IReadOnlyCollection<OrderLine> Lines => _lines.AsReadOnly();
public decimal TotalAmount => _lines.Sum(l => l.Total);
public static Order Create(Guid customerId, IEnumerable<OrderLine> lines)
{
var order = new Order(customerId);
foreach (var line in lines)
{
order.AddLine(line.ProductId, line.Quantity, line.UnitPrice);
}
if (!order._lines.Any())
{
throw new InvalidOperationException("Order must have at least one line.");
}
return order;
}
public void AddLine(Guid productId, int quantity, decimal unitPrice)
{
if (Status != OrderStatus.Draft)
{
throw new InvalidOperationException("Cannot change a non draft order.");
}
if (quantity <= 0)
{
throw new ArgumentOutOfRangeException(nameof(quantity));
}
if (unitPrice <= 0)
{
throw new ArgumentOutOfRangeException(nameof(unitPrice));
}
_lines.Add(new OrderLine(productId, quantity, unitPrice));
}
public void ApplyDiscount(decimal percent)
{
if (percent <= 0 || percent >= 50)
{
throw new ArgumentOutOfRangeException(nameof(percent));
}
foreach (var line in _lines)
{
line.ApplyDiscount(percent);
}
}
public void Submit()
{
if (Status != OrderStatus.Draft)
{
throw new InvalidOperationException("Only draft orders can be submitted.");
}
if (!_lines.Any())
{
throw new InvalidOperationException("Cannot submit an empty order.");
}
Status = OrderStatus.Submitted;
}
}
public class OrderLine
{
public OrderLine(Guid productId, int quantity, decimal unitPrice)
{
ArgumentOutOfRangeException.ThrowIfNegativeOrZero(quantity);
ArgumentOutOfRangeException.ThrowIfNegativeOrZero(unitPrice);
ProductId = productId;
Quantity = quantity;
UnitPrice = unitPrice;
}
public Guid ProductId { get; }
public int Quantity { get; }
public decimal UnitPrice { get; private set; }
public decimal Total => Quantity * UnitPrice;
public void ApplyDiscount(decimal percent)
{
UnitPrice = UnitPrice * (1 - percent / 100m);
}
}
public enum OrderStatus
{
Draft = 0,
Submitted = 1,
Cancelled = 2
}
Notice what is happening here:
Orders are created through Order.Create, not through new Order() scattered everywhere
Order refuses to exist without at least one OrderLine
AddLine and ApplyDiscount validate arguments and enforce state transitions
Submit enforces that only draft orders are submitted and that empty orders are invalid
The rules live with the data. You no longer need to remember, in every controller, how discounts work or when an order may be modified.
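As a quick illustration of how calling code interacts with the aggregate (a minimal sketch, assuming the Order and OrderLine types above):
// Build a draft order through the factory; invalid input throws immediately.
var order = Order.Create(
    customerId: Guid.NewGuid(),
    lines: new[] { new OrderLine(Guid.NewGuid(), quantity: 2, unitPrice: 25m) });
// Business operations are methods on the aggregate, not ad hoc property mutation.
order.AddLine(Guid.NewGuid(), quantity: 1, unitPrice: 50m);
order.ApplyDiscount(10m);   // throws if the percent is outside the allowed range
order.Submit();             // throws unless the order is a non-empty draft
Console.WriteLine(order.TotalAmount); // 90 after the 10% discount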
Most enterprise apps start closer to this shape:
app.MapPost("/orders", async (CreateOrderDto dto, AppDbContext db) =>
{
if (dto.Lines is null || dto.Lines.Count == 0)
{
return Results.BadRequest("Order must have at least one line.");
}
var orderEntity = new OrderEntity
{
Id = Guid.NewGuid(),
CustomerId = dto.CustomerId,
Status = "Draft",
CreatedAt = DateTime.UtcNow
};
foreach (var lineDto in dto.Lines)
{
if (lineDto.Quantity <= 0)
{
return Results.BadRequest("Quantity must be positive.");
}
orderEntity.Lines.Add(new OrderLineEntity
{
ProductId = lineDto.ProductId,
Quantity = lineDto.Quantity,
UnitPrice = lineDto.UnitPrice
});
}
db.Orders.Add(orderEntity);
await db.SaveChangesAsync();
return Results.Created($"/orders/{orderEntity.Id}", new { orderEntity.Id });
});
And later, somewhere else, a discount endpoint:
app.MapPost("/orders/{id:guid}/discounts", async (
Guid id,
ApplyDiscountDto dto,
AppDbContext db) =>
{
var order = await db.Orders
.Include(o => o.Lines)
.SingleOrDefaultAsync(o => o.Id == id);
if (order == null)
{
return Results.NotFound();
}
if (order.Status != "Draft")
{
return Results.BadRequest("Cannot change a non draft order.");
}
if (!order.Lines.Any())
{
return Results.BadRequest("Cannot discount an empty order.");
}
if (dto.Percent <= 0 || dto.Percent >= 50)
{
return Results.BadRequest("Discount percent out of range.");
}
foreach (var line in order.Lines)
{
line.UnitPrice = line.UnitPrice * (1 - dto.Percent / 100m);
}
await db.SaveChangesAsync();
return Results.Ok(new { order.Id });
});
The same rules are repeated in different forms: the “at least one line” check, the quantity validation, the draft-only rule, and the discount range each live inline in whichever endpoint happens to need them.
The code works until a new rule arrives and someone updates one endpoint but misses the others.
Now see how the controller changes when you let the domain model handle behavior.
Assume you already use the Order aggregate from earlier and have a repository.
public interface IOrderRepository
{
Task AddAsync(Order order, CancellationToken cancellationToken = default);
Task<Order?> GetByIdAsync(Guid id, CancellationToken cancellationToken = default);
}
app.MapPost("/orders", async (
CreateOrderDto dto,
IOrderRepository orders,
CancellationToken ct) =>
{
var lines = dto.Lines.Select(l =>
new OrderLine(l.ProductId, l.Quantity, l.UnitPrice));
Order order;
try
{
order = Order.Create(dto.CustomerId, lines);
}
catch (Exception ex) when (ex is ArgumentOutOfRangeException || ex is InvalidOperationException)
{
return Results.BadRequest(ex.Message);
}
await orders.AddAsync(order, ct);
return Results.Created($"/orders/{order.Id}", new { order.Id });
});
public record CreateOrderDto(
Guid CustomerId,
List<CreateOrderLineDto> Lines);
public record CreateOrderLineDto(
Guid ProductId,
int Quantity,
decimal UnitPrice);
The endpoint now:
Maps the incoming DTO lines into OrderLine objects
Delegates validation to Order.Create and the OrderLine constructors
The logic that defines a valid order lives inside Order, not inside the endpoint.
app.MapPost("/orders/{id:guid}/discounts", async (
Guid id,
ApplyDiscountDto dto,
IOrderRepository orders,
CancellationToken ct) =>
{
var order = await orders.GetByIdAsync(id, ct);
if (order == null)
{
return Results.NotFound();
}
try
{
order.ApplyDiscount(dto.Percent);
}
catch (ArgumentOutOfRangeException ex)
{
return Results.BadRequest(ex.Message);
}
await orders.AddAsync(order, ct); // or SaveChanges via Unit of Work
return Results.Ok(new { order.Id, order.TotalAmount });
});
public record ApplyDiscountDto(decimal Percent);
The discount rule is expressed once, in the domain model:
public void ApplyDiscount(decimal percent)
{
if (percent <= 0 || percent >= 50)
{
throw new ArgumentOutOfRangeException(nameof(percent));
}
foreach (var line in _lines)
{
line.ApplyDiscount(percent);
}
}
Controllers have one job: translate HTTP requests into calls on the domain model, and translate the outcome (or a domain exception) back into an HTTP response.
That is the essence of Domain Model in a web app.
Shifting behavior into domain objects does more than make code “cleaner”. It changes several properties of your system.
If a product owner asks:
What exactly are the conditions for applying a discount?
You can answer by opening Order.ApplyDiscount and related collaborators. There is no tour of controllers, repositories, and stored procedures.
Imagine you want a background service that runs a nightly promotion:
With a Domain Model, this worker calls the same ApplyDiscount method that your HTTP endpoint uses. If you switch to messaging or add a gRPC API, they all reuse the same behavior.
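A minimal sketch of what that worker could look like (hedged: GetDraftOrdersAsync is a hypothetical query method, not part of the IOrderRepository shown above, and persistence is left to whatever unit of work you use):
// Nightly promotion worker reusing the same domain behavior as the HTTP endpoints.
// GetDraftOrdersAsync is hypothetical; the IOrderRepository above would need to be extended.
public class NightlyPromotionWorker : BackgroundService
{
    private readonly IServiceScopeFactory _scopeFactory;
    public NightlyPromotionWorker(IServiceScopeFactory scopeFactory) => _scopeFactory = scopeFactory;
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            using var scope = _scopeFactory.CreateScope();
            var orders = scope.ServiceProvider.GetRequiredService<IOrderRepository>();
            // Hypothetical query; returns draft orders eligible for the promotion.
            foreach (var order in await orders.GetDraftOrdersAsync(stoppingToken))
            {
                order.ApplyDiscount(5m); // same rule, same validation as the HTTP endpoint
            }
            // Saving the modified orders is left to the unit of work in a real implementation.
            await Task.Delay(TimeSpan.FromHours(24), stoppingToken);
        }
    }
}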
You can write unit tests directly against Order:
[Fact]
public void ApplyDiscount_Throws_WhenPercentOutOfRange()
{
var order = Order.Create(
Guid.NewGuid(),
new[] { new OrderLine(Guid.NewGuid(), 1, 100m) });
Assert.Throws<ArgumentOutOfRangeException>(() => order.ApplyDiscount(0));
Assert.Throws<ArgumentOutOfRangeException>(() => order.ApplyDiscount(60));
}
[Fact]
public void Submit_SetsStatusToSubmitted_WhenDraftAndHasLines()
{
var order = Order.Create(
Guid.NewGuid(),
new[] { new OrderLine(Guid.NewGuid(), 1, 100m) });
order.Submit();
Assert.Equal(OrderStatus.Submitted, order.Status);
}
No test server, no HTTP, no database. You can exhaustively test the behavior that matters while keeping integration tests focused on wiring.
Domain Model does not live alone. It cooperates with application services, repositories, and the web layer that exposes use cases over HTTP.
A typical setup in .NET:
MyApp.Domain (entities, value objects, domain rules)
MyApp.Application (application services and use case orchestration)
MyApp.Infrastructure (EF Core, repositories, external integrations)
MyApp.Web (minimal API endpoints and DTOs)
Example application service using Order:
public interface IOrderApplicationService
{
Task<Guid> CreateOrderAsync(CreateOrderCommand command, CancellationToken ct = default);
}
public class OrderApplicationService : IOrderApplicationService
{
private readonly IOrderRepository _orders;
public OrderApplicationService(IOrderRepository orders)
{
_orders = orders;
}
public async Task<Guid> CreateOrderAsync(CreateOrderCommand command, CancellationToken ct = default)
{
var lines = command.Lines.Select(l =>
new OrderLine(l.ProductId, l.Quantity, l.UnitPrice));
var order = Order.Create(command.CustomerId, lines);
await _orders.AddAsync(order, ct);
return order.Id;
}
}
public record CreateOrderCommand(
Guid CustomerId,
IReadOnlyCollection<CreateOrderLineCommand> Lines);
public record CreateOrderLineCommand(
Guid ProductId,
int Quantity,
decimal UnitPrice);
Controllers or endpoints call IOrderApplicationService, not Order directly. That keeps HTTP details and use case orchestration together, while the domain model stays focused on rules.
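For completeness, a minimal endpoint wired to the application service might look like this sketch (error handling kept deliberately thin, mirroring the earlier endpoint):
app.MapPost("/orders", async (
    CreateOrderDto dto,
    IOrderApplicationService orderService,
    CancellationToken ct) =>
{
    // Translate the HTTP DTO into the application-layer command.
    var command = new CreateOrderCommand(
        dto.CustomerId,
        dto.Lines.Select(l => new CreateOrderLineCommand(l.ProductId, l.Quantity, l.UnitPrice)).ToList());
    try
    {
        var orderId = await orderService.CreateOrderAsync(command, ct);
        return Results.Created($"/orders/{orderId}", new { Id = orderId });
    }
    catch (Exception ex) when (ex is ArgumentOutOfRangeException or InvalidOperationException)
    {
        // Domain validation failures surface as 400s, just like the direct-aggregate endpoint.
        return Results.BadRequest(ex.Message);
    }
});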
Many teams say, “We are doing DDD,” while their code tells a different story. Look for these patterns.
Entities that are nothing but auto properties
Business rules spread across controllers and stored procedures
Domain objects that reach directly into DbContext or HttpContext
If any of those describe your system, you have building blocks for a domain model, not an actual model.
You do not need a significant rewrite. Start small.
Pick one aggregate and give it real methods such as AddLine, ApplyDiscount, and Submit, instead of letting the outside world mutate collections directly. Repeat that in the parts of the system that hurt the most. Over time, the gravity of the domain model grows, and the framework falls into its proper role as plumbing.
If your core rules are worth money, they are worth a real home in your code. Treat them as the main asset, not as an afterthought squeezed into controllers and stored procedures.
The post Enterprise Patterns for ASP.NET Core Minimal API: Domain Model Pattern – When Your Core Rules Deserve Their Own Gravity first appeared on Chris Woody Woodruff | Fractional Architect.