
AI-assisted coding for teams that can't get away with vibes


Status: Living document based on production experience
Last updated: 5-Jun-2025

AI should be adopted by serious engineering teams that want to build thoughtful, well-crafted products. This requires skillful usage of these tools. Our obsession with building high-quality software for over a decade has driven us to figure out how this new way of building can result in better products.

Building with AI is fast. The gains in velocity are important because, when harnessed correctly, they allow teams to tighten feedback loops with users faster and make better products.

Yet, AI tools are tricky to use. Hold them wrong and you can generate underwhelming results; worse still, you can slow down your velocity by drowning your project in slop and technical debt.

This living playbook is based on our experience working with AI tools in the messy trenches of production software, where no one can afford to get away with vibes. I hope other teams can learn and benefit from our findings.

AI is a multiplier

To make AI good, get good yourself. AI is a multiplier. If you are a small coefficient, you won’t see much gain. If you are a negative coefficient, expect negative gains.

What I have noticed is that the best and most experienced engineers are able to extract a lot more out of AI tools. There are several reasons for this:

  • They are extremely good at communicating technical ideas.
  • They have a keen calibration and feel for what leads to a good system and can steer LLMs accordingly, i.e., they have what I like to call “the mechanic’s touch”.
  • They have strong fundamentals, so they immediately get up to speed with new tools and systems where knowledge, not skill, is the bottleneck.
  • AI is still sensitive to language and style and will often mirror the tastes and sensibilities of the prompter. Highly skilled engineers have a sharply honed taste and instinct for what works and what doesn’t.

Therefore, embody the care of a craftsperson. At the end of the day, you should produce artifacts you are proud of, even if the AI assisted in making them. This care has translated well into the output I am seeing from these systems.

Here’s an example. This prompt is not unreasonable but not particularly thoughtful:

Write a Python rate limiter that limits users to 10 requests per minute.

I would expect this prompt to give okay results, but also miss some edge cases, good practices and quality standards. This is how you might see someone at nilenso prompt an AI for the same task:

Implement a token bucket rate limiter in Python with the following requirements:

- 10 requests per minute per user (identified by `user_id` string)
- Thread-safe for concurrent access
- Automatic cleanup of expired entries
- Return tuple of (allowed: bool, retry_after_seconds: int)

Consider:
- Should tokens refill gradually or all at once?
- What happens when the system clock changes?
- How to prevent memory leaks from inactive users?

Prefer simple, readable implementation over premature optimization. Use stdlib only (no Redis/external deps).

Guess which prompt is going to better achieve the program designer’s intent?
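For a concrete sense of what the second prompt is steering toward, here is a rough stdlib-only sketch of such a rate limiter. This is an illustrative implementation written for this comparison, not output from either prompt, and names like TokenBucketRateLimiter are made up:

import threading
import time


class TokenBucketRateLimiter:
    """Allows `capacity` requests per `window_seconds` per user, refilling gradually."""

    def __init__(self, capacity: int = 10, window_seconds: float = 60.0):
        self.capacity = capacity
        self.refill_rate = capacity / window_seconds  # tokens added per second
        self._buckets: dict[str, tuple[float, float]] = {}  # user_id -> (tokens, last_seen)
        self._lock = threading.Lock()

    def allow(self, user_id: str) -> tuple[bool, int]:
        """Return (allowed, retry_after_seconds) for this request."""
        now = time.monotonic()  # monotonic clock is immune to wall-clock changes
        with self._lock:
            tokens, last = self._buckets.get(user_id, (float(self.capacity), now))
            tokens = min(self.capacity, tokens + (now - last) * self.refill_rate)
            if tokens >= 1.0:
                self._buckets[user_id] = (tokens - 1.0, now)
                return True, 0
            self._buckets[user_id] = (tokens, now)
            retry_after = int((1.0 - tokens) / self.refill_rate) + 1
            return False, retry_after

    def cleanup(self, max_idle_seconds: float = 300.0) -> None:
        """Drop buckets idle long enough to be full again, preventing memory growth from inactive users."""
        now = time.monotonic()
        with self._lock:
            self._buckets = {
                uid: (tokens, last)
                for uid, (tokens, last) in self._buckets.items()
                if now - last < max_idle_seconds
            }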

A technique that has worked well for us is metaprompting. I prompt the model with a simple task and ask it to help surface tradeoffs and edge cases. Then I turn it into a tech spec and hand it off to another LLM agent to execute. Even the “better prompt” I shared above is a result of asking the AI to come up with a good prompt. From my experience, models have become good at prompting themselves.

The mechanics of what works for these tools are in flux, but one robust principle is to really work on yourself to be a good engineer. Your habits will quickly pass on to the AI systems you work with. The reason this works is because what helps the human helps the AI.

What helps the human helps the AI

I shall clarify what software engineering is, something I found worth revisiting in the light of disruption from AI advancements.

Software engineering is not about writing code. Or at least, that’s not the defining characteristic, much like how writing is not wrist exercises with ink on paper.

To me software engineering is the art and science of maintaining a large body of well-defined mental models that achieve a business or economic need. Much of the work is around crafting and curating these large, complex sociotechnical systems, and code is just one representation of these systems.

Until AI is good enough to engulf this whole sociotechnical system and expel all the humans cultivating it, it has to participate in and benefit from this very system. In simpler words: AI thrives far, far better in an environment in which a human would also thrive. Which means your team’s software fundamentals should be strong.

A system in which AI thrives is one with markers of a high quality team and codebase. These are:

  • Good test coverage, with useful assertions
  • Automated linting, formatting and test checks before code merges
  • Continuous integration and deployment
  • Well documented changes, tech specs, ADRs with good commit messages
  • Consistent styles and patterns, enforced through a formatter
  • Simple, concise, well-organised code
  • Clearly defined features, broken down into multiple small story cards

Today’s AI can and will make use of all these things to make things “just work”. When I give a coding agent a task, it keeps correcting itself in the agentic loop by running the test cases and static analysis tools. This greatly reduces the hand-holding and intervention needed to get work done.

A rich environment and context helps the AI work better.

Here’s an anecdote: when I was working on a project that had two services, one of them had all of the things I described above—good tests, well-documented changes, consistent patterns in the code, lots of checks and guardrails. The other service was messier and had none of the above. Our AI coding assistant struggled to complete a task of equal difficulty on the latter codebase compared to the former! This is likely because the messier codebase was as confusing for the AI as it would be for a human. There were mixed signals about the right way to do things.

Now that I have outlined the general strategy, here are some tactics that have helped me.

Use the best frontier AI models, don’t cheap out.

  • Use the best coding model available. Do not try to save credits and cost by using a worse model. The goodness of a good model compounds. All the tactics that I present ahead will work far better when you have a strong coding model to begin with.

Be excellent at providing context.

  • The effectiveness of AI-assisted coding is strongly dependent on how skillfully you can provide the right context to the LLM.
  • Use an “agentic” coding tool. These are tools that are able to read and analyse files, run shell commands, fetch docs, create plans and execute on those plans, needing no human intervention (except maybe approvals). Our current recommendations for tools that do this are Claude Code, Windsurf, Cursor, and Cline.
  • LLMs can get distracted and fall into rabbit holes if given irrelevant or cluttered context. Focus their attention by @-mentioning only the files that are relevant and linking only to documentation that helps the task.
  • Encode coding standards and practices in a RULES.md file. Symlink this file to agent-specific rules files such as .cursorrules, .windsurfrules, claude.md, agents.md, etc.
    • This file should have information about the tech stack, how to use the dev tooling and run the linter, the coding standards and patterns, and cover common mistakes that the LLMs have made when working with the code. Here’s an example.
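As an illustration of the kind of content such a file can hold (a hypothetical sketch for an imaginary Python service, not the example the article links to):

# RULES.md (illustrative sketch)
- Stack: Python 3.12, FastAPI, PostgreSQL, pytest.
- Before committing, run `make lint` (ruff + mypy) and `make test`; both must pass.
- Follow the existing layout: HTTP routes in app/api/, business logic in app/services/, tests mirror the source tree.
- Prefer small, plain functions over classes unless state is genuinely needed.
- Known pitfalls: do not open a new database session inside loops; reuse the session fixture in tests.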

Implementing a new feature or refactor

  • Break down the problem. AI works better the more specific you are. Remember, you can also use the AI to reduce the tedium of making your prompts better written and more specific. Reasoning models are great at this!
  • If you are working on a big feature, break it down into small tasks and feed the tasks in one by one, making a commit at the end of each task. If you already break your stories down this way, the story card description with its task list is often a very helpful description for the AI.
  • Supply tech specs and relevant documentation about the product and feature. Don’t just ask it to write code without broader context of the product. Also feed it documentation on how to use the libraries you are using. Pasting links to documentation often works with most tools. Some libraries provide a llms.txt for coding agents to use.
  • Another pattern that has worked well for us is to break down the feature into “planning” and “execution” stages. Some coding agents already do this kind of a breakdown for you.
  • Do not take AI suggestions for granted. Ask it to justify its choices, present alternatives and think about advantages and drawbacks.

Debugging

  • Use AI to debug errors in its generation. Always paste the error context most relevant for the LLM to help it understand the issue (I prefer to delineate the error logs or output in a separate XML tag).
  • Explain what you have tried, and share any additional observations, to help the model generate correct hypotheses and eliminate bad ones. Provide lots of context.
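For example, a debugging prompt might be structured like this (an illustrative template with placeholders, not a real transcript):

We see intermittent 500s from the export endpoint of our (hypothetical) reporting service.
I already tried rolling back the latest dependency upgrade, which did not help. The failures
only occur for accounts with more than 10,000 rows.

<error_logs>
[paste the most relevant stack trace and log lines here]
</error_logs>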

Use AI to grow your own skills and knowledge

  • LLMs are an infinitely patient teacher with massive world knowledge (and more recently, ability to research effectively). Aggressively use them to learn things and demystify any new code or stack. Relentlessly dig. Figure out the best practices. Ensure you are learning correctly by getting the LLM to cite high quality sources.

Create extensive documentation

  • Create lots of detailed documentation easily by feeding codebases to the LLM. For example:
    • Explain functionality, create a knowledge base
    • Summarise all the current metrics being collected
    • Identify missing test cases more intelligently

There’s a good reason to do this—documentation is now cheap to generate and feeds back into making your LLMs (and humans) on the project a lot more effective.

Microfriction lubricants

LLMs greatly reduce the cost of creating lubricants for all the minor friction points that teams run into on a daily basis.

  • Use them to create mock servers to coordinate and unblock work between frontend and backend teams. All that is needed is agreeing on a contract (see the sketch after this list).
  • Create runbooks and guides for infra deployments, common types of troubleshooting and more by supplying shell history sessions to the LLM.
  • Feed existing runbooks and guides to an LLM to make them into scripts automating common tasks.
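For instance, once the contract is agreed, a mock server can be as small as this stdlib-only sketch (the /api/users/<id> contract here is hypothetical):

import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class MockApi(BaseHTTPRequestHandler):
    # Serves canned responses for the agreed contract so the frontend team is unblocked.
    def do_GET(self):
        if self.path.startswith("/api/users/"):
            user_id = self.path.rsplit("/", 1)[-1]
            body = json.dumps({"id": user_id, "name": "Test User", "plan": "pro"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()


if __name__ == "__main__":
    HTTPServer(("localhost", 8081), MockApi).serve_forever()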

Code review

  • Have a template for Pull Requests, feed the code diff (git log -p <range>) of each feature to the AI to explain the changes and how to deploy them. Some tools can already do this for you.
  • To reduce time to first PR review, use a code reviewing bot for the first part. But do not replace human review!
  • Use LLMs to explain a change that you don’t fully understand as a reviewer. Ask it for clarification, and then ask the implementer after gathering the necessary context.

Debugging and monitoring live applications

  • Use the research capabilities of LLMs to help find solutions to uncommon errors. Follow the same advice as for debugging in the editor when debugging outside it. Provide as much context as you can.
  • LLMs are quite decent at writing queries and alerting rules for observability tools. They are also good at crunching data and performing analyses by writing custom Python code.

Performance optimisations

  • Use LLMs to help you optimise databases and tune configuration. When doing so provide context on the infrastructure and hardware. Share query plans.
  • This is an example of such an interaction I had recently.

Implications on how AI changes craft

This is a huge shift in how we write software, and I believe it warrants some changes to ideas that were previously considered common wisdom.

First, it’s less valuable to spend too much time looking for and building sophisticated abstractions. DRY is useful for ensuring patterns in the code don’t go out of sync, but there are costs to implementing and maintaining an abstraction to handle changing requirements. LLMs make some repetition palatable and allow you to wait a bit longer before committing to an abstraction, avoiding premature abstraction.

Redoing work is now extremely cheap. Code in the small is less important than structural patterns and organisation of the code in the large. You can also build lots of prototypes to test an idea out. For this, vibe-coding is great, as long as the prototype is thrown away and rewritten properly later.

Working with LLMs also lets you take advantage of the generator-verifier gap. Often it’s easier to verify and fix things than it is to produce them from scratch. This reduces activation energy to try new things.

Tests are non-negotiable, and AI removes all excuses not to write them, given how fast it can belt them out. But always review the assertions!
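Continuing the hypothetical rate limiter sketch from earlier, the tests an AI might belt out look something like this, and the assertions are exactly the part to review:

from rate_limiter import TokenBucketRateLimiter  # hypothetical module containing the earlier sketch


def test_allows_up_to_capacity_then_blocks():
    limiter = TokenBucketRateLimiter(capacity=10, window_seconds=60)
    assert all(limiter.allow("user-1")[0] for _ in range(10))  # first 10 requests pass
    allowed, retry_after = limiter.allow("user-1")
    assert not allowed            # the 11th request is rejected
    assert retry_after >= 1       # and the caller is told when to retry


def test_users_have_independent_buckets():
    limiter = TokenBucketRateLimiter(capacity=10, window_seconds=60)
    for _ in range(10):
        limiter.allow("user-1")
    assert limiter.allow("user-2")[0]  # a different user still has a full bucket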

There is more to figure out, which I hope to cover in future updates of this living document:

  • Deploying autonomous agents like Devin/Jules/Claude Code and using them well
  • AI tooling for writing queries, performing data analysis
  • Concerns with leaking proprietary code, hosted LLM options, etc
  • Building a culture of sharing prompts, patterns and templates
  • Effective ways of driving AI adoption in teams

References

Programming with LLMs - David Crawshaw

Here’s how I use LLMs to help me write code - Simon Willison

How I use “AI” - Nicholas Carlini


Agentic Coding Recommendations


written on June 12, 2025

There is currently an explosion of people sharing their experiences with agentic coding. After my last two posts on the topic, I received quite a few questions about my own practices. So, here goes nothing.

Preface

For all intents and purposes, here’s what I do: I predominantly use Claude Code with the cheaper Max subscription for $100 a month. That works well for several reasons:

  • I exclusively use the cheaper Sonnet model. It’s perfectly adequate for my needs, and in fact, I prefer its outputs over the more expensive Opus model.

  • I optimize my tool usage to be token efficient. I avoid screenshots and browser interactions wherever possible. More on that later.

My general workflow involves assigning a job to an agent (which effectively has full permissions) and then waiting for it to complete the task. I rarely interrupt it, unless it’s a small task. Consequently, the role of the IDE — and the role of AI in the IDE — is greatly diminished; I mostly use it for final edits. This approach has even revived my usage of Vim, which lacks AI integration.

One caveat: I expect this blog post to age very poorly. The pace of innovation here is insane; what was true a month ago barely holds true today. That’s why I’m sticking to concepts I believe have staying power.

If you want to see a small session of me working on an open source library with it, I have a recording you can watch.

The Basics

I disable all permission checks, which basically means I run claude --dangerously-skip-permissions. More specifically, I have an alias called claude-yolo set up. You can call that irresponsible, and there are definitely risks with it, but you can manage those risks by moving your dev env into Docker. I will however say that if you can watch it do its thing a bit, it even works surprisingly well without dockerizing. YMMV.

MCP. This is a term you cannot avoid. It is basically a standardized protocol to give agents access to more tools. Honestly: at this point I barely use it, but I do use it. The reason I barely use it is that Claude Code is very capable of just running regular tools. So MCP for me is really only needed if I have to give Claude access to something that it finds too hard to use otherwise. A good example of this is playwright-mcp for browser automation. I use it because I haven’t found anything better yet. But for instance, when I want my agent to poke around in my database, it just uses whatever it finds to be available. In my case it loves to use psql, and that’s more than good enough.

In general I really only start using MCP if the alternative is too unreliable. That’s because MCP servers themselves are sometimes not super reliable and they are an extra thing that can go wrong. Trying to keep things very simple. My custom tools are normal scripts that it just runs.

Choice Of Language

I’ve evaluated agent performance across the different languages in my workload, and if you can choose your language, I strongly recommend Go for new backend projects. Several factors strongly favor Go:

  • Context system: Go provides a capable copy-on-write data bag that explicitly flows through the code execution path, similar to contextvars in Python or .NET’s execution context. Its explicit nature greatly simplifies things for AI agents. If the agent needs to pass stuff to any call site, it knows how to do it.

  • Test caching: Surprisingly crucial for efficient agentic loops. In Rust, agents sometimes fail because they misunderstand cargo test’s invocation syntax. In Go, tests run straightforwardly and incrementally, significantly enhancing the agentic workflow. The agent does not need to figure out which tests to run; Go does.

  • Go is sloppy: Rob Pike famously described Go as suitable for developers who aren’t equipped to handle a complex language. Substitute “developers” with “agents,” and it perfectly captures why Go’s simplicity benefits agentic coding.

  • Structural interfaces: interfaces in Go are structural. If a type has the methods an interface expects, then it conforms. This is incredibly easy for LLMs to “understand”. There is very little surprise for the agent.

  • Go has low ecosystem churn: Go’s entire ecosystem embraces backwards compatibility and explicit version moves. This greatly reduces the likelihood of AI generating outdated code — starkly contrasting with JavaScript’s fast-moving ecosystem, for instance.

For comparison, Python — my initial choice — often poses significant challenges. Agents struggle with Python’s magic (e.g., pytest’s fixture injection) or complex runtime behavior (e.g., the wrong event loop when working with async), frequently producing incorrect code that even the agentic loop struggles to resolve. Python also has practical performance problems. I don’t mean that it writes slow code; I mean that the agent loop is really slow. That’s because the agent loves to spawn processes and test scripts, and it can take quite a while for the interpreter to boot up and initialize the entire application.

On the frontend I settled on Tailwind and React with TanStack’s Query and Router, as well as Vite. I’m not amazingly happy with it, but I found it better than the alternatives. Tailwind and Vite are great, no complaints there. TanStack’s file-based router does not make me super happy, in part because it likes to have dollar signs in the file names, and those really like to confuse the agent. For instance, it’s quite common that it tries to edit $param.tsx but edits the file .tsx instead because it gets confused by shell interpolation. It’s a minor thing, but a very annoying thing.

Tools, Tools, Tools

Regardless of language, effective tooling is paramount. Key rules:

  • Anything can be a tool. A shell script can be a tool, an MCP server can be a tool, a log file can be a tool. If the agent can interact with or observe it, it counts.

  • Tools need to be fast. The quicker they respond (and the less useless output they produce) the better. Crashes are tolerable; hangs are problematic.

  • Tools need to be user friendly! Tools must clearly inform agents of misuse or errors to ensure forward progress.

  • Tools need to be protected against an LLM chaos monkey using them completely wrong. There is no such thing as user error or undefined behavior!

  • Tools need to provide the right debuggability and observability.

Example: I place critical tools into a Makefile. A make dev command starts my services via a process manager. I made two key changes to what I normally do:

  1. I had to protect the process manager against being spawned a second time when it’s already running. I use a fork of shoreman which writes a pidfile. When spawned a second time, it errors and logs “services already running”. Why does that matter? Because the agent sometimes does not know if the server is already running and otherwise happily spawns it a second time, resulting in two versions of the service bound to the same port.

  2. I am not only printing the output to my terminal, I’m also always logging it to a file. This allows the agent to read the log file to diagnose what’s happening.

Here is an example agentic interaction that demonstrates this rather well:

⏺ Great! The formatting looks good. Now let me run the development server to test the
   dynamic onboarding:

⏺ Bash(make dev)
  ⎿  Error: error: services are already running
     make: *** [dev] Error 1

⏺ The development server is already running. Let me check the logs to see the frontend URL:

⏺ Bash(make tail-log)
  ⎿ 01:06:55 backend    | Starting server on localhost:8080
    01:07:39 backend    | [Running: sh -c 'cd backend && go run cmd/server/main.go serve']
    01:07:39 backend    | Starting server on localhost:8080
    08:45:34 backend    | "GET http://localhost:8080/api/users/me HTTP/1.1" from 127.0.0.1:61804 - 200
    39.252666ms
    … +96 lines (ctrl+r to expand)

⏺ Perfect! I can see the frontend is running on http://localhost:3000/ and the latest changes
   have been hot-reloaded. Let me test the dynamic onboarding functionality using playwright:

In general, logging is super important. For instance, my app currently has a sign-in and register flow that sends an email to the user. In debug mode (which the agent runs in), the email is just logged to stdout. This is crucial! It allows the agent to complete a full sign-in with a remote-controlled browser without extra assistance. It knows that emails are being logged thanks to a CLAUDE.md instruction, and it automatically consults the log for the necessary link to click.
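For illustration, the relevant CLAUDE.md note can be as short as this (hypothetical wording, not the author’s actual file):

In debug mode, outgoing emails (sign-in links, etc.) are not sent; they are printed to stdout.
To complete a sign-in flow, read the dev log (make tail-log) and open the logged link.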

Does it count as a tool? In my book it does.

It’s All About Speed

Agentic coding’s inefficiency largely arises from inference cost and suboptimal tool usage. Let me reiterate: quick, clear tool responses are vital. What we did not talk about yet is that some tools are “emergent,” temporarily written by agents themselves. Quick compilation and execution significantly boost productivity of the agent. So how can we help it?

With the right instructions, it must be possible for the AI to create a new tool very quickly by following existing conventions. This is necessary because you want the AI to write some new code and run it. There is a big difference in the quality and speed of the flow if that tool takes 3ms to run, versus compiling for 5 seconds and then needing another minute to boot, connect to the database and Kafka broker, and spew 100 lines of nonsensical log output.

If your stuff is indeed slow, then consider vibe-coding a daemon that you can dynamically load stuff into. As an example, Sentry takes too long to reload code and too long to restart. To trial some agentic coding there, my workaround was a module that watches a file system location and just imports and executes all Python modules placed there, then writes the output into a log the agent can cat. That’s not perfect, but it was a significant help for the agent to evaluate some basic code in the context of the application.
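A minimal sketch of that kind of workaround might look like this (illustrative only, with made-up paths, and not Sentry’s actual code):

import contextlib
import io
import runpy
import time
import traceback
from pathlib import Path

DROP_DIR = Path("/tmp/agent_snippets")      # hypothetical folder the agent drops snippets into
LOG_FILE = Path("/tmp/agent_snippets.log")  # log file the agent can simply cat


def watch(poll_seconds: float = 0.5) -> None:
    DROP_DIR.mkdir(parents=True, exist_ok=True)
    seen = set()
    while True:
        for script in sorted(DROP_DIR.glob("*.py")):
            if script in seen:
                continue
            seen.add(script)
            buf = io.StringIO()
            with LOG_FILE.open("a") as log:
                log.write(f"--- {script.name} ---\n")
                try:
                    # Run the snippet in-process so it can use the already-initialized application.
                    with contextlib.redirect_stdout(buf):
                        runpy.run_path(str(script), run_name="__main__")
                    log.write(buf.getvalue())
                except Exception:
                    log.write(buf.getvalue())
                    log.write(traceback.format_exc())
        time.sleep(poll_seconds)


if __name__ == "__main__":
    watch()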

Balancing log verbosity is crucial: informative yet concise logs optimize token usage and inference speed, avoiding unnecessary costs and rate limits. If you cannot find the balance, provide some easy to turn knobs for the AI to control.

In an ideal setup, you get useful log output as a natural byproduct of the agent writing code. Getting observability from the first shot of code generation beats writing code, failing to run it, and only then going back to a debug loop where debug information is added.

Stability and Copy/Paste

Stable ecosystems are what you really want. LLMs are great with Go, and they love to use Flask, because those are quite stable ecosystems with little churn. The same thing is true for your codebase. The AI likes to leave all kinds of breadcrumbs lying around when writing code, which can turn into confusion later. For instance, I have seen agents leave useful comments about why they chose one path over another. If you willy-nilly let the AI upgrade libraries so that some of those decisions no longer make sense, you might now have the AI continue following a now-outdated pattern.

In theory this should be the same for agents and humans, but the reality is that agents make upgrades so “cheap” that it’s tempting to just let the AI do it and see if tests still pass. I do not find this to be a successful path at all. Be even more conservative about upgrades than before.

Likewise with AI I strongly prefer more code generation over using more dependencies. I wrote about why you should write your own code before, but the more I work with agentic coding, the more I am convinced of this.

Write Simple Code

Simple code significantly outperforms complex code in agentic contexts. I just recently wrote about ugly code and I think in the context of agents this is worth re-reading. Have the agent do “the dumbest possible thing that will work”.

  • Prefer functions with clear, descriptive and longer than usual function names over classes.

  • Avoid inheritance and overly clever hacks.

  • Use plain SQL. I mean it. You get excellent SQL out of agents and they can match the SQL they write with the SQL logs. That beats them min-maxing your ORM’s capabilities and getting lost in the SQL output in a log.

  • Keep important checks local. You really want to make sure that permission checks are very clear to the AI, and that they take place where the AI can see them. Hiding permission checks in another file or some config file will almost guarantee that the AI will forget to add permission checks when adding new routes.

Make It Parallelizable

Agents aren’t exceptionally fast individually, but parallelization boosts overall efficiency. Find a way to manage shared states like the file system, databases, or Redis instances so that you can run more than one. Avoid them, or find a way to quickly segment stuff out.

Your initial shared state is just the file system and a second check-out will do. But really I don’t have an amazing solution yet. There are some good initial attempts. For instance one of the tools to watch is container-use. It’s an MCP server that instructs Claude or other agents to run their experiments entirely in Docker.

Then there are tools like Cursor’s background agents and Codex, which are moving this entire workflow into CI, which will be interesting. So far, I don’t think this is working for me yet, but let’s see again in a month.

Learn To Refactor

Agentic coding alters refactoring priorities. Agents handle tasks effectively until project complexity surpasses some manageable threshold. “Too big” here is defined by the total amount of stuff the agent has to consider. So, for instance, you can vibe-code your frontend together for a while, but eventually you reach the point where you absolutely need to tell it to make a component library. Why? Because if the total Tailwind class mess is splintered across 50 files, you will find it very hard to get the AI to do redesigns or extract components without major regressions.

An agentic workflow encourages good code maintenance and refactoring at the right moment. You don’t want to do it too early and you definitely do not want to do it too late.

What Next?

Agentic coding is rapidly evolving, and my workflow today may look dramatically different tomorrow. What’s clear though is that integrating agents into your development process can unlock significant productivity gains. I encourage you to keep experimenting. The tools and techniques will evolve, but the core principles — simplicity, stability, observability and smart parallelization — will remain essential.

Ultimately, the goal is not just to leverage agents to write code faster, but to write better, more maintainable, and resilient code. Already today the code looks nothing like the terrible slop from a few months ago. Stay adaptable, and happy coding!

This entry was tagged ai and thoughts



Using Strategy Pattern with Dependency Injection in ASP.NET Core


This blog post is originally published on https://blog.elmah.io/using-strategy-pattern-with-dependency-injection-in-asp-net-core/

Selection logic is a prominent part of many applications. Whether you add a simple environment toggle, a UI mode decision, or apply a discount, you have to rely on user input. Sometimes, simply using an intuitive if-else or a switch case can work. However, when conditions are growing or a complex algorithm selection is required, simple conditional statements can't work. Your code becomes exhaustive and hard to maintain. The Strategy pattern rescues the situation, adheres to the open/closed principle, and keeps the logic maintainable. This article walks you through a practical, straightforward example of the strategy pattern: choosing between Regular, VIP, and Student discount strategies at runtime.


What is the Strategy pattern?

The Strategy design pattern is a behavioral pattern used when you need to switch between different algorithms at runtime. The strategy pattern encapsulates algorithms and selects the right one when needed, usually based on an input. This pattern provides a flexible, maintainable solution to an algorithm-selection problem, keeping the code cleaner and easier to extend. If you need to add a new algorithm, just add another class instead of touching the existing logic, adhering to the open/closed principle.

What is the problem without the Strategy pattern?

To understand the usability of the strategy pattern, we need to identify the problems we may face without it. Suppose we offer different discounts to different users based on their membership. A naive solution is to use an if-else statement or a switch case. Let's do it and evaluate the implementation.

Step 1: Create a Console application

dotnet new console -n StrategyPatternDemo
cd StrategyPatternDemo

Step 2: Create DiscountService class

In the service, we will define discount calculation with a conditional statement.

public class DiscountService
{
    public decimal GetDiscount(string customerType, decimal amount)
    {
        if (customerType.ToLower() == "regular")
        {
            return amount * 0.05m;
        }
        else if (customerType.ToLower() == "vip")
        {
            return amount * 0.20m;
        }
        else
        {
            return 0;
        }
    }
}

Step 3: Use the service in Program.cs

using StrategyPatternDemo;

Console.Write("Enter customer type (regular/vip): ");
var type = Console.ReadLine();

Console.Write("Enter amount: ");
var amount = decimal.Parse(Console.ReadLine());

var service = new DiscountService();
var discount = service.GetDiscount(type, amount);
var final = amount - discount;

Console.WriteLine($"Discount: {discount}");
Console.WriteLine($"Final Price: {final}");

Step 4: Run and test

Let's test it

dotnet run

Output (screenshot of the console run)

It works as expected. But the code contains design and maintainability flaws.

  • The solution violates the Open/Closed principle. Adding a new membership will require changes to the core method, such as adding an else-if block.
  • All the discount logic is tightly coupled in a single class and lacks separation of concerns or single responsibility.
  • Tightly coupled code makes testing harder. To verify the functionality, you have to test the whole method every time.
  • As the conditions grow, you can fall into a spiral of branches. Imagine having 20 memberships; that would be a nightmare for maintainability.

Implementing the strategy pattern in a console application

In our example, let's address the above issues using the Strategy Pattern.

Step 1: Define Strategy Interface

Adding the discount strategy interface

public interface IDiscountStrategy
{
    decimal ApplyDiscount(decimal amount);
}

Step 2: Add concrete strategies

Adding separate implementations of each algorithm

public class RegularDiscount : IDiscountStrategy
{
    public decimal ApplyDiscount(decimal amount) => amount * 0.05m;
}

For Vip

public class VipDiscount : IDiscountStrategy
{
    public decimal ApplyDiscount(decimal amount) => amount * 0.20m;
}

Notice that none of the strategies implement validation or error handling. In real-world code, you would probably want to look into that. This part has been left out of this post since the focus is around splitting the business logic out in strategies.

Step 3: Define context class

public class DiscountService
{
    private readonly IDiscountStrategy _strategy;

    public DiscountService(IDiscountStrategy strategy)
    {
        _strategy = strategy;
    }

    public decimal GetDiscount(decimal amount) => _strategy.ApplyDiscount(amount);
}

The context class in the strategy pattern holds a reference to the strategy interface (IDiscountStrategy in our case). It receives a strategy from outside. It does not implement the logic itself; instead, it delegates the work to the strategy, while the concrete classes define their own logic.

Step 4: Use the strategy in the Program.cs


Console.WriteLine("Enter customer type (regular/vip): ");
string type = Console.ReadLine()?.ToLower();

IDiscountStrategy strategy;

// Manually picking strategy — no switch needed, but you *can* if you want.
if (type == "vip")
    strategy = new VipDiscount();
else
    strategy = new RegularDiscount();

var service = new DiscountService(strategy);

Console.Write("Enter amount: ");
decimal amount = decimal.Parse(Console.ReadLine());

var discount = service.GetDiscount(amount);
var finalPrice = amount - discount;

Console.WriteLine($"Discount applied: {discount}");
Console.WriteLine($"Final price: {finalPrice}");

Output (screenshot of the console run)

Now that we understand the basic principles of the strategy pattern, we can proceed to our primary target: implementing the strategy pattern in ASP.NET Core.

Implementing the strategy pattern in an ASP.NET Core API

Step 1: Create an ASP.NET Core API

Run the following command in the terminal

dotnet new webapi -n StrategyPatternApi
cd StrategyPatternApi

Step 2: Add concrete strategies

Adding separate implementations of each algorithm

public class RegularDiscount : IDiscountStrategy
{
    public decimal ApplyDiscount(decimal amount) => amount * 0.05m;
}

For Vip

public class VipDiscount : IDiscountStrategy
{
    public decimal ApplyDiscount(decimal amount) => amount * 0.20m;
}

Step 3: Define context class

public class DiscountService
{
    private readonly Func<string, IDiscountStrategy> _strategyFactory;

    public DiscountService(Func<string, IDiscountStrategy> strategyFactory)
    {
        _strategyFactory = strategyFactory;
    }

    // public API: ask for a discount by customer type
    public decimal GetDiscount(string customerType, decimal amount)
    {
        var strategy = _strategyFactory(customerType);
        return strategy.ApplyDiscount(amount);
    }
}

DiscountService plays the context role in the strategy pattern. DiscountService has a field _strategyFactory of type Func<string, IDiscountStrategy> that holds a factory delegate. The delegate returns an appropriate implementation of IDiscountStrategy based on the given type. The Func allows the service to request a strategy at runtime by name/key without knowing the DI container internals or the concrete types.

Step 4: Add a controller with the endpoint

[ApiController]
[Route("api/[controller]")]
public class PricingController : ControllerBase
{
    private readonly DiscountService _pricingService;

    public PricingController(DiscountService pricingService)
    {
        _pricingService = pricingService;
    }

    [HttpGet]
    public IActionResult Get([FromQuery] string type, [FromQuery] decimal amount)
    {
        var discount = _pricingService.GetDiscount(type, amount);
        var final = amount - discount;
        return Ok(new { type = type ?? "regular", amount, discount, final });
    }
}

Step 5: Configure Program.cs

Add the concrete services in dependency injection (DI) in the Program.cs file

services.AddTransient<RegularDiscount>();
services.AddTransient<VipDiscount>();

They are transient because discount strategies are stateless, so creating a new instance each time is fine. Note that I haven't registered them against the IDiscountStrategy interface; the factory delegate registered below decides which concrete implementation to resolve. Hence, the final code will look like this:

using StrategyPatternApi;

var builder = WebApplication.CreateBuilder(args);
var services = builder.Services;

builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

// Register concrete strategy types so they can be resolved by the factory
services.AddTransient<RegularDiscount>();
services.AddTransient<VipDiscount>();

services.AddSingleton<Func<string, IDiscountStrategy>>(sp => key =>
{
    var k = (key ?? "").Trim().ToLowerInvariant();
    return k switch
    {
        "vip" => sp.GetRequiredService<VipDiscount>(),
        // add more cases if you add more strategies
        _ => sp.GetRequiredService<RegularDiscount>()
    };
});

// Register the service that uses the factory
services.AddScoped<DiscountService>();

// Add controllers (or leave for minimal endpoints)
services.AddControllers();

var app = builder.Build();

if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}
app.MapControllers();
app.Run();

In DI, the decisive part is:

services.AddSingleton<Func<string, IDiscountStrategy>>(sp => key =>
{
    var k = (key ?? "").Trim().ToLowerInvariant();
    return k switch
    {
        "vip" => sp.GetRequiredService<VipDiscount>(),
        // add more cases if you add more strategies
        _ => sp.GetRequiredService<RegularDiscount>()
    };
});

As explicitly stated, the switch expression resolves the appropriate concrete strategy via DI based on the type value. If no case matches, I made the default choice to resolve RegularDiscount.

Step 6: Run and test

dotnet run

Now running the project

(screenshots: the /api/Pricing endpoint request and its response)

Extension of algorithms in the ASP.NET Core strategy pattern

The Open/Closed principle is one of the benefits of the Strategy pattern. Let's continue with our example and see how we can add a new discount within the bounds of the Open/Closed principle.

Step 1: Add the Student discount's concrete strategy

public class StudentDiscount : IDiscountStrategy
{
    public decimal ApplyDiscount(decimal amount) => amount * 0.10m;
}

Step 2: Register a new service

services.AddTransient<StudentDiscount>();

Step 3: Update factory switch

services.AddSingleton<Func<string, IDiscountStrategy>>(sp => key =>
{
    var k = (key ?? "").Trim().ToLowerInvariant();
    return k switch
    {
        "vip" => sp.GetRequiredService<VipDiscount>(),
        "student" => sp.GetRequiredService<StudentDiscount>(),   
        _ => sp.GetRequiredService<RegularDiscount>()
    };
});

To add a new strategy implementation, we simply add the new strategy class, register it with DI, and add a case to the factory switch, without touching the existing strategies.

Step 4: Run and test

dotnet run
(screenshots: the /api/Pricing endpoint response, and the response when the default value is used)

Conclusion

Writing long if-else chains or switch cases is tiring. Every time you need a new condition, you have to dive back into the same method and add another branch. The same happens while debugging. The strategy pattern provides a modular solution that keeps existing code intact while allowing you to extend the conditions dynamically. In this blog post, I highlighted the need for the strategy pattern and showed how to implement it in an ASP.NET Core API.

Example 1: https://github.com/elmahio-blog/StrategyPatternDemo

Example 2: https://github.com/elmahio-blog/StrategyPatternApi




Podcast: Platform Engineering for AI: Scaling Agents and MCP at LinkedIn


QCon AI New York Chair Wes Reisz talks with LinkedIn’s Karthik Ramgopal and Prince Valluri about enabling AI agents at enterprise scale. They discuss how platform teams orchestrate secure, multi-agentic systems, the role of MCP, the use of foreground and background agents, improving developer experience, and reducing toil.

By Karthik Ramgopal, Prince Valluri

'Food and Fossil Fuel Production Causing $5 Billion of Environmental Damage an Hour'

An anonymous reader quotes a report from the Guardian: The unsustainable production of food and fossil fuels causes $5 billion of environmental damage per hour, according to a major UN report. Ending this harm was a key part of the global transformation of governance, economics and finance required "before collapse becomes inevitable," the experts said. The Global Environment Outlook (GEO) report, which is produced by 200 researchers for the UN Environment Program, said the climate crisis, destruction of nature and pollution could no longer be seen as simply environmental crises. "They are all undermining our economy, food security, water security, human health and they are also [national] security issues, leading to conflict in many parts of the world," said Prof Robert Watson, the co-chair of the assessment. [...] The GEO report is comprehensive -- 1,100 pages this year -- and is usually accompanied by a summary for policymakers, which is agreed by all the world's countries. However, strong objections by countries including Saudi Arabia, Iran, Russia, Turkey and Argentina to references to fossil fuels, plastics, reduced meat in diets and other issues meant no agreement was reached this time. [...] The GEO report emphasized that the costs of action were much less than the costs of inaction in the long term, and estimated the benefits from climate action alone would be worth $20 trillion a year by 2070 and $100 trillion by 2100. "We need visionary countries and private sector [companies] to recognize they will make more profit by addressing these issues rather than ignoring them," Watson said. [...] One of the biggest issues was the $45 trillion a year in environmental damage caused by the burning of coal, oil and gas, and the pollution and destruction of nature caused by industrial agriculture, the report said. The food system carried the largest costs, at $20 trillion, with transport at $13 trillion and fossil-fuel powered electricity at $12 trillion. These costs -- called externalities by economists -- must be priced into energy and food to reflect their real price and shift consumers towards greener choices, Watson said: "So we need social safety nets. We need to make sure that the poorest in society are not harmed by an increase in costs." The report suggests measures such as a universal basic income, taxes on meat and subsidies for healthy, plant-based foods. There were also about $1.5 trillion in environmentally harmful subsidies to fossil fuels, food and mining, the report said. These needed to be removed or repurposed, it added. Watson noted that wind and solar energy was cheaper in many places but held back by vested interests in fossil fuel. The climate crisis may be even worse than thought, he said: "We are likely to be underestimating the magnitude of climate change," with global heating probably at the high end of the projections made by the Intergovernmental Panel on Climate Change. Removing fossil fuel subsidies could cut emissions by a third, the report said.



Announcing the release of SQL Server Management Studio 22.1


Welcome to December and another release of SQL Server Management Studio (SSMS)! Today we released SSMS 22.1, which has a few improvements related to GitHub Copilot, as well as several fixes for both GitHub Copilot and SSMS.

Please visit the release notes for a full list of what's changed. For those of you looking to try out GitHub Copilot, we now have a walkthrough available that you can access from the Copilot badge (from the dropdown select GitHub Copilot Walkthrough).  And for those of you who are already using GitHub Copilot, we fixed that pesky issue of Run ValidateGeneratedTSQL showing up repeatedly, among a few other things.

 

 

While you're here, I'd like to take a moment to clear up some misconceptions about GitHub Copilot that I’ve seen floating around in blog posts and on social media. I'm covering these in a FAQ type format because I think it's easier (and faster) for folks to read.  If you have specific questions, feel free to add a comment. If you want to see specific features in either SSMS or GitHub Copilot in SSMS, please create feedback items on the site (if you put a request in the comment, I’ll ask you to create it as a feedback item 😊). As always, please search first!

 

Does GitHub Copilot use my prompts and its responses to train or retrain models?

No. Understand that not all models are hosted by Microsoft, but those that are hosted elsewhere also make the same commitment.  This is clearly stated in the GitHub doc covering model hosting and includes links to the hosting provider’s commitment statement.

 

Are prompts and responses encrypted?

Yes. We encourage all customers to visit the GitHub Copilot Trust Center to get answers to questions about privacy and security. 

 

Can I see what queries GitHub Copilot executes?

Yes.  Use Extended Events and filter on an Application Name of Microsoft SQL Server Management Studio – GitHub Copilot.

 

Can I use other models with GitHub Copilot?
Yes. Bring your own model (BYOM) is now available in SSMS 22.1 for personal subscriptions. You should use models that support tool calling.

 

Do I get the same answer from every model in GitHub Copilot?

No. The models available in GitHub Copilot in SSMS are available because they support tool calling, but they are different based on training, reasoning, temperature, and more. As such, responses to prompts can and will vary across models.  

 

Can I attach an execution plan for GitHub Copilot to analyze?

Yes. Either open a saved plan, or after capturing a plan in SSMS (using Query > Include Actual Execution Plan), right-click on the plan and select Show Execution Plan in New Tab. With the query editor tab as the active tab, in the chat window use # to bring up the open document list.  Select the execution plan from the list, then enter your prompt ("analyze the execution plan and make recommendations for improvements").

 

Can GitHub Copilot run destructive commands against my database?

Currently, no.  As of this release (22.1), GitHub Copilot in SSMS only supports Ask mode, and all queries executed against your database are SELECT statements (read-only), as noted here.

We are planning to bring support for Agent mode to SSMS.  With Agent mode, users will be able to allow GitHub Copilot to run queries on their behalf, but it will require them to approve query execution (read/write with approval).

All queries executed by GitHub Copilot execute in the context of the user that is connected, based on their permissions. If a user has permission to run destructive queries (e.g. DELETE, UPDATE, DROP, etc.) and approves the request from GitHub Copilot to execute a destructive query, then the query will run.

We recommend the following: Ensure users have appropriate permissions (principle of least privilege).  This is a standard recommendation for any user that connects to a database.  There are database roles available that allow reading of data and prevent modification of data.

 

What if I don’t want to allow Agent mode?

You can leverage administrative controls for GitHub Copilot to disable Agent mode, once it’s available in SSMS.  Visual Studio, and thus SSMS, use Administrative Templates  for Group Policies.

 

What if I don’t want to allow my users to use GitHub Copilot in SSMS?

Again, Administrative Templates are the answer.  You can disable Copilot SKUs entirely, or for individual accounts.
