Blazor SaaS Starter Kits Compared: When to Choose Brick Starter for Full‑Stack C#


Blazor SaaS starter kits give .NET teams a faster path to multi‑tenant, subscription‑based applications, but they differ a lot in focus, features, and how much they handle beyond UI. Brick Starter sits in the category of full‑stack C# SaaS foundations, combining a Blazor UI option with a feature‑rich ASP.NET Core backend built specifically for SaaS and multi‑tenancy.

Why Blazor SaaS starter kits exist

Blazor lets developers build rich web UIs in C# instead of JavaScript, which is attractive to .NET teams who want full‑stack C# across client and server. However, building a serious SaaS app still demands multi‑tenant architecture, authentication, billing, localization, admin tools, and deployment plumbing—far beyond what “File → New Blazor App” provides.

Blazor‑focused SaaS starter kits exist to package those repetitive capabilities into reusable templates, so teams can start from a running Blazor + ASP.NET Core SaaS skeleton instead of reinventing every infrastructure piece.

Types of Blazor SaaS starter kits

Most Blazor SaaS kits fall into three broad types.

  • Blazor UI‑first templates: focus on page layouts, components, and auth for single‑tenant apps; ideal for internal tools and basic CRUD but light on multi‑tenancy and billing.
  • Blazor‑centric multi‑tenant kits: add tenant awareness, localization, and better authorization on top of Blazor, often with opinionated architectures like Clean Architecture.
  • Full SaaS boilerplates: combine Blazor (optionally among other UIs) with a mature .NET backend that includes tenant management, recurring payments, MFA, email templates, background jobs, and more.

Brick Starter fits into the third category, where the goal is to ship production SaaS, not just a nice Blazor front end.

Notable Blazor SaaS starter kits

Several Blazor‑based SaaS kits are frequently mentioned in .NET and SaaS communities.

  • BlazorPlate: a multi‑tenant and multilingual Blazor template that targets SaaS scenarios with support for Blazor Server and WebAssembly, MudBlazor UI, authentication/authorization, and shared database multi‑tenancy.
  • Clean Architecture‑style Blazor kits (including samples and open templates): focus on DDD, modularity, and clean layering with Blazor front ends, but often require you to add billing, tenant lifecycle, and operational features yourself.
  • Custom Blazor SaaS templates on GitHub and marketplaces: many offer auth, basic roles, and Stripe integration, but coverage of admin, email, localization, and multi‑tenant configuration varies significantly.

These can be excellent for teams comfortable extending infrastructure, but they still expect you to fill gaps, especially around multi‑tenant billing and operations.

Brick Starter: full‑stack C# boilerplate with a Blazor option

Brick Starter is a .NET SaaS boilerplate that supports multiple front‑end stacks—including Blazor—on top of a single, feature‑rich ASP.NET Core backend. The same backend powers Blazor, Angular, React, Vue, Next.js, and Razor, so C# teams can stay in .NET on both client and server while choosing the best UI for each project.

Out of the box, Brick provides SaaS‑critical building blocks:

  • Multi‑tenancy: tenant creation, isolation, subdomain‑based tenant routing, and a full tenant management panel.
  • Authentication and authorization: email, social, and Entra ID sign‑in; role and permission framework; multi‑factor authentication via email OTP and authenticator apps.
  • Billing and subscriptions: integrated Stripe‑based recurring payments with tenant‑level plans and automated handling of renewals, cancellations, and failures.
  • Operational features: email template management, multi‑language UI, database data encryption, background jobs, and admin dashboards for users, tenants, and settings.

All of this ships with full source code so teams can extend patterns, integrate with their own services, and audit everything.

Blazor‑specific benefits in Brick Starter

When you choose the Blazor option in Brick Starter, you get a Blazor front end that is designed to sit on top of that SaaS‑ready backend rather than being a one‑off UI. That means your Blazor components immediately benefit from tenant context, permission checks, billing state, and localization that are already implemented server‑side.

Advantages for full‑stack C# teams include:

  • Single language end‑to‑end: C# for Blazor components, business logic, and backend services, reducing context switching and making it easier to share models and validation.
  • Consistent patterns across clients: if you later add a React or Angular client, they call the same APIs and reuse the same multi‑tenant logic, making Brick a long‑term foundation rather than a Blazor‑only experiment.
  • Faster onboarding: Blazor and .NET developers can work within familiar patterns while leveraging Brick’s opinionated modules for security, tenants, and payments.

How Brick compares to other Blazor SaaS kits

Placed alongside other Blazor SaaS templates, Brick can be summarized like this.

| Kit / template | Primary focus | Multi‑tenant & SaaS depth | Front‑end scope |
| --- | --- | --- | --- |
| BlazorPlate | Blazor‑only multi‑tenant template | Strong Blazor‑centric multi‑tenancy and localization; you add more SaaS ops as needed. | Blazor WebAssembly/Server |
| Clean‑arch Blazor kits | Architecture and code quality | Clean layering; enterprise SaaS features mostly DIY. | Blazor only |
| Custom GitHub Blazor SaaS templates | Niche SaaS use cases or demos | Varies; often Stripe + auth, but limited admin and tenant tooling. | Blazor only |
| Brick Starter (Blazor) | Full SaaS boilerplate with multi‑front‑end support | Tenant management, auth/MFA, Stripe billing, email templates, localization, encryption, admin panels. | Blazor plus Angular, React, Vue, Next.js, Razor |

For teams that want not just a UI template but a reusable SaaS platform, Brick’s broader scope and shared backend architecture are important differentiators.

When to choose Brick Starter for full‑stack C#

Brick Starter is usually the right Blazor SaaS kit when:

  • You want full‑stack C# but do not want to design multi‑tenant, subscription, and security infrastructure yourself.
  • You may need to support additional clients (SPA, mobile, or another JS framework) later, and you want a backend that is already built for that.
  • You are a founder, product team, or agency that needs to standardize on a single .NET SaaS foundation across multiple apps, with predictable architecture and commercial support.

In those cases, Brick Starter’s combination of Blazor front end, multi‑tenant SaaS backend, and full source code makes it a strong choice among Blazor SaaS starter kits for 2026 and beyond.


Taming the Brown Field with F#: Interactive Refactoring for Mature Codebases – Informedica


Introduction

If you’ve read the previous Informedica post on F# Interactive (FSI) and why it’s a powerful tool in your development toolbox, then this post takes the next step. The first post introduced what FSI is and why it’s useful. This post shows how you can harness it practically on a brown-field project such as GenPRES.

Most of us don’t work in green-field environments where code is written fresh and clean. Instead, we struggle with brown-field codebases—systems that have grown over years, accreted complexity, and become mission-critical. Changing code in these environments is often intimidating: any modification risks unexpected consequences somewhere else in the system.

This is exactly where F# Interactive (FSI) becomes a superpower.

Building on that, here I show how you can:

  • Extract and evolve existing production code, not just green-field prototypes
  • Bring entire modules into a script, refactor them interactively
  • Reuse your actual test suite in FSI
  • Validate new features (like parallelism) without risking the main codebase

FSI lets us copy existing production code into a script file, make changes, and evaluate those changes instantly. Instead of performing risky refactors inside the main project, we isolate the code, try new ideas interactively, and only move changes back once we’re confident they work.

Tips & Tricks for Working with F# Interactive (FSI)

Here are some tips and tricks I learned while using FSI for real production work.

1. Always Set the Current Directory

open System

Environment.CurrentDirectory <- __SOURCE_DIRECTORY__


This ensures that relative paths work exactly as expected when loading or referencing other scripts or test assets.

2. Load Project Context Automatically

Use a bootstrap file like:

#load "load.fsx"

The bootstrap file loads all dependent libraries - either locally, from source files or compiled assemblies, or from NuGet as shown below.
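
A minimal load.fsx sketch (the package reference and file paths here are illustrative, not the actual GenPRES bootstrap):

// Reference NuGet (or compiled) dependencies first...
#r "nuget: MathNet.Numerics.FSharp, 5.0.0"
// ...then load local source files in dependency order.
#load "../src/Informedica.GenSolver.Lib/Types.fs"
#load "../src/Informedica.GenSolver.Lib/Solver.fs"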

3. Reference NuGet Packages Inline

FSI handles this gracefully:

#r "nuget: Expecto, 9.0.4"

This lets you run tests, do property checks, and more without a full project file. It’s perfect for lightweight explorations.

4. Copy Only the Code You Need (Incrementally)

Instead of dragging entire modules into a script, start with just the functions you plan to modify or inspect — e.g., in the below example: the solve, solveAll, and helper functions from the Solver module.

5. Reuse Your Existing Tests

You can #load your actual test files (e.g., from your tests directory) and run them directly in FSI, as sketched below. That’s an excellent way to verify behavior before touching the production code.
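
A minimal sketch (the path and test module name are illustrative):

#r "nuget: Expecto, 9.0.4"
#load "../tests/MyLib.Tests/Tests.fs"

open Expecto

// Run the loaded test list inside the live FSI session.
MyLib.Tests.tests |> runTestsWithCLIArgs [] [||]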

6. Modularize Your Script

Break your script into logical regions (helpers, solver refactor, tests) with comments. That makes it easier to jump around interactively.

7. Use Partial Evaluation

FSI lets you evaluate only parts of a script — use this to validate small functions or new approaches without reloading the whole file. Just select part of a script and send it to the FSI.

8. Keep FSI Sessions Alive

Rather than invoking FSI afresh each time, keep a session running so you build up state interactively. This is where tools like MCP integration become even more interesting.

9. Using FSI with MCP (Model Context Protocol) for Enhanced Interactive Workflows

You can go beyond the ordinary FSI experience by integrating it with an MCP-enabled server such as the open-source fsi-mcp-server. This tool is a drop-in replacement for the default fsi.exe that exposes F# Interactive sessions over a Model Context Protocol (MCP) interface. 

This unlocks powerful workflows:

  • AI-assisted interactive sessions: Your AI assistant (e.g., Claude Code, Copilot, or another MCP-compatible agent) can programmatically send code to be evaluated in the same FSI session you’re using, inspect outputs, run tests, and more — without the classic friction of copy-paste or external execution.
  • Seamless IDE integration: Replace your standard FSI executable in your editor (VS Code, Rider, etc.) with the MCP-enabled server, and the AI assistant becomes a first-class collaborative partner in your REPL workflow.
  • Shared REPL state across interfaces: Both you (at the console or editor send-to-REPL) and the AI agent share the same state — the definitions, modules, and script context are visible and modifiable by both sides.

This MCP integration keeps the tight interactive feedback loop of FSI alive even in more connected and collaborative workflows, especially useful when experimenting with large codebases or refactors like those in this blog.


A Real World Example: GenPRES

In the open-source project GenPRES, the Solver.fs module is a great example. It contains the core logic for evaluating product and sum equations—dense, interconnected, and essential. Before modifying it, we want a safe environment where we can experiment.

Below is the relevant part of the original solver.

Original solve Function (excerpt from Solver.fs)

let solve onlyMinIncrMax log sortQue var eqs =

    let solveE n eqs eq =
        try
            Equation.solve onlyMinIncrMax log eq
        with
        | Exceptions.SolverException errs ->
            (n, errs, eqs)
            |> Exceptions.SolverErrored
            |> Exceptions.raiseExc (Some log) errs
        | e ->
            writeErrorMessage $"didn't catch {e}"
            failwith "unexpected"

    let rec loop n que acc =
        match acc with
        | Error _ -> acc
        | Ok acc ->
            let n = n + 1
            if n > (que @ acc |> List.length) * Constants.MAX_LOOP_COUNT then
                (n, que @ acc)
                |> Exceptions.SolverTooManyLoops
                |> Exceptions.raiseExc (Some log) []

            let que = que |> sortQue onlyMinIncrMax

            match que with
            | [] ->
                match acc |> List.filter (Equation.check >> not) with
                | [] -> Ok acc
                | invalid ->
                    invalid
                    |> Exceptions.SolverInvalidEquations
                    |> Exceptions.raiseExc (Some log) []

            | eq :: tail ->
                let q, r =
                    if eq |> Equation.isSolvable |> not then
                        tail, Ok (eq :: acc)
                    else
                        match eq |> solveE n (acc @ que) with
                        | eq, Changed cs ->
                            let vars = cs |> List.map fst
                            acc |> replace vars |> fun (rpl, rst) ->
                            let que =
                                tail
                                |> replace vars
                                |> fun (es1, es2) -> es1 @ es2 @ rpl
                            que, Ok (eq :: rst)

                        | eq, Unchanged ->
                            tail, Ok (eq :: acc)

                        | eq, Errored m ->
                            [], Error (eq :: acc @ que, m)

                loop n q r

    match var with
    | None -> eqs, []
    | Some var -> eqs |> replace [var]
    |> fun (rpl, rst) ->
        match rpl with
        | [] -> Ok eqs
        | _  ->
            rpl |> Events.SolverStartSolving |> Logger.logInfo log
            loop 0 rpl (Ok rst)

This function works—but it’s intricate, highly recursive, and difficult to evolve safely.

Updated Solver in an .fsx Script (focused on differences)

To experiment safely, we moved a local copy of the solver into an F# script and added an alternative to the solving loop:

/// Solve equations in parallel.
/// Still an experimental feature.
/// Parallel distribution is cyclic
let parallelLoop onlyMinIncrMax log sortQue n rpl rst =

    let solveE n eqs eq =
        try
            Equation.solve onlyMinIncrMax log eq
        with
        | Exceptions.SolverException errs ->
            (n, errs, eqs)
            |> Exceptions.SolverErrored
            |> Exceptions.raiseExc (Some log) errs
        | e ->
            let msg = $"didn't catch {e}"
            writeErrorMessage msg

            msg |> failwith

    let rec loop n que acc =
        match acc with
        | Error _ -> acc
        | Ok acc  ->
            let n = n + 1
            let c = que @ acc |> List.length
            if c > 0 && n > c * Constants.MAX_LOOP_COUNT then
                writeErrorMessage $"too many loops: {n}"

                (n, que @ acc)
                |> Exceptions.SolverTooManyLoops
                |> Exceptions.raiseExc (Some log) []

            match que with
            | [] ->
                match acc |> List.filter (Equation.check >> not) with
                | []      -> acc |> Ok
                | invalid ->
                    writeErrorMessage "invalid equations"

                    invalid
                    |> Exceptions.SolverInvalidEquations
                    |> Exceptions.raiseExc (Some log) []

            | _ ->
                let que, acc =
                    que
                    |> List.partition Equation.isSolvable
                    |> function
                    | que, unsolv -> que, unsolv |> List.append acc
                // make sure that the equations with the lowest cost
                // are prioritized
                let que = que |> sortQue onlyMinIncrMax
                // apply parallel equation solving to the
                // first number of optimal parallel workers 
                let rstQue, (rpl, rst) =
                    let queLen = que |> List.length
                    // calculate optimal number of workers
                    let workers =
                        if Parallel.totalWorders > queLen then queLen
                        else Parallel.totalWorders
                    // return remaining que and calculate
                    // in parallel the worker que
                    if workers >= queLen then []
                    else que |> List.skip workers
                    ,
                    que
                    |> List.take workers
                    |> List.map (fun eq ->
                        async {
                            return eq |> solveE n (acc @ que)
                        }
                    )
                    |> Async.Parallel
                    |> Async.RunSynchronously
                    |> Array.toList
                    |> List.partition (snd >> function | Changed _ -> true | _ -> false)

                let rst, err =
                    rst
                    |> List.partition (snd >> function | Errored _ -> true | _ -> false)
                    |> function
                    | err, rst ->
                        rst |> List.map fst,
                        err
                        |> List.choose (fun (_, sr) -> sr |> function | Errored m -> Some m | _ -> None)
                        |> List.collect id

                if err |> List.isEmpty |> not then (que |> List.append acc, err) |> Error
                else
                    let rpl, vars =
                        rpl
                        |> List.unzip

                    let vars =
                        vars
                        |> List.choose (function
                            | Changed vars -> Some vars
                            | _ -> None
                        )
                        |> List.collect id
                        |> List.fold (fun (vars : Variable list) (var, _) ->
                            match vars |> List.tryFind (Variable.eqName var) with
                            | None -> var::vars
                            | Some v ->
                                let vNew = v |> Variable.setValueRange var.Values
                                vars |> List.replace (Variable.eqName vNew) vNew
                        ) []
                    // make sure that vars are updated with changed vars 
                    // in the remaining que
                    let rstQue = 
                        rstQue
                        |> replace vars
                        |> function
                        | es1, es2 -> es1 |> List.append es2
                    // calculate new accumulator and que
                    let acc, que =
                        acc
                        |> List.append rst
                        |> replace vars
                        |> function
                        | es1, es2 ->
                            es2 |> Ok,
                            es1
                            |> List.append rpl
                            |> List.append rstQue

                    loop n que acc

    loop n rpl rst

By copying the solver module into a script:

  • You can refactor or extend the code incrementally without touching production.
  • You can run the solver interactively on sample equations.
  • You can introduce features (like parallelism) without fear.
  • You gain instant feedback from FSI.
  • When satisfied, you copy the improved code back into the main project.

Running the Existing Tests from the Same Script

The really nice part is that you don’t just get to run ad-hoc experiments: you can also pull in your existing test suite and reuse it directly from the script.

In the same .fsx file, we load the existing tests from the main solution and define a small test module that compares the behavior of the sequential and parallel solvers:

// load the existing tests
#load "../../../tests/Informedica.GenSOLVER.Tests/Tests.fs"

open MathNet.Numerics
open Expecto
open Expecto.Flip
open Informedica.GenSolver.Tests
open Informedica.GenUnits.Lib

module Tests =

    module ParallelTests =

        open Informedica.Logging.Lib
        open Informedica.GenSolver.Lib

        let logger =
            fun (_ : string) -> ()
            |> SolverLogging.create

        /// Solve equations sequentially
        let solveSequential onlyMinMax eqs =
            Informedica.GenSolver.Lib.Solver.solveAll
                false       // useParallel = false (sequential)
                onlyMinMax
                logger
                eqs

        /// Solve equations in parallel
        let solveParallel onlyMinMax eqs =
            Informedica.GenSolver.Lib.Solver.solveAll
                true        // useParallel = true
                onlyMinMax
                logger
                eqs

        /// Helper to compare equation results
        let eqsAreEqual eqs1 eqs2 =
            match eqs1, eqs2 with
            | Ok eqs1, Ok eqs2 ->
                let s1 =
                    eqs1
                    |> List.map (Equation.toString true)
                    |> List.sort
                let s2 =
                    eqs2
                    |> List.map (Equation.toString true)
                    |> List.sort
                s1 = s2
            | Error _, Error _ -> true
            | _ -> false

        let mg = Units.Mass.milliGram
        let day = Units.Time.day
        let kg = Units.Weight.kiloGram
        let mgPerDay = CombiUnit(mg, OpPer, day)
        let mgPerKgPerDay = (CombiUnit (mg, OpPer, kg), OpPer, day) |> CombiUnit

        let tests = testList "Parallel vs Sequential Solving" [

            test "simple product equation gives same results" {
                let eqs =
                    [ "a = b * c" ]
                    |> TestSolver.init
                    |> TestSolver.setMinIncl Units.Count.times "a" 1N
                    |> TestSolver.setMaxIncl Units.Count.times "a" 100N
                    |> TestSolver.setMinIncl Units.Count.times "b" 1N
                    |> TestSolver.setMaxIncl Units.Count.times "b" 10N
                    |> TestSolver.setMinIncl Units.Count.times "c" 1N
                    |> TestSolver.setMaxIncl Units.Count.times "c" 10N

                let seqResult = eqs |> solveSequential true
                let parResult = eqs |> solveParallel true

                eqsAreEqual seqResult parResult
                |> Expect.isTrue "sequential and parallel should give same results"
            }

            test "sum equation gives same results" {
                let eqs =
                    [ "total = a + b + c" ]
                    |> TestSolver.init
                    |> TestSolver.setMinIncl Units.Count.times "total" 10N
                    |> TestSolver.setMaxIncl Units.Count.times "total" 100N
                    |> TestSolver.setMinIncl Units.Count.times "a" 1N
                    |> TestSolver.setMaxIncl Units.Count.times "a" 50N
                    |> TestSolver.setMinIncl Units.Count.times "b" 1N
                    |> TestSolver.setMaxIncl Units.Count.times "b" 50N
                    |> TestSolver.setMinIncl Units.Count.times "c" 1N
                    |> TestSolver.setMaxIncl Units.Count.times "c" 50N

                let seqResult = eqs |> solveSequential true
                let parResult = eqs |> solveParallel true

                eqsAreEqual seqResult parResult
                |> Expect.isTrue "sequential and parallel should give same results for sum"
            }

            test "multiple equations give same results" {
                let eqs =
                    [ "ParacetamolDoseTotal = ParacetamolDoseTotalAdjust * Adjust" ]
                    |> TestSolver.init
                    |> TestSolver.setMinIncl mgPerDay "ParacetamolDoseTotal" 180N
                    |> TestSolver.setMaxIncl mgPerDay "ParacetamolDoseTotal" 3000N
                    |> TestSolver.setMinIncl mgPerKgPerDay "ParacetamolDoseTotalAdjust" 40N
                    |> TestSolver.setMaxIncl mgPerKgPerDay "ParacetamolDoseTotalAdjust" 90N
                    |> TestSolver.setMaxIncl kg "Adjust" 100N

                let seqResult = eqs |> solveSequential true
                let parResult = eqs |> solveParallel true

                eqsAreEqual seqResult parResult
                |> Expect.isTrue "sequential and parallel should give same results for complex equation"
            }

            test "chained equations give same results" {
                let eqs =
                    [
                        "x = a * b"
                        "y = x * c"
                        "z = y * d"
                    ]
                    |> TestSolver.init
                    |> TestSolver.setMinIncl Units.Count.times "a" 1N
                    |> TestSolver.setMaxIncl Units.Count.times "a" 10N
                    |> TestSolver.setMinIncl Units.Count.times "b" 1N
                    |> TestSolver.setMaxIncl Units.Count.times "b" 10N
                    |> TestSolver.setMinIncl Units.Count.times "c" 1N
                    |> TestSolver.setMaxIncl Units.Count.times "c" 10N
                    |> TestSolver.setMinIncl Units.Count.times "d" 1N
                    |> TestSolver.setMaxIncl Units.Count.times "d" 10N
                    |> TestSolver.setMinIncl Units.Count.times "z" 1N
                    |> TestSolver.setMaxIncl Units.Count.times "z" 10000N

                let seqResult = eqs |> solveSequential true
                let parResult = eqs |> solveParallel true

                eqsAreEqual seqResult parResult
                |> Expect.isTrue "sequential and parallel should give same results for chained equations"
            }

            test "with value sets gives same results" {
                let eqs =
                    [ "dose = qty * freq" ]
                    |> TestSolver.init
                    |> TestSolver.setMinIncl Units.Count.times "dose" 10N
                    |> TestSolver.setMaxIncl Units.Count.times "dose" 100N
                    |> TestSolver.setValues Units.Count.times "freq" [1N; 2N; 3N; 4N]
                    |> TestSolver.setMinIncl Units.Count.times "qty" 1N
                    |> TestSolver.setMaxIncl Units.Count.times "qty" 50N

                let seqResult = eqs |> solveSequential false  // use full solving with value sets
                let parResult = eqs |> solveParallel false

                eqsAreEqual seqResult parResult
                |> Expect.isTrue "sequential and parallel should give same results with value sets"
            }
        ]

        let run () =
            tests
            |> runTestsWithCLIArgs [] [| "--summary" |]

With this in place, you can now:

  • Edit the solver implementation in the same script.
  • Run Tests.ParallelTests.run() in FSI.
  • Immediately verify that your new parallel implementation behaves the same as the existing sequential one across realistic scenarios taken from your domain.

This is the essence of taming a brown field with F#: you don’t just change code—you bring the code, the tests, and the runtime together in a single interactive environment, and let FSI give you fast, tight feedback loops while you evolve a mature codebase.

Conclusion: From Green Field Ideals to Brown Field Reality

Green-field projects are where we learn languages and frameworks. Brown-field projects are where we actually use them.

In a green-field setting, everything is possible: you control the architecture, the abstractions, and the direction of the code. In brown-field systems—like GenPRES—you inherit real constraints: production behavior that must not change, performance characteristics that matter, and code that has accumulated knowledge over time. The challenge is not writing new code, but changing existing code safely.

This is where F# Interactive truly distinguishes itself.

By lifting real production modules into an .fsx script, you create a safe experimental zone inside a mature system. You can refactor, optimize, and even introduce new execution models—such as parallel solving—while continuously validating behavior against the actual test suite. The feedback loop is immediate, the risk is contained, and confidence grows with every evaluated expression.

Seen in this light, FSI is not just a REPL or a learning tool. It is a brown-field engineering instrument.

If the earlier Informedica post showed why FSI is such a powerful tool for exploration and understanding, this follow-up shows how to apply it pragmatically to evolve a real, non-trivial codebase. And with newer approaches—such as running FSI through an MCP-enabled server—you can even extend that interactive loop to include AI-assisted workflows without sacrificing the shared state and immediacy that make FSI so effective.

Brown-field development doesn’t have to mean slow, risky, or opaque change. With F#, FSI, and a disciplined interactive workflow, it becomes iterative, observable, and surprisingly enjoyable.

That, ultimately, is how you tame the brown field.


Dynamically Changing Decimal & Thousand Separators At Runtime


This article is part of the 2025 C# Advent Calendar.

I participated in the 2024 C# Advent and posted “Customizing Object Equality In C# & .NET” and “Caching In .NET Applications & The Hybrid Cache”.

Localization and other regional concerns are among the problems you will frequently run into when developing applications for use across different countries and locales.

Take, for example, this scenario.

You have a simple API that generates an export of some data, in a custom format dictated by the client.

The type the API deals with is as follows:

public sealed class Person
{
  public required string FirstName { get; init; }
  public required string Surname { get; init; }
  public required decimal Salary { get; init; }
}

This is exposed via a simple API as follows:

using System.Text;
using Bogus;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/Generate", () =>
{
    var faker = new Faker<Person>().UseSeed(0)
        .RuleFor(person => person.FirstName, faker => faker.Person.FirstName)
        .RuleFor(person => person.Surname, faker => faker.Person.LastName)
        .RuleFor(person => person.Salary, faker => faker.Random.Decimal(10_000, 99_000));
    var sb = new StringBuilder();
    foreach (var person in faker.Generate(10).ToList())
    {
        sb.AppendLine($"| {person.FirstName} | {person.Surname} | {person.Salary.ToString("0.00")} |");
    }

    return Results.Text(sb.ToString());
});

app.Run();

Here I am using the Bogus library to generate 10 Person instances, and I am using a seed of 0 so we always get the same results. (You can use any other number as a seed.)

If we run this code (here I am using the Yaak client to call the API), we get the following results:

| Ernestine | Runte | 25142.71 |
| Angelina | Erdman | 44455.19 |
| Debbie | Lakin | 96077.93 |
| Pat | Hane | 31505.43 |
| Betsy | King | 11155.94 |
| Theodore | Spencer | 13234.54 |
| Steven | Larson | 14903.13 |
| Michele | Zulauf | 73789.01 |
| Sophie | Pfannerstill | 87684.21 |
| Lola | West | 19545.73 |

Suppose the consumer then gives you the following information:

We have recently onboarded a new vendor for an upstream system, and they require the salary to use the comma as the decimal separator and the space as the thousand separator.

There are a number of solutions to this problem.

  1. Find an existing culture that uses this format and use that in our code.
  2. Explicitly write code to do this heavy lifting for us.

Use an existing culture

From a quick search, the following countries indeed use this format:

  • France
  • Russia
  • Sweden
  • Finland
  • Norway

Let us pick France.

We can update our code to use the French locale internally like this:

app.MapGet("/v2/Generate", () =>
{
  // Create an instance of the french locale
  var french = new CultureInfo("fr-FR");
  var faker = new Faker<Person>().UseSeed(0)
    .RuleFor(person => person.FirstName, faker => faker.Person.FirstName)
    .RuleFor(person => person.Surname, faker => faker.Person.LastName)
    .RuleFor(person => person.Salary, faker => faker.Random.Decimal(10_000, 99_000));
  var sb = new StringBuilder();
  foreach (var person in faker.Generate(10).ToList())
  {
      // Format the salary
      sb.AppendLine($"| {person.FirstName} | {person.Surname} | {person.Salary.ToString("0,0.00", french)} |");
  }

  return Results.Text(sb.ToString());
});

This will output the salaries with a space as the thousand separator and a comma as the decimal separator - for example, 25 142,71 instead of 25142.71.

So far, so good.

There are, however, some problems with this approach.

  1. If the separators change, you will need to find another locale that meets those requirements.
  2. Even if the separators remain the same, if additional fields are added in the future, like Date and Time, you are stuck with the French date and time formatting.

A better option is to do it ourselves.

Explicit custom formatting

Here we have two options:

  1. Create a completely new locale and configure it as appropriate.
  2. Take an existing locale and change just the bits that we need.

The second option is simpler.

First, we will need to create a class that stores our settings.

// Requires 'using System.ComponentModel.DataAnnotations;' at the top of the file.
public sealed class Settings
{
    [Required(AllowEmptyStrings = true)] public string DecimalSeparator { get; set; } = ".";
    [Required(AllowEmptyStrings = true)] public string ThousandSeparator { get; set; } = ",";
}

We are adding the Required attribute so that the runtime will validate that they are provided at startup. We are setting AllowEmptyStrings to true; otherwise, the runtime will reject a space.

Next, we configure our ASP.NET pipeline to register this class as an injectable option.

var builder = WebApplication.CreateBuilder(args);
// Register our settings
builder.Services.AddOptions<Settings>()
    .Bind(builder.Configuration.GetSection(nameof(Settings)))
    .ValidateDataAnnotations()
    .ValidateOnStart();

Next, we update our endpoint signature to inject the options.

app.MapGet("/v3/Generate", (IOptions<Settings> options) =>
{
  // Fetch the settings into a variable
  var settings = options.Value;
  //
  // Code
  //
});

Next, we update the endpoint body to clone an existing locale and configure our behaviour using the injected Settings.

app.MapGet("/v3/Generate", (IOptions<Settings> options) =>
{
  var settings = options.Value;
  // We are cloning an existing one instead of creating a new one
  // to avoid the need to specify all the settings.
  var numberFormatInfo = (NumberFormatInfo)CultureInfo.InvariantCulture.NumberFormat.Clone();
  // Set the formats
  numberFormatInfo.NumberDecimalSeparator = settings.DecimalSeparator;
  numberFormatInfo.NumberGroupSeparator = settings.ThousandSeparator;
  var faker = new Faker<Person>().UseSeed(0)
    .RuleFor(person => person.FirstName, faker => faker.Person.FirstName)
    .RuleFor(person => person.Surname, faker => faker.Person.LastName)
    .RuleFor(person => person.Salary, faker => faker.Random.Decimal(10_000, 99_000));
  var sb = new StringBuilder();
  foreach (var person in faker.Generate(10).ToList())
  {
    sb.AppendLine(
    $"| {person.FirstName} | {person.Surname} | {person.Salary.ToString("0,0.00", numberFormatInfo)} |");
  }

  return Results.Text(sb.ToString());
});

Finally, we update the appsettings.json to add our new settings.

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "Settings": {
    "DecimalSeparator": ".",
    "ThousandSeparator": ","
  },
  "AllowedHosts": "*"
}

The Settings object is the new section.

If we run this code, we should get the same result as before.

The benefit of this technique is that if we subsequently need to handle the formatting of things like dates, the same technique can be leveraged, only this time we will be modifying the DateTimeFormatInfo.
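
For example, here is a minimal sketch of the same clone-and-customize approach applied to dates (the pattern chosen is illustrative):

var dateTimeFormat = (DateTimeFormatInfo)CultureInfo.InvariantCulture.DateTimeFormat.Clone();
dateTimeFormat.ShortDatePattern = "dd-MM-yyyy";

// Produces "15-03-2025", regardless of the host machine's locale.
var formatted = new DateTime(2025, 3, 15).ToString("d", dateTimeFormat);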

TLDR

You can control the formatting of values for exotic requests very flexibly by modifying existing locales to suit your purposes.

The code is on my GitHub.

Happy hacking!


16 Tips for Writing AI-Ready C# Code


This article is my entry as part of C# Advent 2025. Visit CSAdvent.Christmas for more articles in the series by other authors.

Over the past year, I’ve been using an increasing amount of AI assistance across various tools to write and refine C# codebases. I wanted to distill into an article some of the key takeaways from that work, along with insights gleaned from several AI workshops I led this week with Leading EDJE, to help you write and maintain C# codebases that are optimized for AI agents.

In this article we’ll walk through 16 of my current best practices for building C# codebases that are as easy as possible to use from AI tools like Copilot, Cursor, or Claude Code. These tips progress from foundational concepts that could apply to any language, to structural patterns relevant to C# applications, to very specific C# language and library capabilities that help decrease the amount of time AI agents spend churning while trying to generate solutions.

Editing note: don’t worry; I’m not turning into one of those clickbait list people. I came up with my list of points while outlining this article, discovered I had 14, found an excuse for a 15th while writing it, then found an additional one during final edits. I remain committed to quality content authored by human hands (with a little AI assistance thrown in for editing).

Foundational

Let’s start by discussing some foundational ways of improving your experience working with AI agents in .NET codebases at a high level, regardless of which AI tooling you’re using.

Tuning behavior through Agents.md

A common standard for AI tooling is to use an Agents.md file located in the root of your repository. This file should contain common instructions that the AI agent should receive for any request you might make of the agent.

A sample Agents.md file might start something like this:

You are a senior .NET developer and designer skilled in writing reliable, testable C# code to solve a variety of problems in a modern production system. Your code should be easy to understand and easy to test. You are competent in a variety of libraries and languages and follow established .NET best practices.

If you’re not comfortable with writing your own Agents.md file, I recommend looking at community curated ones on Cursor.Directory or searching for an Agents.md file you like on GitHub (provided its repository’s license meets your needs).

The role of your Agents.md file is to set the stage for good interactions with your agent for a variety of common tasks. The agents file is one that you’ll likely want to reuse between different repositories so it shouldn’t contain customizations for your specific project - that’s something that more suitably belongs in a README.md file or in custom context rules as we’ll discuss later.

Documenting common operations in your README.md

Many AI agents will look at your README.md file when getting oriented on your code. README.md files often contain common context on what your codebase represents as well as standard practices for configuring, building, deploying, and testing your application.

I strongly recommend that your README.md file include a brief section on building and testing your application, for example:

This application relies on .NET 10 and can be built using dotnet build. Unit tests can be run via dotnet test; it is important to run all tests in the solution this way before considering any change complete.

While this may seem trivial and common sense, I’ve seen enough instances where models will not attempt to build the code, will use incorrect syntax, or - more frequently - will omit running tests or will not run ALL tests - just ones related to their specific change.

One practice you may want to consider with your README.md file is to keep it as small and focused as possible and to have it link to additional markdown files in the repository as needed. For example, if new developers need to install and configure a number of dependencies when setting up the repository, that’s important information to document, but it’s going to be extra noise to your AI agent because they’ll be operating in a pre-configured environment (either your local machine or a cloud agent configured via dockerfile). Take these extra configuration steps and put them into a GettingStarted.md file, then link to this file from your README.md.

You can do the same practice with larger pieces of content as well, such as detailed architecture or database schema information. By including links to the content in your README.md you keep your file small and focused while enabling the AI agent to discover and search those files if it feels their contents could be relevant to the tasks they’re trying to achieve.

Warnings, Analyzers, and Errors

Compiler warnings are not “pieces of flair” for your codebase. Rather, your compiler warnings should indicate that something is potentially wrong with something in your codebase and should be addressed.

What I’ve found is that if I have a codebase with many pre-existing warnings, AI agents will do what developers do and ignore warnings on existing code, missing any warnings introduced by their changes as well.

When warnings are addressed (which AI can help with), your AI agents are more likely to pay attention to them, but not guaranteed to do so. If you absolutely want your AI agents to pay attention to warnings, you can always modify your projects to treat C# warnings as compiler errors, but you’ll need to get to 0 warnings for the project in order to do so, and this may make developers on your team less productive and more grumpy in the short term. You can always omit certain warnings as well. See the Microsoft documentation for more details on the various compiler options for handling all or specific warnings as errors and muting warnings your team doesn’t care about.
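
As a minimal sketch, a project (or a shared Directory.Build.props) can opt in like this; the excluded warning ID below is just an example:

<PropertyGroup>
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
  <!-- Keep selected warnings (e.g., missing XML doc comments) from failing the build. -->
  <WarningsNotAsErrors>CS1591</WarningsNotAsErrors>
</PropertyGroup>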

You can augment this further by adding code analyzers that will identify additional potential issues and standards deviations in your codebase, helping AI self-govern and helping your team identify and correct problems when AI fails to do so itself. Analyzers you might consider include StyleCop.Analyzers, Roslynator, and SonarAnalyzer.CSharp.

Depending on the libraries your team works with there may be some additional analyzers to consider and include as well.

Formatting AI code

Some AI models and tools generate code that is oddly formatted and does not match the rest of your codebase. These code files might have no indentation whatsoever or deviate from your team’s standards for curly brace placement or other common practices you’ve adopted. This can make AI generated code feel even more alien in your codebase.

You can combat this using formatting tools baked into your IDEs or through standardized commands such as dotnet format.

You can even set a pre-commit hook in git to run dotnet format before a commit is created, or run it as part of the build by adding a custom step in Directory.Build.targets as shown here:

<Project>
  <!--
    Auto-formatting during build.
    Set EnableAutoFormat=false to disable auto-formatting (e.g., in CI/CD pipelines where formatting is checked separately).
  -->
  <PropertyGroup>
    <EnableAutoFormat Condition="'$(EnableAutoFormat)' == ''">true</EnableAutoFormat>
    <_FormatLockFile>$(MSBuildThisFileDirectory).formatting.lock</_FormatLockFile>
  </PropertyGroup>

  <!--
    Format code before compilation at the solution level.
    This ensures all code is consistently formatted according to .editorconfig rules.
    Uses a lock file to ensure formatting only runs once per build session.
  -->
  <Target Name="FormatCode" 
          BeforeTargets="Build" 
          Condition="'$(EnableAutoFormat)' != 'false' AND !Exists('$(_FormatLockFile)')">
    <Touch Files="$(_FormatLockFile)" AlwaysCreate="true" />
    <Message Text="Formatting C# code files in solution..." Importance="normal" />
    <Exec Command="dotnet format --include-generated --verbosity minimal --no-restore" 
          WorkingDirectory="$(MSBuildThisFileDirectory)"
          ContinueOnError="true" 
          IgnoreExitCode="true" />
  </Target>

  <!--
    Clean up the lock file after build completes.
    This ensures formatting can run again in the next build.
  -->
  <Target Name="CleanupFormatLock" 
          AfterTargets="AfterBuild"
          Condition="Exists('$(_FormatLockFile)')">
    <Delete Files="$(_FormatLockFile)" ContinueOnError="true" />
  </Target>
</Project>

Note: if you follow this example, make sure the lock file is ignored in your .gitignore file.

By taking a proactive approach and ensuring that you either manually format your code using tools or have code auto-formatted on build or commit, you ensure that the code in your solution meets your team’s standards - even when an AI team member writes it.

Structural patterns for AI assisted C# development

Now that we’ve covered some of the broader and more generic foundational aspects of AI assisted development, let’s dive into the realm of .NET code by talking about broader patterns in C# code that can help AI agents across your codebase.

Global using statements as a safety net for AI agents

One problem I often encounter with AI agents is that they generate code that is correct but fails to bring in the using statements to support it. Agents will either miss this entirely or spend cycles (and tokens) running builds and trying to identify and resolve the issue. You can circumvent this in many cases by adding a C# file that contains your global using statements.

This file is often named GlobalUsings.cs, is typically placed in a project’s root directory, and contains entries like this:

global using System.Diagnostics;
global using System.Diagnostics.Metrics;
global using System.Text;
global using System.Text.Json;
global using Microsoft.Agents.AI;
global using Microsoft.EntityFrameworkCore;
global using Microsoft.Extensions.AI;
global using Microsoft.Extensions.Logging;

Because you’ve centralized using statements into a single shared location within each project, AI agents no longer need to worry about including using statements when bringing capabilities into each file. Sometimes your AI agents will need to add namespaces that are not yet part of your global using statements, but that’s a less frequent occurrence than the case I observe where a using statement you already employ elsewhere in your code is now needed in a different file.

One caution here: namespaces exist for a reason, and the more namespaces you include in your GlobalUsings.cs file, the greater the chance that you’ll bump into new compiler errors due to types sharing the same name but living in different namespaces. In these cases you (or your AI agents) will need to disambiguate between the two and perhaps specify that decision through a using statement as well.
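
When that happens, a using alias resolves the collision; here is a minimal sketch assuming a hypothetical Timer clash:

// Pick the System.Threading variant wherever "Timer" appears in the project.
global using Timer = System.Threading.Timer;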

Directory-based package management

On a related note, if your solution has many projects that all rely on similar NuGet packages, you may benefit from centralized package version management in a Directory.Packages.props file.

These files allow you to specify the version of a dependency in a single file and free up your individual projects to simply state that they depend on this dependency without having to specify which version of it they depend on.

Here’s a snippet from a Directory.Packages.props file for reference:

<Project>
  <PropertyGroup>
    <ManagePackageVersionsCentrally>true</ManagePackageVersionsCentrally>
  </PropertyGroup>
  <ItemGroup>
    <PackageVersion Include="Aspire.Hosting" Version="13.0.2" />
    <PackageVersion Include="Aspire.Hosting.Azure" Version="13.0.2" />
    <PackageVersion Include="Aspire.Hosting.PostgreSQL" Version="13.0.2" />
    <PackageVersion Include="Aspire.Hosting.AppHost" Version="13.0.2" />
    <PackageVersion Include="Aspire.Hosting.Testing" Version="13.0.2" />
  </ItemGroup>
</Project>
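
On the project side, individual references then omit the version entirely; here is a sketch of what a .csproj entry looks like under central package management:

<ItemGroup>
  <!-- The version is resolved from Directory.Packages.props -->
  <PackageReference Include="Aspire.Hosting" />
</ItemGroup>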

Centralizing package management prevents issues that could arise from one project relying on an old version of a dependency and then an AI agent adding a NuGet reference to a different version of that package, resulting in inconsistencies and version mismatches throughout your project.

If you do find yourself moving to centralized package management, I do strongly recommend you define a NuGet.config file for your solution as well, like the following one:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <clear />
    <add key="<a href="http://nuget.org" rel="nofollow">nuget.org</a>" value="<a href="https://api.nuget.org/v3/index.json" rel="nofollow">https://api.nuget.org/v3/index.json</a>" protocolVersion="3" />
  </packageSources>
</configuration>

By explicitly declaring the package source your project depends on, you eliminate potential issues running your code on machines that have more than one package source installed on them.

Standardized dependency injection discovery with Scrutor

If I had a nickel for every time AI created a new service and interface definition and then failed to register that service correctly for dependency injection somewhere, I’d have a lot of nickels.

While you can try to mitigate AI making these mistakes via integration tests or specialized prompts, I’ve found an easier path is often to use convention-based service discovery via a library like Scrutor.

Scrutor lets you define rules for discovering and registering services in a service collection and how those services are registered and defined.

Here’s a sample Scrutor invocation from a project I’m working on at the moment:

services.Scan(scan => scan
     .FromAssemblyOf<GameService>()
     .AddClasses(classes => classes.InNamespaceOf<GameService>())
     .AsSelfWithInterfaces()
     .WithScopedLifetime());

This code finds all classes in a specific services namespace and then registers those classes as scoped instances in the dependency injection container. This example does this not just for the classes themselves but for all the interfaces each service implements.

The net result of this relatively straightforward block of code is that if I come along and add a new interface and new service class in a standardized area I’ve configured Scrutor to look at, Scrutor will pick up the new definitions without requiring any additional dependency injection configuration. This means that AI (and our developers) can just focus on following established patterns and don’t need to worry about forgetting DI registration on their new services.

Prefer single-file components

One thing I’ve noticed about AI tooling is that it doesn’t always pull in all the files relevant to a task unless I’ve specifically mentioned them or referenced their existence.

For example, an AI agent may understand that my Blazor components have .razor files, but it may not think of checking the related .razor.cs or .razor.css files for logic or styling. However, if I instead keep all of my logic in the same .razor file, AI agents will look through that full file when they pull in the file for analysis or modification and will be more likely to update the relevant code behind or styling related to the visual layout.

Depending on how much you find AI struggling with your files and components, it may be worth moving away from these separate files and more towards single-file components which can be easier for AI agents to ingest and process. Of course, doing so also increases the size of those files and increases the amount of context heading the agent’s way, which is not always a good thing. However, if you do have files that are simply growing too large, this could be a good motivator to pull logic out into other components or into shared helper methods elsewhere.

Providing additional context for specific areas of your code

Some AI tools such as Cursor allow you to define additional context for specific areas of your code, whether that’s directories, file types, or wildcard pattern matching.

While different tools have different ways of doing this and different levels of capabilities, I’ve found that this capability is very helpful for providing additional instructions that are only relevant for certain areas of code.

For example, you might use your AI toolset to define additional context for things like:

  • Directing tests to use the Arrange / Act / Assert pattern, Shouldly for assertions, and use InlineData where possible to combine different tests into the same test method.
  • Defining patterns and anti-patterns for things like Entity Framework - as well as reinforcing commands for common operations like adding new migrations.
  • Detailing your requirements for documentation, error handling, authentication, rate limiting, and paths on controller endpoints.

To help illustrate this concept, here’s a sample Cursor rule that gets applied to any .razor file:

---
globs: *.razor
alwaysApply: false
---

Pages that have interactive components must include a @rendermode InteractiveAuto directive.

Using this rule we can add in an extra piece of context only when it is likely to be needed (when working with .razor files) and otherwise omit it, keeping our agents focused on only the most relevant context.

While different tools have different capabilities, syntax, and approaches, I am overall very excited about the ability to gain additional control over our AI agents working in our codebases.

C# language features

In this final section we’ll look at some C# language features that make things easier for AI to get right - or easier for them to discover mistakes they’ve made in early attempts.

Prefer the var keyword

If you’ve ever worked with me or read some of my prior articles or books, you may have noticed how I tend to avoid the var keyword in C# and instead favor the C# target-typed new syntax with lines like this:

GameBuilder builder = new();

Or, when working with a method result:

GameBuilder builder = GameFactory.CreateBuilder();

While I think this is great for human readability, I’ve noticed that AI agents struggle more when having to specify the exact type name for something - particularly when multiple generic type parameters are involved. This relates somewhat to the point on using statements earlier in that one of the common issues they introduce is failing to add the right using statement for the type they reference.

By using var instead of using the exact type name, it makes it easier for your AI agents to generate compliant code and make structural changes in a codebase, since the logic is simplified down to lines like the following:

var builder = GameFactory.CreateBuilder();

I personally do not like this style because it obscures the type of object being created. While most IDEs will show you the inferred type as a tooltip or as an extra label on the editor surface, this is not yet present in most code review or diffing tools, meaning this type context - including possible nullability - is potentially lost during code review.

All the same, in my observation, var is more efficient for AI agents making changes to code, so it’s up to you and your team to determine if you optimize for human readability in code review or in terms of development or maintenance time productivity.

Required keyword and nullability analysis

In C# you can define properties as required using syntax like this:

public required string Name { get; init; }

This line of code states that the object in question has a Name property that is a string, that string is non-nullable (otherwise it’d be a string?), the value can only be set during object initialization (init keyword), and that it is not legal to instantiate the object in question without providing the Name property (the required keyword).

While I’ve been defining more and more objects with immutable required properties like these anyway from a quality perspective, I’ve noticed that the C# compiler’s enforcement of the required keyword is really helpful for AI systems, as they’ll try to instantiate objects without fully populating their properties (even if the agent created the property to begin with), but the presence of a compiler error related to the property helps anchor the agent towards working solutions and prevents certain types of defects from even being possible.

While you don’t need to use init-only properties or nullability analysis to take advantage of the required keyword, I’ve found myself generally happier and more productive when working with immutable objects than I am when I work with objects that may slowly mutate into bad states over time if not carefully controlled.

Nowadays nullability analysis should be your default for new projects, and you should work to bring older .NET solutions into compliance bit by bit over time. Knowing whether something may be null can save you from writing tedious boilerplate code in many places and help prevent critical exceptions in others.
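If you're enabling the analysis in a single file rather than project-wide, a minimal sketch looks like this (FindDescription is a hypothetical method that may return null):

#nullable enable

string? description = FindDescription(42);   // declared as possibly null
Console.WriteLine(description.Length);       // warning CS8602: dereference of a possibly null reference
Console.WriteLine(description?.Length ?? 0); // null-safe alternative the analyzer accepts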

Using the with keyword for modifying records

The with keyword is fantastic when working with record types because it lets you quickly clone an object with slightly different characteristics without needing to fully represent the properties of that object at the time of instantiation.

For example, the following code represents a new Point that is located near the original Point, but at a slightly different X position:

Point newPos = originalPos with { X = originalPos.X + 1 };

This syntax helps focus us on what’s changed about an object and it minimizes the amount of information about the object an AI agent needs to worry about.
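For reference, the Point example above only needs a positional record along these lines:

// A minimal sketch of a Point record; positional records support with expressions out of the box.
public record Point(int X, int Y);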

For places where record types are not viable, you may be able to get similar mileage via dedicated constructors or factory methods, but I’ve not done as much work in this area.
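For instance, a small hand-written with-style helper can fill the same role. This is purely an illustrative sketch, not a pattern from any particular library:

public sealed class Position
{
    public int X { get; }
    public int Y { get; }

    public Position(int x, int y) => (X, Y) = (x, y);

    // Clone with a different X, mimicking what 'with' gives records for free.
    public Position WithX(int newX) => new(newX, Y);
}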

Mapping with Mapperly

One thing I have done work with is using libraries like Mapperly to generate the boilerplate translation code that copies properties from entity objects into DTOs (and vice versa) in a minimal and efficient manner.

For example, if I had a Player entity defined like this:

public class Player
{
    public required string Id { get; set; }
    public required string RulesetId { get; set; }
    public string? Description { get; set; }
    public required string Name { get; set; }
    public Ruleset? Ruleset { get; set; }
    public ICollection<Game> Games { get; set; } = new List<Game>();
}

And a PlayerDto object defined as follows:

public class PlayerDto
{
    public required string Id { get; init; }
    public required string RulesetId { get; init; }
    public string? Description { get; init; }
    public required string Name { get; init; }
}

I could write a PlayerMapper using Mapperly like this:

[Mapper]
public static partial class PlayerMapper
{
    [MapperIgnoreSource(nameof(Player.Ruleset))]
    [MapperIgnoreSource(nameof(Player.Games))]
    public static partial PlayerDto ToDto(this Player player);

    public static partial PlayerDto[] ToDto(this IEnumerable<Player> players);
}

Mapperly will then automatically match properties between the two objects and generate reflection-free code that copies from one object to another in an efficient manner.
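With that mapper in place, call sites stay trivial; here's a usage sketch based on the signatures above:

PlayerDto dto = player.ToDto();     // single entity to DTO
PlayerDto[] dtos = players.ToDto(); // any IEnumerable<Player> to an array of DTOs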

The net result of this is that when an AI agent needs to modify an entity, there’s a good chance that the code to map from the entity to a DTO will be generated automatically by Mapperly. In cases where the AI agent makes a modification to one object but not the other (or the property names don’t match) Mapperly will generate build warnings that will help alert you and the AI agent to this and let you quickly resolve the issue, guiding your code in the right direction.

Preferring manual mocks over mocking frameworks

One area where I've seen AI agents struggle significantly is with the complicated setup logic of mocking libraries like Moq. Agents generate what looks like valid mocking code, and it even compiles, but the setup frequently targets something slightly different from what actually happens, or key parameters are ignored or misplaced, resulting in test-time failures.

I'm still searching for a better answer to this problem, or for libraries that are easier for AI to work with, but at the moment I'm wondering whether it's more efficient and maintainable to have AI manually create the test doubles than to get it working correctly with mocking libraries that lack strong compile-time error detection.

One of the significant reasons for using a mocking framework to begin with was saving developer time in creating and maintaining mock objects. In today’s AI-powered world, this may no longer be relevant and it might be worth revisiting our assumptions and seeing if guiding AI towards creating and maintaining mock objects results in cleaner tests that AI struggles less with.
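To make the idea concrete, here's what a hand-rolled test double might look like for a hypothetical IEmailSender interface. Unlike mocking-framework setup code, any drift from the real contract becomes a compile error:

public interface IEmailSender
{
    Task SendAsync(string to, string subject, string body);
}

public sealed class FakeEmailSender : IEmailSender
{
    // Records every send so tests can assert against it afterwards.
    public List<(string To, string Subject, string Body)> Sent { get; } = new();

    public Task SendAsync(string to, string subject, string body)
    {
        Sent.Add((to, subject, body));
        return Task.CompletedTask;
    }
}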

If you have thoughts on this topic or alternative solutions, I'd love to hear from you, but this is where I'm leaning at the moment.

Naming considerations

The final tip I want to touch on is that what you name things matters. A variable named approvedEmailAddresses is more clearly understood than one named approvedList or simply approved. Even if the context of something is obvious to a human, an AI may misread it. I'm not saying we should start requiring variable names to be six words long, but what we name things should convey what they actually represent, to save ourselves from costly mistakes stemming from AI misunderstanding business logic.

Of course, you can insulate yourself from this a little by including comments in your methods detailing the high-level business logic. For example, the following comments help provide additional context to the AI agent and can prevent misunderstandings of code:

// In testing and UAT environments we have a set of allowed email addresses we're allowed to contact.
// If the list has at least one address, we need to ensure we're only sending to approved addresses to avoid spamming users during testing.
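In context, those comments might sit directly above guard logic like this (a hypothetical sketch; the names are illustrative):

if (approvedEmailAddresses.Count > 0)
{
    recipients = recipients
        .Where(r => approvedEmailAddresses.Contains(r.Email))
        .ToList();
}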

The other nice thing about this type of comment is that it helps the code show up in AI searches against indexed data. Editors like Cursor generate embeddings for each file in your solution, and descriptive comments can help the AI agent find relevant code faster.

Of course, this only helps if your comments are accurate and reflect the ground truth of what your methods actually do.

Variable names and tokenization

The final thing I want to state in this section is that I am starting to reflect on old habits of mine around naming fields and other private class-level data using an underscore prefix. While this practice has been helpful to me as a developer for many decades, I feel its time may be at an end - both in terms of usefulness as our tools evolve and in terms of the potential cost of it.

To illustrate what I mean, let’s look at two different variable declarations in the lovely token visualizer at https://santoshkanumuri.github.io/token-visualizer/.

The first one uses my previously preferred underscore prefix for private fields:

[Image: a variable name tokenized with an underscore prefix]

As you can see, the tokenizer assigned its own token to the underscore character, separate from the rest of the variable name.

If I had used a more modern scheme and omitted the underscore, that token would be gone and the result would be:

[Image: the same variable name tokenized without an underscore prefix]

Each tokenizer operates differently, but by reducing legacy junk prefixes and suffixes that no longer deliver sufficient value, we can actually help optimize our systems slightly by reducing the volume of tokens required for the same code.

I've not done any dedicated experimentation on the long-term effect of this on cost or accuracy, but I believe every little bit matters, and I'm no longer convinced my old-school prefixes contribute enough value to humans to outweigh the slight optimizations I can see from removing them.

If you’ve done any research in this area or know of any, please let me know as I’d be eager to see it.

Conclusions

While 16 tips may seem like a lot, I feel this article is just scratching the surface on the topic of making our codebases more easily digestible to AI systems. This might not matter to you initially, but making it easier to work with AI in our codebases can result in less human frustration, faster and cheaper iteration cycles, and greater success adopting AI productivity tools.

However, the real struggle comes from reviewing and testing the output of such systems in ways that minimize risk to production systems, and that struggle is perhaps central to what lies ahead for us as developers.

While I have more to say on that particular topic, our time is at an end for this article. I'd encourage you to identify the tips that resonate most with you, look at the tooling you currently use and the different ways you can inject additional context into it, and consider what AI systems struggle with most in your codebase. Once you've figured that out, look for the tips and tricks that best improve that experience.

For now, goodbye and happy coding in this strange new world we’ve found ourselves in.


WhatsApp is logging out native app users — and forcing them onto a slower, resource-heavy web wrapper, sparking…

WhatsApp is about to get an upgrade that adds some features at the expense of using significantly more RAM. (Image credit: Future)

WhatsApp just took another step toward pushing PC users to its new web app. If you open WhatsApp for Windows, you'll see a vague warning message that mentions "changes are coming to WhatsApp for Windows."

The warning message mentions the update starting on December 9, but it appears to be rolling out gradually.


The new version of WhatsApp can use an astronomical amount of RAM. Windows Latest performed testing earlier this month and saw the WhatsApp web wrapper consume up to seven times more RAM than its native counterpart.

I haven't seen spikes quite that high, but RAM usage can vary depending on how many active conversations you have.

The new WhatsApp is powered by WebView2, which is a factor in why it uses more RAM.

WhatsApp will log people out on PCs to swap to a wrapped web app. (Image credit: Future)

By WhatsApp's own admission, its native app is better than its web wrapper. The company used to have a support document that said (emphasis added):

"To improve the WhatsApp experience for desktop users, we've developed native apps for Windows and Mac operating systems. The Windows and Mac apps provide increased performance and reliability, more ways to collaborate, and features to improve your productivity."

That document has since been updated and no longer explains why a native app is better than a web app. Instead, WhatsApp simply explains the general benefits of WhatsApp for Windows, such as being able to share photos, send messages, and join Communities.

The main drawback of the new WhatsApp is that it's a wrapped web app, which means it uses more RAM than a native application. While it's perfectly fine to use RAM — unused memory is wasted memory — your experience can worsen if your PC is at its limit.

If many of your apps are inefficient or you want to use a more demanding app while also having WhatsApp open, it would be better if your messaging app used fewer system resources.

Considering the price of RAM these days, upgrading a PC with more memory is not a viable solution to WhatsApp switching to a web wrapper.

There are some improvements that come with the new version of WhatsApp, such as an improved Community experience. We'll have to wait until more people use the latest version of WhatsApp to properly gauge people's feelings about the update.

Let us know in the comments what you think of Meta's latest move for Windows users and of the new WhatsApp web app!





Sean Endicott is a tech journalist at Windows Central, specializing in Windows, Microsoft software, AI, and PCs. He's covered major launches, from Windows 10 and 11 to the rise of AI tools like ChatGPT. Sean's journey began with the Lumia 930, leading to strong ties with app developers. Outside writing, he coaches American football, utilizing Microsoft services to manage his team. He studied broadcast journalism at Nottingham Trent University and is active on X @SeanEndicott_ and Threads @sean_endicott_.


Tinker: General Availability and Vision Input


Today we are announcing four updates to Tinker:

  • No more waitlist
  • New reasoning model: Kimi K2 Thinking
  • New inference interface that is compatible with the OpenAI API
  • Vision input support with Qwen3-VL

General availability

The waitlist is over! Everybody can use Tinker now; sign up here to get started. See the Tinker homepage for available models and pricing, and check out the Tinker cookbook for code examples.

More reasoning with Kimi K2 Thinking

Users can now fine-tune Kimi K2 Thinking on Tinker. With a trillion parameters, Kimi K2 is the largest model in our lineup so far. It is built for long chains of reasoning and tool use.

OpenAI API-compatible sampling

Tinker has a standard function for inference:

prompt = types.ModelInput.from_ints(tokenizer.encode("The capital of France is"))
params = types.SamplingParams(max_tokens=20, temperature=0.0, stop=["\n"])
future = sampling_client.sample(prompt=prompt, sampling_params=params)

With this release, we have added OpenAI API-compatible scaffolding for quickly sampling from a model by specifying a path, even while it’s still training. This also means Tinker can now plug-and-play with any OpenAI API-compatible platform. See more information in our Tinker documentation.

response = openai_client.completions.create(
    model="tinker://0034d8c9-0a88-52a9-b2b7-bce7cb1e6fef:train:0/sampler_weights/000080",
    prompt="The capital of France is",
    max_tokens=20,
    temperature=0.0,
    stop=["\n"],
)

Vision input with Qwen3-VL

We’ve added two vision models to Tinker: Qwen3-VL-30B-A3B-Instruct and Qwen3-VL-235B-A22B-Instruct. With these, users can process pictures, screenshots, and diagrams for a variety of applications.

To input images, interleave an ImageChunk (your image, saved as bytes) with text chunks. For example:

model_input = tinker.ModelInput(chunks=[
  tinker.types.ImageChunk(data=image_data, format="png"),
  tinker.types.EncodedTextChunk(tokens=tokenizer.encode("What is this?")),
])

These vision inputs can be used in a variety of applications out-of-the-box, including SFT and RL finetuning.

To demonstrate vision understanding in action, we are sharing a new cookbook recipe for fine-tuning VLMs as image classifiers. Qwen3-VL-235B-A22B-Instruct obtains reasonable accuracy even given just one example per class; performance improves with more labeled data.

Training image classifiers with Tinker

To showcase Tinker's new vision capabilities, we finetuned Qwen3-VL-235B-A22B-Instruct to classify images on four classic datasets.

Since Qwen3-VL is a language model, we frame classification as text generation: given an image, the model outputs the class name. We compare this approach against a traditional vision baseline of finetuning a vision-only model — DINOv2-base. DINOv2 is a self-supervised vision transformer that was trained to encode images, and is commonly used as a backbone for pure computer vision tasks. For DINOv2, we add a classification head that predicts a distribution over all N classes. Both models are fine-tuned with LoRA.

Labeled image data is scarce for many real-world use cases, so data efficiency is the primary measure we look at. We show the classification accuracy when sweeping across the number of labeled examples per class, starting with just a single one.

Comparison of fine-tuned Qwen3-VL-235B-A22B and DINOv2 performance on simple image classification tasks.

In the limited-data regime, Qwen3-VL-235B-A22B outperforms DINOv2. Not only is it a bigger model, but as a VLM it also comes with language knowledge out of the box (i.e., it already knows what a “golden retriever” or “sunflower” is). This general language-and-vision capability makes Qwen3-VL readily applicable to vision tasks beyond classification.

Happy Holidays

Tinker exists to enable builders and researchers to train and customize state-of-the-art models. As always, we look forward to seeing what you build with Tinker. Happy holidays!
