Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
150,984 stories · 33 followers

Rust in Linux's Kernel 'is No Longer Experimental'

1 Share
Steven J. Vaughan-Nichols files this report from Tokyo: At the invitation-only Linux Kernel Maintainers Summit here, the top Linux maintainers decided, as Linux kernel developer Jonathan Corbet put it, "The consensus among the assembled developers is that Rust in the kernel is no longer experimental — it is now a core part of the kernel and is here to stay. So the 'experimental' tag will be coming off." As Linux kernel maintainer Steven Rostedt told me, "There was zero pushback." This has been a long time coming. The shift caps five years of sometimes-fierce debate over whether the memory-safe language belonged alongside C at the heart of the world's most widely deployed open source operating system... It all began when Alex Gaynor and Geoffrey Thomas reported at the 2019 Linux Security Summit that about two-thirds of Linux kernel vulnerabilities come from memory safety issues. Rust, in theory, could avoid these through its inherently safer application programming interfaces (APIs)... In those early days, the plan was not to rewrite Linux in Rust (it still isn't), but to adopt the language selectively where it can provide the most security benefit without destabilizing mature C code. In short, new drivers, subsystems, and helper libraries would be the first targets... Despite the fuss, more and more code was ported to Rust. By April 2025, the Linux kernel contained about 34 million lines of C code and only about 25,000 lines of Rust. At the same time, more and more drivers and higher-level utilities were being written in Rust. For instance, the Debian Linux distro developers announced that going forward, Rust would be a required dependency in its foundational Advanced Package Tool (APT). This change doesn't mean everyone will need to use Rust. C is not going anywhere. Still, as several maintainers told me, they expect to see many more drivers written in Rust. In particular, Rust looks especially attractive for "leaf" drivers (network, storage, NVMe, etc.), where the Rust-for-Linux bindings expose safe wrappers over kernel C APIs. Nevertheless, for would-be kernel and systems programmers, Rust's new status in Linux hints at a career path that blends a deep understanding of C with fluency in Rust's safety guarantees. That combination may define the next generation of low-level development work.

Read more of this story at Slashdot.

Read the whole story
alvinashcraft
1 hour ago
Pennsylvania, USA

Why AI Advantage Compounds

1 Share
From: AIDailyBrief
Duration: 11:45
Views: 1,058

AI advantage compounds as organizations integrate GenAI into workflows and scale beyond isolated experiments. Surveys reveal widespread productivity and financial gains, attribution challenges, and gaps between expected and actual AI investment. Reinvestment into AI capabilities and a shift from time-saving tasks to decision-making, revenue generation, and autonomous agents creates a self-reinforcing flywheel with non-linear ROI.

Brought to you by:
KPMG – Go to www.kpmg.us/ai to learn more about how KPMG can help you drive value with our AI solutions.
Vanta - Simplify compliance - https://vanta.com/nlw

The AI Daily Brief helps you understand the most important news and discussions in AI.
Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Get it ad free at
Join our Discord: https://bit.ly/aibreakdown

Read the whole story
alvinashcraft
1 hour ago
Pennsylvania, USA

Don't Sleep on GPT-5.2, It's a coding BEAST!

1 Share


Read the whole story
alvinashcraft
1 hour ago
Pennsylvania, USA

Dynamic data-driven scrollable button menu construction kit for Snap Spectacles part 2 - how it works

1 Share

In part 1 I described how this component can be used, and I promised to go deeper into the details about how it worked in a follow-up post. This is that post.

The ScrollMenu prefab

I usually build my prefabs so that the top-level SceneObject has a kind of controller script, while a single child SceneObject holds the actual visible part (in this case, a menu). That way, the controller script can handle display state and behavior through script methods, and consumers never have to mess with the prefab's internal structure, which could otherwise disable parts that carry vital controlling scripts and break the app.

The controller script in this case is called - very originally - UIKitScrollMenuController. It features a few input fields. You can change the first three (in fact, you must set the Scroll Button Prefab field after dragging the prefab onto your scene, as I explained in part 1). The last three are best left undisturbed.

The first field is the vertical size a button uses (including padding); the second is the horizontal size. The control lays out buttons in two columns and as many rows as necessary; if you want more columns, you will have to adapt the code. Since buttons have to be pressed by finger, I don't anticipate them getting much narrower, so you probably won't need to change Column Size often, but Y Offset you might: it is currently tailored to my sample button.

The MenuFrame component contains the Frame script showing the UI canvas, as well as the HeadLock and the Billboard script keeping the UI more or less in view.

One thing of note - if you are using Billboard, please remember to disable “Allow translation”, otherwise you can still grab and move the floating window, but you will more or less be fighting the HeadLock and the Billboard scripts, which is not desirable. Either the user decides where a window goes, or the system - but not both.

Some other details:

  • ScrollWindowAnchor determines where on the floating screen the scroll window will appear. You can use this mostly to set the vertical starting point, should you need to change it.
  • ScrollWindow itself decides the actual size of the scroll area.
  • Scrollbar determines the vertical position of the scrollbar.
  • Slider determines the size of the scrollbar.

If you change either ScrollWindowAnchor or ScrollWindow, be prepared to fiddle with Scrollbar and ScrollbarSlider until it all fits nicely together again, with sizes aligning visually, etc.

Scripts

The whole thing works using only three custom scripts.

So let’s start with the easy part:

BaseUIKitScrollButtonController

This is a fairly simple script, but still requires some explanation. Let’s start with the header and the events.

@component
export class BaseUIKitScrollButtonController extends BaseScriptComponent {
    @input buttonText: Text;
    @input uiKitButton: BaseButton;

    private onButtonPressedEvent = new Event<BaseScrollButtonData>();
    public readonly onButtonPressed = this.onButtonPressedEvent.publicApi();

    public onHoveredEvent = new Event<boolean>();
    public onHovered = this.onHoveredEvent.publicApi();

Remember, this can be used as a parent class for your own button script. Here you can see what it does behind the scenes.

  • buttonText should hold the Text component whose text is set from the data fed to this button's setButtonData method (as explained before).
  • It exposes an onButtonPressed event that is triggered when the button is pressed, and returns the BaseScrollButtonData that was used to create this button in the first place.
  • It also exposes an event onHovered that tells the interested listener whether the button is hovered over by the user.

Although both events are public, they are typically only used internally, by the UIKitScrollMenuController, as will become clear later.

The setButtonData method is used by UIKitScrollMenuController to feed the actual button data to the button that is to be created:

public setButtonData(scrollButtonData: BaseScrollButtonData): void {
    if (this.uiKitButton != null) {
        this.uiKitButton.onHoverEnter.add(() => this.onHoveredEvent.invoke(true));
        this.uiKitButton.onHoverExit.add(() => this.onHoveredEvent.invoke(false));
        this.uiKitButton.onTriggerDown.add(() => this.onButtonPressedEvent.invoke(scrollButtonData));
        this.buttonText.text = scrollButtonData.buttonText;
        this.applyCustomSettings(scrollButtonData);
    }
}

protected applyCustomSettings(scrollButtonData: BaseScrollButtonData): void {
}

It wires up the button's internal events to the BaseUIKitScrollButtonController's onButtonPressed and onHovered, both of which will be consumed by the UIKitScrollMenuController. It also sets the button's text, and finally calls the (here empty) applyCustomSettings method, which you can override in a child class to perform custom actions for your custom button. I showed an example of that here.
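
For illustration, here is a minimal sketch (mine, not from part 1) of a child controller that overrides applyCustomSettings. The SubTitleScrollButtonData class, its subTitle field, and the extra subTitleText input are hypothetical; import paths depend on where the scripts live in your project.

// Hypothetical child data class carrying one extra field.
export class SubTitleScrollButtonData extends BaseScrollButtonData {
    subTitle: string = "";
}

@component
export class SubTitleUIKitScrollButtonController extends BaseUIKitScrollButtonController {
    // Extra Text component on the custom button prefab (hypothetical).
    @input subTitleText: Text;

    protected applyCustomSettings(scrollButtonData: BaseScrollButtonData): void {
        // setButtonData has already wired up the events and set the main text;
        // here we only apply the extra, custom piece of data.
        const data = scrollButtonData as SubTitleScrollButtonData;
        if (data.subTitle !== undefined && this.subTitleText != null) {
            this.subTitleText.text = data.subTitle;
        }
    }
}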

UIKitScrollMenuController

This is basically the magic wand that ties it all together. The start is simple enough:

@component
export class UIKitScrollMenuController extends BaseScriptComponent {
    @input yOffset: number = 5;
    @input columnSize: number = 4;
    @input scrollButtonPrefab: ObjectPrefab;
    @input scrollWindow: ScrollWindow;
    @input menuRoot: SceneObject;
    @input closeButton: BaseButton;

    private onButtonPressedEvent = new Event<BaseScrollButtonData>();
    public readonly onButtonPressed = this.onButtonPressedEvent.publicApi();
    private scrollArea: SceneObject;

At the top we see the six inputs already discussed, then again an onButtonPressed event that informs interested listeners which button in the list was pressed (as shown here). The scrollArea we will need for a peculiar thing later.

Next is the setup in onAwake:

private onAwake(): void {
    this.scrollArea = this.scrollWindow.getSceneObject();
    this.setMenuVisible(false);
    const delayedEvent = this.createEvent("DelayedCallbackEvent");
    delayedEvent.bind(() => {
        this.initializeUI();
    });
    delayedEvent.reset(0.1);
}

We get a reference to the scroll window's SceneObject, hide the menu for now, then start a delayed event to initialize the UI. This is necessary because, for some reason, the close button is not yet awake during onAwake, so you would otherwise get a "Component not yet awake" error.

The methods for initializing the UI, as well as opening and closing the menu are as follows:

protected initializeUI(): void {
    this.closeButton.onTriggerDown.add(() => this.closeMenu());
}

closeMenu() {
    // Hide the menu only after a short delay, so the button's click sound has time to play.
    const delayedEvent = this.createEvent("DelayedCallbackEvent");
    delayedEvent.bind(() => {
        this.setMenuVisible(false);
    });
    delayedEvent.reset(0.25);
}

public setMenuVisible(visible: boolean): void {
    this.menuRoot.enabled = visible;
}

In closeMenu I keep a standard 0.25-second delay so the button's 'click' sound has time to play; otherwise it will not play, or will be clipped, because the menuRoot SceneObject that holds all the UI is hidden. The menuRoot field should be set to the first child in the prefab, as said before.

Otherwise the UIKitScrollMenuController will essentially disable itself.

The meat of the matter is the createButtons method, which essentially creates all buttons and the structure to support events to the outside world. Your own code should call it, feeding it an array of BaseScrollButtonData (or a child class of that). It starts as follows:

public createButtons(scrollButtonData: BaseScrollButtonData[]): void {
    var lines = Math.ceil(scrollButtonData.length / 2);
    var initOffset = lines % 2 != 0 ? this.yOffset : this.yOffset / 2;
    var yStart = Math.ceil(lines / 2) * this.yOffset - initOffset;
    var line = 0;
    this.scrollWindow.onInitialized.add(() => {
        this.scrollWindow.setScrollDimensions(new vec2(0, lines * this.yOffset));
    });
    this.setMenuVisible(true);

It first calculates how many rows of buttons are required, then the initial offset from the center, and from that the y coordinate at which the buttons need to start (this will be used later). Then the scroll window's scroll dimensions are set to the vertical row size times the number of lines. Since we are only scrolling vertically, we only need to set the y part.

In hindsight I should have called yOffset “rowSize”, but what the heck.

for (let i = 0; i < scrollButtonData.length; i++) {
    var button = this.scrollButtonPrefab.instantiate(this.scrollArea);
    var buttonTransform = button.getTransform();
    var xPos = (i % 2 == 0) ? -this.columnSize : this.columnSize;
    buttonTransform.setLocalPosition(
      new vec3(xPos, yStart - this.yOffset * line, 0.1));
    button.enabled = true;
    if (i % 2 != 0) {
        line++;
    }

    const buttonController =
       getComponent<BaseUIKitScrollButtonController>(button,
                   BaseUIKitScrollButtonController);
    buttonController.setButtonData(scrollButtonData[i]);
    buttonController.onHovered.add((p) => {
        this.scrollWindow.vertical = !p;
    });
    buttonController.onButtonPressed.add((data) =>
       this.onButtonPressedEvent.invoke(data));
}
this.updateScrollPosition();

So, for every entry in scrollButtonData this code:

  • Creates a button
  • Places it on the left or right side of the list based on whether it’s an odd or even button
  • Enables the button
  • Increases the line after every two buttons
  • Gets a reference to BaseUIKitScrollButtonController
  • Feeds the BaseScrollButtonData entry to its setButtonData method - this will hook up the button
  • Makes sure the window scrolling is disabled when you hover over a button
  • Routes a button’s pressed event to the outside so you can listen to all buttons in one place
  • And finally calls updateScrollPosition

The third-to-last point deserves a bit of explanation. I noticed it was pretty hard to press a button in a scrollable list, especially in poor lighting conditions, because you tend to accidentally drag or move the list if you just miss a button, or the Spectacles camera misses you trying to press it. Disabling scrolling while a button is hovered makes the list a lot more usable.

updateScrollPosition is also a bit of a hack, because if you fill up a UIKit scroll list, it tends to start with its scroll position halfway down the list. Why that is, I don't know.

private updateScrollPosition(): void {
    const delayedEvent = this.createEvent("DelayedCallbackEvent");
    delayedEvent.bind(() => {
        this.scrollWindow.scrollPositionNormalized = new vec2(0, 1);
        this.menuRoot.getTransform().setLocalScale(new vec3(1, 1, 1));
    });
    delayedEvent.reset(1);
}

It basically sets scrollPositionNormalized to 0, 1, which translates to "vertical top scroll position" (0, 0 is vertical center, 0, -1 is vertical bottom). If you don't call updateScrollPosition, the list starts halfway down instead of at the desired top position.
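
To tie this together, here is a minimal consumer sketch (mine, not from the post) that feeds createButtons an array of data and listens for presses. It assumes the menu controller is dragged into an @input, and that BaseScrollButtonData instances can be created and have their buttonText set directly; adjust this to however part 1 constructs the data objects.

@component
export class ScrollMenuDemo extends BaseScriptComponent {
    @input menuController: UIKitScrollMenuController;

    onAwake() {
        // Build the menu on the start event, after all components have awoken.
        this.createEvent("OnStartEvent").bind(() => this.buildMenu());
    }

    private buildMenu(): void {
        const data = ["Red", "Green", "Blue"].map(label => {
            const d = new BaseScrollButtonData();
            d.buttonText = label;
            return d;
        });
        this.menuController.createButtons(data);
        this.menuController.onButtonPressed.add(pressed =>
            print("Pressed: " + pressed.buttonText));
    }
}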

Conclusion

So that’s kind of it. I do hope it’s useful for you and also gives you some insights into how you can cajole some UIKit elements into the shape you want. There is actually another little helper class in here, but I will deal with that later in yet another blog post.

The demo project is (still) here at GitHub.

Read the whole story
alvinashcraft
1 hour ago
Pennsylvania, USA

Why most enterprise AI coding pilots underperform (Hint: It's not the model)

1 Share

Gen AI in software engineering has moved well beyond autocomplete. The emerging frontier is agentic coding: AI systems capable of planning changes, executing them across multiple steps and iterating based on feedback. Yet despite the excitement around “AI agents that code,” most enterprise deployments underperform. The limiting factor is no longer the model. It’s context: The structure, history and intent surrounding the code being changed. In other words, enterprises are now facing a systems design problem: They have not yet engineered the environment these agents operate in.

The shift from assistance to agency

The past year has seen a rapid evolution from assistive coding tools to agentic workflows. Research has begun to formalize what agentic behavior means in practice: The ability to reason across design, testing, execution and validation rather than generate isolated snippets. Work such as dynamic action re-sampling shows that allowing agents to branch, reconsider and revise their own decisions significantly improves outcomes in large, interdependent codebases. At the platform level, providers like GitHub are now building dedicated agent orchestration environments, such as Copilot Agent and Agent HQ, to support multi-agent collaboration inside real enterprise pipelines.

But early field results tell a cautionary story. When organizations introduce agentic tools without addressing workflow and environment, productivity can decline. A randomized control study this year showed that developers who used AI assistance in unchanged workflows completed tasks more slowly, largely due to verification, rework and confusion around intent. The lesson is straightforward: Autonomy without orchestration rarely yields efficiency.

Why context engineering is the real unlock

In every unsuccessful deployment I've observed, the failure stemmed from context. When agents lack a structured understanding of a codebase (specifically its relevant modules, dependency graph, test harness, architectural conventions and change history), they often generate output that appears correct but is disconnected from reality. Too much information overwhelms the agent; too little forces it to guess. The goal is not to feed the model more tokens. The goal is to determine what should be visible to the agent, when and in what form.

The teams seeing meaningful gains treat context as an engineering surface. They create tooling to snapshot, compact and version the agent’s working memory: What is persisted across turns, what is discarded, what is summarized and what is linked instead of inlined. They design deliberation steps rather than prompting sessions. They make the specification a first-class artifact, something reviewable, testable and owned, not a transient chat history. This shift aligns with a broader trend some researchers describe as “specs becoming the new source of truth.”

Workflow must change alongside tooling

But context alone isn’t enough. Enterprises must re-architect the workflows around these agents. As McKinsey’s 2025 report “One Year of Agentic AI” noted, productivity gains arise not from layering AI onto existing processes but from rethinking the process itself. When teams simply drop an agent into an unaltered workflow, they invite friction: Engineers spend more time verifying AI-written code than they would have spent writing it themselves. The agents can only amplify what’s already structured: Well-tested, modular codebases with clear ownership and documentation. Without those foundations, autonomy becomes chaos.

Security and governance, too, demand a shift in mindset. AI-generated code introduces new forms of risk: Unvetted dependencies, subtle license violations and undocumented modules that escape peer review. Mature teams are beginning to integrate agentic activity directly into their CI/CD pipelines, treating agents as autonomous contributors whose work must pass the same static analysis, audit logging and approval gates as any human developer. GitHub’s own documentation highlights this trajectory, positioning Copilot Agents not as replacements for engineers but as orchestrated participants in secure, reviewable workflows. The goal isn’t to let an AI “write everything,” but to ensure that when it acts, it does so inside defined guardrails.

What enterprise decision-makers should focus on now

For technical leaders, the path forward starts with readiness rather than hype. Monoliths with sparse tests rarely yield net gains; agents thrive where tests are authoritative and can drive iterative refinement. This is exactly the loop Anthropic calls out for coding agents. Pilot in tightly scoped domains (test generation, legacy modernization, isolated refactors); treat each deployment as an experiment with explicit metrics (defect escape rate, PR cycle time, change failure rate, security findings burned down). As your usage grows, treat agents as data infrastructure: Every plan, context snapshot, action log and test run is data that composes into a searchable memory of engineering intent, and a durable competitive advantage.

Under the hood, agentic coding is less a tooling problem than a data problem. Every context snapshot, test iteration and code revision becomes a form of structured data that must be stored, indexed and reused. As these agents proliferate, enterprises will find themselves managing an entirely new data layer: One that captures not just what was built, but how it was reasoned about. This shift turns engineering logs into a knowledge graph of intent, decision-making and validation. In time, the organizations that can search and replay this contextual memory will outpace those who still treat code as static text.

The coming year will likely determine whether agentic coding becomes a cornerstone of enterprise development or another inflated promise. The difference will hinge on context engineering: How intelligently teams design the informational substrate their agents rely on. The winners will be those who see autonomy not as magic, but as an extension of disciplined systems design: Clear workflows, measurable feedback, and rigorous governance.

Bottom line

Platforms are converging on orchestration and guardrails, and research keeps improving context control at inference time. The winners over the next 12 to 24 months won’t be the teams with the flashiest model; they’ll be the ones that engineer context as an asset and treat workflow as the product. Do that, and autonomy compounds. Skip it, and the review queue does.

Context + agent = leverage. Skip the first half, and the rest collapses.

Dhyey Mavani is accelerating generative AI at LinkedIn.




Read the whole story
alvinashcraft
1 hour ago
Pennsylvania, USA

Clean Architecture for Blazor with DDD & CQRS

1 Share

Aliaksandr Marozka

Learn how to structure Blazor apps with Clean Architecture, DDD, and CQRS. Clear layers, EF Core mapping, and tested handlers.


Be honest: do your Blazor components still talk straight to EF Core and sprinkle business rules in event handlers? That “quick fix” is why the code gets hard to test, hard to change, and slow to ship. Let’s fix that with a clear structure you can apply today.

Why Blazor needs Clean Architecture

Blazor (Server or WebAssembly) makes UI work simple, but it’s easy to let components grow into mini “god objects”. Typical smells:

  • Data access inside components (new DbContext or heavy injected services)
  • Duplicated validation spread across UI and API
  • Stateful logic that is hard to cover with tests
  • Tight coupling between UI and persistence

Clean Architecture gives you guardrails:

  • Separation of concerns: UI shows data; Application orchestrates use cases; Domain holds rules; Infrastructure talks to the world (DB, HTTP, queues).
  • Dependency inversion: outer layers depend on inner abstractions, never the other way around.
  • Domain modeling: entities, value objects, and domain events keep rules close to the data that owns them.
  • CQRS: reads and writes follow different paths, which reduces accidental coupling and makes flows clear.

Solution layout that works

A tiny but complete folder layout for a Blazor app that scales:

src/
  BlazorApp/        # UI: Blazor Server or WASM host + Razor components
  Application/      # Use cases, CQRS handlers, DTOs, validation
  Domain/           # Entities, Value Objects, Aggregates, Events
  Infrastructure/   # EF Core, Repositories, Email/SMS, Outbox

tests/
  Domain.Tests/
  Application.Tests/
  Infrastructure.Tests/
  BlazorApp.Tests/  # bUnit or UI tests where needed

References (one-way)

BlazorApp → Application → Domain
BlazorApp → Domain (for shared contracts like primitive Value Objects)
Infrastructure → Application, Domain

# Startup wiring happens in BlazorApp, but implementations live in Infrastructure

This keeps the Domain and Application projects free of UI and database concerns.

If this post helped, you’ll love the rest of my Blazor .Net Tips content:
✅Read more: .Net Code Chronicles
✅Get new posts: Subscribe on Medium

The layers at a glance

Domain

  • Core language of the business: Entities, Value Objects, Aggregates
  • Domain Events, business invariants
  • No EF Core, no HTTP, no logging abstractions needed here

Application

  • Use cases as Commands and Queries
  • Transaction boundaries, validation, mapping to DTOs
  • Interfaces (e.g., ITodoRepository, IEmailSender)

Infrastructure

  • EF Core DbContext, repository implementations
  • Email/SMS/HTTP clients, file storage, event outbox

UI (Blazor)

  • Razor components, minimal logic: send Commands/Queries, render state

Domain modeling: simple, strict, testable

We’ll build a tiny Todo feature. Core rules:

  • A TodoList owns many TodoItem entries (aggregate root is TodoList).
  • Title must be non‑empty and trimmed. Duplicate titles in the same list are not allowed.
  • Completing an item raises a domain event TodoItemCompleted.

Domain/ValueObjects/Title.cs

namespace CleanBlazor.Domain.ValueObjects;

public sealed record Title
{
    public string Value { get; }

    private Title(string value) => Value = value;

    public static Title From(string? input)
    {
        var value = (input ?? string.Empty).Trim();
        if (string.IsNullOrWhiteSpace(value))
            throw new ArgumentException("Title cannot be empty.");
        if (value.Length > 120)
            throw new ArgumentException("Title is too long (max 120).");
        return new Title(value);
    }

    public override string ToString() => Value;
}

Domain/Events/TodoItemCompleted.cs

namespace CleanBlazor.Domain.Events;

public sealed record TodoItemCompleted(Guid ListId, Guid ItemId, DateTime OccurredAtUtc);

Domain/Entities/TodoItem.cs

using CleanBlazor.Domain.Events;
using CleanBlazor.Domain.ValueObjects;

namespace CleanBlazor.Domain.Entities;

public class TodoItem
{
    public Guid Id { get; private set; } = Guid.NewGuid();
    public Title Title { get; private set; }
    public bool IsDone { get; private set; }

    private readonly List<object> _events = new();
    public IReadOnlyList<object> Events => _events;

    public TodoItem(Title title)
    {
        Title = title;
    }

    public void Complete()
    {
        if (IsDone) return;
        IsDone = true;
        _events.Add(new TodoItemCompleted(default, Id, DateTime.UtcNow));
    }
}

Domain/Entities/TodoList.cs

using CleanBlazor.Domain.ValueObjects;

namespace CleanBlazor.Domain.Entities;

public class TodoList
{
    private readonly List<TodoItem> _items = new();

    public Guid Id { get; private set; } = Guid.NewGuid();
    public string Name { get; private set; }
    public IReadOnlyCollection<TodoItem> Items => _items.AsReadOnly();

    public TodoList(string name)
    {
        Name = string.IsNullOrWhiteSpace(name) ? throw new ArgumentException("Name required") : name.Trim();
    }

    public TodoItem AddItem(Title title)
    {
        if (_items.Any(i => i.Title.Value.Equals(title.Value, StringComparison.OrdinalIgnoreCase)))
            throw new InvalidOperationException($"Item with title '{title}' already exists.");
        var item = new TodoItem(title);
        _items.Add(item);
        return item;
    }
}

Tip: keep Domain clean of framework ties. No annotations, no EF types, no MediatR. Plain C#.

Application layer with CQRS

Define contracts that the UI and handlers use. You can use MediatR or a minimal interface of your own. I’ll show a plain version (easy to swap later).

Application/Abstractions/ITodoRepository.cs

using CleanBlazor.Domain.Entities;
using CleanBlazor.Domain.ValueObjects;

namespace CleanBlazor.Application.Abstractions;

public interface ITodoRepository
{
    Task<TodoList?> GetListAsync(Guid listId, CancellationToken ct);
    Task<Guid> CreateListAsync(string name, CancellationToken ct);
    Task<Guid> AddItemAsync(Guid listId, Title title, CancellationToken ct);
    Task CompleteItemAsync(Guid listId, Guid itemId, CancellationToken ct);

    Task<IReadOnlyList<TodoItemDto>> GetItemsAsync(Guid listId, CancellationToken ct);
}

public sealed record TodoItemDto(Guid Id, string Title, bool IsDone);

Commands

Application/Todos/AddItem/AddItemCommand.cs

namespace CleanBlazor.Application.Todos.AddItem;

public sealed record AddItemCommand(Guid ListId, string Title);

public interface ICommandHandler<TCommand>
{
    Task Handle(TCommand command, CancellationToken ct);
}

Application/Todos/AddItem/AddItemHandler.cs

using CleanBlazor.Application.Abstractions;
using CleanBlazor.Domain.ValueObjects;

namespace CleanBlazor.Application.Todos.AddItem;

public sealed class AddItemHandler : ICommandHandler<AddItemCommand>
{
    private readonly ITodoRepository _repo;
    public AddItemHandler(ITodoRepository repo) => _repo = repo;

    public async Task Handle(AddItemCommand command, CancellationToken ct)
    {
        var title = Title.From(command.Title);
        await _repo.AddItemAsync(command.ListId, title, ct);
    }
}

Queries


Application/Todos/GetItems/GetItemsQuery.cs

namespace CleanBlazor.Application.Todos.GetItems;

public sealed record GetItemsQuery(Guid ListId);

public interface IQueryHandler<TQuery, TResult>
{
    Task<TResult> Handle(TQuery query, CancellationToken ct);
}

Application/Todos/GetItems/GetItemsHandler.cs

using CleanBlazor.Application.Abstractions;

namespace CleanBlazor.Application.Todos.GetItems;

public sealed class GetItemsHandler : IQueryHandler<GetItemsQuery, IReadOnlyList<TodoItemDto>>
{
    private readonly ITodoRepository _repo;
    public GetItemsHandler(ITodoRepository repo) => _repo = repo;

    public Task<IReadOnlyList<TodoItemDto>> Handle(GetItemsQuery query, CancellationToken ct)
        => _repo.GetItemsAsync(query.ListId, ct);
}

This setup is tiny, testable, and leaves room to swap in MediatR later without touching Domain.

Infrastructure with EF Core

Keep EF Core out of Domain and Application by mapping in Infrastructure.

Infrastructure/Data/AppDbContext.cs

using CleanBlazor.Domain.Entities;
using Microsoft.EntityFrameworkCore;

namespace CleanBlazor.Infrastructure.Data;

public class AppDbContext : DbContext
{
    public DbSet<TodoList> Lists => Set<TodoList>();
    public DbSet<TodoItem> Items => Set<TodoItem>();

    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }

    protected override void OnModelCreating(ModelBuilder b)
    {
        b.Entity<TodoList>(e =>
        {
            e.HasKey(x => x.Id);
            e.Property(x => x.Name).IsRequired().HasMaxLength(80);
            e.HasMany<TodoItem>("_items").WithOne().OnDelete(DeleteBehavior.Cascade);
        });

        b.Entity<TodoItem>(e =>
        {
            e.HasKey(x => x.Id);
            e.OwnsOne(x => x.Title, nb =>
            {
                nb.Property(p => p.Value).HasColumnName("Title").HasMaxLength(120);
            });
        });
    }
}

Infrastructure/Repositories/TodoRepository.cs

using CleanBlazor.Application.Abstractions;
using CleanBlazor.Domain.Entities;
using CleanBlazor.Domain.ValueObjects;
using CleanBlazor.Infrastructure.Data;
using Microsoft.EntityFrameworkCore;

namespace CleanBlazor.Infrastructure.Repositories;

public sealed class TodoRepository : ITodoRepository
{
    private readonly AppDbContext _db;
    public TodoRepository(AppDbContext db) => _db = db;

    public async Task<TodoList?> GetListAsync(Guid listId, CancellationToken ct)
        => await _db.Lists.Include("_items").FirstOrDefaultAsync(l => l.Id == listId, ct);

    public async Task<Guid> CreateListAsync(string name, CancellationToken ct)
    {
        var list = new TodoList(name);
        _db.Add(list);
        await _db.SaveChangesAsync(ct);
        return list.Id;
    }

    public async Task<Guid> AddItemAsync(Guid listId, Title title, CancellationToken ct)
    {
        var list = await GetListAsync(listId, ct) ?? throw new KeyNotFoundException("List not found");
        var item = list.AddItem(title);
        await _db.SaveChangesAsync(ct);
        return item.Id;
    }

    public async Task CompleteItemAsync(Guid listId, Guid itemId, CancellationToken ct)
    {
        var list = await GetListAsync(listId, ct) ?? throw new KeyNotFoundException("List not found");
        var item = list.Items.First(i => i.Id == itemId);
        item.complete();
        await _db.SaveChangesAsync(ct);
    }

    public async Task<IReadOnlyList<TodoItemDto>> GetItemsAsync(Guid listId, CancellationToken ct)
    {
        return await _db.Items
            .Where(i => EF.Property<Guid>(i, "TodoListId") == listId)
            .Select(i => new TodoItemDto(i.Id, i.Title.Value, i.IsDone))
            .ToListAsync(ct);
    }
}

Note: the method casing typo item.complete() is intentional, as a code review check. It should be item.Complete(). Spotting these in tests is cheap; in production, not so much.

Wiring in Blazor (composition root)

BlazorApp/Program.cs

using CleanBlazor.Application.Abstractions;
using CleanBlazor.Application.Todos.AddItem;
using CleanBlazor.Application.Todos.GetItems;
using CleanBlazor.Infrastructure.Data;
using CleanBlazor.Infrastructure.Repositories;
using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddRazorPages();
builder.Services.AddServerSideBlazor();

builder.Services.AddDbContext<AppDbContext>(opt =>
    opt.UseSqlite(builder.Configuration.GetConnectionString("Default")));

builder.Services.AddScoped<ICommandHandler<AddItemCommand>, AddItemHandler>();
builder.Services.AddScoped<IQueryHandler<GetItemsQuery, IReadOnlyList<TodoItemDto>>, GetItemsHandler>();

builder.Services.AddScoped<ITodoRepository, TodoRepository>();

var app = builder.Build();
if (!app.Environment.IsDevelopment())
{
    app.UseExceptionHandler("/Error");
}
app.UseStaticFiles();
app.UseRouting();
app.MapBlazorHub();
app.MapFallbackToPage("/_Host");
app.Run();

A thin component: no business rules inside

BlazorApp/Pages/Todos.razor

@page "/todos/{ListId:guid}"
@inject ICommandHandler<AddItemCommand> AddItem
@inject IQueryHandler<GetItemsQuery, IReadOnlyList<TodoItemDto>> GetItems

<h3>Todo</h3>
<input @bind="_newTitle" placeholder="What needs doing?" />
<button @onclick="OnAdd">Add</button>

@if (_items is null)
{
    <p>Loading…</p>
}
else if (_items.Count == 0)
{
    <p>No items yet.</p>
}
else
{
    <ul>
        @foreach (var i in _items)
        {
            <li>@i.Title (@(i.IsDone ? "done" : "open"))</li>
        }
    </ul>
}

@code {
    [Parameter] public Guid ListId { get; set; }
    private string _newTitle = string.Empty;
    private IReadOnlyList<TodoItemDto>? _items;

    protected override async Task OnParametersSetAsync()
    {
        _items = await GetItems.Handle(new GetItemsQuery(ListId), CancellationToken.None);
    }

    private async Task OnAdd()
    {
        await AddItem.Handle(new AddItemCommand(ListId, _newTitle), CancellationToken.None);
        _newTitle = string.Empty;
        _items = await GetItems.Handle(new GetItemsQuery(ListId), CancellationToken.None);
    }
}

UI stays dumb: it sends commands and renders data. All rules live in Domain and Application.

If this post helped, you’ll love the rest of my Blazor .Net Tips content:
✅Read more: .Net Code Chronicles
✅Get new posts: Subscribe on Medium

Validation: where and how

  • Validate shape at the boundary (DataAnnotations/FluentValidation on input models if you expose API endpoints).
  • Validate business rules in Domain (Title.From, TodoList.AddItem).
  • Validate use case flow in Application (e.g., user can only add items to lists they own).

This split keeps you from duplicating checks in random places.
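
For the third point, here is a minimal sketch (not part of the article's code) of what a use case flow check could look like in the Application layer. ICurrentUser and the OwnerId property on TodoList are hypothetical additions; the real handler shown earlier does not have them.

using CleanBlazor.Application.Abstractions;
using CleanBlazor.Domain.ValueObjects;

namespace CleanBlazor.Application.Todos.AddItem;

// Hypothetical abstraction exposing the calling user's id.
public interface ICurrentUser
{
    Guid UserId { get; }
}

public sealed class AddItemHandlerWithOwnershipCheck : ICommandHandler<AddItemCommand>
{
    private readonly ITodoRepository _repo;
    private readonly ICurrentUser _currentUser;

    public AddItemHandlerWithOwnershipCheck(ITodoRepository repo, ICurrentUser currentUser)
    {
        _repo = repo;
        _currentUser = currentUser;
    }

    public async Task Handle(AddItemCommand command, CancellationToken ct)
    {
        // Use case flow rule: only the owner may add items to their list.
        var list = await _repo.GetListAsync(command.ListId, ct)
                   ?? throw new KeyNotFoundException("List not found");
        if (list.OwnerId != _currentUser.UserId) // OwnerId is a hypothetical addition to TodoList
            throw new UnauthorizedAccessException("You do not own this list.");

        // Business rules (non-empty title, max length, no duplicates) still live in the Domain.
        await _repo.AddItemAsync(command.ListId, Title.From(command.Title), ct);
    }
}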

Transactions and domain events

Keep transactions at the Application layer. Let Infrastructure implement an outbox later if you publish events.

Simple approach to start:

  • Domain pushes events into an in‑memory list on the aggregate
  • Repository flushes changes, then publishes those events to an in‑process dispatcher

You can replace the dispatcher with a message bus when you need cross‑process delivery.
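
As a starting point, here is a minimal sketch of such an in-process dispatcher (not from the article); the repository would read the aggregate's Events list after SaveChangesAsync and hand it to Dispatch.

namespace CleanBlazor.Application.Abstractions;

public sealed class InProcessEventDispatcher
{
    // Handlers registered per event type; each handler receives the event as object and casts it.
    private readonly Dictionary<Type, List<Func<object, CancellationToken, Task>>> _handlers = new();

    public void Subscribe<TEvent>(Func<TEvent, CancellationToken, Task> handler)
    {
        if (!_handlers.TryGetValue(typeof(TEvent), out var list))
            _handlers[typeof(TEvent)] = list = new();
        list.Add((evt, ct) => handler((TEvent)evt, ct));
    }

    public async Task Dispatch(IEnumerable<object> events, CancellationToken ct)
    {
        foreach (var evt in events)
            if (_handlers.TryGetValue(evt.GetType(), out var handlers))
                foreach (var handle in handlers)
                    await handle(evt, ct);
    }
}

Usage would be along the lines of dispatcher.Subscribe<TodoItemCompleted>((e, ct) => ...); swapping this class for a message bus later only changes how Dispatch delivers events, not the Domain.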

Testing strategy that pays back

  • Domain.Tests: value objects, invariants, events. No mocks.
  • Application.Tests: handler behavior with a fake repository.
  • Infrastructure.Tests: EF Core mapping tests with Sqlite InMemory.
  • BlazorApp.Tests: bUnit to render components and assert markup/state.

Example: a fast test for Title and for duplicate item protection.

[Fact]
public void Title_cannot_be_empty()
{
    Assert.Throws<ArgumentException>(() => Title.From(" "));
}

[Fact]
public void TodoList_prevents_duplicates()
{
    var list = new TodoList("Home");
    list.AddItem(Title.From("Buy milk"));
    Assert.Throws<InvalidOperationException>(() => list.AddItem(Title.From("buy milk")));
}
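
For the Blazor layer, a bUnit test could look roughly like this (a sketch, not from the article): fake handlers are registered in bUnit's service collection so the Todos component renders against canned data. The fake classes and the component's namespace are assumptions you would adapt to your project.

using Bunit;
using CleanBlazor.Application.Abstractions;
using CleanBlazor.Application.Todos.AddItem;
using CleanBlazor.Application.Todos.GetItems;
using Microsoft.Extensions.DependencyInjection;
using Xunit;
// plus a using for the namespace that contains the Todos component

public class TodosComponentTests : TestContext
{
    // Hypothetical fakes standing in for the real Application handlers.
    private sealed class FakeGetItems : IQueryHandler<GetItemsQuery, IReadOnlyList<TodoItemDto>>
    {
        public Task<IReadOnlyList<TodoItemDto>> Handle(GetItemsQuery query, CancellationToken ct)
            => Task.FromResult<IReadOnlyList<TodoItemDto>>(
                new[] { new TodoItemDto(Guid.NewGuid(), "Buy milk", false) });
    }

    private sealed class FakeAddItem : ICommandHandler<AddItemCommand>
    {
        public Task Handle(AddItemCommand command, CancellationToken ct) => Task.CompletedTask;
    }

    [Fact]
    public void Renders_items_from_the_query_handler()
    {
        Services.AddScoped<IQueryHandler<GetItemsQuery, IReadOnlyList<TodoItemDto>>, FakeGetItems>();
        Services.AddScoped<ICommandHandler<AddItemCommand>, FakeAddItem>();

        var cut = RenderComponent<Todos>(p => p.Add(x => x.ListId, Guid.NewGuid()));

        Assert.Contains("Buy milk", cut.Markup);
    }
}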

Common mistakes (and quick fixes)

  • Putting EF types in Domain — move them to Infrastructure, use owned types for Value Objects.
  • Fat components with business rules — create Commands/Queries and handlers; inject them.
  • Anemic Domain (all rules in handlers) — move invariants into Entities/Value Objects.
  • Shared DbContext in UI — hide behind ITodoRepository.
  • Leaking domain entities to UI — map to DTOs in Application.
  • One handler that does everything — split by use case, keep handlers short and focused.

When to pick CQRS in Blazor

You don’t need two databases or a full event store to get value. Start simple:

  • Separate types and handlers for reads vs writes
  • Reads can bypass aggregates (project straight from EF to DTOs)
  • Writes go through aggregates to enforce rules

If reads get heavy, add pagination and projections. If writes get complex, domain events and outbox can keep things in sync.
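
If you do add pagination, it can stay on the read side. Here is a sketch of a paginated projection added to TodoRepository (a hypothetical extension, with a matching method added to ITodoRepository):

// Hypothetical addition to TodoRepository; page is 1-based.
public async Task<IReadOnlyList<TodoItemDto>> GetItemsPageAsync(
    Guid listId, int page, int pageSize, CancellationToken ct)
{
    return await _db.Items
        .Where(i => EF.Property<Guid>(i, "TodoListId") == listId)
        .OrderBy(i => i.Title.Value)   // stable order before paging
        .Skip((page - 1) * pageSize)
        .Take(pageSize)
        .Select(i => new TodoItemDto(i.Id, i.Title.Value, i.IsDone))
        .ToListAsync(ct);
}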

Checklist to apply in an existing project

  1. Create Domain and Application projects; move rules into them.
  2. Extract interfaces the UI depends on (repositories, senders).
  3. Move EF Core code into Infrastructure, map Value Objects as owned types.
  4. Introduce Commands and Queries for the top 3 flows.
  5. Add tests: start with Value Objects and one handler.
  6. Keep components thin; no data access, no business rules.
  7. Wire DI in Program.cs; the UI remains a client of Application.

Tape this list on the wall. Review each PR against it.

Conclusion: a simple structure that keeps you fast

Clean Architecture in Blazor is not theory; it’s a set of small rules that keep changes cheap. Keep the UI thin, push rules into Domain, and let Application coordinate with clear Commands and Queries. Try the skeleton above on one feature this week. If it feels simpler, spread it to the rest of the app. And now it’s your turn: what part of your Blazor app will you refactor first? Leave a comment — I read every one.

🔗Want to get in touch? Find me on LinkedIn

Read the whole story
alvinashcraft
5 hours ago
Pennsylvania, USA