Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

0.0.412-1


Fixed

  • Mouse event coordinate fragments no longer appear in input field
Read the whole story
alvinashcraft
just a second ago
reply
Pennsylvania, USA
Share this story
Delete

Read Replicas Are NOT CQRS (Stop Confusing This)


What’s overengineering? Are the outbox pattern, CQRS, and event sourcing overengineering? Some would say yes. The issue is your definition: if you have that wrong, you’re making the wrong trade-offs.

YouTube

Check out my YouTube channel, where I post all kinds of content on Software Architecture & Design, including this video showing everything in this post.

“The outbox pattern is only used in finance applications where consistency is a must. Otherwise, it’s just overengineering.”

Not exactly.

“CQRS is overengineering and rarely used even at very high scale companies. One master DB for writes and a bunch of replica DBs for reads are sufficient.”

No. And it has nothing to do with scaling.

“Event sourcing, another overengineering term, but in reality, most production systems do not implement strict event sourcing as described in books and system design articles. In the practical world, only current state is stored in the primary DB and events and business metrics are stored in an analytics DB.”

The giveaway that this is wrong is the discussion of business metrics related to event sourcing.

In the “practical world”, I’ll give some examples where event sourcing is natural.

Let’s go through them one by one, explain what they are, and when you should be using them.

The Outbox Pattern

Is it about finance? It has nothing to do with finance. Is it about consistency? Yes, that part is correct. It’s really a solution to a dual write problem.

Here’s the dual write problem.

You have your application. Some action gets invoked. You persist a state change in your system. That’s the first write.

The second write is publishing an event: a message to your message broker so other parts of your system know the change occurred.

Here’s the issue. It fails in between. So you do the state change. Everything passes. Everything is saved. Transaction is good. But then you fail to publish the message to your message broker. Now you’re inconsistent. Your state change happened, but the event never got published.
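A minimal sketch of that failure mode, using in-memory stand-ins (FakeDb, FlakyBroker, and the OrderPlaced event are illustrative names, not from any real library):

```python
class FakeDb:
    """Stand-in for a database that commits state changes."""
    def __init__(self):
        self.orders = []

    def save(self, order_id):
        self.orders.append(order_id)


class FlakyBroker:
    """Stand-in for a message broker that can be unavailable."""
    def __init__(self):
        self.published = []
        self.down = False

    def publish(self, event):
        if self.down:
            raise ConnectionError("broker unavailable")
        self.published.append(event)


def place_order(db, broker, order_id):
    db.save(order_id)                          # write 1: state change commits
    broker.publish(("OrderPlaced", order_id))  # write 2: can fail independently


db, broker = FakeDb(), FlakyBroker()
place_order(db, broker, 1)          # both writes succeed

broker.down = True
try:
    place_order(db, broker, 2)      # state is saved, but the event is lost
except ConnectionError:
    pass
# db.orders now holds orders 1 and 2, but only order 1's event was published:
# the system is inconsistent.
```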

Is it a big deal if you fail to publish that event? It depends what you’re using the event for, and what downstream services care about. If it’s best effort metrics or analytics, it might not be a big deal.

If it’s part of a workflow, it can be a much bigger deal. You want that consistency, and that’s where the outbox comes in.

So how do you solve the dual write problem? Like most problems, don’t have it in the first place.

To solve the dual write problem, we’re going to have a single write. That means you persist your state to your database and within the same transaction you persist the message to an outbox table in that same database.

Separately, you have a publisher that queries the outbox table, pulls messages that need to be published, and pushes them to your message broker. If it succeeds, it reaches back to the database and marks the message as completed or deletes it from the outbox table.

If there’s a failure, you retry. You haven’t lost any messages you wanted to publish.
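Here is a rough sketch of that flow using SQLite; the orders and outbox schema, the OrderPlaced payload, and the publisher function are all illustrative, not a prescribed implementation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY, payload TEXT, "
             "published INTEGER DEFAULT 0)")

def place_order(order_id):
    # Single transaction: the state change and the outbox message
    # commit together, so there is no dual write to get out of sync.
    with conn:
        conn.execute("INSERT INTO orders (id, status) VALUES (?, 'placed')",
                     (order_id,))
        conn.execute("INSERT INTO outbox (payload) VALUES (?)",
                     (f"OrderPlaced:{order_id}",))

def publish_pending(broker_publish):
    # Separate publisher: pull unpublished messages, push them to the
    # broker, then mark them completed. If publishing raises, the row
    # stays pending and is simply retried later.
    rows = conn.execute(
        "SELECT id, payload FROM outbox WHERE published = 0").fetchall()
    for msg_id, payload in rows:
        broker_publish(payload)
        with conn:
            conn.execute("UPDATE outbox SET published = 1 WHERE id = ?",
                         (msg_id,))

sent = []                    # stand-in for the message broker
place_order(1)
publish_pending(sent.append)
```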

So is the outbox pattern overengineering? It totally depends on your use case.

If you’re using events as a statement of fact that something occurred within your system and other parts of your system need to know it happened, then it’s probably not overengineering.

If you’re using events as best effort analytics and it’s totally fine if some events aren’t published because nobody depends on them, and lost messages are fine, then yes, it’s overengineering.

One side note: if you’re using a messaging library, it probably already supports the outbox pattern.

CQRS

“CQRS is overengineering and rarely used even at very high scale companies. One master DB for writes and a bunch of replica DBs for reads are sufficient.”

This is confusing two things entirely. It’s talking about scaling at the read/write database level, when in reality CQRS is about your application design.

CQRS literally stands for Command Query Responsibility Segregation. Commands change state. Queries read state.

That has nothing to do with databases. One database, two databases, whatever the case may be.

This is about having two different code paths for different responsibilities.

But since scaling was brought up, especially on the query side, that’s the angle I want to tackle. In a lot of query heavy systems, you often have to do a lot of composition.

That composition could be to a single database, multiple databases, a cache, whatever. But you’re making multiple calls to different places to compose data together to return to a client.

Because a lot of systems experience this, people create views or materialized views so you’re not doing all of that composition at runtime.

Instead, you have a separate table, a view, a different collection, a different object, something that represents what’s optimized for a specific query.

Example: an order and line items.

Maybe instead of joining tables and calculating totals on every request, you have a view that does it.

Or you have a materialized view that’s persisted and updated every time there’s a state change to an order.

So when you make a state change, your command updates your write side. Maybe that’s a relational database with normalized tables. And because you have a materialized view, you update that too. That could be in the same transaction. Then when a query comes in, you read directly from the materialized view.
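A minimal sketch of those two code paths, assuming a SQLite store with a hypothetical order_summary table playing the role of the materialized view, kept in sync inside the command’s transaction:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE order_lines (order_id INTEGER, qty INTEGER, price REAL);
CREATE TABLE order_summary (order_id INTEGER PRIMARY KEY, total REAL);
""")

def add_line(order_id, qty, price):
    # Command path: write the normalized line item AND update the
    # denormalized read model in the same transaction.
    with conn:
        conn.execute("INSERT INTO order_lines VALUES (?, ?, ?)",
                     (order_id, qty, price))
        conn.execute("""
            INSERT INTO order_summary VALUES (?, ?)
            ON CONFLICT(order_id) DO UPDATE SET total = total + excluded.total
        """, (order_id, qty * price))

def get_order_total(order_id):
    # Query path: read the precomputed total directly.
    # No joins, no runtime composition.
    row = conn.execute("SELECT total FROM order_summary WHERE order_id = ?",
                       (order_id,)).fetchone()
    return row[0]

add_line(1, 2, 10.0)
add_line(1, 1, 5.0)
```

The two functions are the two responsibilities: add_line is a command, get_order_total is a query, and each is free to use whatever storage shape suits it.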

This is all about optimizing reads or writes.

In my example, it’s optimizing reads, using a materialized view.

It doesn’t need to be that at all. It could be a relational database, a document store, a single table, a collection, some object that already contains what you need.

The point is you have different code paths, so you have options.

You could still have your query side do composition and your command side use the exact same database, the exact same schema, and update what it needs to update.

You just have the option to do different solutions if you have different code paths.

So is CQRS overengineering? Not really. You’re likely already doing it in some capacity because you already have different paths for reads and writes.

Where this gets conflated is when you start thinking about it purely from a scaling perspective. If you’re doing a lot of composition and you add read replicas, that’s fine.

But here’s the question.

Are your read replicas eventually consistent?

Because that plays a part in the complexity you’re adding just by adding read replicas. If you want precomputation, materialized views that optimize the query side, that’s a strategy when you actually need to optimize.

Event Sourcing

“In the practical world, only current state is stored in the primary DB and events and business metrics are stored in an analytics DB.”

We’re talking about different things here. Events are facts. What event sourcing is doing is taking those facts and making them the point of truth.

Then you take that point of truth, that series of events, and you can derive current state or any shape of data from any point in time.

Let’s use a practical example because there are a lot of domains that naturally have events. You can just see them. A stream of things that occur.

Here’s a shipment.

You persist these as a stream of events for the unique thing you’re tracking.

Shipment 123 has its own series of events. Another shipment has a different series of events. Those event streams are the point of truth.

You can derive current state from them.

It has nothing to do with analytics, but you can use them for analytics because just like current state, you can turn them into any shape you want.

So if you have an event stream, you can transform it any way you want.

Maybe you transform it into a relational table so analysts can write SQL like “select all shipments dispatched on a particular day”. Or maybe you transform it into a document shape that’s optimized for an application query.

That’s the point.

Your source of truth becomes an append only log of business facts, events. Your state is derived from those events.
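Sketching that idea in plain Python (the shipment events and state shape are illustrative): a fold over the stream yields current state, and the same stream can project into any other shape.

```python
# The stream of events for shipment 123 is the source of truth.
shipment_123 = [
    {"type": "ShipmentLoaded",    "location": "Chicago"},
    {"type": "ShipmentArrived",   "location": "Detroit"},
    {"type": "ShipmentDelivered", "location": "Detroit"},
]

def current_state(events):
    # Derive current state by folding over the facts, in order.
    state = {"status": "pending", "location": None}
    for e in events:
        if e["type"] == "ShipmentLoaded":
            state = {"status": "in_transit", "location": e["location"]}
        elif e["type"] == "ShipmentArrived":
            state = {"status": "arrived", "location": e["location"]}
        elif e["type"] == "ShipmentDelivered":
            state = {"status": "delivered", "location": e["location"]}
    return state

def to_report_row(events):
    # The same stream projected into a different shape,
    # e.g. a flat row an analyst could query.
    return {"delivered": any(e["type"] == "ShipmentDelivered"
                             for e in events)}
```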

A lot of the issues I read about people having with event sourcing are twofold. First, they’re not actually doing event sourcing. They have an event log, but it isn’t the point of truth. Their real database is still current state, and the event log is just “extra”. Or they’re using events as a communication mechanism with other services like a broker, which is a different thing.

Second, there’s a huge difference between facts and CRUD. “Shipment created” is not an event. That’s CRUD. “Shipment dispatched” is an event. Something happened.

“Shipment modified” is not an event. “Shipment loaded”, “shipment arrived”, “shipment delivered”, those are events.

Is event sourcing overengineering? It can be if all you view your system as is CRUD, and that’s how you build systems.

But there are a lot of domains where, once you start seeing it, you naturally see a series of events and it becomes obvious that’s where event sourcing fits.

The Real Point About Overengineering

Everything has trade-offs.

If you do not understand a concept, you cannot evaluate its trade-offs, because you don’t even know what they are.

Join CodeOpinion!
Developer-level members of my Patreon or YouTube channel get access to a private Discord server to chat with other developers about Software Architecture and Design and access to source code for any working demo application I post on my blog or YouTube. Check out my Patreon or YouTube Membership for more info.

The post Read Replicas Are NOT CQRS (Stop Confusing This) appeared first on CodeOpinion.


TX Text Control 34.0 SP2 is Now Available: What's New in the Latest Version

TX Text Control 34.0 Service Pack 2 is now available, offering important updates and bug fixes for all platforms. If you use TX Text Control in your document processing applications, this service pack will enhance the stability and compatibility of your solutions.


What's new for the Microsoft Fluent UI Blazor library 5.0 RC1


Today we are thrilled to publish the first Release Candidate of the Fluent UI Blazor library v5! This major release marks a significant evolution for the library, bringing a new foundation built on top of the latest Fluent UI Web Components, powerful new components, a game-changing Default Values system, and first-class Localization support. 

A New Foundation: Fluent UI Web Components 3

V5 represents a fundamental shift in the underlying rendering layer. The library has moved away from the previous FAST-based web components and now uses the new Fluent UI Web Components v3 — the very same components that power Microsoft’s own products like Microsoft 365, Teams, and Windows 11.

What does this mean for you?

  • Pixel-perfect Fluent 2 design. Your Blazor components now render with the exact same look and feel as Microsoft’s own products. No more subtle visual discrepancies.
  • Better performance. The new web components are lighter and more efficient. The reduced JavaScript footprint leads to faster initial load times and smoother interactions.
  • Improved accessibility. Accessibility is a first-class citizen in the new web component layer, delivering WCAG 2.1 AA compliance out of the box.
  • Reduced bundle size. By removing the legacy FAST dependency, the overall package size has been significantly trimmed.

What changes in your code?

For most existing components, the migration is quite seamless. Your Blazor markup stays the same. The library still exposes the same FluentButton, FluentSelect, and other components you already know. Under the hood, these components now render using the new Fluent UI Web Components.

However, because the web components are now aligned with the Fluent UI React components with regard to attribute names, we needed to make significant changes to a lot of our components as well. We created a lot of documentation to help you migrate.

In addition, we are also providing an MCP Server for this version of the library. It will be a helpful assistant for migration and day-to-day development.

No additional JavaScript or CSS file references are required; the library includes and loads the web component scripts automatically.

Getting started

Install the NuGet packages:

dotnet add package Microsoft.FluentUI.AspNetCore.Components --prerelease
dotnet add package Microsoft.FluentUI.AspNetCore.Components.Icons

Add the namespace to your _Imports.razor:

@using Microsoft.FluentUI.AspNetCore.Components
@using Icons = Microsoft.FluentUI.AspNetCore.Components.Icons

Register the services in Program.cs:

builder.Services.AddFluentUIComponents();

And add the provider component at the end of your MainLayout.razor:

@* Add all FluentUI Blazor Providers *@
<FluentProviders />

That’s it. You are now ready to start building your applications with the Fluent UI Blazor library v5.

The New FluentLayout Component

V5 introduces a completely redesigned FluentLayout component that serves as the structural backbone for your application. Rather than manually composing CSS Grid or Flexbox layouts, FluentLayout provides a declarative, area-based system for building full-page layouts with a fixed header, navigation panel, content area, and footer.

Area-based layout

The key concept is a FluentLayoutItem with a LayoutArea. You declare where each piece of content should go, and the layout engine handles the rest:

@inherits LayoutComponentBase

<FluentLayout>

    <!-- Header -->
    <FluentLayoutItem Area="@LayoutArea.Header">
        <FluentStack VerticalAlignment="VerticalAlignment.Center">
            <FluentLayoutHamburger />
            <FluentText Weight="TextWeight.Bold"
                        Size="TextSize.Size400">
                My Application
            </FluentText>
            <FluentSpacer />
        </FluentStack>
    </FluentLayoutItem>

    <!-- Navigation -->
    <FluentLayoutItem Area="@LayoutArea.Navigation"
                      Width="250px">
        <NavMenu />
    </FluentLayoutItem>

    <!-- Content -->
    <FluentLayoutItem Area="@LayoutArea.Content"
                      Padding="@Padding.All3">
        @Body
    </FluentLayoutItem>

    <!-- Footer -->
    <FluentLayoutItem Area="@LayoutArea.Footer">
        Powered by Microsoft Fluent UI Blazor
    </FluentLayoutItem>

</FluentLayout>

<FluentProviders />

Built-in hamburger menu

Notice the FluentLayoutHamburger component in the header. It renders a hamburger button that automatically toggles the navigation panel, complete with smooth animations and responsive behavior. No extra JavaScript needed.

Why this matters

  • Consistent structure. Every page in your application follows the same layout contract. No more layout drift.
  • Responsive by design. The navigation panel collapses automatically on smaller screens.
  • Typed areas. Using LayoutArea.Header, LayoutArea.Navigation, LayoutArea.Content, and LayoutArea.Footer ensures you can’t misplace content.
  • Customizable sizing. Control the width of the navigation panel, padding of content areas, and more through simple parameters.

For more layout examples, visit the FluentLayout documentation.

Default Values

One of the most important and eagerly awaited features in v5 is the Default Values system. This powerful mechanism lets you define global default parameter values for any Fluent UI component — once — and have them applied everywhere in your application.

The problem it solves

In earlier versions, if you wanted all your buttons to use a specific appearance, you had to set it on every single instance:

<FluentButton Appearance="ButtonAppearance.Primary">Save</FluentButton>
<FluentButton Appearance="ButtonAppearance.Primary">Submit</FluentButton>
<FluentButton Appearance="ButtonAppearance.Primary">Confirm</FluentButton>

This was tedious, error-prone, and a maintenance nightmare. Making a design change meant touching potentially hundreds of files.

The solution

With v5, you can configure default values globally in your Program.cs as follows:

builder.Services.AddFluentUIComponents(config =>
{
    // Set default values for FluentButton component
    config.DefaultValues.For<FluentButton>()
          .Set(p => p.Appearance, ButtonAppearance.Primary);
    config.DefaultValues.For<FluentButton>()
          .Set(p => p.Shape, ButtonShape.Circular);
});

Now every FluentButton in your application automatically uses the Primary appearance and Circular shape. Unless you explicitly override it on a specific instance:

<!-- Uses global defaults (Primary + Circular) -->
<FluentButton>Save</FluentButton>
<!-- Override just the appearance for this instance -->
<FluentButton Appearance="ButtonAppearance.Outline">Cancel</FluentButton>

Key benefits

  • Single source of truth. Define your design decisions once, apply them everywhere.
  • Easy branding. Switch your entire application’s look by changing a few lines in Program.cs.
  • Explicit overrides. Any component instance can still override defaults with a local parameter.
  • Strongly typed. The API uses lambda expressions — full IntelliSense, no magic strings.
  • Maintainable. Changing a design decision no longer requires a massive find-and-replace.

This feature is particularly powerful for organizations that need consistent component styling across large applications or shared design systems.

Localization Support

V5 introduces a built-in Localization system that makes it easy to translate all component-internal strings — button labels, ARIA attributes, accessibility texts, and more — into any language.

How it works

The library ships with English strings by default. To localize components, implement the IFluentLocalizer interface:

using Microsoft.FluentUI.AspNetCore.Components;

public class CustomFluentLocalizer : IFluentLocalizer
{
    public string this[string key, params object[] arguments]
    {
        get
        {
            return key switch
            {
                "SomeKey" => "Your Custom Translation",
                "AnotherKey" => string.Format("Another Translation {0}", arguments),

                // Fallback to English if no translation is found
                _ => IFluentLocalizer.GetDefault(key, arguments),
            };
        }
    }
}

Register it in Program.cs:

builder.Services.AddFluentUIComponents(config =>
    config.Localizer = new CustomFluentLocalizer());

You can also use your embedded resources (.resx) for multilingual support. Everything is documented on our website. The component documentation pages give an overview of the strings that can be localized. The earlier mentioned MCP server can be of help there as well.

Try It Now

  • Package: Microsoft.FluentUI.AspNetCore.Components (prerelease) on NuGet
  • Documentation: https://fluentui-blazor-v5.azurewebsites.net
  • GitHub (v5 branch): https://github.com/microsoft/fluentui-blazor/tree/dev-v5
  • Migration guide: Migration to v5

Final remarks

This is our first Release Candidate. The API surface is fairly stable.

We are counting on the community to help us identify any remaining issues! Please file issue reports on GitHub (with ready-to-run reproduction code), and don’t hesitate to contribute.

Several components are still missing. We are currently working on those, and they will come with the next RCs. See the dev-v5 TODO List for an overview.

A big 'thank you' to everyone who has already contributed, tested, and provided feedback throughout the (long) v5 development cycle. We’re incredibly excited about this release and can’t wait to see what you build with it!


Octopus Easy Mode - Tenant Templates


In the previous post, you added a library variable set to share variables between projects. In this post, you’ll define tenant template variables in a library variable set. This ensures that tenants linked to projects that consume the library variable set all contribute their own values to the templates.

Prerequisites

  • An Octopus Cloud account. If you don’t have one, you can sign up for a free trial.
  • The Octopus AI Assistant Chrome extension. You can install it from the Chrome Web Store.

The Octopus AI Assistant will work with an on-premises Octopus instance, but it requires more configuration. The cloud-hosted version of Octopus doesn’t need extra configuration. This means the cloud-hosted version is the easiest way to get started.

Creating the project

Paste the following prompt into the Octopus AI Assistant and run it:

Create a Script project called "09. Script App with Library Variable Set and Tenant Templates", and then:
* Create a library variable set called "Tenant Settings" with the variable "TenantNamespace" as a tenant template variable
* Link the library variable set to the project
* Create two tenants called "Tenant A" and "Tenant B"
* Link the tenants to the project
* Define the "TenantNamespace" tenant template variable for each tenant with a value of "TenantA" and "TenantB", respectively
* Change the script step to echo the values of the variables using the syntax "#{TenantNamespace}"
* Require tenanted deployments for the project. Do not allow untenanted deployments.

The library variable set has now been created and linked to the project. It defines a tenant template variable, with each tenant providing its own value.

Tenant variable templates let projects use tenant-specific values without defining variables in each project.

Library variable tenant template values

You can now create a release and deploy it to the first environment. The script step prints out the values of the variables defined by each tenant.

What just happened?

You created a sample project with:

  • A library variable set linked to it containing a tenant template variable.
  • Two tenants linked to it, each providing their own value for the tenant template variable.
  • A script step that echoes the value of the tenant template variable using variable substitutions.

What’s next?

The next step is to define project tenant variables.


Critter Stack Roadmap Update for 1st Quarter 2026


That is an American Polecat (black-footed ferret), our avatar for our newest Critter Stack project.

Mostly for my own sake, to collect my own thoughts, I wanted to do a little update on the Critter Stack roadmap as it looks right now. This is an update on Critter Stack Roadmap for 2026 from December. Things have changed a little bit, or really just become more clear. While the rest of the Critter Stack core team were early adopters of AI tools, I was late to the party. Two weeks into my own adoption of Claude Code, though, my ambition for the year has hugely expanded, and this new update will reflect that.

Also, we’ve delivered an astonishing amount of new functionality in the first six weeks of 2026:

  • Marten’s new composite projection capability that is already getting usage. This feature is going to hopefully make it much easier to create denormalized “query model” projections with Marten to support reporting and dashboard screens
  • Wolverine got rate limiting middleware support (community built feature)
  • Wolverine’s options for transactional middleware, inbox, outbox, and scheduled messaging support grew to include Oracle, MySql, Sqlite, and CosmosDb. Weasel support for Critter Stack style “it just works” migrations was added for Oracle, MySql, and Sqlite as well

Short to Medium Term Roadmap

I think we are headed toward a Marten 9.0 and Wolverine 6.0 release this year, but I think that’s 2nd or even 3rd quarter this year.

CritterWatch

My personal focus (i.e. JasperFx’s) is switching to CritterWatch as of today. We have a verbal agreement with a JasperFx Software client to have a functional CritterWatch MVP in their environment by the end of March 2026, so here we go! More on this soon, as I’ll probably do quite a bit of thinking and analysis out loud about how this should function. The MVP scope is still this:

  • A visualization and explanation of the configuration of your Critter Stack application
  • Performance metrics integration from both Marten and Wolverine
  • Event Store monitoring and management of projections and subscriptions
  • Wolverine node visualization and monitoring
  • Dead Letter Queue querying and management
  • Alerting – but I don’t have a huge amount of detail yet. I’m paying close attention to the issues JasperFx clients see in production applications though, and using that to inform what information Critter Watch will surface through its user interface and push notifications

Marten 8.*

I think that Marten 8.* has just about played out and there’s only a handful of new features I’m personally thinking about before we effectively turn the page on Marten 8.*:

  1. First Class EF Core Projections. Just the ability to use an EF Core DbContext to write projected data with EF Core. I’ve thought that this would further help Marten users with reporting needs.
  2. An ability to tag event streams with user-defined “natural keys”, and efficient mechanisms to use those natural keys in APIs like FetchForWriting() and FetchLatest(). This will be done in conjunction with Wolverine’s “aggregate handler workflow.” This has been occasionally requested and on our roadmap for a couple years, but it moves up now because of some ongoing client work

Add in some ongoing improvements to the new “composite projection” feature and some improvements to the robustness of the Async Daemon subsystem and I think that’s a wrap on Marten 8.

One wild card is that Marten will gain some kind of model for Dynamic Consistency Boundaries (DCB) this year. I’m not sure whether I think that could or should be done in 8.* or wait for 9.0 though. I was initially dubious about DCB because it largely seemed to be a workaround for event store tools that can’t support strong consistency between event streams the way that Marten can. I’ve come around to DCB a little bit more after reviewing some JasperFx client code where they need to do quite a few cross-stream operations and seeing some opportunity to reduce repetitive code. This will be part of an ongoing process of improving the full Critter Stack’s ability to express cross-stream commands and will involve the integration into Wolverine as well.

Wolverine 5.*

Wolverine has exploded in development and functionality over the past three months, but I think that’s mostly played out as well. Looking at the backlog today, it’s mostly small ball refinements here and there. As mentioned before, I think Wolverine will be part of the improvements to cross-stream operations with Marten as well.

Wolverine gets a lot of community contributions though, and that could continue as a major driver of new features.

Introducing Polecat!

After 10 years of people sagely telling us that Marten would be much more popular if only it supported SQL Server, let’s all welcome Polecat to the Critter Stack. Polecat is going to be a SQL Server Backed Event Store and Document Db tool within the greater Critter Stack ecosystem. As you can imagine, Polecat is very much based on Marten with some significant simplifications. Right now the very basic event sourcing capabilities are already in place, but there’s plenty more to do before I’d suggest using it in a production application.

The key facts about its approach so far:

  • Supply a robust Event Store functionality using SQL Server as the storage mechanism
  • Mimics Marten’s API, and it’s likely some of the public API ends up being standardized between the two tools
  • Uses the same JasperFx.Events library for event abstractions and projection or subscription base types
  • Uses Weasel.SqlServer for automatic database migrations similar to Marten
  • Supports the bigger Critter Stack “stateful resource” model with Weasel to build out schema objects
  • Support both conjoined and separate database multi-tenancy
  • Projections will be based on the model in JasperFx.Events and supply SingleStreamProjection, MultiStreamProjection, EventProjection, and FlatTableProjection right out of the box
  • STJ only for the serialization. No Newtonsoft support this time
  • QuickAppend will be the default event appending approach
  • Only support .NET 10
  • Only support SQL Server 2025 (v17)
  • Utilize the new SQL Server JSON type much like Marten uses the PostgreSQL JSONB
  • Strictly using source generators instead of the Marten code generation model — but let’s call this an experiment for now that might end up moving to Marten 9.0 later on

I blew a tremendous amount of time in late 2024 and throughout 2025 getting ready to do this work by pulling out much of the guts of Marten Event Sourcing into potentially reusable libraries, and Polecat is the result.

Selfishly, the CritterWatch approach requires its own event sourced persistence, and I’m hoping that Polecat and SQL Server could be used as an alternative to Marten and PostgreSQL for shops that are interested in CritterWatch but don’t today use PostgreSQL.

Marten 9.0 and Wolverine 6.0

There will be major version releases of the two main critters later this year. The main goal of these releases will be all about optimizing the cold start time of the two tools and at least moving closer to true AOT compliance. We’ll be reevaluating the code generation model of both tools as part of this work.

The only other concrete detail we know is that these releases will dump .NET 8.0 support.

Summary

The road map changes all the time based on what issues clients and users are hitting and sometimes because we just have to stop and respond to something Microsoft or other technologies are doing. But at least for this moment, this is what the Critter Stack core team and I are thinking about.


