
What’s new in Next.js 16: Turbo Builds, Smart Caching, AI Debugging


TL;DR: Next.js 16 stabilizes Turbopack for 2–5× faster builds, introduces Cache Components for hybrid static/dynamic rendering, adds AI debugging via MCP in DevTools, and ships the now-stable React Compiler, routing improvements, and breaking changes such as the Node.js 20+ requirement, all aimed at boosting developer efficiency.

Next.js 16 is a major update that aims to improve both developer productivity and application performance. It’s especially important in 2026 because many previously experimental features have now matured into a stable, performance‑first architecture capable of supporting complex modern web demands. With Turbopack as the default bundler and a more explicit caching model, you can expect faster builds, more consistent behavior, and improved reliability across environments.

In this guide, we’ll walk through what’s new in Next.js 16 and how you can use these features to streamline your development workflow.

Turbopack: The Rust-based bundler

Turbopack is now the stable, high-performance successor to Webpack. Built in Rust, it focuses on optimizing build speeds and development responsiveness. As of Next.js 16, it becomes the default bundler for all new projects, offering near‑instant feedback during local development and substantially faster production pipelines.

Here are the key improvements Turbopack brings:

  • 10x faster fast refresh: Code updates appear almost instantly in the browser, helping maintain developer flow.
  • 2x to 5x faster production builds: This greatly reduces CI/CD bottlenecks.
  • Filesystem caching: Compiled artifacts are stored on disk to speed up repeated runs.

To enable file system caching for even faster development restarts, add the following code to your next.config.ts:

typescript
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
    experimental: {
        turbopackFileSystemCacheForDev: true,
    },
};
 
export default nextConfig; 

If you prefer to continue using Webpack, you can switch back using commands such as next dev --webpack or by setting turbopack: false in your configuration.

typescript
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
    turbopack: false,
};

export default nextConfig; 

A to Z about Syncfusion’s versatile React components and their feature set.

Next.js 16 Turbopack

Cache components: Explicit performance control

Cache components introduce a new explicit, opt-in caching model powered by the use cache directive. Unlike the older implicit caching system, this model gives developers fine‑grained control over exactly what gets cached and for how long. As a result, dynamic code now executes at request time by default, unless you deliberately mark it for caching, making application behavior more predictable and easier to reason about.

Installation

To enable Cache Components in your project, follow these steps:

Step 1: Open the configuration file

Locate and open your next.config.ts file in the project root.

Step 2: Enable Cache components

Add the property cacheComponents: true to your configuration object. Here’s how you can do it in code:

typescript
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
    cacheComponents: true,
};

export default nextConfig;

Step 3: Add the use cache directive

Apply the use cache directive at the top of functions or components you wish to cache.

Add this to your project:

 // File level - When used at the file level, all function exports must be async functions.
'use cache'
 
export default async function Page() {
    // ...
}
 
// Component level
export async function MyComponent() {
    'use cache'
    return <></>
}
 
// Function level
export async function getData() {
    'use cache'
    const data = await fetch('/api/data')
    return data
}

Next.js 16 enhances its caching APIs for better control. Enhanced APIs include:

  • revalidateTag(tag, profile): Now takes a required cacheLife profile, such as ‘hours,’ to fine-tune stale-while-revalidate timing.
  • updateTag(tag): Immediately refreshes cache data inside Server Actions.
  • refresh(): Re-fetches uncached data without invalidating any existing cache entries.

These additions make caching more intentional, maintainable, and predictable, especially in dynamic, data‑heavy applications.
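
Here’s a minimal sketch of how these APIs might be used together in Server Actions; the ‘products’ tag and the API endpoint are hypothetical:

typescript
'use server'

import { revalidateTag, updateTag } from 'next/cache'

export async function renameProduct(id: string, name: string) {
    await fetch(`https://api.example.com/products/${id}`, {
        method: 'PATCH',
        body: JSON.stringify({ name }),
    })

    // The user who made the change should see it immediately,
    // so expire and refresh the 'products' cache entries right away
    updateTag('products')
}

export async function importProducts() {
    // A bulk import can tolerate brief staleness: mark the entries stale
    // and let them revalidate in the background on the 'hours' profile
    revalidateTag('products', 'hours')
}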
Cache Components control

Next.js DevTools MCP: AI-assisted debugging

The Model Context Protocol (MCP) integration enables Next.js DevTools to share detailed application context with AI coding assistants. This provides AI agents with a deep understanding of your app’s routing, caching, and rendering behavior, enabling them to analyze issues more accurately and provide more useful guidance during development.

Key features

  • Context-aware insights: AI agents can access unified browser and server logs without context switching.
  • Automatic error access: Agents receive detailed stack traces and active route data automatically.
  • Fix suggestions: AI can suggest specific code corrections based on real-time application metadata.
AI‑Powered Debugging with Next.js DevTools MCP

Other notable features and improvements

React compiler support

Next.js 16 includes full support for the React Compiler, a build-time optimization tool that is now stable. The compiler automatically optimizes your code by memoizing components and hooks. This eliminates the need for manual performance optimizations like useMemo and useCallback, reducing code complexity while preventing unnecessary re-renders.

To enable it, refer to the code below.

typescript
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
    reactCompiler: true,
};

export default nextConfig;
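
With the compiler enabled, a component like the following sketch is memoized automatically at build time; the component and its filtering work are illustrative:

typescript
export function ProductList({ items, query }: { items: string[]; query: string }) {
    // Without the compiler you might wrap this in useMemo(() => ..., [items, query]);
    // the React Compiler inserts equivalent memoization for you
    const visible = items.filter((item) => item.includes(query));

    return (
        <ul>
            {visible.map((item) => (
                <li key={item}>{item}</li>
            ))}
        </ul>
    );
}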

Enhanced routing

Routing performance receives significant improvements in Next.js 16, offering faster navigation and reduced network overhead. These gains are driven by two major enhancements.

First, layout deduplication ensures that shared layouts are downloaded only once during prefetching, preventing redundant network requests. Additionally, incremental prefetching retrieves only the segments that have not already been cached, resulting in more efficient data loading.

Together, these features create a smoother and more responsive routing experience. The following sequence diagram illustrates these enhancements in action, showing how Next.js 16 optimizes the routing process by:

  • Initial prefetch (/settings): The client checks the cache for the shared layout. If missing, it requests and stores it, then fetches only uncached unique segments, showing incremental behavior.
  • Subsequent prefetch (/profile): Reuses the cached layout (deduplication, no re-download), and again requests only uncached segments, highlighting efficiency and no redundant data.
Sequence diagram illustrating layout deduplication and incremental prefetching in Next.js 16 enhanced routing

proxy.ts replaces middleware.ts

Next.js 16 replaces the former middleware.ts file with the new Node.js-native proxy.ts. This update clarifies where request interception occurs and makes the network boundary of your application more explicit. Importantly, the core logic handling redirects and authentication remains unchanged.

Here is an example migration:

typescript
// Old: middleware.ts
export function middleware(req) { ... }

// New: proxy.ts
export function proxy(req) { ... }
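
As a fuller sketch, here’s a hypothetical proxy.ts that redirects unauthenticated dashboard traffic. The ‘session’ cookie name and routes are assumptions, and the matcher follows the same convention middleware used:

typescript
import { NextResponse, type NextRequest } from "next/server";

export function proxy(request: NextRequest) {
    // Redirect visitors without a session cookie away from the dashboard
    const session = request.cookies.get("session");
    if (!session && request.nextUrl.pathname.startsWith("/dashboard")) {
        return NextResponse.redirect(new URL("/login", request.url));
    }

    return NextResponse.next();
}

export const config = {
    matcher: ["/dashboard/:path*"],
};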

Logging enhancements

Next.js 16 includes refined logging output to provide more visibility into performance:

  • Dev logs now break down compile and render times.
  • Build logs display step-by-step timings.
Next.js 16 (Turbopack)
 
 ✓ Compiled successfully in 615ms
 ✓ Finished TypeScript in 1114ms
 ✓ Collecting page data in 208ms
 ✓ Generating static pages in 239ms
 ✓ Finalizing page optimization in 5ms

Simplified project setup

create-next-app now defaults to a modern best‑practice stack, including:

  • App Router.
  • TypeScript.
  • Tailwind CSS.
  • ESLint.

This makes new projects more consistent and production-ready out of the box.
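
To scaffold a project with these defaults, run:

bash
npx create-next-app@latest my-app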

Build adapters API (Alpha)

Next.js 16 introduces a new Build Adapters API, enabling developers to customize builds for specific hosting platforms. This is currently available through the experimental.adapterPath option.
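
Here’s a minimal sketch of wiring one up, assuming experimental.adapterPath accepts a path to a local adapter module; the ./my-adapter.ts file is hypothetical, and the adapter interface itself is still alpha:

typescript
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
    experimental: {
        adapterPath: "./my-adapter.ts",
    },
};

export default nextConfig;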

React 19.2 features

Next.js 16 ships with React 19.2, which includes View Transitions for smooth animations, useEffectEvent for non-reactive effect logic, and the Activity component for showing and hiding parts of the UI while preserving their state.
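
Here’s a minimal sketch of useEffectEvent; the connect() helper is hypothetical, but the pattern shows how the effect event always reads the latest theme without theme re-triggering the effect:

typescript
import { useEffect, useEffectEvent } from "react";

// Hypothetical connection helper; returns a cleanup function
declare function connect(roomId: string, onConnected: () => void): () => void;

export function ChatRoom({ roomId, theme }: { roomId: string; theme: string }) {
    const onConnected = useEffectEvent(() => {
        // Always sees the latest theme, yet theme is not an effect dependency
        console.log(`Connected to ${roomId} with the ${theme} theme`);
    });

    useEffect(() => {
        // Reconnects only when roomId changes
        return connect(roomId, onConnected);
    }, [roomId]);

    return <p>Current room: {roomId}</p>;
}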

Breaking changes and upgrade guide

Next.js 16 includes several breaking changes to modernize the framework:

  • Requirements: Node.js 20.9+, TypeScript 5.1+; drops Node.js 18 support.
  • Removals: AMP, next lint, runtime configs, various experimental flags.
  • Async access: params, searchParams, cookies(), and similar APIs must now be awaited (see the sketch after this list).
  • Deprecations: middleware.ts, images.domains.
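
Here’s a minimal sketch of the async access pattern; the route and prop shapes are illustrative:

typescript
import { cookies } from "next/headers";

export default async function Page({
    params,
}: {
    params: Promise<{ slug: string }>;
}) {
    const { slug } = await params;       // params must now be awaited
    const cookieStore = await cookies(); // cookies() is async as well
    const theme = cookieStore.get("theme")?.value ?? "light";

    return <h1>{slug} ({theme})</h1>;
}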

To upgrade

To upgrade to the latest version of Next.js, run the following command.

bash
npx @next/codemod@canary upgrade latest
npm install next@latest react@latest react-dom@latest

Conclusion

Thank you for reading! Next.js 16 provides a reliable and future-ready foundation for web development in 2026 by focusing on explicit control and high performance. By adopting Turbopack for speed, Cache Components for clarity, and the React Compiler for efficiency, you can build applications that are both easier to maintain and faster for your users. These tools represent the current gold standard for full-stack React development, offering the stability required for professional, large-scale projects.

If you have any questions, contact us through our support forum, support portal, or feedback portal. We are always happy to assist you!


Easier Query Models with Marten

The Marten community made our first big release of the new year with 8.18 this morning. I’m particularly happy with a couple significant things in this release:

  1. We had 8 different contributors in just the last month of work that this release represents
  2. Anne Erdtsieck did a lot to improve our documentation for using our multi-stream projections for advanced query model projections
  3. The entire documentation section on projections got a much needed revamp and now includes a lot more information about capabilities from our big V8 release last year. I’m hopeful that the new structure and content makes this crucial feature set more usable.
  4. We improved Marten’s event enrichment ability within projections to more easily and efficiently incorporate information from outside of the raw event data
  5. The “Composite or Chained Projections” feature has been something we’ve talked about as a community for years, and now we have it

The one consistent theme in those points is that Marten just got a lot better for our users for creating “query models” in systems.

Let’s Build a TeleHealth System!

I got to be a part of a project like this for a start up during the pandemic. Fantastic project with lots of great people. Even though I wasn’t able to use Marten on the project at that time (we used a hand rolled Event Sourcing solution with Node.JS + TypeScript), that project has informed several capabilities added to Marten in the years since including the features shown in this post.

Just to have a problem domain for the sample code, let’s pretend that we’re building a new online-only TeleHealth system that allows patients to register for an appointment online and get matched up with a healthcare provider for an appointment that day. The system will do all the work of coordinating these appointments as well as tracking how the healthcare providers spend their time.

That domain might have some plain Marten document storage for reference data including:

  • Provider — representing a medical provider (Nurse? Physician? PA?) who fields appointments
  • Specialty — models a medical specialty
  • Patient — personal information about patients who are requesting appointments in our system

Switching to event streams, we may be capturing events for:

  • Board – events modeling a single, closely related group of appointments during a single day. Think of “Pediatrics in Austin, Texas for January 19th”
  • ProviderShift – events modeling the activity of a single provider working in a single Board during a single day
  • Appointment – events recording the progress of an appointment including requesting an appointment through the appointment being cancelled or completed
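
For concreteness, here’s a sketch of what a few of these events might look like as C# records. These shapes are assumptions inferred from the projection code later in this post, not the actual event definitions from the sample application:

public record BoardOpened(string Name, DateTimeOffset Opened, DateOnly Date);
public record BoardFinished(DateTimeOffset Timestamp);
public record BoardClosed(DateTimeOffset Timestamp, string Reason);

// The projections below consume these via enrichment and stream identity
public record ProviderJoined(Guid BoardId, Guid ProviderId);
public record AppointmentAssigned(Guid AppointmentId);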

Better Query Models

The easiest and most common form of a projection in Marten is a simple “write model” that projects the information from a single event stream to a projected document. From our TeleHealth domain, here’s the “self-aggregating” Board:

public class Board
{
    private Board()
    {
    }

    public Board(BoardOpened opened)
    {
        Name = opened.Name;
        Activated = opened.Opened;
        Date = opened.Date;
    }

    public void Apply(BoardFinished finished)
    {
        Finished = finished.Timestamp;
    }

    public void Apply(BoardClosed closed)
    {
        Closed = closed.Timestamp;
        CloseReason = closed.Reason;
    }

    public Guid Id { get; set; }
    public string Name { get; private set; }
    public DateTimeOffset Activated { get; set; }
    public DateTimeOffset? Finished { get; set; }
    public DateOnly Date { get; set; }
    public DateTimeOffset? Closed { get; set; }
    public string CloseReason { get; private set; }
}

Easy money. All the projection has to do is apply the raw event data for that one stream and nothing else. Marten is even doing the event grouping for you, so there’s just not much to think about at all.

Now let’s move on to more complicated usages. One of the things that makes Marten such a great platform for Event Sourcing is that it also has its dedicated document database feature set on top of the PostgreSQL engine. All that means that you can happily keep some relatively static reference data back in just plain ol’ documents or even raw database tables.

To that end, let’s say in our TeleHealth system that we want to just embed all the information for a Provider (think a nurse or a physician) directly into our ProviderShift for easier usage:

public class ProviderShift(Guid boardId, Provider provider)
{
    public Guid Id { get; set; }
    public int Version { get; set; }
    public Guid BoardId { get; private set; } = boardId;
    public Guid ProviderId => Provider.Id;
    public ProviderStatus Status { get; set; } = ProviderStatus.Paused;
    public string Name { get; init; }
    public Guid? AppointmentId { get; set; }

    // I was admittedly lazy in the testing, so I just
    // completely embedded the Provider document directly
    // in the ProviderShift for easier querying later
    public Provider Provider { get; set; } = provider;
}

When mixing and matching document storage and events, Marten has always given you the ability to utilize document data during projections by brute force lookups in your projection code like this:

    public async Task<ProviderShift> Create(
        // The event data
        ProviderJoined joined, 
        IQuerySession session)
    {
        var provider = await session
            .LoadAsync<Provider>(joined.ProviderId);

        return new ProviderShift(joined.BoardId, provider);
    }

The code above is easy to write and conceptually easy to understand, but when the projection is being executed in our async daemon where the projection is processing a large batch of events at one time, the code above potentially sets you up for an N+1 query anti-pattern where Marten has to make lots of small database round trips to get each referenced Provider every time there’s a separate ProviderJoined event.

Instead, let’s use Marten’s recent hook for event enrichment and the new declarative syntax we just introduced in 8.18 today to get all the related Provider information in one batched query for maximum efficiency:

    public override async Task EnrichEventsAsync(SliceGroup<ProviderShift, Guid> group, IQuerySession querySession, CancellationToken cancellation)
    {
        await group

            // First, let's declare what document type we're going to look up
            .EnrichWith<Provider>()

            // What event type or marker interface type or common abstract type
            // we could look for within each EventSlice that might reference
            // providers
            .ForEvent<ProviderJoined>()

            // Tell Marten how to find an identity to look up
            .ForEntityId(x => x.ProviderId)

            // And finally, execute the look up in one batched round trip,
            // and apply the matching data to each combination of EventSlice, event within that slice
            // that had a reference to a ProviderId, and the Provider
            .EnrichAsync((slice, e, provider) =>
            {
                // In this case we're swapping out the persisted event with the
                // enhanced event type before each event slice is then passed
                // in for updating the ProviderShift aggregates
                slice.ReplaceEvent(e, new EnhancedProviderJoined(e.Data.BoardId, provider));
            });
    }

Now, inside the actual projection for ProviderShift, we can use the EnhancedProviderJoined event from above like this:

    // This is a recipe introduced in Marten 8 to just write explicit code
    // to "evolve" aggregate documents based on event data
    public override ProviderShift Evolve(ProviderShift snapshot, Guid id, IEvent e)
    {
        switch (e.Data)
        {
            case EnhancedProviderJoined joined:
                snapshot = new ProviderShift(joined.BoardId, joined.Provider)
                {
                    Provider = joined.Provider, Status = ProviderStatus.Ready
                };
                break;

            case ProviderReady:
                snapshot.Status = ProviderStatus.Ready;
                break;

            case AppointmentAssigned assigned:
                snapshot.Status = ProviderStatus.Assigned;
                snapshot.AppointmentId = assigned.AppointmentId;
                break;

            case ProviderPaused:
                snapshot.Status = ProviderStatus.Paused;
                snapshot.AppointmentId = null;
                break;

            case ChartingStarted charting:
                snapshot.Status = ProviderStatus.Charting;
                break;
        }

        return snapshot;
    }

In the sample above, I replaced the ProviderJoined event being sent to our projection with the richer EnhancedProviderJoined event, but there are other ways to send data to projections with a new References<T> event type that’s demonstrated in our documentation on this feature.

Sequential or Composite Projections

This feature was introduced in Marten 8.18 in response to feedback from several JasperFx Software clients who needed to efficiently create projections that effectively made de-normalized views across multiple stream types and used reference data outside of the events. Expect this feature to grow in capability as we get more feedback about its usage.

Here are a handful of scenarios that Marten users have hit over the years:

  • Wanting to use the build products of Projection 1 as an input to Projection 2. You can do that today by running Projection 1 as Inline and Projection 2 as Async, but that’s imperfect and sensitive to timing. Plus, you might not have wanted to run the first projection Inline.
  • Needing to create a de-normalized projection view that incorporates data from several other projections and completely different types of event streams, but that previously required quite a bit of duplicated logic between projections
  • Looking for ways to improve the throughput of asynchronous projections by doing more batching of event fetching and projection updates by trying to run multiple projections together

To meet these somewhat common needs more easily, Marten has introduced the concept of a “composite” projection where Marten is able to run multiple projections together and possibly divided into multiple, sequential stages. This provides some potential benefits by enabling you to safely use the build products of one projection as inputs to a second projection. Also, if you have multiple projections using much of the same event data, you can wring out more runtime efficiency by building the projections together so your system is doing less work fetching events and able to make updates to the database with fewer network round trips through bigger batches.

In our TeleHealth system, we need to have single stream “write model” projections for each of the three stream types. We also need to have a rich view of each Board that combines all the common state of the active Appointment and ProviderShift streams in that Board including the more static Patient and Provider information that can be used by the system to automate the assignment of providers to open patients (a real telehealth system would need to be able to match up the requirements of an appointment with the licensing, specialty, and location of the providers as well as “knowing” what providers are available or estimated to be available). We probably also need to build a denormalized “query model” about all appointments that can be efficiently queried by our user interface on any of the elements of Board, Appointment, Patient, or Provider.

What we really want is some way to efficiently utilize the upstream products and updates of the Board, Appointment, and ProviderShift “write model” projections as inputs to what we’ll call the BoardSummary and AppointmentDetails projections. We’ll use the new “composite projection” feature to run these projections together in two stages like this:

Before we dive into each child projection, this is how we can set up the composite projection using the StoreOptions model in Marten:

opts.Projections.CompositeProjectionFor("TeleHealth", projection =>
{
    projection.Add<ProviderShiftProjection>();
    projection.Add<AppointmentProjection>();
    projection.Snapshot<Board>();

    // 2nd stage projections
    projection.Add<AppointmentDetailsProjection>(2);
    projection.Add<BoardSummaryProjection>(2);
});

First, let’s just look at the simple ProviderShiftProjection:

public class ProviderShiftProjection: SingleStreamProjection<ProviderShift, Guid>
{
    public ProviderShiftProjection()
    {
        // Make sure this is turned on!
        Options.CacheLimitPerTenant = 1000;
    }

    public override async Task EnrichEventsAsync(SliceGroup<ProviderShift, Guid> group, IQuerySession querySession, CancellationToken cancellation)
    {
        await group
            // First, let's declare what document type we're going to look up
            .EnrichWith<Provider>()

            // What event type or marker interface type or common abstract type
            // we could look for within each EventSlice that might reference
            // providers
            .ForEvent<ProviderJoined>()

            // Tell Marten how to find an identity to look up
            .ForEntityId(x => x.ProviderId)

            // And finally, execute the look up in one batched round trip,
            // and apply the matching data to each combination of EventSlice, event within that slice
            // that had a reference to a ProviderId, and the Provider
            .EnrichAsync((slice, e, provider) =>
            {
                // In this case we're swapping out the persisted event with the
                // enhanced event type before each event slice is then passed
                // in for updating the ProviderShift aggregates
                slice.ReplaceEvent(e, new EnhancedProviderJoined(e.Data.BoardId, provider));
            });
    }

    public override ProviderShift Evolve(ProviderShift snapshot, Guid id, IEvent e)
    {
        switch (e.Data)
        {
            case EnhancedProviderJoined joined:
                snapshot = new ProviderShift(joined.BoardId, joined.Provider)
                {
                    Provider = joined.Provider, Status = ProviderStatus.Ready
                };
                break;

            case ProviderReady:
                snapshot.Status = ProviderStatus.Ready;
                break;

            case AppointmentAssigned assigned:
                snapshot.Status = ProviderStatus.Assigned;
                snapshot.AppointmentId = assigned.AppointmentId;
                break;

            case ProviderPaused:
                snapshot.Status = ProviderStatus.Paused;
                snapshot.AppointmentId = null;
                break;

            case ChartingStarted charting:
                snapshot.Status = ProviderStatus.Charting;
                break;
        }

        return snapshot;
    }
}

Now, let’s go downstream and look at the AppointmentDetailsProjection that will ultimately need to use the build products of all three upstream projections:

public class AppointmentDetailsProjection : MultiStreamProjection<AppointmentDetails, Guid>
{
    public AppointmentDetailsProjection()
    {
        Options.CacheLimitPerTenant = 1000;

        Identity<Updated<Appointment>>(x => x.Entity.Id);
        Identity<IEvent<ProviderAssigned>>(x => x.StreamId);
        Identity<IEvent<AppointmentRouted>>(x => x.StreamId);
    }

    public override async Task EnrichEventsAsync(SliceGroup<AppointmentDetails, Guid> group, IQuerySession querySession, CancellationToken cancellation)
    {
        // Look up and apply specialty information from the document store
        // Specialty is just reference data stored as a document in Marten
        await group
            .EnrichWith<Specialty>()
            .ForEvent<Updated<Appointment>>()
            .ForEntityId(x => x.Entity.Requirement.SpecialtyCode)
            .AddReferences();

        // Also reference data (for now)
        await group
            .EnrichWith<Patient>()
            .ForEvent<Updated<Appointment>>()
            .ForEntityId(x => x.Entity.PatientId)
            .AddReferences();

        // Look up and apply provider information
        await group
            .EnrichWith<Provider>()
            .ForEvent<ProviderAssigned>()
            .ForEntityId(x => x.ProviderId)
            .AddReferences();

        // Look up and apply Board information that matches the events being
        // projected
        await group
            .EnrichWith<Board>()
            .ForEvent<AppointmentRouted>()
            .ForEntityId(x => x.BoardId)
            .AddReferences();
    }

    public override AppointmentDetails Evolve(AppointmentDetails snapshot, Guid id, IEvent e)
    {
        switch (e.Data)
        {
            case AppointmentRequested requested:
                snapshot ??= new AppointmentDetails(e.StreamId);
                snapshot.SpecialtyCode = requested.SpecialtyCode;
                snapshot.PatientId = requested.PatientId;
                break;

            // This is an upstream projection. Triggering off of a synthetic
            // event that Marten publishes from the early stage
            // to this projection running in a secondary stage
            case Updated<Appointment> updated:
                snapshot ??= new AppointmentDetails(updated.Entity.Id);
                snapshot.Status = updated.Entity.Status;
                snapshot.EstimatedTime = updated.Entity.EstimatedTime;
                snapshot.SpecialtyCode = updated.Entity.SpecialtyCode;
                break;

            case References<Patient> patient:
                snapshot.PatientFirstName = patient.Entity.FirstName;
                snapshot.PatientLastName = patient.Entity.LastName;
                break;

            case References<Specialty> specialty:
                snapshot.SpecialtyCode = specialty.Entity.Code;
                snapshot.SpecialtyDescription = specialty.Entity.Description;
                break;

            case References<Provider> provider:
                snapshot.ProviderId = provider.Entity.Id;
                snapshot.ProviderFirstName = provider.Entity.FirstName;
                snapshot.ProviderLastName = provider.Entity.LastName;
                break;

            case References<Board> board:
                snapshot.BoardName = board.Entity.Name;
                snapshot.BoardId = board.Entity.Id;
                break;
        }

        return snapshot;
    }
}

And also the definition for the downstream BoardSummary view:

public class BoardSummaryProjection: MultiStreamProjection<BoardSummary, Guid>
{
    public BoardSummaryProjection()
    {
        Options.CacheLimitPerTenant = 100;

        Identity<Updated<Appointment>>(x => x.Entity.BoardId ?? Guid.Empty);
        Identity<Updated<Board>>(x => x.Entity.Id);
        Identity<Updated<ProviderShift>>(x => x.Entity.BoardId);
    }

    public override Task EnrichEventsAsync(SliceGroup<BoardSummary, Guid> group, IQuerySession querySession, CancellationToken cancellation)
    {
        return group.ReferencePeerView<Board>();
    }

    public override (BoardSummary, ActionType) DetermineAction(BoardSummary snapshot, Guid identity, IReadOnlyList<IEvent> events)
    {
        snapshot ??= new BoardSummary { Id = identity };

        if (events.TryFindReference<Board>(out var board))
        {
            snapshot.Board = board;
        }

        var shifts = events.AllReferenced<ProviderShift>().ToArray();
        foreach (var providerShift in shifts)
        {
            snapshot.ActiveProviders[providerShift.ProviderId] = providerShift;
            if (providerShift.AppointmentId.HasValue)
            {
                snapshot.Unassigned.Remove(providerShift.ProviderId);
            }
        }

        foreach (var appointment in events.AllReferenced<Appointment>())
        {
            if (appointment.ProviderId == null)
            {
                snapshot.Unassigned[appointment.Id] = appointment;
                snapshot.Assigned.Remove(appointment.Id);
            }
            else
            {
                snapshot.Unassigned.Remove(appointment.Id);
                var shift = shifts.FirstOrDefault(x => x.Id == appointment.ProviderId.Value);
                snapshot.Assigned[appointment.Id] = new AssignedAppointment(appointment, shift?.Provider);
            }
        }

        return (snapshot, ActionType.Store);
    }
}

Note the usage of the Updated<T> event types that the downstream projections are using in their Evolve or DetermineAction methods. That is a synthetic event added by Marten to communicate to the downstream projections what projected documents were updated for the current event range. These events are carrying the latest snapshot data for the current event range so the downstream projections can just use the build products without making any additional fetches. It also guarantees that the downstream projections are seeing the exact correct upstream projection data for that point of the event sequencing.

Moreover, the composite “telehealth” projection is reading the event range once for all five constituent projections, and also applying the updates for all five projections at one time to guarantee consistency.

See the documentation on Composite Projections for more information about how this feature fits in with rebuilding, versioning, and non-stale querying.

Summary

Marten has hopefully gotten much better at building “query model” projections that you’d use for bigger dashboard screens or search within your application. We’re hoping that this makes Marten a better tool for real life development.

The best way for an OSS project to grow healthily is having a lot of user feedback and engagement coupled with the maintainers reacting to that feedback with constant improvement. And while I’d sometimes like to have the fire hose of that “feedback” stop for a couple days, it helps drive the tools forward.

The advent of JasperFx Software has enabled me to spend much more time working with our users and seeing the real problems they face in their system development. The features I described in this post are a direct result of engagements with at least four different JasperFx clients in the past year and a half. Drop us a line anytime at sales@jasperfx.net and I’d be happy to talk to you about how we can help you be more successful with Event Sourcing using Marten.




Microspeak: On fire, putting out fires

Remember, Microspeak is not necessarily jargon exclusive to Microsoft, but it’s jargon that you need to know if you work at Microsoft.

When something has gone horribly wrong and requires immediate attention, one way to describe it is to say that it is on fire. The obvious metaphor here is that the situation is so severe that it is as if the office building or computer system was literally on fire.

Here are some citations I found.

I’ll be back in Redmond on Monday. Is anything on fire?

This person is just checking in to see if there are any emergencies.

I think the Nosebleed branch is still on fire.

This person is saying that they think that the Nosebleed branch is still in very bad shape. My sense is that being on fire is worse than being on the floor. If a branch is on the floor, then that probably means that there’s a problem with the build or release process. But if the branch is on fire, it suggests that they have identified some critical issue in the branch, and everybody is scrambling to figure it out and fix it.

While looking for citations, I found the minutes for a meeting titled “What’s on Fire Meetings”, which I guess is a regular meeting to report on whatever disaster is currently unfolding this time.

I even found some citations from my own inbox.

That’s my top item once I can wrap up the work I’m doing for the Nosebleed feature, but Nosebleed is always on fire.

Even the fires are on fire.

There is a channel on our team called “Fires” which is where people report on anything on fire and collaborate on putting out that fire. Putting out fires is the preferred way to say that someone is trying to fix whatever is on fire.

Bonus chatter: Note that this is not the same as saying that a person is “on fire”, which is slang for saying that they are doing exceptionally well.

The post Microspeak: On fire, putting out fires appeared first on The Old New Thing.


Supercharging GenAI Apps with PostgreSQL and Azure AI

Learn how to supercharge your GenAI apps with PostgreSQL and Azure AI. Lino Tadros shows how PGVector and AI Extensions in VS Code turn PostgreSQL into an intelligent foundation for high-performance GenAI systems, solving common RAG pitfalls like irrelevance and hallucination.

Things That Caught My Attention Last Week - January 19


Software Architecture

CQRS and Human Intent by Claudio Lassala

.NET

Stop Building SPAs for Every Screen: htmx + ASP.NET Core Razor Pages Workshop (Open) by Chris Woodruff

Generating SBOM for NuGet packages by Gérald Barré

How to Run Azure Service Bus Locally using .NET Aspire - YouTube by Milan Jovanović

.NET and .NET Framework January 2026 servicing releases updates by Rahul Bhandari (MSFT)

Copilot Memories by Jessie Houghton

How this website is built by Gérald Barré

How does Aspire expose resource connection info to the Azure Functions runtime? by Safia Abdalla

How JasperFx Supports our Customers by Jeremy D. Miller

Solving the Distributed Cache Invalidation Problem with Redis and HybridCache by Milan Jovanović

REST/APIs

A Shiny New OpenAPI Tools by Alexander Karan

Azure

Part 1: Building Your First Serverless HTTP API on Azure with Azure Functions & FastAPI by Richa Gaur

What's New in Azure Repos: Recent Updates by Dan Hellem

Software Development

Two regimes of Git by Mark Seemann

AI

Want better AI outputs? Try context engineering. by Christina Warren

Why LLMs Fail as Sensors (and What Brains Get Right) by Scott Galloway

Code Wave Build Log by Cassidy Williams

Intelligence without a witness by Mike Amundsen

Agents. It Is All APIs. Nothing Has Changed by Kin Lane


CPU-bound Insert Benchmark vs MySQL on 24-core and 32-core servers

This has results for MySQL versions 5.6 through 9.5 with a CPU-bound Insert Benchmark on 24-core and 32-core servers. The workload uses a cached database so it is often CPU-bound but on some steps does much write IO. 

Results from a small server are here and note that MySQL often has large performance regressions at low concurrency from new CPU overhead while showing large improvements at high concurrency from less mutex contention. The tests here use medium or high concurrency while low concurrency was used on the small server.

tl;dr

  • good news
    • Modern MySQL has large improvements for write-heavy benchmark steps because it reduces mutex contention
  • bad news
    • Modern MySQL has large regressions for read-heavy benchmark steps because it uses more CPU
  • other news
    • Postgres 18.1 was faster than MySQL 8.4.7 on all of the benchmark steps except l.i2 which is write-heavy and does random inserts+deletes. But Postgres also suffers the most from variance and stalls on the write-heavy benchmark steps.

Builds, configuration and hardware

I compiled MySQL from source for versions 5.6.51, 5.7.44, 8.0.43, 8.0.44, 8.4.6, 8.4.7, 9.4.0 and 9.5.0. I also compiled Postgres 18.1 from source.

The servers are:
  • 24-core
    • the server has 24-cores, 2-sockets and 64G of RAM. Storage is 1 NVMe device with ext-4 and discard enabled. The OS is Ubuntu 24.04. Intel HT is disabled.
    • the standard MySQL config files are here for 5.6, 5.7, 8.0, 8.4 and 9.x
    • the Postgres config file is here (x10b) and uses io_method=sync
  • 32-core
    • the server has 32-cores and 128G of RAM. Storage is 1 NVMe device with ext-4 and discard enabled. The OS is Ubuntu 24.04. AMD SMT is disabled.
    • the standard MySQL config files are here for 5.6, 5.7, 8.0, 8.4, 9.4 and 9.5. For 8.4.7 I also tried a my.cnf file that disabled the InnoDB change buffer (see here). For 9.5.0 I also tried a my.cnf file that disabled a few gtid features that are newly enabled in 9.5 to have a config more similar to earlier releases (see here).
    • the Postgres config file is here and uses io_method=sync

The Benchmark

The benchmark is explained here. It was run with 8 clients on the 24-core server and 12 clients on the 32-core server. The point query (qp100, qp500, qp1000) and range query (qr100, qr500, qr1000) steps are run for 1800 seconds each.

The benchmark steps are:

  • l.i0
    • insert 10M rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One inserts 16M rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions) and 4M rows are inserted and deleted per table.
    • Wait for S seconds after the step finishes to reduce MVCC GC debt and perf variance during the read-write benchmark steps that follow. The value of S is a function of the table size.
  • qr100
    • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested. This step is frequently not IO-bound for the IO-bound workload.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s

Results: overview

For each server there are three performance reports
  • latest point releases
    • has results for MySQL 5.6.51, 5.7.44, 8.0.44, 8.4.7, 9.4.0 and 9.5.0
    • the base version is 5.6.51 when computing relative QPS
  • all releases
    • has results for MySQL 5.6.51, 5.7.44, 8.0.43, 8.0.44, 8.4.6, 8.4.7, 9.4.0 and 9.5.0
    • the base version is 5.6.51 when computing relative QPS
  • MySQL vs Postgres
    • has results for MySQL 8.4.7 and Postgres 18.1
    • the base version is MySQL 8.4.7 when computing relative QPS
    • uses two configs for MySQL 8.4.7
      • the cz12a config is my standard my.cnf and is used for the base version
      • the cz12a_nocb config is similar to cz12a but disables the InnoDB change buffer

The performance reports are here for the 24-core and 32-core servers.

The summary sections from the performance reports have 3 tables. The first shows absolute throughput by DBMS tested X benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for benchmark steps with background inserts. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA.

I use relative QPS to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for some version and $base is the result from the base version. The base version is MySQL 5.6.51 for the latest point releases and all releases reports, and then it is MySQL 8.4.7 for the MySQL vs Postgres reports.

When relative QPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. The Q in relative QPS measures: 
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
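
For example, in the first 24-core table below, the l.i1 value of 2.09 for MySQL 8.0.44 means it sustained 2.09 times the insert rate of MySQL 5.6.51 on that benchmark step.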
Below I use colors to highlight the relative QPS values with yellow for regressions and blue for improvements.

I often use context switch rates as a proxy for mutex contention.

Results: latest point releases

The summaries are here for the 24-core and 32-core servers.
  • modern MySQL does better than 5.6.51 on write-heavy steps
    • For the 24-core server there was less CPU overhead (cpupq) and were fewer context switches (cspq) during the l.i1 benchmark step (see here)
    • For the 32-core server there was less CPU overhead (cpupq) and were fewer context switches (cspq) during the l.i1 benchmark step (see here). The reduction in context switches here wasn't as large as it was for the 24-core/2-socket server.
    • For the 32-core server the cz13a config that disables some newly enabled gtid options has much less mutex contention than the cz12a config that enables them (by default)
  • modern MySQL does worse than 5.6.51 on read-heavy steps, from new CPU overhead
    • For the 24-core server CPU per query (cpupq) is up to 1.3X larger for range queries and up to 1.5X larger for point queries on modern MySQL vs 5.6.51 -- see cpupq here for qr100.L1 and qp100.L2
    • For the 32-core server CPU per query (cpupq) is up to 1.4X larger for range queries and up to 1.5X larger for point queries on modern MySQL vs 5.6.51 -- see cpupq here for qr100.L1 and qp100.L2

The tables have relative throughput: (QPS for my version / QPS for MySQL 5.6.51). Values less than 0.95 have a yellow background. Values greater than 1.05 have a blue background.

From the 24-core server

| dbms | l.i0 | l.x | l.i1 | l.i2 | qr100 | qp100 | qr500 | qp500 | qr1000 | qp1000 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| my5651_rel_o2nofp.cz12a_c24r64 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
| my5744_rel_o2nofp.cz12a_c24r64 | 1.01 | 2.88 | 1.47 | 1.42 | 0.84 | 0.88 | 0.86 | 0.89 | 0.87 | 0.90 |
| my8044_rel_o2nofp.cz12a_c24r64 | 0.96 | 2.84 | 2.09 | 1.44 | 0.78 | 0.67 | 0.80 | 0.67 | 0.81 | 0.68 |
| my8407_rel_o2nofp.cz12a_c24r64 | 0.95 | 2.81 | 2.07 | 1.43 | 0.76 | 0.66 | 0.78 | 0.67 | 0.79 | 0.67 |
| my9400_rel_o2nofp.cz12a_c24r64 | 0.93 | 2.84 | 1.61 | 1.36 | 0.78 | 0.67 | 0.80 | 0.68 | 0.81 | 0.68 |
| my9500_rel_o2nofp.cz13a_c24r64 | 0.94 | 2.81 | 1.66 | 1.35 | 0.77 | 0.67 | 0.79 | 0.67 | 0.80 | 0.69 |


From the 32-core server

| dbms | l.i0 | l.x | l.i1 | l.i2 | qr100 | qp100 | qr500 | qp500 | qr1000 | qp1000 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| my5651_rel_o2nofp.cz12a_c32r128 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
| my5744_rel_o2nofp.cz12a_c32r128 | 1.16 | 4.06 | 1.37 | 1.62 | 0.83 | 0.85 | 0.84 | 0.86 | 0.86 | 0.87 |
| my8044_rel_o2nofp.cz12a_c32r128 | 1.38 | 4.68 | 2.43 | 1.92 | 0.70 | 0.63 | 0.72 | 0.64 | 0.73 | 0.65 |
| my8407_rel_o2nofp.cz12a_c32r128 | 1.37 | 4.84 | 2.42 | 1.91 | 0.70 | 0.63 | 0.71 | 0.63 | 0.72 | 0.65 |
| my9400_rel_o2nofp.cz12a_c32r128 | 1.36 | 4.76 | 2.40 | 1.89 | 0.71 | 0.64 | 0.73 | 0.65 | 0.74 | 0.66 |
| my9500_rel_o2nofp.cz12a_c32r128 | 1.35 | 4.84 | 2.19 | 2.00 | 0.72 | 0.64 | 0.73 | 0.65 | 0.75 | 0.66 |
| my9500_rel_o2nofp.cz13a_c32r128 | 1.36 | 4.84 | 2.41 | 1.89 | 0.70 | 0.64 | 0.72 | 0.65 | 0.73 | 0.66 |



Results: all releases

The summaries are here for the 24-core and 32-core servers. I won't describe these other than to claim that performance is similar between adjacent point releases (8.4.6 vs 8.4.7, 8.0.43 vs 8.0.44).

Results: MySQL vs Postgres

The summaries are here for the 24-core and 32-core servers.
  • While Postgres does better than MySQL on l.i1 it does worse on l.i2, perhaps because there is more MVCC debt (things to be vacuumed) during l.i1. The l.i1 and l.i2 benchmark steps are the most write-heavy. Transactions (number of rows changed) are 10X larger for l.i1 than l.i2. 
    • For l.i1 the insert rate has more variance with Postgres than MySQL -- see here for the 24-core (MySQL, Postgres) and 32-core (MySQL, Postgres) servers. Also, Postgres has a few obvious write-stalls on the 32-core server.
    • For l.i2 the insert rate has more variance with Postgres than MySQL -- see here for the 24-core (MySQL, Postgres) and 32-core (MySQL, Postgres) servers. Also, Postgres has frequent write-stalls.
    • For l.i1 Postgres uses less CPU per operation than MySQL while for l.i2 it uses more -- see cpupq for the 24-core and 32-core servers
  • Postgres does better than MySQL on the read-heavy steps (qr* and qp*)
    • For qr100.L1 (range queries) the CPU per query is ~1.5X larger for MySQL than Postgres and context switches per query are ~1.7X larger for MySQL than for Postgres -- see cpupq and cspq for the 24-core and 32-core servers
    • For qp100.L2 (point queries) the CPU per query is ~1.25X larger for MySQL than Postgres and context switches per query are 1.5X larger for MySQL than Postgres -- see cpupq and cspq for the 24-core and 32-core servers
  • Performance for MySQL is similar between the cz12a and cz12a_nocb configs. That is expected because the database is cached and there is no (or little) use of the change buffer.
For the 24-core server

| dbms | l.i0 | l.x | l.i1 | l.i2 | qr100 | qp100 | qr500 | qp500 | qr1000 | qp1000 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| my8407_rel_o2nofp.cz12a_c24r64 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
| my8407_rel_o2nofp.cz12a_nocb_c24r64 | 0.99 | 1.03 | 1.01 | 0.99 | 1.00 | 1.00 | 0.99 | 1.00 | 0.99 | 1.00 |
| pg181_o2nofp.cx10b_c24r64 | 1.40 | 1.37 | 1.36 | 0.49 | 1.67 | 1.24 | 1.64 | 1.24 | 1.66 | 1.25 |


For the 32-core server

| dbms | l.i0 | l.x | l.i1 | l.i2 | qr100 | qp100 | qr500 | qp500 | qr1000 | qp1000 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| my8407_rel_o2nofp.cz12a_c32r128 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
| my8407_rel_o2nofp.cz12a_nocb_c32r128 | 1.01 | 1.00 | 1.02 | 1.00 | 1.00 | 1.01 | 1.00 | 1.01 | 1.00 | 1.01 |
| pg181_o2nofp.cx10b_c32r128 | 1.37 | 1.16 | 1.62 | 0.79 | 1.71 | 1.25 | 1.69 | 1.26 | 1.69 | 1.26 |






