Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Component hydration patterns that actually work with Jaspr

1 Share

Every framework with server-side rendering faces the same problem. You render HTML on the server, send it to the browser, then your JavaScript needs to take over without breaking what’s already there. This is hydration, and most frameworks make you think about it constantly.

React introduced SSR years ago. In the older “Pages Router” model, you had to serialize state manually and often fought hydration mismatches. Modern Next.js (App Router) improved this with React Server Components, which stream data to the client automatically. However, you are still managing the “boundary” between server and client explicitly. You mark components with "use server" or "use client" and carefully manage which code runs where. Vue and Svelte have similar patterns. Every component that needs server data requires explicit data fetching and passing.

The core issue is that frameworks treat server rendering and client rendering as separate concerns. You render on the server, somehow get the data to the client, then render again. If these two renders produce different output, hydration breaks. If you forget to pass some state, components remount with empty data and flicker.

Jaspr takes a different approach. Components are universal by default. State syncs automatically. The same component code runs on server and client, and the framework handles transferring state between them without manual serialization.

This isn’t specific to Jaspr. The pattern applies to any SSR framework. Some implement it better than others. Understanding how automatic hydration works makes you better at building SSR apps in any framework.

Why manual hydration is error-prone

Here’s what most SSR frameworks make you do. First, fetch data on the server:

// Next.js example
export async function getServerSideProps() {
  const data = await fetchSomeData();
  return { props: { data } };
}

export default function Page({ data }) {
  return <div>{data.title}</div>;
}

This works, but you’re managing data flow manually. The server fetches data, passes it as props, and the component renders. When this reaches the client, Next.js serializes data as JSON in the HTML, then deserializes it during hydration.
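The mechanism can be sketched in a few lines of TypeScript. This is an illustration of the general technique, not Next.js's actual internals; the `__STATE__` id and function names are invented:

```typescript
type PageProps = { title: string };

// Server: render HTML and embed the props as JSON in a script tag,
// so the client can hydrate with exactly the data the server used.
function renderPage(props: PageProps): string {
  const html = `<div>${props.title}</div>`;
  const payload = JSON.stringify(props);
  return `${html}<script id="__STATE__" type="application/json">${payload}</script>`;
}

// Client: read the embedded JSON back before the first client render,
// so both renders see identical data and hydration doesn't mismatch.
function readEmbeddedProps(document: string): PageProps {
  const match = document.match(/<script id="__STATE__"[^>]*>(.*?)<\/script>/);
  if (!match || match[1] === undefined) throw new Error('no embedded state');
  return JSON.parse(match[1]) as PageProps;
}
```

A round trip through renderPage and readEmbeddedProps yields the original props, which is why the client's first render can match the server's output exactly.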

The problem shows up when you have nested components with their own data needs:

export default function Page({ data }) {
  return (
    <div>
      <Header data={data.header} />
      <Content data={data.content} />
      <Sidebar data={data.sidebar} />
    </div>
  );
}

You’re threading props through every level. If Sidebar has a child component that needs data, you pass it down again. This is prop drilling, and it's a code smell.

You could use Context to avoid prop drilling, but that doesn’t solve the real problem. You still fetched all the data at the top level and distributed it manually. If one component needs different data based on user interaction, you fetch it client-side and now you have two data fetching patterns in the same app.

Some frameworks let components fetch their own data. React Server Components do this. Each component can be async and fetch what it needs. The server waits for all async components, renders the tree, and sends it to the client.

This is better, but you still manage the boundary between server and client explicitly. You mark components with “use server” or “use client” and think about which code runs where. This is necessary complexity for React’s architecture, but it’s still complexity you’re managing.

Automatic state sync reduces cognitive load

The better pattern is making state sync automatic. Components should be able to have state, and that state should transfer from server to client without you writing transfer code.

In modern Jaspr, the build method returns a single Component, aligning with Flutter’s syntax and enabling better performance optimizations in the Dart-to-JS compiler. Here’s how it looks:

class CounterComponent extends StatefulComponent {
  @override
  State createState() => _CounterState();
}

class _CounterState extends State<CounterComponent> {
  int _count = 0;

  void _increment() {
    setState(() {
      _count++;
    });
  }

  @override
  Component build(BuildContext context) {
    return button(
      events: {'click': (_) => _increment()},
      [.text('Count: $_count')],
    );
  }
}

When this renders on the server, _count is 0. Jaspr serializes that state and embeds it in the HTML. When the client hydrates, it reads that data and initializes the component with _count = 0. The button works immediately because the state is already there.

For complex data, Jaspr provides the @sync annotation. It automatically generates the logic needed to move data from the server to the client.

// article_page.dart
import 'article_page.sync.dart';

class ArticlePage extends StatefulComponent {
  final String id;
  const ArticlePage({required this.id});

  @override
  State createState() => _ArticlePageState();
}

class _ArticlePageState extends State<ArticlePage>
    with _ArticlePageStateSyncMixin, PreloadStateMixin {
  @sync
  Article? _article;

  @sync
  List<Comment> _comments = [];

  @override
  Future<void> preloadState() async {
    // Only runs on the server, to fetch data before the first render
    _article = await api.fetchArticle(component.id);
    _comments = await api.fetchComments(component.id);
  }

  @override
  Component build(BuildContext context) {
    if (_article == null) {
      return div([.text('Loading...')]);
    }

    return div([
      ArticleHeader(article: _article!),
      ArticleContent(article: _article!),
      CommentList(comments: _comments),
    ]);
  }
}

The PreloadStateMixin handles the server-side acquisition of data, delaying the render until the futures complete. The @sync annotation then ensures that once the client takes over, it already has the values for _article and _comments in its local state, preventing a “flash of loading” or redundant API calls.
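Framework aside, the preload-and-sync flow can be sketched in TypeScript. Everything here is illustrative (the snapshot format and function names are invented, and the fetch is synchronous to keep the sketch small); the point is that the client restores state from the server's snapshot instead of refetching:

```typescript
type Article = { id: string; title: string };

let fetchCount = 0;
// Stand-in for an API call.
function fetchArticle(id: string): Article {
  fetchCount++;
  return { id, title: `Article ${id}` };
}

// Server: preload the data before the first render, then snapshot the
// synced fields into a serializable blob alongside the HTML.
function renderOnServer(id: string): { html: string; snapshot: string } {
  const article = fetchArticle(id); // preload step
  return {
    html: `<h1>${article.title}</h1>`,
    snapshot: JSON.stringify({ article }), // what @sync-style codegen would emit
  };
}

// Client: hydrate from the snapshot. No loading state, no second fetch.
function hydrateOnClient(snapshot: string): { article: Article } {
  return JSON.parse(snapshot) as { article: Article };
}
```

After renderOnServer('123'), hydrating from the snapshot yields the same article while the fetch has run exactly once, on the server.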

Handling non-serializable state

Some state can’t serialize: WebSocket connections, file handles, or DOM references. These exist only on the client.

class _RealtimeComponentState extends State<RealtimeComponent> {
  WebSocketChannel? _channel;
  List<Message> _messages = [];

  @override
  void initState() {
    super.initState();
    // Only connect the WebSocket on the client
    if (kIsWeb) {
      _connectWebSocket();
    }
  }

  @override
  Component build(BuildContext context) {
    return div([]);
  }
}

The pattern handles this by checking kIsWeb. The server renders the initial UI, and the client establishes the live connection upon hydration.
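The same guard translates to any SSR stack: detect the environment at runtime and defer client-only resources until hydration. A minimal sketch, where the `connect` callback stands in for any WebSocket or DOM setup:

```typescript
// `window` is only declared for the type checker; at runtime, `typeof`
// makes the check safe on the server, where no DOM globals exist.
declare const window: unknown;

const isBrowser = typeof window !== 'undefined';

function initComponent(connect: () => void): 'live' | 'static' {
  if (isBrowser) {
    connect(); // e.g. open a WebSocket after hydration
    return 'live';
  }
  // On the server, render static markup only.
  return 'static';
}
```

On the server the component renders statically; on the client, the same code path opens the live connection.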

Optimized tree reconciliation

Jaspr follows the Flutter model. It uses a component tree that produces a render tree. When you call setState, the framework reconciles the changes and applies surgical updates to the DOM. Because the build methods now return a single component, Jaspr can more efficiently track these dependencies and update only specific nodes without expensive full-page re-renders.

When to use SSR vs SSG vs CSR

While automatic hydration is powerful, you must choose the right rendering mode for your project. In Jaspr, this is a structural configuration defined in your pubspec.yaml or during the jaspr create step.

Unlike some frameworks, there is no --mode flag for the jaspr build command; the CLI reads your project’s configuration to decide how to build your assets.

jaspr:
  mode: static # can be 'static', 'server', or 'client'
  • Static (SSG): Pre-renders pages at build time. Ideal for blogs and documentation. Run jaspr build to generate static HTML.
  • Server (SSR): Renders components dynamically for every request. Essential for personalized or frequently changing content.
  • Client (CSR): Skips the server. Useful for behind-the-login dashboards where SEO isn’t a priority.

Testing hydration without deploying

Jaspr includes a test package that simulates the full lifecycle in a single process, without needing a real browser.

import 'package:jaspr_test/jaspr_test.dart';

void main() {
  testComponents('Counter hydrates correctly', (tester) async {
    // 1. Setup component
    await tester.pumpComponent(CounterComponent());
    expect(find.text('Count: 0'), findsOneComponent);

    // 2. Simulate hydration transition
    await tester.hydrate();

    // 3. Interact with client-side logic
    await tester.click(find.tag('button'));
    expect(find.text('Count: 1'), findsOneComponent);
  });
}

This runs in a single test process. No actual server or browser needed. You can test the full lifecycle: server render, hydration, interaction, state updates.

For complex components with async data loading:

testComponents('Article page hydrates with data', (tester) async {
  final mockApi = MockArticleApi();
  when(mockApi.fetchArticle(any)).thenAnswer((_) async => testArticle);

  await tester.pumpComponent(
    ArticlePage(id: '123', api: mockApi),
  );

  // Wait for async data loading
  await tester.pump();

  expect(find.text(testArticle.title), findsOneComponent);

  // Hydration should preserve the loaded data
  await tester.hydrate();

  expect(find.text(testArticle.title), findsOneComponent);
  verify(mockApi.fetchArticle('123')).called(1);
});

Performance characteristics of different hydration strategies

Hydration has a cost. The server renders HTML, the client downloads JavaScript, parses it, and then “boots up” components. For large pages, this can take time.

The main strategies differ in how much JavaScript they ship and how soon components become interactive.

Automatic state sync traditionally fits the “Full SSR + hydration” strategy. However, Jaspr now has first-class support for the Islands Architecture via the @island annotation and dedicated templates.

In an Islands approach, you render the page as static HTML and only hydrate specific, annotated components. This allows you to ship significantly less JavaScript to the client. You can use the jaspr create -t islands command to set up a project designed for this architecture from the start.
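The island idea reduces to: walk the server-rendered page, hydrate only components marked as islands, and leave everything else untouched. A framework-agnostic sketch (the element descriptors, island names, and registry are all invented for illustration):

```typescript
type El = { tag: string; island?: string };

// Registry of client-side hydrators, keyed by island name. Only these
// components ship JavaScript; everything else stays static HTML.
const hydrators: Record<string, (el: El) => void> = {
  counter: () => { /* attach listeners, restore state */ },
};

function hydrateIslands(page: El[]): number {
  let count = 0;
  for (const el of page) {
    const hydrate = el.island ? hydrators[el.island] : undefined;
    if (hydrate) {
      hydrate(el);
      count++;
    }
  }
  return count;
}
```

On a page of mostly static markup, only the marked elements are ever touched by client code; the static nodes cost nothing after the initial HTML parse.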

What this pattern enables

Automatic state sync changes how you think about building SSR apps. Instead of managing data flow between server and client, you write components with state and let the framework handle transfer.

The core insight is the same across frameworks: when the framework knows about your state, it can transfer it for you. The less you manage serialization and data flow manually, the fewer bugs you introduce.

For Dart developers, Jaspr makes this feel natural because the component model matches Flutter. If you’ve built Flutter apps, you already know how to manage state with setState. Making that work on the server and client is the framework’s job, not yours.

The tradeoff is giving up control over exactly how and when state transfers. If you need custom serialization logic or want to optimize what data transfers, you’re fighting the framework. For most apps, automatic is better. For apps with unique performance constraints, manual control might be necessary.

Component hydration patterns that actually work with Jaspr was originally published in Flutter Community on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read the whole story
alvinashcraft
9 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Denim: Secret place names hiding in plain sight. Why the principal is more than your pal.


1172. This week, we look at "toponyms" — words named after places — and you'll discover the hidden place names in denim, jeans, sherry, cantaloupe, and more. Then, we break down "principal" versus "principle," with memory tricks so you'll never forget the difference again.

🔗 Join the Grammar Girl Patreon.

🔗 Share your familect recording in Speakpipe or by leaving a voicemail at 833-214-GIRL (833-214-4475)

🔗 Watch my LinkedIn Learning writing courses.

🔗 Subscribe to the newsletter.

🔗 Take our advertising survey

🔗 Transcript available on your podcast player.

🔗 Get Grammar Girl books

| HOST: Mignon Fogarty

| Grammar Girl is part of the Quick and Dirty Tips podcast network.

  • Audio Engineer: Castria Communications
  • Director of Podcast: Holly Hutchings
  • Advertising Operations Specialist: Morgan Christianson
  • Marketing and Video: Nat Hoopes, Rebekah Sebastian
  • Podcast Associate: Maram Elnagheeb

| Theme music by Catherine Rannus.

| Grammar Girl Social Media: YouTube, TikTok, Facebook, Threads, Instagram, LinkedIn, Mastodon, Bluesky.


Hosted on Acast. See acast.com/privacy for more information.





Download audio: https://sphinx.acast.com/p/open/s/69c1476c007cdcf83fc0964b/e/69c531349b6be94a1a7ba935/media.mp3

508: Agentic Workflows - Markdown Automation for GitHub Actions


At MVP Summit we dig into Agentic Workflows — write Markdown prompts that drive AI agents to run CI, open PRs, and automate cross‑repo tasks — and MAUI DevFlow, which lets agents interact with native UIs to click, screenshot and validate designs. Listen for practical takeaways on ditching brittle YAML/scripts and automating tedious maintenance and testing, plus the real caveats: security front‑matter, a compile/lock step and token costs.

Follow Us

⭐⭐ Review Us ⭐⭐

Machine transcription available on http://mergeconflict.fm

Support Merge Conflict





Download audio: https://aphid.fireside.fm/d/1437767933/02d84890-e58d-43eb-ab4c-26bcc8524289/56c8e02b-329f-4196-b800-f9a70260b75c.mp3

How .NET handles exceptions internally (and why they're expensive)


This blog post is originally published on https://blog.elmah.io/how-net-handles-exceptions-internally-and-why-theyre-expensive/

What really happens when you write throw new Exception() in .NET? Microsoft guidelines state that

When a member throws an exception, its performance can be orders of magnitude slower. 

It's not just a simple jump to a catch block; a lot goes on inside the CLR (Common Language Runtime). Expensive operations such as stack trace capture, heap allocations, and method unwinding occur each time, so you won't want exceptions in any hot path. In today's post, I will help you decide when exceptions are appropriate and when a simpler alternative type might be better.

How .NET handles exceptions internally (and why they're expensive)

What is an Exception?

An exception is an error condition or unexpected behaviour during the execution of a program. Exceptions can occur at runtime for various reasons, such as accessing a null object, dividing by zero, or requesting a file that doesn't exist. A C# exception exposes several properties, including Message, which describes the cause of the exception, and StackTrace, which contains the sequence of method calls that led to the exception in reverse call order, so you can trace its source.

How does an exception work?

The try block encloses code that is prone to exceptions; try/catch protects the application from blowing up. Use the throw keyword to signal an error, throwing an Exception object that carries detailed information, such as a message and a stack trace. Catching the exception lets the program continue gracefully and tell the user where and what went wrong.

When an error occurs, the CLR searches for a compatible catch block in the current method. If none is found, it moves up the call stack to the calling method, and so on. Once a matching catch is found based on the exception type, control jumps to that block. If no compatible catch block exists anywhere, the exception is unhandled and the application can terminate.

This machinery is what makes exceptions expensive. The exception object and its message are allocated on the heap. To reach the catch body, the CLR unwinds the stack, removing intermediate stack frames. And the JIT must generate exception-handling (EH) tables and add hidden control-flow metadata.

What is OneOf<T> in .NET?

OneOf<T0> or OneOf<T0, T1, ...> represents a discriminated union covering all possible results of an operation or method. It carries a set of types, allowing a method to return exactly one of several defined possibilities. The OneOf pattern gives you fine-grained control and type safety.
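The idea isn't C#-specific. TypeScript expresses the same contract natively with tagged unions; here is a quick sketch for comparison (this is the general pattern, not the OneOf library itself):

```typescript
type Success = { kind: 'success'; message: string };
type Failure = { kind: 'error'; message: string };
type Result = Success | Failure;

// Failures are part of the return type, not a separate control-flow
// channel, so callers must handle both cases before touching the value.
function doWork(i: number): Result {
  return i % 10 === 0
    ? { kind: 'error', message: 'Error' }
    : { kind: 'success', message: 'Passed' };
}

let failures = 0;
for (let i = 1; i <= 100; i++) {
  if (doWork(i).kind === 'error') failures++;
}
// failures is 10: every tenth call fails, mirroring FailureEvery = 10
```

No stack is captured and no unwinding happens on the failure path; an error is just another return value, which is exactly the property the benchmark below measures.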

Examining exceptions with a benchmark

To see the cost in practice, let's create an application. I will use a console application.

Step 1: Create the project

dotnet new console -n ExceptionBenchmark
cd ExceptionBenchmark

Step 2: Add necessary packages

I am adding BenchmarkDotNet along with OneOf, which provides the OneOf return type.

dotnet add package BenchmarkDotNet
dotnet add package OneOf

Step 3: Set up Program.cs

All the code is in Program.cs:

using System;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using OneOf;

BenchmarkRunner.Run<ExceptionBenchmarks>();

[MemoryDiagnoser] 
public class ExceptionBenchmarks
{
    private const int Iterations = 100_000;
    private const int FailureEvery = 10;

    [Benchmark]
    public int NoException()
    {
        int failures = 0;

        for (int i = 1; i <= Iterations; i++)
        {
            if (!DoWork_NoException(i))
                failures++;
        }

        return failures;
    }

    private bool DoWork_NoException(int i)
    {
        return i % FailureEvery != 0;
    }

    [Benchmark]
    public int WithException()
    {
        int failures = 0;

        for (int i = 1; i <= Iterations; i++)
        {
            try
            {
                DoWork_WithException(i);
            }
            catch
            {
                failures++;
            }
        }

        return failures;
    }

    private void DoWork_WithException(int i)
    {
        if (i % FailureEvery == 0)
            throw new InvalidOperationException();
    }

    [Benchmark]
    public int WithOneOf()
    {
        int failures = 0;

        for (int i = 1; i <= Iterations; i++)
        {
            var result = DoWork_WithOneOf(i);

            if (result.IsT1)
                failures++;
        }

        return failures;
    }

    private OneOf<Success, Error> DoWork_WithOneOf(int i)
    {
        if (i % FailureEvery == 0)
            return new Error("Error");

        return new Success("Passed");
    }

    private readonly struct Success
    {
        public string Message { get; }

        public Success(string message)
        {
            Message = message;
        }
    }

    private readonly struct Error
    {
        public string Message { get; }

        public Error(string message)
        {
            Message = message;
        }
    }
}

The first method is the baseline with no exceptions. The second throws an exception, and the third returns an error object via OneOf. To make it realistic, each method observes a 10% failure rate and a 90% success rate, since FailureEvery is set to 10. Success and Error are value types to avoid allocations, since they only carry a return value.

Step 4: Run and test

dotnet run -c Release

Benchmark results

The best performer is NoException, but that's not practical: you still have to detect unexpected behaviour and report it in the code flow. The naive approach is to throw an exception, which adds a time cost and increases Gen 0 garbage collector pressure. The alternative, OneOf, saved significant time and memory. We can add further types to the OneOf based on the method's possible return values.

With exceptions, stack tracing is very expensive: it propagates through stacks, captures method names, stores IL offsets, and inspects frames. JIT inlining is also limited around exception handlers. With OneOf, I used struct value types, so Gen 0 usage is minimized; it neither captures a stack trace nor unwinds the stack, so execution remains linear.

When can I use an exception alternative?

In the following cases, exceptions can be replaced with Result or OneOf in normal application flows.

  • Business rule rejection, such as the customers cannot order out-of-stock items. You can return an error in response.
  • API validation, where you can simply return 400 with a custom message after figuring out all possible error cases.
  • High-throughput paths where you cannot afford an exception mechanism.
  • Validation failure, such as invalid email or mobile number input.
  • Data not found scenarios where you know either the request data will be available or will not be found. Simply, you can deal with both cases.

When is an exception the optimal choice?

You don't remove fire alarms from a building because they're loud. You just don't pull them every time someone burns toast. We have some situations where exceptions stand out even if they are expensive.

  • Exceptions occur when the program falls into an impossible state, such as when a null database connection is used. You cannot proceed anywhere because the connection is not even initialized for some reason.
  • For environmental failures, you will opt for exceptions such as timeout failures, disk I/O failures, or database connection losses.
  • Programming bugs where your code reaches a dead end it cannot handle further. For example, when your input cases are exhausted: you have order statuses of Pending, Cancelled, and Confirmed, enumerated as 1, 2, and 3 respectively. No other case exists, so you can simply throw an ArgumentOutOfRangeException or a custom exception in the default case.
  • If developing a library, use an exception to signal to the user what went wrong and halt normal execution. Here, you cannot force consumers to handle result types.

Conclusion

In high-performance systems, every allocation matters. Exceptions provide a safeguard against anomalous conditions, but they can be a burden on memory and CPU. I shed light on how many resources exceptions can consume for simple operations compared to their counterparts, and we explored where they are suitable and where they can be replaced. In short, use exceptions for unexpected, impossible, and environmental failures. Use Result/OneOf as an alternative when failure conditions are expected: business validation, user-driven errors, and high-frequency failures.

Code: https://github.com/elmahio-blog/ExceptionBenchmarks.git




GitHub Copilot Agent vs. GitHub Copilot Code Review

Agents are all the rage today, but what happens when we start using multiple agents together? Can we improve the overall accuracy of our results? Can we reduce the total time to develop a feature? Let's dig in and find out!

VS Code Memory Tool: Local Memory meets GitHub Copilot Memory


A few weeks ago I wrote about Copilot Memory in VS Code - the GitHub-hosted system that lets Copilot learn repository-specific insights across agents. Since then, VS Code has shipped a second, complementary memory system: the Memory tool. These two systems solve related but distinct problems, and understanding both helps you get the most out of Copilot in your daily workflow.

What is the Memory Tool?

The Memory tool is a built-in agent capability that stores notes locally on your machine. Unlike Copilot Memory, which lives on GitHub's servers and requires a GitHub repo to function, the Memory tool writes plain files to your local filesystem and reads them back at the start of each session.

You enable or disable it with the github.copilot.chat.tools.memory.enabled setting. It's on by default.


Three memory scopes

VSCode organizes memories into three scopes:

| Scope | Persists across sessions | Persists across workspaces | Good for |
|---|---|---|---|
| User | Yes | Yes | Personal preferences, habits |
| Repository | Yes | No | Project conventions, architecture |
| Session | No | No | In-progress task context |

User memory is the most broadly applicable. The first 200 lines load automatically into the agent's context at the start of every session, across every workspace. Ask the agent something like:

Remember that I like to use XUnit as my preferred testing framework.

...and it will apply that preference every time, regardless of which project you open.

Repository memory is scoped to the current workspace. This is the right place to capture things like "this project uses the repository pattern for data access" or "all API endpoints require authentication." That context persists across sessions in that workspace but doesn't bleed into other projects.

Session memory clears when the conversation ends. The planning agent uses this to store its plan.md file — useful for multi-step tasks within a single session, but intentionally ephemeral.

Using the memory tool in practice

Storing a memory is just natural language:

Remember that our teams use XUnit as our preferred testing framework.

Retrieving it in a new session is equally straightforward:

What testing framework is used?

The agent checks the memory files and surfaces the relevant information. References to memory files in chat responses are clickable, so you can inspect the raw content directly.

Two commands help you manage what's stored:

  • Chat: Show Memory Files — opens a list of all memory files across scopes
  • Chat: Clear All Memory Files — wipes everything

 


Memory Tool vs. Copilot Memory: side by side

This is the comparison that matters if you've already been using Copilot Memory (also see the documentation):

| | Memory Tool | Copilot Memory |
|---|---|---|
| Storage | Local (your machine) | GitHub-hosted |
| Scopes | User, repository, session | Repository only |
| Shared across Copilot surfaces | No (VS Code only) | Yes (coding agent, code review, CLI) |
| Who creates memories | You or the agent during chat | Copilot agents automatically |
| Enabled by default | Yes | No (opt-in) |
| Expiration | Manual | Automatic (28 days) |
| Requires GitHub repo | No | Yes |

The practical split: use the Memory tool for personal preferences and anything workspace- or session-specific that you control. Use Copilot Memory for repository knowledge that should propagate across GitHub's Copilot surfaces — coding agent, code review, CLI.

Also worth remembering from my previous post: Copilot Memory currently only works when your repository is hosted on GitHub. If you're on Azure DevOps or a local-only repo, the Memory tool is your only option.

Where does this leave us?

When I wrote about Copilot Memory back in February, one of the things I hoped for was better support for non-GitHub sources. The Memory tool partially addresses that gap — it's source-agnostic and works entirely offline. But it's also more manual; you drive what gets remembered, rather than agents picking it up automatically.

The two systems aren't competing — they're designed to be complementary. What's still missing is a unified view across both, and better tooling for organizing and reviewing what's been stored over time. That's probably where the feature evolves next.

More information

Memory in VS Code agents

Enabling and curating Copilot Memory

Copilot Memory in VS Code: Your AI assistant just got smarter
