
Building a Greenfield System with the Critter Stack


JasperFx Software works hand in hand with our clients to improve our clients' outcomes on software projects using the "Critter Stack" (Marten and Wolverine). Based on our engagements with client projects as well as the greater Critter Stack user base, we've built up quite a few optional usages and settings in the two frameworks to solve specific technical challenges.

The unfortunate reality of managing a long-lived application framework such as Wolverine or a complicated library like Marten is the need to both continuously improve the tools and try very hard not to introduce regression errors for our clients when they upgrade. To that end, we've had to make several potentially helpful features "opt in," meaning that users have to explicitly turn on feature-flag style settings for these features. A common cause of this is any change that introduces database schema changes, as we try hard to do that only in major version releases (Wolverine 5.0 added some new tables to SQL Server or PostgreSQL storage, for example).

And yes, we've still introduced regression bugs in Marten or Wolverine far more times than I'd like, even while trying to be careful. In the end, I think the only guaranteed way to constantly and safely improve tools like the Critter Stack is to be responsive to whatever problems slip through your quality gates and to fix those problems quickly to regain trust.

With all that being said, let's pretend we're starting a greenfield project with the Critter Stack and want to build the best-performing system possible, with some added options for improved resiliency as well. To jump to the end state, this is what I'm proposing for a new optimized greenfield setup for users:

var builder = Host.CreateApplicationBuilder();

builder.Services.AddMarten(m =>
{
    // Much more coming...
    m.Connection(builder.Configuration.GetConnectionString("marten"));

    // 50% improvement in throughput, less "event skipping"
    m.Events.AppendMode = EventAppendMode.Quick;
    // or if you care about the timestamps -->
    m.Events.AppendMode = EventAppendMode.QuickWithServerTimestamps;

    // 100% do this, but be aggressive about taking advantage of it
    m.Events.UseArchivedStreamPartitioning = true;

    // These cause some database changes, so can't be defaults,
    // but these might help "heal" systems that have problems
    // later
    m.Events.EnableAdvancedAsyncTracking = true;

    // Enables you to mark events as just plain bad so they are skipped
    // in projections from here on out.
    m.Events.EnableEventSkippingInProjectionsOrSubscriptions = true;

    // If you do this, you pretty much have to use FetchForWriting
    // in your command handlers -- but you should be using FetchForWriting()
    // in command handlers anyway.
    // This will optimize the usage of Inline projections, but will force
    // you to treat your aggregate projection "write models" as being
    // immutable in your command handler code.
    // You'll want to use the "Decider Pattern" / "Aggregate Handler Workflow"
    // style for your commands rather than a self-mutating "AggregateRoot"
    m.Events.UseIdentityMapForAggregates = true;

    // Future proofing a bit. Will help with some future
    // rebuild optimizations
    m.Events.UseMandatoryStreamTypeDeclaration = true;

    // This is just annoying anyway
    m.DisableNpgsqlLogging = true;
})
// This will remove some runtime overhead from Marten
.UseLightweightSessions()

.IntegrateWithWolverine(x =>
{
    // Let Wolverine do the load distribution better than
    // what Marten by itself can do
    x.UseWolverineManagedEventSubscriptionDistribution = true;
});

builder.Services.AddWolverine(opts =>
{
    // This *should* have some performance improvements, but would
    // require downtime to enable in existing systems
    opts.Durability.EnableInboxPartitioning = true;

    // Extra resiliency for unexpected problems, but can't be
    // defaults because this causes database changes
    opts.Durability.InboxStaleTime = 10.Minutes();
    opts.Durability.OutboxStaleTime = 10.Minutes();

    // Just annoying
    opts.EnableAutomaticFailureAcks = false;

    // Relatively new behavior that will store "unknown" messages
    // in the dead letter queue for possible recovery later
    opts.UnknownMessageBehavior = UnknownMessageBehavior.DeadLetterQueue;
});

using var host = builder.Build();

return await host.RunJasperFxCommands(args);
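
To make the UseIdentityMapForAggregates comment above a little more concrete, here is a minimal sketch of the FetchForWriting() / "Decider Pattern" style of command handler it assumes. The PlaceOrder command, Order aggregate, and OrderPlaced event are hypothetical types invented purely for illustration:

using System;
using System.Threading.Tasks;
using Marten;

// Hypothetical command, event, and single-stream aggregate types
public record PlaceOrder(Guid OrderId);
public record OrderPlaced(Guid OrderId);

public class Order
{
    public Guid Id { get; set; }
    public bool IsPlaced { get; set; }

    // Marten "self-aggregating" Apply method
    public void Apply(OrderPlaced e) => IsPlaced = true;
}

public static class PlaceOrderHandler
{
    // Decider-style handler: load the write model, decide, append new events
    public static async Task Handle(PlaceOrder command, IDocumentSession session)
    {
        // FetchForWriting loads the current aggregate and sets Marten up to
        // append to the stream with optimistic concurrency protection
        var stream = await session.Events.FetchForWriting<Order>(command.OrderId);

        // Treat the aggregate strictly as an immutable "write model":
        // make decisions from it, but never mutate it directly
        if (stream.Aggregate is null || !stream.Aggregate.IsPlaced)
        {
            stream.AppendOne(new OrderPlaced(command.OrderId));
        }

        await session.SaveChangesAsync();
    }
}

With Wolverine's "Aggregate Handler Workflow," the loading and appending ceremony is typically pushed into middleware, but the decide-and-return-events shape stays the same.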

Now, let’s talk more about some of these settings…

Lightweight Sessions with Marten

The first option we’re going to explicitly add is to use “lightweight” sessions in Marten:

var builder = Host.CreateApplicationBuilder();

builder.Services.AddMarten(m =>
{
    // Elided configuration...
})
// This will remove some runtime overhead from Marten
.UseLightweightSessions()

By default, Marten will use a heavier version of IDocumentSession that incorporates an Identity Map internally to track documents (entities) already loaded by that session. Likewise, when you request to load an entity by its identity, Marten's session will happily check whether it has already loaded that entity and give you the same object back without making another database call.

The identity map is mostly helpful when you have unclear or deeply nested call stacks where different pieces of code might try to load the same data as part of the same HTTP request or command handling. If you follow what we consider Critter Stack best practices, especially for Wolverine usage, you'll know that we very strongly recommend against deep call stacks and excessive layering.

Moreover, I would argue that you should never need the identity map behavior if you are building a system with an idiomatic Critter Stack approach, so the default session type is actually harmful in that it adds extra runtime overhead. The "lightweight" sessions run leaner by completely eliminating the dictionary storage and lookups.
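
To illustrate the difference, here is a minimal sketch, using a hypothetical User document and assuming the current Marten session-creation APIs (IdentitySession() and LightweightSession()):

using System;
using System.Diagnostics;
using System.Threading.Tasks;
using Marten;

// A hypothetical document type, used only for this illustration
public class User
{
    public Guid Id { get; set; }
    public string Name { get; set; } = "";
}

public static class SessionComparison
{
    public static async Task Show(IDocumentStore store, Guid userId)
    {
        // Identity-map session: the second load is answered from the session's
        // internal dictionary, so both variables point at the same object and
        // only one database round trip is made
        await using (var session = store.IdentitySession())
        {
            var first = await session.LoadAsync<User>(userId);
            var second = await session.LoadAsync<User>(userId);
            Debug.Assert(ReferenceEquals(first, second));
        }

        // Lightweight session: no tracking dictionary, so each load is a
        // separate query returning a separate object, with less bookkeeping
        // overhead per operation
        await using (var session = store.LightweightSession())
        {
            var first = await session.LoadAsync<User>(userId);
            var second = await session.LoadAsync<User>(userId);
            Debug.Assert(!ReferenceEquals(first, second));
        }
    }
}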

Why, you ask, is the identity map behavior the default?

  1. We were originally designing Marten as a near drop-in replacement for RavenDb in a big system, so we had to mimic that behavior right off the bat to be able to make the replacement in a timely fashion
  2. If we changed the default behavior, it could easily break code in existing systems that upgrade, in ways that are very hard to predict and unfortunately hard to diagnose. And of course, this is most likely to be a problem in exactly the kind of codebases that are hard to reason about. How do I know this, and why am I so very certain this is so, you ask? Scar tissue.




Studying compiler error messages closely: Input file paths


A colleague was working in a project that used a number of data files to configure how the program worked. They wanted one portion of the configuration file to be included only if a particular build flag was set. Let’s say that the configuration file is C:\repos\contoso\config\Contoso.config.

<providers>
    <provider name="Widget" version="1.0"/> <!-- or 2.0 if useV2Widgets build flag is set -->
    <provider name="Gadget" version="1.0"/> <!-- only if useV2Widgets build flag is set -->
    <!-- other providers that are used regardless of the build flags -->
</providers>

They were adding a build flag to convert the code base to use 2.0 widgets, but they wanted the default to be 1.0; only people who build with the special build flag should get 2.0 widgets. It so happens that 2.0 widgets depend on gadgets, so they also wanted to add a gadget provider, but again only conditionally based on the build flag.

The configuration file itself doesn’t support conditionals. How can they get a configuration file to support conditionals when the file format does not support conditionals?

I suggested that they use a preprocessor to take the marked-up configuration file and produce a filtered output file, which becomes the actual configuration file. Upon closer investigation, it appeared that they were not the first project to need conditionals in their configuration file, and another team had already written a generic XML preprocessor that supports conditional elements based on build flags, and that other team even included instructions on their project wiki on how to include a preprocessor pass to their build configuration. The updated configuration file looks something like this:

<providers>
    <provider name="Widget" version="1.0" condition="!useV2Widgets"/>
    <provider name="Widget" version="2.0" condition="useV2Widgets"/>
    <provider name="Gadget" version="1.0" condition="useV2Widgets"/>
</providers>

However, after following the instructions on the wiki to update the configuration file to use the condition attribute and to update the build process to send the file through the "conditional build flags" preprocessor, there was still a build error:

Validation failure: C:\repos\contoso\config\Contoso.config(2): Invalid attribute 'condition'

The configuration validator was upset at the condition attribute, but when they compared their project to other projects that used the configuration preprocessor, those other projects used the condition attribute just fine.

Look carefully at the error message. In particular, look at the path to the file that the validator is complaining about.

The validator is complaining about the original unprocessed file.

They went to the effort of sending the unprocessed file through the conditional build flags preprocessor to produce a processed file that has the correct provider list based on the build flags. But they forgot to use the results of that hard work: They were still using the old unprocessed file. It’s like taking a photograph, doing digital touch-ups, but then uploading the original version instead of the touched-up version.

The fix was to update the project so that it consumed the processed file instead of the raw file.¹

Bonus chatter: To avoid this type of confusion, it is common to change the extension of the unprocessed file to emphasize that it needs to be preprocessed. That way, when you see an error in Contoso.config, you don’t have to spend the effort to figure out which Contoso.config the error is about.

In this case, they could rename the unprocessed file to Contoso.preconfig and have the processed output be Contoso.config. I choose this pattern because the validator may require that the file extension be .config.

Another pattern would be to call the unprocessed version Contoso-raw.config and the processed version Contoso.config.

If you don't want to rename an existing file (say because you are worried about merge errors if your change collides with others who are also modifying that file), you could leave the unprocessed file as Contoso.config and call the processed file Contoso-final.config.²

¹ The instructions on the wiki say "In your project file, change references from yourfile.ext to $(OutputDirectory)\yourfile.ext." But in this case, the file was being used not by the project file but by a separate configuration tool. The team was too focused on reading the literal instructions without trying to understand why the instructions were saying the things that they did. In this case, the instructions were focused on consumption from the project file, since that was the use case of the team that wrote the tool originally. But if you understand what the steps are trying to accomplish, you should realize that the intention is to update the references to the old yourfile.ext in every location you want to consume the postprocessed version.³

² I chose the suffix -final as a joking reference to the pattern of seeing files named Document-Final-Final-Final 2-USETHISONE.docx.

³ I took the liberty of updating the wiki to clarify that you need to update all references to yourfile.ext. The references usually come from the project file, but they could be in other places, too, such as a call to makepri.exe.



Code that fits in a context window


AI-friendly code?

On what's left of software-development social media, I see people complaining that as the size of a software system grows, large language models (LLMs) have an increasingly hard time advancing the system without breaking something else. Some people speculate that the context window size limit may have something to do with this.

As a code base grows, an LLM may be unable to fit all of it, as well as the surrounding discussion, into the context window. Or so I gather from what I read.

This doesn't seem too different from limitations of the human brain. To be more precise, a brain is not a computer, and while they share similarities, there are also significant differences.

Even so, a major hypothesis of mine is that what makes programming difficult for humans is that our short-term memory is shockingly limited. Based on that notion, a few years ago I wrote a book called Code That Fits in Your Head.

In the book, I describe a broad set of heuristics and practices for working with code, based on the hypothesis that working memory is limited. One of the most important ideas is the notion of Fractal Architecture. Regardless of the abstraction level, the code is composed of only a few parts. As you look at one part, however, you find that it's made from a few smaller parts, and so on.

A so-called 'hex-flower', rendered with aesthetics in mind.

I wonder if those notions wouldn't be useful for LLMs, too.


This blog is totally free, but if you like it, please consider supporting it.

AI-Assisted Coding: Where It Helps and Where It Doesn’t


Discussions about AI-assisted coding are everywhere—and for good reason. The topic tends to stir up a mix of emotions. Some people are curious about the possibilities, some are excited about improving their day-to-day efficiency, and others are worried these tools will eventually get “smart” enough to replace them.

In this article, I will share my own experiences using AI as a coding assistant in my daily workflow.


My Background (and the Tools I Use)

For context, I’m a full stack engineer with 12 years of web development experience. My current focus is UI development with React and TypeScript.

Depending on the project, I use a variety of LLMs and AI tools, including:


Why Context Matters So Much

Regardless of which model you use, getting good results requires preparation. LLMs produce dramatically better output when they’re given sufficient context about:

  • The problem space
  • The tech stack
  • Architectural constraints
  • Coding standards and preferences

For example, if the only instruction provided is:

“Create a reusable React dropdown component”

…the response could reasonably be:

  • A fully custom component with inline styles
  • A ShadCN-based implementation assuming Tailwind
  • A wrapper around a Bootstrap dropdown

Without more information, the LLM has no idea:

  • Which version of React you’re using
  • Whether the app uses SSR
  • How important accessibility is
  • What design system or component library is standard in your project

Many LLMs won’t ask follow-up questions; they’ll just guess the “most likely” solution.


Global Instructions: The Real Productivity Unlock

You could solve this by writing extremely detailed prompts, but that quickly becomes tedious and undermines the efficiency gains AI is supposed to provide.

A better approach is to supply global context that applies to every prompt.

When using AI tools inside your IDE, this often means configuration files like:

  • CLAUDE.md (for Claude)
  • copilot-instructions.md (for GitHub Copilot)

These files are typically generated during a one-time setup. The AI scans the repository and records important assumptions, such as:

  • “This application uses .NET 8.0”
  • “UI components use ShadCN with Tailwind and Radix primitives”
  • “Authentication is handled via Microsoft Entra ID”

You can also manually update these files or even ask the LLM to update them for you.

If you ask for a “reusable React dropdown component” before and after generating these instruction files, the difference in output quality is usually dramatic. The AI can move faster and align with your repository’s conventions.

Tip: It can be beneficial to separate your instructions into smaller, more specific files in a docs folder (auth.md, data-fetching.md, etc), and point to them from your LLM-specific files. This lets you keep a single source of truth, while allowing multiple LLMs to work efficiently in your project.
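
As a purely hypothetical sketch of that layout (the file names and contents below are invented, loosely based on the assumptions listed earlier), a top-level instruction file might just state the core facts and point to the detailed docs:

# copilot-instructions.md (or CLAUDE.md) - hypothetical example

- UI components use ShadCN with Tailwind and Radix primitives
- Authentication is handled via Microsoft Entra ID

For details, follow the conventions documented in:
- docs/auth.md (authentication and authorization)
- docs/data-fetching.md (data-fetching conventions)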


The Limits of Context (and Hallucinations)

Even with excellent context, LLMs aren’t magic.

They’re still prone to hallucination (confidently producing content that is incorrect or completely fabricated). A common pattern looks like this:

“I understand now! The fix is…”

…followed by code that’s:

  • More complicated
  • Harder to reason about
  • Still incorrect

This leads to the real question:

When is it actually efficient to use LLMs, and what are they best at?

The strengths and limitations below reflect typical, out-of-the-box usage. In practice, the more effort you invest in context, instruction files, and guidance, the better the results tend to be.

Where AI Shines

In my experience, AI is most effective in these scenarios:

  • Quick prototypes, where code quality isn’t the top priority
  • Translating logic from one programming language to another
  • Single-purpose functions with complex logic that would normally require stepping through a debugger

Common examples:

  • Parsing authentication tokens
  • Formatting dates or strings in very specific ways
  • Creating and explaining regular expressions
  • Investigating and narrowing down error causes
  • Writing CSS or Tailwind classes

Styling is a bit of a toss-up. The AI often adds unnecessary styles, but if CSS isn’t your strong suit, it can still be a big help.


Where AI Falls Short

There are also clear areas where AI is far less effective (without additional guidance or setup):

  • High-level architecture and long-term planning
    LLMs don’t naturally think ahead unless explicitly told to, and even then the results often fall short of what an experienced architect would expect.
  • Producing high-quality, maintainable code quickly
    AI can generate a lot of code fast, but well-structured, modular code often takes longer to review and refactor than writing it yourself. I frequently spend significant time cleaning up AI-generated code.

Final Thoughts

After using AI in my everyday work, my conclusion is fairly simple:

AI is excellent at increasing speed and efficiency, but it does not replace good engineering judgment.

On its own, AI tends to optimize for immediacy rather than long-term maintainability. When left unguided, it will readily generate solutions that work today while introducing architectural fragility or technical debt tomorrow. That’s where skepticism is warranted.

That said, well-architected software is achievable with AI when the right conditions are in place. With strong global context, clearly defined architectural constraints, well-maintained instruction files, and, most importantly, a developer who understands what good architecture looks like, AI can become a genuinely effective collaborator.

Used thoughtfully, AI becomes a powerful accelerator. Used blindly, it can become technical debt.




Detective Adventures - on debugging UI issues by Oded Sharon


Most developers enjoy working on exciting new projects and technologies. It is not only because of the sense of ownership and freedom, but also because you are not trudging through old, badly written code filled with technical debt issues no one wants to deal with because they are so dreary. That said, I find satisfaction in looking at an existing system, with all the constraints it entails. And like a detective story, I try to figure out what went on in the previous developer's mind when they wrote the code that I am now staring at. In fairness, that previous developer is often six-month-ago me, who thought he was very smart when he wrote that code.

A while back a client had a page with a list of documents the user managed. Each row in the table had information about the document and three actions as links - Edit, Delete and Download (PDF). When the client wished to add a couple more actions, we realised the table became exceedingly wide. The design team suggested having a drop-down with the various actions to save space.

The original design: table with links per row

However, the application is a Python form-based application. This means that every action the user submits will cause the page to refresh for the changes to take effect. Previously the "Download" action was a simple link that opened the PDF in a new window. But as the page reloaded, how could we trigger the opening of the new window?

The final design: a drop-down per row

A simple solution would be to include a small inline JavaScript snippet to trigger the opening of a new window with the PDF. However, the client had an extremely strict security policy prohibiting inline scripts for security reasons, so that would not work. We produced a different solution - we used a hidden iframe element which triggered the download. The solution seemed great and worked well. However, QA picked up an unexpected issue: if the user downloaded the file and immediately refreshed the page, the form action "download" would be resubmitted, and the PDF would automatically download again. We considered automatically refreshing the page without the auto-download, but this had a few issues – it relied on the file being downloaded properly, and it used a meta-refresh tag, which is a big "no-no" in terms of accessibility as the user won't understand why the page is being refreshed.

We decided to go back to the JavaScript-based solution: catch the form submission event, cancel it, and trigger a download using JavaScript (creating a link to the PDF and firing it without ever adding it to the page). It is not the perfect solution, as it will leave the 5% of internet users who cannot use JavaScript with the current shoddy iframe solution, but at least it will improve the experience of the other 95% of users.

But it did not work. The event listener simply ignored the event when it fired, and the page kept refreshing (making it a pain to debug, as it kept resetting the console). Could it be that the strict content security policy (CSP) prevented writing onclick events? It killed any inline coding attempts with appropriate warning errors. But I could not find any indication that event listening was forbidden.

I needed to find a way to isolate the problem, so I added my own button to the page. Clicking it produced an error message that this button should be inside a form. But why was JavaScript complaining about it?

It turned out that the system user had a bad habit of double-clicking buttons (picked up from the old desktop version of the product), and for a Python application, this caused the form to be submitted twice. The user would get to see the result of the second submission, which failed with "The form was already submitted." The crude solution for this problem was to catch any form submission, cancel the event, turn the clicked button into a hidden field, disable all the buttons, and submit the event again. The reason for the hidden field is that disabled buttons' values are not sent as part of form submission.

It was this piece of code (adding the hidden field to the button’s form) that triggered the error in my experimental button and that is how I found it. It is worth pointing out that the product barely relies on JavaScript; it is using webpack to create a single file that is applied to every page in the system, including the “documents” I was now trying to fix.

At this point, the solution was easy – I replaced the “double-click-protection” with a simple debounce mechanism (which prevents sending events too frequently), allowing my event to still run properly and get caught by my own event listener.

Alternatively, I could've added a condition to the "double-click-protection" code to check whether the button has an attribute double-click-protection="false", added that attribute to my button, and then listened to it using JavaScript and sent the PDF as I originally planned. It would have been a simpler, safer way to do it (less breaking of existing code), but I felt it would just patch over the problem instead of actually solving it.

Alternative design: some links available, some hidden behind "..."

Another suggestion we have considered was to use HTML-based summary/details elements to show/hide the secondary actions, thus eliminating the need for JavaScript. Unfortunately, the design team was not in favour of this solution due to time constraints on their part.

There is a key takeaway about the difference between form-based applications (server-side rendering for that matter) and web-based applications. The former is extremely limited in its capabilities, and I can see why designers would find it too restricting. Personally, I find that it forces a certain simplicity in keeping a minimalistic client-side code that is healthy for code maintenance as well as the user’s mental model.

Conclusion

The landscape of software development is constantly changing, with modern technologies, frameworks, and methodologies. This, as I have mentioned in the past, can be quite overwhelming for junior developers. I hope that the experience I have shared here gives a better view of the types of problems we deal with. As developers, we are required to understand the client's needs and produce the best software, regardless of the tech stack.


0.0.401-0


Added

  • Add copilot login subcommand and support ACP terminal-auth
  • Add agentStop and subagentStop hooks to control agent completion

Fixed

  • CLI handles unknown keypresses gracefully
  • /diff displays accurate line numbers with dual column layout