
JasperFx Software works hand in hand with our clients to improve their outcomes on software projects using the "Critter Stack" (Marten and Wolverine). Based on our engagements with client projects as well as the greater Critter Stack user base, we've built up quite a few optional usages and settings in the two frameworks to solve specific technical challenges.
The unfortunate reality of managing a long-lived application framework such as Wolverine or a complicated library like Marten is the need both to continuously improve the tools and to try very hard not to introduce regressions for our clients when they upgrade. To that end, we've had to make several potentially helpful features "opt in," meaning that users have to explicitly turn on feature-flag style settings for these features. A common reason is any change that introduces database schema changes, as we try very hard to do that only in major version releases (Wolverine 5.0 added some new tables to SQL Server and PostgreSQL storage, for example).
And yes, we've still introduced regression bugs in Marten or Wolverine far more times than I'd like, even while trying to be careful. In the end, I think the only guaranteed way to continuously and safely improve tools like the Critter Stack is to be responsive to whatever problems slip through your quality gates and fix them quickly to regain trust.
With all that being said, let's pretend we're starting a greenfield project with the Critter Stack and we want to build the best-performing system possible, with some added options for improved resiliency as well. To jump to the end state, this is what I'm proposing as a new optimized greenfield setup:
var builder = Host.CreateApplicationBuilder();

builder.Services.AddMarten(m =>
{
    // Much more coming...
    m.Connection(builder.Configuration.GetConnectionString("marten"));

    // 50% improvement in throughput, less "event skipping"
    m.Events.AppendMode = EventAppendMode.Quick;

    // or if you care about the timestamps -->
    m.Events.AppendMode = EventAppendMode.QuickWithServerTimestamps;

    // 100% do this, but be aggressive about taking advantage of it
    m.Events.UseArchivedStreamPartitioning = true;

    // These cause some database changes, so can't be defaults,
    // but these might help "heal" systems that have problems later
    m.Events.EnableAdvancedAsyncTracking = true;

    // Enables you to mark events as just plain bad so they are skipped
    // in projections from here on out
    m.Events.EnableEventSkippingInProjectionsOrSubscriptions = true;

    // If you do this, you pretty much have to use FetchForWriting()
    // in your command handlers -- but you should be using FetchForWriting()
    // in command handlers anyway.
    // This will optimize the usage of Inline projections, but will force
    // you to treat your aggregate projection "write models" as being
    // immutable in your command handler code.
    // You'll want to use the "Decider Pattern" / "Aggregate Handler Workflow"
    // style for your commands rather than a self-mutating "AggregateRoot"
    m.Events.UseIdentityMapForAggregates = true;

    // Future proofing a bit. Will help with some future
    // rebuild optimizations
    m.Events.UseMandatoryStreamTypeDeclaration = true;

    // This is just annoying anyway
    m.DisableNpgsqlLogging = true;
})

// This will remove some runtime overhead from Marten
.UseLightweightSessions()
.IntegrateWithWolverine(x =>
{
    // Let Wolverine do the load distribution better than
    // what Marten by itself can do
    x.UseWolverineManagedEventSubscriptionDistribution = true;
});

builder.Services.AddWolverine(opts =>
{
    // This *should* have some performance improvements, but would
    // require downtime to enable in existing systems
    opts.Durability.EnableInboxPartitioning = true;

    // Extra resiliency for unexpected problems, but can't be
    // defaults because this causes database changes
    opts.Durability.InboxStaleTime = 10.Minutes();
    opts.Durability.OutboxStaleTime = 10.Minutes();

    // Just annoying
    opts.EnableAutomaticFailureAcks = false;

    // Relatively new behavior that will store "unknown" messages
    // in the dead letter queue for possible recovery later
    opts.UnknownMessageBehavior = UnknownMessageBehavior.DeadLetterQueue;
});

using var host = builder.Build();
return await host.RunJasperFxCommands(args);
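Since the comments above lean so hard on using FetchForWriting() with immutable "write models," here is a rough sketch of what a command handler in that style might look like. The Order aggregate, the OrderShipped event, and the MarkShipped command are all hypothetical types invented for illustration; the Marten calls (FetchForWriting, AppendOne, SaveChangesAsync) are the real API surface, but treat the overall shape as a sketch rather than the one true pattern:

```csharp
// Hypothetical types, purely for illustration
public record MarkShipped(Guid OrderId);
public record OrderShipped(Guid OrderId);

public record Order(Guid Id, bool HasShipped)
{
    // "Decider" style: Apply() returns new immutable state
    // instead of mutating the aggregate in place
    public Order Apply(OrderShipped e) => this with { HasShipped = true };
}

public static class MarkShippedHandler
{
    public static async Task Handle(MarkShipped command, IDocumentSession session)
    {
        // FetchForWriting() loads the aggregate with concurrency protection
        // and is what makes UseIdentityMapForAggregates pay off
        var stream = await session.Events.FetchForWriting<Order>(command.OrderId);

        // Treat stream.Aggregate as read-only; decide on new events instead
        if (stream.Aggregate is { HasShipped: false })
        {
            stream.AppendOne(new OrderShipped(command.OrderId));
        }

        await session.SaveChangesAsync();
    }
}
```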
Now, let’s talk more about some of these settings…
Lightweight Sessions with Marten
The first option we’re going to explicitly add is to use “lightweight” sessions in Marten:
var builder = Host.CreateApplicationBuilder();

builder.Services.AddMarten(m =>
{
    // Elided configuration...
})

// This will remove some runtime overhead from Marten
.UseLightweightSessions();
By default, Marten uses a heavier version of IDocumentSession that incorporates an identity map internally to track documents (entities) already loaded by that session. When you request an entity by its identity, the session first checks whether it has already loaded that entity and, if so, hands you the same object back without making another database call.
The identity map is mostly helpful when you have unclear or deeply nested call stacks where different parts of the code might try to load the same data within the same HTTP request or command execution. If you follow what we consider Critter Stack best practices, especially for Wolverine usage, you'll know that we very strongly recommend against deep call stacks and excessive layering.
Moreover, I would argue that you should never need the identity map behavior in a system built with an idiomatic Critter Stack approach, so the default session type is actually harmful: it adds extra runtime overhead for no benefit. "Lightweight" sessions run leaner by completely eliminating the dictionary storage and lookups.
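To make the difference concrete, here is a small sketch contrasting the two session types. DocumentStore.For(), IdentitySession(), LightweightSession(), and LoadAsync() are the real Marten API; the User document, the connection string, and the identifier are placeholders for this example:

```csharp
// Hypothetical document type and connection string for illustration
var store = DocumentStore.For("Host=localhost;Database=app");
var userId = Guid.NewGuid();

await using (var session = store.IdentitySession())
{
    var first = await session.LoadAsync<User>(userId);
    var second = await session.LoadAsync<User>(userId);
    // Identity map: the second load is served from the session's
    // internal dictionary -- same object instance, no second DB call
}

await using (var session = store.LightweightSession())
{
    var first = await session.LoadAsync<User>(userId);
    var second = await session.LoadAsync<User>(userId);
    // Lightweight: each load hits the database and deserializes a
    // fresh object -- no tracking dictionaries, less overhead
}
```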
Why, you ask, is the identity map behavior the default?
- We originally designed Marten as a near drop-in replacement for RavenDb in a big system, so we had to mimic that behavior right off the bat to make the replacement in a timely fashion
- If we changed the default behavior, it could easily break code in existing systems that upgrade, in ways that are very hard to predict and unfortunately hard to diagnose. And of course, this is most likely to be a problem in exactly the kind of codebase that is hard to reason about. How do I know this, and why am I so very certain? Scar tissue.


