
JasperFx Software has been busy lately creating a new set of AI Skill files that incorporate our recommendations for using the Critter Stack tools (Marten, Polecat, Wolverine, Weasel, Alba, and the forthcoming CritterWatch tool). As of the end of the day tomorrow (April 16th, 2026), these skill files will be available to all current JasperFx Software support clients.
If you’re interested in gaining access to these skills, just message us at sales@jasperfx.net.
Let me tell you, this has been an exercise in sheer T*E*D*I*U*M. In some order, the Critter Stack core team and I:
Whew, take my word for it, that was exhausting. But, the result is that I feel good about letting these new skill files out into the wild! Even knowing that we’ll 100% have to continue to refine and curate these things over time.
Each skill file is a structured Markdown document that gives AI assistants deep knowledge about a specific pattern, API, or migration path. When an AI assistant has access to these skills, it can generate idiomatic Critter Stack code, follow established conventions, and avoid common pitfalls — rather than guessing from generic .NET patterns.
- Getting Started (6 skills)
- Wolverine Handlers (8 skills), covering [Entity] and IStorageAction<T>
- Wolverine HTTP (3 skills), covering [WolverineGet], [WolverinePost], etc., plus [Aggregate] and [WriteAggregate]
- Wolverine Messaging (2 skills)
- Marten Event Sourcing (14 skills), covering [AggregateHandler]
- Polecat (1 skill)
- Migration Skills (7 skills)
- Messaging Transports (9 skills)
- Testing (7 skills), covering CleanAllMartenDataAsync and DocumentStore()
- Observability (5 skills), covering describe, codegen-preview, and db-apply
- CritterWatch Integration (1 skill)
The skills encode battle-tested patterns refined through real-world sample conversions:
- IResult — return concrete types for OpenAPI inference
- [Entity] aggressively — declarative entity loading replaces manual LoadAsync + null checks
- Validate/ValidateAsync — keep handlers focused on the happy path
- Results.NoContent() over [EmptyResponse] — more intention-revealing for 204 responses
- IntegrateWithWolverine() + AutoApplyTransactions() — the foundation for everything
- CreateOrder, not OrderRequest

These skills were tested and refined by converting 10 real-world open-source projects in the JasperFx/CritterStackSamples repository — from MediatR, MassTransit, Clean Architecture, EventStoreDB, and modular monolith patterns to the Critter Stack. 107 Alba integration tests pass across all samples.
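As a rough illustration of how several of these patterns fit together, here is a minimal sketch of a Wolverine HTTP endpoint. The ApproveOrder, Order, and OrderApproved names are hypothetical, not from any skill file, and exact signatures may vary by Wolverine version:

```csharp
// Illustrative sketch only: assumes the Wolverine.Http package is referenced.
// All type names here are hypothetical.
public record ApproveOrder(Guid OrderId);
public record OrderApproved(Guid OrderId);

public static class ApproveOrderEndpoint
{
    // Returning the concrete OrderApproved type (rather than IResult)
    // lets OpenAPI infer the response schema.
    [WolverinePost("/orders/approve")]
    public static OrderApproved Handle(
        ApproveOrder command,
        // [Entity] loads the Order declaratively, replacing a manual
        // LoadAsync call plus null check inside the handler body.
        [Entity] Order order)
    {
        // Happy path only; by convention, validation would live in a
        // separate Validate/ValidateAsync method on this class.
        return new OrderApproved(order.Id);
    }
}

public class Order
{
    public Guid Id { get; set; }
}
```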
We’ve long fielded complaints, some of them with legitimate validity, that the Critter Stack needed more sample projects. Luckily, as a side effect of all this AI skill file work, we now have the CritterStackSamples repository with all these converted projects! So far this mostly shows Wolverine + Marten work without much asynchronous messaging, but we’ll continue to add to these samples over time. I know the next sample application I’m building is going to involve Marten’s new DCB capability. And we’ll surely add more samples for Polecat too.
Really just two reasons:
In other words, I just want some sweeteners for folks considering JasperFx support contracts!
No, and for all of you just ready to scream at us if we even flirt with making the same licensing change as MediatR or MassTransit, we’re still committed to the “Open Core” model for the Critter Stack. I.e., the currently MIT-licensed core products will remain that way.
But, as I said before, I’m concerned about the consulting and services model being insufficient in the future, so we’re pivoting to a services + commercial add-on product model.
We’ve all been there: a bug report lands in your inbox with a title like “App crashes sometimes” and zero reproduction steps. Your morning, which was supposed to be spent building new features, is now a forensic investigation. You’re setting scattershot breakpoints, staring at the call stack, and trying to guess what the original reporter was thinking.
Debugging isn’t just about fixing code; it’s about reducing uncertainty. Today, we’re taking a massive leap toward solving that problem by introducing a new, guided workflow within the Debugger Agent in Visual Studio.
Let’s be honest: traditional debugging is full of friction. You manually parse a vague report, hunt for the right file, and spend twenty minutes just trying to see if you’re in the right ballpark. This new workflow flips the script, transforming the Debugger Agent from a chatbot into an interactive partner plugged directly into your live runtime.
To get started, simply open your solution in Visual Studio, switch to Debugger mode in Copilot Chat, and point it to the problem with a GitHub/ADO URL or a quick sentence like:
“The app crashes when saving a file.”
The workflow is interactive and powered by runtime debugging, meaning the Agent doesn’t just read your code; it feels how it’s running. It immediately builds a mental model of the failure and walks you through a structured, real-time process:
Note: If your project can’t be started automatically, just manually start your code, attach the debugger, and tell the Agent you’re ready.
This iterative flow is designed to keep you “in the zone.” By handling the manual setup and state analysis, the Agent lets you move from a bug report to a verified fix with significantly less mental context switching.
The 18.5 GA release delivers the foundational experience of the guided workflow, specifically optimized for high-value, reproducible scenarios like exceptions, logic inconsistencies, and state corruption.
As we look forward, we are already evolving this foundation to be even more robust. Our goal is to progressively automate the end-to-end workflow, maturing the Debugger Agent into a comprehensive, seamless debugging companion that anticipates your needs.
The new workflow in the Debugger Agent represents a fundamental shift in how we think about IDEs. We’re excited to see how you use this in your own workflows, whether you’re untangling a complex race condition in a multi-threaded service or simply trying to figure out why a UI element isn’t updating as expected.
Stay connected with the Visual Studio team by following us on Twitter (@VS_Debugger and @VisualStudio), YouTube, and LinkedIn.
[I]f an application meets expectations and passes all external tests (functional and quality), should one care what’s inside? Development would consist of coming up with those tests: TDD
— Ken Pugh, in a comment on LinkedIn
I love comments such as these because, even though they are short, they act as a rich source of practice in various aspects of what I affectionately call “Playing Well With Others”. Analyzing comments such as these, along with my immediate response to them, helps me both understand and more effectively use the various principles that form the basis of how I Play Well (at least Better) With Others.
Let me list the ideas, principles, guidelines, and practices I used to think about this comment:
The Law of Generous Interpretation
The powerful question template “What would have to be true for me to…?”
The Satir Interaction Model, also known as The Ingredients of An Interaction, notably the Interpretation part
Betteridge’s Law of Headlines
The risks of Best Practices thinking
The risks associated with the word “should”
Transforming Survival Rules Into Guides
Bringing assumptions to the surface in order to challenge them
That’s a lot! It’s a lot to do, to remember, to learn. I didn’t learn it all at once and I needed years to practise it all confidently and effectively. Would you like help learning and practising? Because I know a guy.
Let me analyze the comment for a moment:
It seems like a rhetorical question, so I can interpret it as a statement, and using Betteridge’s Law, the answer is probably “no”.
It tries to identify a simple rule to govern complex behavior. Some folks would interpret this as a firm rule that one must always follow; others would interpret this as a guideline that one ought to keep in mind, but freely break.
It seems to promote a Best Practice, which always makes me nervous.
The word “should” raises many alarms.
The indirect statement “One should never care…” almost always stands in for the much more personal statement “I would rather never care…”.
That’s a lot for just a short comment!
Based on my analysis, the statement seems more like this:
“If an application meets expectations and passes all external tests, both functional and quality, then I would rather never care about what’s inside. I would focus instead on coming up with those tests, which I do when I practise TDD.”
This starts to sound more like a statement I could easily agree with, rather than a statement that I would quickly and easily tear apart. That sounds like progress to me towards Playing Well With Others.
When someone proposes an absolute rule, I tend to ask about the situations in which I would break the rule. When someone proposes an absolute rule with assumptions, I tend to challenge the assumptions or, at least, ask what would make those assumptions false. Let me try that here.
“An application meets expectations and passes all external tests, both functional and quality.”
I mean… it could happen. I guess it sometimes happens. Whose expectations? The “people who matter”, I suppose, whoever they are in the organization where we might apply this rule.
I get nervous when folks present these as binary assumptions/outcomes, but I can usually safely interpret that as shorthand for something closer to reality:
“An application meets the expectations of the people who matter well enough for their purposes and passes all external tests, both functional and quality.”
I note that the second part of this assumption contains evidence of the first part: I imagine that those tests include tests that encode some of the expectations of the people who matter. I feel confident that there are some Programmer Tests in there, too. Let me tweak the comment to make this relationship clearer.
“An application meets the expectations of the people who matter well enough for their purposes, as evidenced by passing all external tests, both functional and quality.”
I like that. Now: when faced with such an application, when would I care about what’s inside?
What would have to be true for me to care about what’s inside an application that meets the expectations of the people who matter well enough for their purposes, as evidenced by passing all external tests, both functional and quality?
I don’t understand the tests well enough.
Even if I understand individual tests well enough, I don’t understand well enough how the tests work together to describe the system so that I can build a workable mental model of it.
I don’t have enough confidence in the tests being exhaustive enough to specify a system that does what we (the entire project community, customer base, and user base) need it to do.
I’m curious about what’s inside.
I want to build a similar application.
I want to extract reusable/recombinable components from it.
It’s often easier to build the system than to specify it with examples.
It is theoretically impossible to specify everything I care about in a system from a combination of example-based tests and property-based tests.
I don’t consider this list complete, but I only thought about it for a few minutes. This provides enough to continue.
I judge some of these reasons as arbitrary and personal, such as my curiosity. I judge some of them as perhaps begging the question. For example, wanting to build a similar application: if I can regenerate this application from its tests, then presumably I also have the tools to generate a new application from similar tests. I judge some of these reasons as true, but useless. For example, it’s theoretically impossible to specify a system completely through tests, but we can usually get close enough to drive the risk down far enough for our purposes. None of this makes these reasons bad or pointless, but it merely reminds us that these reasons are not enough on their own to answer a clear “yes” or “no” to Ken’s opinion.
The question “What would have to be true (for me to behave (like that))?” engages my creativity to look for unspoken assumptions about the context, so that I can speak them, pay attention to them, and not become blindsided by being wrong about them. I find this question particularly powerful as a result.
For example: sometimes I’m just curious about what’s inside. I feel better when I retain the option to look and the skill to at least try to understand. Why not? I mean… you could pay me not to ask, but you don’t have enough money to make me not wonder.
Roughly speaking, I think Ken is claiming something like this:
“If you practise TDD, then you have the option to stop caring about what’s inside the application, as long as it passes all its tests and you have written tests that specify everything that the project community cares about.”
I’m taking liberties now, so I might have missed something important. Let me know. I tried to paraphrase and infer faithfully, interpreting generously.
Even in this form, I can’t quite agree, and for one key reason: it’s not theoretically possible to specify everything that the project community cares about through tests. We have been thinking about this for decades now: if we have tests, can we simply throw the production code away and rewrite it from scratch? Is this possible even in principle?
No.
We could get arbitrarily close, but then we would have another problem: eventually it would become “too expensive” to write those tests, even when we knew exactly what to write. I see two possibilities:
We would then need something like TDD to help us write the tests. The tests would become a new kind of production code, and we would end up back where we are now.
And all that ignores the fact that (some, many!) programmers (still) like to program (even though the industry seems to be trying very hard to stamp that out of them). Many of them would want to look inside; they would be curious.
…over a LinkedIn comment? Rly?
Rly.
This feels so much better than merely telling Ken that he’s wrong in public. For twelve reasons, maybe fifteen.
I mean, what he wrote is wrong on its face, but there is true and helpful content in there. I prefer to focus on that part. Doing this makes me feel a lot better and makes me a lot more enjoyable to work with and to be around.
That’s my gift to you.

Microsoft is tapping Model Context Protocol (MCP) to give customers’ AI agents simplified access to the vast amounts of corporate data stored in SQL databases.
The company’s new SQL MCP Server is open source and free, and it will work with any cloud or on-premises database, including Microsoft SQL Server, PostgreSQL, Azure Cosmos DB, and MySQL. Microsoft describes the server as its “prescriptive” method to provide enterprise database access to agents without any language or framework requirements and no drivers or libraries to install.
Because of MCP, customers also don’t need to expose the database schema, risk undermining data consistency, or introduce dependencies for data access.
That simplified model and the streamlined access it affords are among the major selling points that have driven broad-based support of MCP by vendors and extensive usage by customers.

SQL MCP Server exposes data operations as MCP tools so agents can interact with databases through secure connections – and there’s more detail on the server’s security below. Microsoft cited these potential use cases as well-suited to SQL MCP Server:
SQL MCP Server is a feature of Data API Builder (DAB), which uses an abstraction layer that lists all tables, views, and database stored procedures that are exposed through the API. This lets developers assign aliases to entities and columns and limit which fields are available to different roles.
SQL MCP Server is built around DML (Data Manipulation Language), the database language used for CRUD functions in existing tables and views. As a result, SQL MCP Server works with data, not schema.
Because SQL MCP Server uses DAB’s role-based access control system, each entity in a company’s configuration defines which roles may perform CRUD functions and which fields are included or excluded for those roles. This prevents the internal schema from being exposed to external consumers and allows a user or developer to define complex, and even cross-data-source, families of objects and relationships.
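A sketch of what such an entity definition can look like in a DAB configuration file. The entity, table, field, and role names here are hypothetical, and the exact schema should be checked against the current DAB documentation:

```json
{
  "entities": {
    "Customer": {
      "source": "dbo.Customers",
      "permissions": [
        {
          "role": "support-agent",
          "actions": [
            {
              "action": "read",
              "fields": {
                "include": ["Id", "Name", "Region"],
                "exclude": ["TaxId"]
              }
            }
          ]
        },
        {
          "role": "admin",
          "actions": ["create", "read", "update", "delete"]
        }
      ]
    }
  }
}
```

Only the aliased entity and its permitted fields are visible to a given role; the underlying table schema stays hidden from external consumers.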
SQL MCP Server produces logs and telemetry that let enterprises monitor and validate activity from a single screen. This capability includes Azure Log Analytics, Application Insights, and local file logs inside a container.
SQL MCP Server uses DAB’s entity abstraction layer and the built-in Query Builder to produce accurate, well-formed queries in a fully deterministic way — the same input always produces the same output. This approach removes the risk, overhead, and nuisance associated with randomness or variation while preserving safety and reliability for agent-generated queries.
SQL MCP Server implements MCP protocol version 2025-06-18 by default. During initialization, the server advertises tool and logging capabilities and returns server metadata so agents can understand the server’s intent.
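Roughly, the initialization handshake in the MCP 2025-06-18 specification has the server return a JSON-RPC response like the following. The serverInfo and instructions values here are placeholders, not taken from the actual SQL MCP Server:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "protocolVersion": "2025-06-18",
    "capabilities": {
      "tools": { "listChanged": true },
      "logging": {}
    },
    "serverInfo": {
      "name": "example-sql-mcp-server",
      "version": "1.0.0"
    },
    "instructions": "Placeholder: describes the server's intent to the agent."
  }
}
```

The advertised capabilities object is how an agent learns that it can list and call tools and receive log messages from this server.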
More Data and MCP Insights:
We’re sharing a few timely updates on developer policy that reflect GitHub’s ongoing work on transparency, developer protections, and copyright policy engagement—issues that directly affect how developers build, share, and maintain software.
This March, the U.S. Supreme Court issued its decision in Cox v. Sony, a case addressing the limits of secondary copyright liability for online services. GitHub and developer platforms served as key examples in an industry amicus brief to explain why clear and balanced liability standards are essential for developer platforms and other intermediaries that host or enable user‑generated content.
The Court’s opinion reinforced that service providers are not automatically liable for copyright infringement by users without evidence of intent to encourage or materially contribute to infringement. By clarifying this standard, the decision helps avoid overly expansive liability theories that would make it difficult for intermediaries to exist or operate at scale. For developers, this legal certainty supports innovation, collaboration, and the continued availability of neutral infrastructure, like GitHub, that enables lawful activity.
Another important copyright process is approaching: the next triennial review of exemptions under Section 1201 of the DMCA. DMCA Section 1201 is the part of U.S. copyright law that restricts bypassing digital access controls. It matters for developers because it can affect activities like security research, interoperability, repair, accessibility, and other lawful work unless temporary exemptions are in place. The most recent triennial cycle concluded in 2024, setting exemptions that remain in effect for the current three-year period.
GitHub has a long history of engaging in the Section 1201 process and publishing resources to explain why these exemptions matter to developers. In 2021, we filed comments in support of a broad safe harbor for good-faith security research. The 2024 cycle included several submissions of interest to developers such as the Authors Alliance exemption expansion petition, which addressed access and preservation concerns, as well as a security research petition focused on AI safety‑related research and analysis which drew thoughtful comments in support from HackerOne, the Hacking Policy Council, and academic researchers.
Although the generative AI security research petition was not ultimately adopted in the 2024 cycle, it raised important questions about how existing DMCA frameworks apply to emerging AI‑related research and development practices. As the software ecosystem continues to evolve, new Section 1201 challenges are emerging—particularly around AI systems, model inspection, safety research, and interoperability.
Looking ahead to the 2027 triennial review, we’re interested in hearing from developers about the issues they’re encountering, which use cases matter most, and how these questions should be explored in future DMCA triennial reviews.
We’ve also updated GitHub’s Transparency Center with the full year of 2025 data. For this update, we made improvements to the site, including clearer charts and updated visualizations for our abuse-related restrictions, appeals, and reinstatements designed to make the information easier to understand. 2025 had the highest count of DMCA circumvention claims since we started our transparency reporting. While this can be attributed to a few very large takedowns, it also underscores how important a balanced approach to the DMCA is for software developers, code collaboration platforms, and the open source ecosystem.
We’re hearing growing concern from the developer community about age assurance laws emerging in U.S. states, Brazil, and Europe, particularly where requirements aimed at commercial, consumer‑facing products could unintentionally sweep in open source operating systems, package managers, and other critical digital infrastructure. These issues reinforce the value of ongoing collaboration with policymakers to promote informed, balanced policies that support open source developers. We’ll continue to advocate for policies that reflect technical realities and support open development, including through an upcoming educational developer policy blog post and a May Maintainer Month session focused on these topics.