Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
151765 stories
·
33 followers

Subagents have arrived in Gemini CLI

1 Share
Gemini CLI has introduced subagents, specialized expert agents that handle complex or high-volume tasks in isolated context windows to keep the primary session fast and focused. These agents can be customized via Markdown files, run in parallel to boost productivity, and are easily invoked using the @agent syntax for targeted delegation. This architecture prevents "context rot" by consolidating intricate multi-step executions into concise summaries for the main orchestrator.
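
Based on the description above, a subagent definition is just a Markdown file. The frontmatter fields and file layout below are assumptions for illustration, so check the Gemini CLI documentation for the exact format:

```markdown
---
name: code-reviewer
description: Reviews diffs for style and correctness in an isolated context window
---

You are a code review specialist. When invoked, examine the provided
changes, summarize any issues you find, and return a concise report so
the main orchestrator session stays small and focused.
```

With a file like this in place, the agent would be invoked from the primary session with something like @code-reviewer.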
Read the whole story
alvinashcraft
just a second ago
reply
Pennsylvania, USA
Share this story
Delete

Critter Stack Sample Projects and Our Curated AI Skills

1 Share

JasperFx Software has been busy lately creating a new set of AI Skill files that incorporate our recommendations for using the Critter Stack tools (Marten, Polecat, Wolverine, Weasel, Alba, and the forthcoming CritterWatch tool). As of the end of the day tomorrow (April 16th, 2026), these skill files will be available to all current JasperFx Software support clients.

If you’re interested in gaining access to these skills, just message us at sales@jasperfx.net.

How did you build the Skills files (so far)?

Let me tell you, this has been an exercise in sheer T*E*D*I*U*M. In some order, the Critter Stack core team and I:

  • Started with a Google doc laying out the subjects we needed to include in our skills files along with the key points about usage, design, and software architecture we wanted to enforce
  • Used Claude to build a plan from that document (after the team reviewed it), plus the documentation websites, my blog, and a run through Discord discussions to identify common areas of questions or confusion (which also fed back into improving the documentation)
  • I admittedly let Claude take the first pass at the skill files, then reviewed each file by hand and made some course corrections as I went
  • I then let Claude use the new skills to convert several sample projects published online to Wolverine + Marten, then reviewed each conversion and corrected or added to the skills content as needed
  • Yet another pass at converting some additional sample projects with the corrected skills
  • Had Claude run the AI skills against a pair of large JasperFx client systems to identify issues in their code, then painstakingly reviewed that report while making yet more refinements and additions to the skills, partly to distinguish which advice is strongly recommended for greenfield systems, which can be bypassed in existing systems, and where the exception cases lie. This also turned into an exercise in identifying performance optimization opportunities. One hope for the AI Skills is that developers can write the conceptual code they want, then let the skills move that (hopefully test-covered) code to more idiomatic Wolverine usage and opt into some non-obvious Wolverine usage that can lead to better performance.
  • Let a friendly community member check out the AI skills against their new system, and we again refined the skills based on what it found. For the record, it absolutely identified some important changes that needed to be made.

Whew, take my word for it, that was exhausting. But, the result is that I feel good about letting these new skill files out into the wild! Even knowing that we’ll 100% have to continue to refine and curate these things over time.

Summary of the AI Skills (So Far)

Each skill file is a structured Markdown document that gives AI assistants deep knowledge about a specific pattern, API, or migration path. When an AI assistant has access to these skills, it can generate idiomatic Critter Stack code, follow established conventions, and avoid common pitfalls — rather than guessing from generic .NET patterns.

Skill Categories

Getting Started (6 skills)

  • New project bootstrapping for Wolverine + Marten, Wolverine + EF Core, Wolverine + Polecat, and Wolverine + CosmosDB
  • Vertical slice architecture fundamentals
  • Modular monolith patterns with Wolverine

Wolverine Handlers (8 skills)

  • Building handlers with convention-based discovery
  • Pure function handlers and A-Frame Architecture
  • Declarative persistence with [Entity] and IStorageAction<T>
  • EF Core integration patterns
  • Middleware, Railway Programming, and FluentValidation
  • IoC and service optimization

Wolverine HTTP (3 skills)

  • HTTP endpoint fundamentals with [WolverineGet], [WolverinePost], etc.
  • HTTP + Marten integration with [Aggregate] and [WriteAggregate]
  • Hybrid handlers (HTTP + messaging)

Wolverine Messaging (2 skills)

  • Message routing, outbox, scheduled messages, partitioning
  • Resiliency policies, retry strategies, circuit breakers, DLQ handling

Marten Event Sourcing (14 skills)

  • Aggregate handler workflow with [AggregateHandler]
  • Event subscriptions and forwarding
  • 5 projection types: Single Stream, Multi Stream, Flat Table, Composite, Event Enrichment
  • 7 advanced topics: Async Daemon, Cross-Stream Operations, Dynamic Consistency Boundary, Indexes, Load Distribution, Multi-Tenancy, Optimization

Polecat (1 skill)

  • Setup guide and decision criteria for SQL Server 2025 with native JSON

Migration Skills (7 skills)

  • Converting from MediatR, MassTransit, NServiceBus, EventStoreDB/Eventuous, Minimal API, and MVC Core
  • Each includes API mapping tables, before/after code examples, and migration checklists

Messaging Transports (9 skills)

  • RabbitMQ, Azure Service Bus, AWS SQS/SNS, Kafka, SignalR, NATS, Redis, Apache Pulsar, MQTT
  • Each covers configuration, topology, error handling, and best practices

Testing (7 skills)

  • Alba HTTP testing, Wolverine tracked sessions, Wolverine + Marten integration testing
  • Marten-specific test patterns (e.g., CleanAllMartenDataAsync() on the DocumentStore)
  • Test parallelization for xUnit, TUnit, NUnit, MSTest
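
For example, xUnit's test parallelization is typically tuned through an xunit.runner.json file alongside the test project; the values here are illustrative:

```json
{
  "$schema": "https://xunit.net/schema/current/xunit.runner.schema.json",
  "parallelizeTestCollections": true,
  "maxParallelThreads": 4
}
```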

Observability (5 skills)

  • OpenTelemetry setup, Prometheus metrics, Grafana dashboards
  • CLI diagnostics (describe, codegen-preview, db-apply)
  • Code generation strategies

CritterWatch Integration (1 skill)

  • Installing and configuring CritterWatch monitoring
  • Adding monitoring to Wolverine applications
  • Aspire orchestration patterns

Key Principles Taught

The skills encode battle-tested patterns refined through real-world sample conversions:

  1. Prefer synchronous handlers — let Wolverine middleware handle async persistence
  2. Avoid IResult — return concrete types for OpenAPI inference
  3. Use [Entity] aggressively — declarative entity loading replaces manual LoadAsync + null checks
  4. Move sad-path validation into Validate/ValidateAsync — keep handlers focused on the happy path
  5. Use Results.NoContent() over [EmptyResponse] — more intention-revealing for 204 responses
  6. Use IntegrateWithWolverine() + AutoApplyTransactions() — the foundation for everything
  7. Name commands in imperative form — CreateOrder, not OrderRequest
  8. One file per request type — colocate command record, validator, and endpoint class

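To make a few of these principles concrete, here is a rough sketch of an endpoint following principles 1, 3, 4, and 7. The types and attribute usage are illustrative only and may not match the current Wolverine API or the actual skills files:

```csharp
// Hypothetical sketch only: imperative command name, sad-path validation in
// Validate(), declarative [Entity] loading, and a synchronous pure handler.
public record CreateOrder(Guid CustomerId, decimal Amount);

public static class CreateOrderEndpoint
{
    // Principle 4: keep the sad path out of the handler body
    public static ProblemDetails Validate(CreateOrder command) =>
        command.Amount <= 0
            ? new ProblemDetails { Detail = "Amount must be positive", Status = 400 }
            : WolverineContinue.NoProblems;

    // Principles 1 and 3: Wolverine middleware loads the Customer and
    // persists the returned Order, so the handler stays synchronous and pure
    [WolverinePost("/orders")]
    public static Order Handle(CreateOrder command, [Entity] Customer customer) =>
        new Order(Guid.NewGuid(), customer.Id, command.Amount);
}

public record Customer(Guid Id, string Name);
public record Order(Guid Id, Guid CustomerId, decimal Amount);
```
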
Validated by Real Conversions

These skills were tested and refined by converting 10 real-world open-source projects in the JasperFx/CritterStackSamples repository — from MediatR, MassTransit, Clean Architecture, EventStoreDB, and modular monolith patterns to the Critter Stack. 107 Alba integration tests pass across all samples.

The Sample Projects

We’ve long fielded complaints, some of them legitimate, that the Critter Stack needed more sample projects. Luckily, as a side effect of all this AI skill file work, we now have the CritterStackSamples repository with all these converted projects! So far this mostly shows Wolverine + Marten work without much asynchronous messaging, but we’ll continue to add to these samples over time. I know the next sample application I’m building is going to involve Marten’s new DCB capability. And we’ll surely add more samples for Polecat too.

Why aren’t these skills free?

Really just two reasons:

  1. These skills have been built primarily through lessons learned as JasperFx has assisted our clients, and they have been trained and corrected through usage on code from JasperFx customers. Moreover, the skills will be constantly improved based on JasperFx client usage.
  2. The long term viability of the Critter Stack depends on there being a successful company behind the tools. Especially in the .NET ecosystem, it is not feasible to succeed as an OSS project of this complexity without commercial support. This is part of the answer to that need.

In other words, I just want some sweeteners for folks considering JasperFx support contracts!

Are you changing your mind about licensing?

No, and for all of you just ready to scream at us if we even flirt with making the same licensing change as MediatR or MassTransit, we’re still committed to the “Open Core” model for the Critter Stack. I.e., the currently MIT-licensed core products will remain that way.

But, as I said before, I’m concerned about the consulting and services model being insufficient in the future, so we’re pivoting to a services + commercial add-on product model.




Stop Hunting Bugs: Meet the New Visual Studio Debugger Agent

1 Share

We’ve all been there: a bug report lands in your inbox with a title like “App crashes sometimes” and zero reproduction steps. Your morning, which was supposed to be spent building new features, is now a forensic investigation. You’re setting scattershot breakpoints, staring at the call stack, and trying to guess what the original reporter was thinking. 

Debugging isn’t just about fixing code; it’s about reducing uncertainty. Today, we’re taking a massive leap toward solving that problem by introducing a new, guided workflow within the Debugger Agent in Visual Studio.

debugger agent image

Ending the “Guessing Game” with a Guided Debugger Loop 

Let’s be honest: traditional debugging is full of friction. You manually parse a vague report, hunt for the right file, and spend twenty minutes just trying to see if you’re in the right ballpark. This new workflow flips the script, transforming the Debugger Agent from a chatbot into an interactive partner plugged directly into your live runtime. 

To get started, simply open your solution in Visual Studio, switch to Debugger mode in Copilot Chat, and point it to the problem with a GitHub/ADO URL or a quick sentence like: 

 “The app crashes when saving a file.”

debugger agent1 image  

The workflow is interactive and powered by runtime debugging, meaning the Agent doesn’t just read your code; it feels how it’s running. It immediately builds a mental model of the failure and walks you through a structured, real-time process: 

  • Hypothesis & Preparation: The Agent analyzes the issue and proposes a root cause. If the reasoning looks solid, it sets intelligent breakpoints and prepares to launch your project. 

Note: If your project can’t be started automatically, just manually start your code, attach the debugger, and tell the Agent you’re ready. 

  • Active Reproduction: The Agent stays “on the line” while you trigger the bug, watching the runtime state as you move through the repro steps. 
  • Real-Time Validation: As breakpoints hit, the Agent evaluates variables and the call stack to systematically confirm its hypothesis or eliminate potential causes. 
  • The Final Fix: Once the root cause is isolated, the Agent proposes a solution. If you approve, it applies the fix and reruns the session to validate the resolution. 

This iterative flow is designed to keep you “in the zone.” By handling the manual setup and state analysis, the Agent lets you move from a bug report to a verified fix with significantly less mental context switching.  

Our Vision: Foundational Quality and Beyond 

The 18.5 GA release delivers the foundational experience of the guided workflow, specifically optimized for high-value, reproducible scenarios like exceptions, logic inconsistencies, and state corruption.

As we look forward, we are already evolving this foundation to be even more robust. Our goal is to progressively automate the end-to-end workflow, maturing the Debugger Agent into a comprehensive, seamless debugging companion that anticipates your needs. 

Debug Smarter, Not Harder 

The new workflow in the Debugger Agent represents a fundamental shift in how we think about IDEs. We’re excited to see how you use this in your own workflows, whether you’re untangling a complex race condition in a multi-threaded service or simply trying to figure out why a UI element isn’t updating as expected. 

Stay connected with the Visual Studio team by following us on Twitter @VS_Debugger and @VisualStudio, on YouTube, and on LinkedIn. 

The post Stop Hunting Bugs: Meet the New Visual Studio Debugger Agent appeared first on Visual Studio Blog.


Playing Well With Others: An Example

1 Share

[I]f an application meets expectations and passes all external tests (functional and quality), should one care what’s inside? Development would consist of coming up with those tests: TDD

— Ken Pugh, in a comment on LinkedIn

I love comments such as these, because even though they are short, they act as a rich source of practice in various aspects of what I affectionately call “Playing Well With Others”. Analyzing comments such as these along with my immediate response to them helps me both understand and more-effectively use the various principles that form the basis of how I Play Well (at least Better) With Others.

TL;DR

Let me list the ideas, principles, guidelines, and practices I used to think about this comment:

  • The Law of Generous Interpretation

  • The powerful question template “What would have to be true for me to…?”

  • The Satir Interaction Model, also known as The Ingredients of An Interaction, notably the Interpretation part

  • Betteridge’s Law of Headlines

  • The risks of Best Practices thinking

  • The risks associated with the word “should”

  • Transforming Survival Rules Into Guides

  • Bringing assumptions to the surface in order to challenge them

That’s a lot! It’s a lot to do, to remember, to learn. I didn’t learn it all at once and I needed years to practise it all confidently and effectively. Would you like help learning and practising? Because I know a guy.

Back To the Comment

Let me analyze the comment for a moment:

  • It seems like a rhetorical question, so I can interpret it as a statement, and using Betteridge’s Law, the answer is probably “no”.

  • It tries to identify a simple rule to govern complex behavior. Some folks would interpret this as a firm rule that one must always follow; others would interpret this as a guideline that one ought to keep in mind, but freely break.

  • It seems to promote a Best Practice, which always makes me nervous.

  • The word “should” raises many alarms.

  • The indirect statement “One should never care…” almost always stands in for the much more personal statement “I would rather never care…”.

That’s a lot for just a short comment!

Based on my analysis, the statement seems more like this:

“If an application meets expectations and passes all external tests, both functional and quality, then I would rather never care about what’s inside. I would focus instead on coming up with those tests, which I do when I practise TDD.”

This starts to sound more like a statement I could easily agree with, rather than a statement that I would quickly and easily tear apart. That sounds like progress to me towards Playing Well With Others.

The Statement Itself

When someone proposes an absolute rule, I tend to ask about the situations in which I would break the rule. When someone proposes an absolute rule with assumptions, I tend to challenge the assumptions or, at least, ask what would make those assumptions false. Let me try that here.

“An application meets expectations and passes all external tests, both functional and quality.”

I mean… it could happen. I guess it sometimes happens. Whose expectations? The “people who matter”, I suppose, whoever they are in the organization where we might apply this rule.

I get nervous when folks present these as binary assumptions/outcomes, but I can usually safely interpret that as shorthand for something closer to reality:

“An application meets the expectations of the people who matter well enough for their purposes and passes all external tests, both functional and quality.”

I note that the second part of this assumption contains evidence of the first part: I imagine that those tests include tests that encode some of the expectations of the people who matter. I feel confident that there are some Programmer Tests in there, too. Let me tweak the comment to make this relationship clearer.

“An application meets the expectations of the people who matter well enough for their purposes, as evidenced by passing all external tests, both functional and quality.”

I like that. Now: when faced with such an application, when would I care about what’s inside?

A Magic Question

What would have to be true for me to care about what’s inside an application that meets the expectations of the people who matter well enough for their purposes, as evidenced by passing all external tests, both functional and quality?

  • I don’t understand the tests well enough.

  • Even if I understand individual tests well enough, I don’t understand well enough how the tests work together to describe the system so that I can build a workable mental model of it.

  • I don’t have enough confidence in the tests being exhaustive enough to specify a system that does what we (the entire project community, customer base, and user base) need it to do.

  • I’m curious about what’s inside.

  • I want to build a similar application.

  • I want to extract reusable/recombinable components from it.

  • It’s often easier to build the system than to specify it with examples.

  • It is theoretically impossible to specify everything I care about a system from a combination of example-based tests and property-based tests.

I don’t consider this list complete, but I only thought about it for a few minutes. This provides enough to continue.

I judge some of these reasons as arbitrary and personal, such as my curiosity. I judge some of them as perhaps begging the question. For example, wanting to build a similar application: if I can regenerate this application from its tests, then presumably I also have the tools to generate a new application from similar tests. I judge some of these reasons as true, but useless. For example, it’s theoretically impossible to specify a system completely through tests, but we can usually get close enough to drive the risk down far enough for our purposes. None of this makes these reasons bad or pointless, but it merely reminds us that these reasons are not enough on their own to answer a clear “yes” or “no” to Ken’s opinion.

The question “What would have to be true (for me to behave (like that))?” engages my creativity to look for unspoken assumptions about the context, so that I can speak them, pay attention to them, and not become blindsided by being wrong about them. I find this question particularly powerful as a result.

For example: sometimes I’m just curious about what’s inside. I feel better when I retain the option to look and the skill to at least try to understand. Why not? I mean… you could pay me not to ask, but you don’t have enough money to make me not wonder.

The Claim Behind the Claim

Roughly speaking, I think Ken is claiming something like this:

“If you practise TDD, then you have the option to stop caring about what’s inside the application, as long as it passes all its tests and you have written tests that specify everything that the project community cares about.”

I’m taking liberties now, so I might have missed something important. Let me know. I tried to paraphrase and infer faithfully, interpreting generously.

Even in this form, I can’t quite agree, and for one key reason: it’s not theoretically possible to specify everything that the project community cares about through tests. We have been thinking about this for decades now: if we have tests, can we simply throw the production code away and rewrite it from scratch? Is this possible even in principle?

No.

We could get arbitrarily close, but then we would have another problem: eventually it would become “too expensive” to write those tests, even when we knew exactly what to write. I see two possibilities:

  1. It would not be worth the effort, even if we did it perfectly correctly, or
  2. It would become at least as difficult as writing the code. Getting the tests wrong would become as likely as getting the production code wrong.

We would then need something like TDD to help us write the tests. The tests would become a new kind of production code, and we would end up back where we are now.

And all that ignores the fact that (some, many!) programmers (still) like to program (even though the industry seems to be trying very hard to stamp that out of them). Many of them would want to look inside; they would be curious.

Why Go Through All This?!

…over a LinkedIn comment? Rly?

Rly.

This feels so much better than merely telling Ken that he’s wrong in public. For twelve reasons, maybe fifteen.

I mean, what he wrote is wrong on its face, but there is true and helpful content in there. I prefer to focus on that part. Doing this makes me feel a lot better and makes me a lot more enjoyable to work with and to be around.

That’s my gift to you.


MCP Server Gives AI Agents Simplified Access To Data Stored in SQL Databases

1 Share

Microsoft is tapping Model Context Protocol (MCP) to give customers’ AI agents simplified access to the vast amounts of corporate data stored in SQL databases.

The company’s new SQL MCP Server is open source and free, and it will work with any cloud or on-premises database including Microsoft SQL, PostgreSQL, Azure Cosmos DB, and MySQL. Microsoft describes the server as its “prescriptive” method to provide enterprise database access to agents without any language or framework requirements and no drivers or libraries to install.

Because of MCP, customers also don’t need to expose the database schema, risk undermining data consistency, or introduce dependencies for data access.

That simplified model and the streamlined access it affords is one of the major selling points that has driven broad-based support of MCP by vendors and extensive usage by customers.


Range of Use Cases

SQL MCP Server exposes data operations as MCP tools so agents can interact with databases through secure connections – and there’s more detail on the server’s security below. Microsoft cited these potential use cases as well-suited to SQL MCP Server:

  • Allow agents or copilots to perform safe Create, Read, Update, and Delete (CRUD) database operations
  • Build internal automations without writing SQL
  • Add agent capabilities without exposing the database structure directly
  • Interoperate with multiple databases, including those on-premises and in the cloud
  • Integrate an agent into a REST-based line-of-business application. REST, or Representational State Transfer, exposes databases as web resources for functions including CRUD operations

Security Controls

SQL MCP Server is a feature of Data API Builder (DAB), which uses an abstraction layer listing all tables, views, and database stored procedures exposed through the API. This lets developers assign alias names to tables and columns and limit which fields are available to different roles.

SQL MCP Server is built around DML (Data Manipulation Language), the database language used for CRUD functions in existing tables and views. As a result, SQL MCP Server works with data, not schema.

Because SQL MCP Server uses DAB’s role-based access control system, each entity in a company’s configuration defines which roles may perform CRUD functions and which fields are included or excluded for those roles. This prevents the internal schema from being exposed to external consumers and allows a user or developer to define complex, and even cross-data-source, families of objects and relationships.
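
For context, a DAB entity definition looks roughly like the fragment below. The entity, field, and role names are hypothetical, and the exact schema should be verified against the Data API Builder documentation:

```json
{
  "entities": {
    "Customer": {
      "source": "dbo.Customers",
      "permissions": [
        {
          "role": "support-agent",
          "actions": [
            {
              "action": "read",
              "fields": { "include": ["id", "name"], "exclude": ["ssn"] }
            }
          ]
        }
      ],
      "mappings": { "customer_name": "name" }
    }
  }
}
```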

SQL MCP Server produces logs and telemetry that let enterprises monitor and validate activity from a single screen. This capability includes Azure Log Analytics, Application Insights, and local file logs inside a container.

SQL MCP Server uses the DAB’s entity abstraction layer and the built-in Query Builder to produce accurate, well-formed queries in a fully deterministic way — this means the same input always produces the same output. This approach removes the risk, overhead, and nuisance associated with randomness or variations while preserving safety and reliability for agent-generated queries.
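
Conceptually, deterministic query building of this kind can be sketched as follows. This is an illustrative simplification, not DAB's actual Query Builder:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative sketch: the same entity definition and requested fields always
// produce the same SQL text, with no randomness or variation between runs.
public record EntityDef(string Table, IReadOnlyList<string> AllowedFields);

public static class SketchQueryBuilder
{
    public static string BuildSelect(EntityDef entity, IEnumerable<string> requestedFields)
    {
        // Drop any field the role's entity definition does not expose, then
        // sort with a fixed comparer so identical input yields identical output
        var fields = requestedFields
            .Where(entity.AllowedFields.Contains)
            .OrderBy(f => f, StringComparer.Ordinal)
            .ToList();

        if (fields.Count == 0)
            throw new InvalidOperationException("No permitted fields requested.");

        return $"SELECT {string.Join(", ", fields)} FROM {entity.Table};";
    }
}
```
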

SQL MCP Server implements MCP protocol version 2025-06-18 by default. During initialization, the server advertises tool and logging capabilities and returns server metadata so agents can understand the server’s intent.

The post MCP Server Gives AI Agents Simplified Access To Data Stored in SQL Databases appeared first on Cloud Wars.


Developer policy update: Intermediary liability, copyright, and transparency

1 Share

We’re sharing a few timely updates on developer policy that reflect GitHub’s ongoing work on transparency, developer protections, and copyright policy engagement—issues that directly affect how developers build, share, and maintain software.

What the Supreme Court’s decision in Cox v. Sony means for developers

This March, the U.S. Supreme Court issued its decision in Cox v. Sony, a case addressing the limits of secondary copyright liability for online services. GitHub and developer platforms served as key examples in an industry amicus brief to explain why clear and balanced liability standards are essential for developer platforms and other intermediaries that host or enable user‑generated content.

The Court’s opinion reinforced that service providers are not automatically liable for copyright infringement by users without evidence of intent to encourage or materially contribute to infringement. By clarifying this standard, the decision helps avoid overly expansive liability theories that would make it difficult for intermediaries to exist or operate at scale. For developers, this legal certainty supports innovation, collaboration, and the continued availability of neutral infrastructure that enables lawful activity like GitHub.

Looking ahead: the upcoming DMCA Section 1201 triennial review

Another important copyright process is approaching: the next triennial review of exemptions under Section 1201 of the DMCA. DMCA Section 1201 is the part of U.S. copyright law that restricts bypassing digital access controls. It matters for developers because it can affect activities like security research, interoperability, repair, accessibility, and other lawful work unless temporary exemptions are in place. The most recent triennial cycle concluded in 2024, setting exemptions that remain in effect for the current three-year period.

GitHub has a long history of engaging in the Section 1201 process and publishing resources to explain why these exemptions matter to developers. In 2021, we filed comments in support of a broad safe harbor for good-faith security research. The 2024 cycle included several submissions of interest to developers such as the Authors Alliance exemption expansion petition, which addressed access and preservation concerns, as well as a security research petition focused on AI safety‑related research and analysis which drew thoughtful comments in support from HackerOne, the Hacking Policy Council, and academic researchers.

Although the generative AI security research petition was not ultimately adopted in the 2024 cycle, it raised important questions about how existing DMCA frameworks apply to emerging AI‑related research and development practices. As the software ecosystem continues to evolve, new Section 1201 challenges are emerging—particularly around AI systems, model inspection, safety research, and interoperability.

Looking ahead to the 2027 triennial review, we’re interested in hearing from developers about the issues they’re encountering, which use cases matter most, and how these questions should be explored in future DMCA triennial reviews.

Transparency update: full‑year 2025 data now available

We’ve also updated GitHub’s Transparency Center with the full year of 2025 data. For this update, we made improvements to the site, including clearer charts and updated visualizations for our abuse-related restrictions, appeals, and reinstatements designed to make the information easier to understand. 2025 had the highest count of DMCA circumvention claims since we started our transparency reporting. While this can be attributed to a few very large takedowns, it also underscores how important a balanced approach to the DMCA is for software developers, code collaboration platforms, and the open source ecosystem.

Next up for the policy blog: age assurance and what it means for developers

We’re hearing growing concern from the developer community about age assurance laws emerging in U.S. states, Brazil, and Europe, particularly where requirements aimed at commercial, consumer‑facing products could unintentionally sweep in open source operating systems, package managers, and other critical digital infrastructure. These issues reinforce the value of ongoing collaboration with policymakers to promote informed, balanced policies that support open source developers. We’ll continue to advocate for policies that reflect technical realities and support open development, including through an upcoming educational developer policy blog post and a May Maintainer Month session focused on these topics.

The post Developer policy update: Intermediary liability, copyright, and transparency appeared first on The GitHub Blog.
