Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
153305 stories
·
33 followers

What’s New in Aspire 13.3

1 Share

Aspire 13.3 is here, and even though it’s only been five weeks since 13.2, this one is still packed.

We have the new Aspireify skill for agent-assisted onboarding. We have command results that bring structured output from resource commands into the dashboard, CLI, MCP tools, and integrations. We have browser logs, network capture, and screenshots flowing into Aspire. And FINALLY, we have first-class Kubernetes and AKS deployment support with Helm.

This release includes 45 new features, 134 improvements, and 93 bug fixes across the AppHost, CLI, dashboard, extensions, and integrations. That’s a lot! If you want every single detail, head to the full What’s New in Aspire 13.3. For now, let’s look at some of my faves.

💫 Aspireify anything

One of my favorite things about Aspire is how quickly it can make an existing app feel less like a pile of processes and more like a real system. But anyone who has watched an AspiriFridays stream knows that aspirification takes real work. We’ve spent almost a year adding Aspire to community apps live on Fridays, and every app has its own little personality.

What services are in here? Which ports matter? Which environment variables are real dependencies? Which Docker Compose services should map to first-party Aspire integrations? What looks like “just add an AppHost” is actually a lot of careful repo archaeology, a lot of “wait, why does this service need that?”, and a lot of tiny decisions that make the final Aspire experience feel good.

That is exactly what the new Aspireify skill is for.

When aspire init drops a skeleton AppHost into an existing app, the new Aspireify skill gives your coding agent a guided workflow for finishing the job. It is a one-time setup skill that helps the agent inspect the repo, understand how the app already runs, and wire the AppHost to fit the app instead of forcing the app to fit Aspire.

That last part is important. The default stance is “minimize changes to the user’s code.” If your app already reads DATABASE_URL, the agent can map that with WithEnvironment() instead of asking you to rewrite your configuration. If a port is hardcoded, the skill tells the agent when to preserve it and when to ask if Aspire should manage it. If Docker Compose already exists, the agent maps what is there before trying to get clever.

This is the kind of agent workflow I get excited about: source-aware, compiler-checked, and opinionated about asking when there is a real tradeoff. After the repo is Aspireified, the regular Aspire skill takes over for day-to-day AppHost work like starting resources, checking logs, and debugging.

Don’t worry, we still have plenty of ideas and side quests for AspiriFridays, even if an agent helps us out along the way.

🔔 Command results are more than just a boolean

Resource commands are one of those features that sound small until you use them a lot. Being able to click a dashboard command or run a CLI command against a resource is great, but success/failure is only the start. Sometimes the whole point of the command is the output.

In Aspire 13.3, resource commands can return structured results back to the caller. Text and JSON results now flow through the model, gRPC, backchannel, dashboard UI, CLI, and MCP tools. That means commands can return rich, markdown-formatted responses, not just “yep, it ran.”

If you watched Michael Cummings’ Beyond Telemetry: Supercharging DevEx with the Aspire Dashboard session at Aspire Conf, this fits right into that story. Resource commands are one of the best ways to turn repeated local-dev chores into safe, discoverable buttons and CLI commands. Command results make those buttons much more useful.

HTTP commands can return response bodies too, surfaced through the CLI, dashboard, and generated polyglot SDKs. The resource rebuild command now returns build output as structured text data, so tools and agents can consume the result directly without scraping resource logs.

The dashboard ties this together with a new notification center in the header. Command execution results show up as timestamped success/failure notifications with markdown rendering and a View response action for the full output.

This also makes Aspire integrations more extensible. Integrations can add commands that return meaningful results instead of just changing state somewhere in the background. I actually just opened a dev tunnels integration PR for 13.4 that uses this model to show tunnel URLs directly from the command result. Very into this.

👀 Browser logs give Aspire even more eyes

We always talk about Aspire giving agents eyes with logs, traces, metrics, and resource state. So I guess this new capability is our… third eye?

If your app has a frontend, a lot of the interesting failures happen in the browser. Console errors, failed network requests, weird client-side exceptions, layout bugs that only show up after clicking three things in exactly the wrong order. This is another AspiriFridays special – Damian trying to tell me what button to click on the Network tab in browser developer tools.

The new WithBrowserLogs() API attaches a tracked browser resource to any endpoint-capable resource. Aspire launches Chromium using a private CDP pipe instead of an exposed TCP debug endpoint, then streams console logs, network requests, and errors into the resource log stream.

var frontend = builder.AddViteApp("frontend", "../frontend")
    .WithHttpEndpoint(port: 3000)
    .WithBrowserLogs();

And in a TypeScript AppHost:

const frontend = await builder.addViteApp("frontend", "../frontend")
    .withHttpEndpoint({ port: 3000 })
    .withBrowserLogs();

The implementation ships from the new Aspire.Hosting.Browsers prerelease package. A dashboard command lets you configure scope, browser, and user data mode at runtime, and a screenshot command saves PNGs as durable local artifacts.

This is amazing for human debugging, and it is great for agent workflows too. Your agent can run the app, inspect browser logs, capture what changed, fix the code, restart the resource, and keep going without asking you to paste screenshots back into chat. Build, run, observe, fix, rinse, repeat.

🌍 TypeScript AppHost is getting closer to GA

Aspire 13.2 brought TypeScript AppHost authoring, which was a huge step toward making Aspire feel natural no matter what language you are comfortable in. In 13.3, we kept pushing on that foundation.

TypeScript, Python, and Java AppHosts now expose the complete set of Aspire.Hosting extension methods. If it works in C#, it should work from your language too: execution configuration, eventing, pipeline steps, Docker Compose customization, Dockerfile building, and all the other bits that make an AppHost more than a startup script.

We also spent time making the APIs feel idiomatic. Methods like addProject, withEnvironment, and withReference are consolidated so they read naturally in each target language.

Python joins as a new AppHost code generator, and Java AppHost support is much closer to the TypeScript experience with unions, optional/nullability, callbacks, and a new Empty (Java AppHost) starter template.

If you want more on how TypeScript AppHost works under the covers, check out Sebastien Ros’ post on TypeScript AppHost in Aspire 13.2 and the TypeScript AppHost project structure docs. The short version: Aspire generates a typed SDK into .modules/, your apphost.ts imports from it, and the SDK regenerates when you add or update integrations.

Also, shout-out to the new ASPIREEXPORT013 analyzer diagnostic, which catches duplicate capability IDs at build time. Agents love compilers, humans love not finding out about generated SDK problems at runtime, everybody wins.

☸ Kubernetes and AKS, finally!

By far the most asked-for Aspire deployment feature has been Kubernetes. Aspire has had a really nice deployment story for Azure Container Apps and Docker Compose, but a lot of you are running Kubernetes in production and have been waiting for Aspire to meet you there.

With 13.3, Kubernetes joins the party as a first-class deployment target. FINALLY.

The new Aspire.Hosting.Azure.Kubernetes package adds AddAzureKubernetesEnvironment(), so you can define AKS clusters, node pools, SKU tiers, private clusters, and Azure Container Insights right from your AppHost. aspire deploy uses Helm under the covers, and you can configure namespace and release names with WithHelm().

var aks = builder.AddAzureKubernetesEnvironment("prod-aks")
    .WithHelm();

builder.AddCSharpApp("api", "../api")
    .PublishTo(aks);

And the same idea from a TypeScript AppHost:

const aks = await builder.addAzureKubernetesEnvironment("prod-aks")
    .withHelm();

await builder.addCsharpApp("api", "../api")
    .publishTo(aks);

You also get declarative Ingress and Gateway API routing with AddIngress() and AddGateway(), including route, TLS, hostname, and class configuration. Gateway TLS can even auto-discover the controller assigned FQDN when you use WithTls() without WithHostname().

And when you are ready to tear things down, aspire destroy will run helm uninstall for Kubernetes deployments. No more half-manual cleanup scripts living in a README somewhere. We love to see it.

We will have a deeper Kubernetes post soon, because there is a lot more to say here.

🧺 A grab bag of very good stuff

There is a lot more in 13.3 that is worth calling out, so let’s do a quick grab bag.

EF Core migration management now lives directly in your AppHost with the new Aspire.Hosting.EntityFrameworkCore package. Six commands show up in the dashboard and CLI: Update Database, Drop Database, Reset Database, Add Migration, Remove Migration, and Get Database Status. For local development, you can run migrations automatically when the AppHost starts. For production, you can generate idempotent SQL scripts at publish time.

Azure networking gets a bunch of love too: Azure Front Door, Network Security Perimeters, private endpoints for Azure OpenAI and Foundry, private ACR endpoints, and automatic HTTPS upgrades for App Service. Fewer “now go configure the real production network in some totally different place” moments.

JavaScript publishing gets three new models: PublishAsStaticWebsite(), PublishAsNodeServer(), and PublishAsNpmScript(). There is also a dedicated AddNextJsApp() integration with standalone output support and correctly configured Dockerfiles. Because yes, your Next.js deployment should know it is a Next.js deployment.

The CLI now detects Bun, Yarn, and pnpm from lockfiles and package.json metadata, then adapts install, run, and watch commands to your toolchain automatically. I am legally required to say “it just works” here, because that is exactly the goal.

Other goodies:

  • aspire destroy tears down deployments across Azure Container Apps, App Service, Kubernetes, and Docker Compose.
  • aspire dashboard launches a standalone dashboard without an AppHost, with OTLP endpoints for external telemetry collection.
  • aspire docs api lets you browse Aspire API reference docs from the terminal with list, search, and get.
  • --list-steps previews pipeline steps before you run aspire deploy or aspire publish.
  • Docker Compose deployments now support Podman through automatic runtime detection.
  • The VS Code extension gets CodeLens and gutter icons in AppHost files, a built-in Simple Browser for the dashboard, workspace auto-restore, and better resource state indicators.
  • Durable Task Scheduler support for Azure Functions, RabbitMQ v7 publisher and subscriber OpenTelemetry tracing, stable Aspire.Microsoft.Azure.StackExchangeRedis, and new Foundry models.

⚠ A few breaking changes

There are a few breaking changes you should know about before upgrading, especially if you are using early Kubernetes/Docker Compose/AKS startup hooks, emulator management endpoints, the dashboard MCP server, the Python starter template, or Azure network output names.

The full breaking changes section has the details and migration notes.

💫 Get started

Already using Aspire? Update the CLI:

aspire update --self

New to Aspire? Head to get.aspire.dev to install the CLI, then add Aspire to an existing app:

aspire init

We will publish deeper dives into some of the meatiest 13.3 features over the next few weeks, especially Aspireify, command results, browser logs, and Kubernetes and AKS deployment.

As always, we would love to hear what you think. Share feedback on GitHub, join us on Discord, follow us on X, or find us on BlueSky.

Happy Aspirifying! 💫

The post What’s New in Aspire 13.3 appeared first on Aspire Blog.

Read the whole story
alvinashcraft
16 seconds ago
reply
Pennsylvania, USA
Share this story
Delete

Perplexity’s Personal Computer is now available everyone on Mac

1 Share
Perplexity's Personal Computer brings AI agents to your Mac, and is now open to everyone.
Read the whole story
alvinashcraft
29 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Apple’s AirPods with cameras for AI are apparently close to production

1 Share
Hands holding the open case for the AirPods Pro 3 above multi-colored books on a wooden table.
AirPods Pro 3 | Photo by Amelia Holowaty Krales / The Verge

Apple's rumored AirPods with cameras are nearing a stage where the company will test early mass production, Bloomberg's Mark Gurman reports. Currently, Apple testers are "actively using" prototypes that are in the design validation test stage, which is one step before the production validation test stage.

The AirPods' cameras "aren't designed" to snap photos or video but instead can take in "visual information in low resolution" that users can query Siri about, like asking the AI assistant what they should cook with the ingredients they have in front of them, according to Gurman. They may also use the cameras to help with things like turn-b …

Read the full story at The Verge.

Read the whole story
alvinashcraft
30 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Did Microsoft just tease a new Xbox UI?

1 Share

Microsoft showed off a "consistent" Xbox UI across handhelds, consoles, and cloud gaming during its Xbox keynote at the Game Developers Conference in March. At the time it was difficult to see if there was anything new about the UI from the videos and photos captured during the event, but Microsoft has now given us a closer look at it thanks to a new video of the keynote that was published earlier today.

Jason Ronald, VP of next generation at Xbox, showed off the UI while mentioning that players had been noticing "a lot of fragmentation within the experience" across devices, and an overall lack of consistency. "What the team has been doing …

Read the full story at The Verge.

Read the whole story
alvinashcraft
30 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Agent pull requests are everywhere. Here’s how to review them.

1 Share

You’ve probably already approved one without realizing it. The tests passed. The code was clean. You merged it.

But it was agent-generated—and that ease of approval is exactly the problem.

A January 2026 study, “More Code, Less Reuse”, found that agent-generated code introduces more redundancy and more technical debt per change than human-written code. The surface looks clean. The debt is quiet. And reviewers, according to the same research, actually feel better about approving it.

This isn’t an argument to slow down. It’s an argument to be intentional. There’s a difference.

Agent pull requests are already saturating review bandwidth

The volume is already staggering. GitHub Copilot code review has processed over 60 million reviews, growing 10x in less than a year. More than one in five code reviews on GitHub now involve an agent. That’s just the automated review pass. The pull request themselves are multiplying faster than reviewers can handle.

The traditional loop—request review, wait for code owner, merge—breaks down when one developer can kick off a dozen agent sessions before lunch. Throughput has scaled exponentially. Human review capacity hasn’t. The gap is widening.

You’re going to review agent pull requests. The question is whether you’ll catch what matters when you do.

Who (or what) actually wrote this pull request

Before you look at a single line of diff, you need a model for what you’re reviewing.

A coding agent is a productive, literal, pattern-following contributor with zero context about your incident history, your team’s edge case lore, or the operational constraints that don’t live in the repository. It will produce code that looks complete. But that “looks complete” failure mode is dangerous.

You’re the one who carries that context. That’s not a burden. It’s the actual job. The part of review that doesn’t get automated is judgment, and judgment requires context only you have.

Now, back to reviewers. The pull request lands in your queue. The author did their part. Here’s what to watch for.

Red flags to watch for

1. CI gaming

Agents fail CI. When they do, they have an obvious path to get tests passing: remove the tests, skip the lint step, add || true to test commands. Some agents take it.

Any change that weakens CI is a blocker. Full stop. Before approving any agent pull request, check:

  1. Did coverage thresholds change?
  2. Were any tests removed, renamed, or marked as skipped?
  3. Did the workflow stop running on forks or pull requests?
  4. Are any CI steps now gated behind conditions they weren’t before?

Yes, to any of those means you need an explicit justification before you continue.

2. Code reuse blindness

This is the highest-ROI thing you can do as a reviewer. Agents look for prior art. They’ll find a pattern in the codebase and replicate it, often without checking whether a utility that already does the same thing exists somewhere else. The symptoms: new utility functions that duplicate existing ones with slightly different names, validation logic reimplemented in multiple places, middleware written from scratch that already lives in a shared module, helpers that are “almost the same” but with different names.

The agent’s local context doesn’t include the full picture of what exists across your repository. You do.

For every new helper or utility in an agent pull request, do a quick search. If you find an equivalent, don’t leave a comment. Require consolidation before merge. The cost of leaving duplicated logic is that agents will find it as prior art and replicate it further.

💡Pro tip: Require justification for adding new utilities in agent pull requests above a size threshold. This catches the duplication problem early.

3. Hallucinated correctness

The obvious hallucination (calling an API that doesn’t exist, referencing a variable out of scope) gets caught in CI. The dangerous one is subtler: code that compiles, passes every test, and is wrong.

Off-by-one errors in pagination. Missing permission checks on a branch that’s never hit in tests. Validation that short-circuits under an edge case the agent never considered. Wrong behavior under a race condition that only surfaces at scale.

Trace it, don’t just scan it. Pick the most critical path in the diff. Follow it from input through every transform to output. Check boundary conditions (zero, max, empty), missing validation on external values, permission checks on every branch, and surprising conditional logic.

Require a new test that fails on the pre-change behavior. If the agent can’t write a test that would have caught the bug it claims to fix, the fix is incomplete or the understanding is wrong.

4. Agentic ghosting

You leave a thorough review. You explain the issue, provide context, suggest a direction. The pull request goes quiet. Or the agent responds and misses the point entirely and runs in circles. You invest another round. Still nothing useful.

Larger pull requests with no structured plan correlate strongly with agent abandonment or misalignment. The larger and less scoped the pull request, the more likely you’re going to sink review time into something that goes nowhere.

Before you invest deep review on a large agent pull request check the pull request history. Has it been responsive in previous rounds? Does it have a clear implementation plan, or did the agent just start writing code?

If there’s no plan, request a breakdown before you write a single comment. Copy-paste version:

This pull request is too large for me to review without a clearer implementation plan. Can you break it into smaller scoped units, or add a summary of what each part does and why it’s structured this way? Happy to review after that.

Firm, short, not personal. And it saves you an hour.

5. Untrusted input in workflows

Prompt injection in CI agents is real and underappreciated. Here’s the pattern: an agent workflow reads content from a pull request body, an issue, or a commit message. That content gets interpolated into a prompt. The prompt goes to a model. The model output gets piped to a shell command. The whole thing runs with GITHUB_TOKEN permissions.

When you’re reviewing any workflow that calls an LLM, these are blockers:

  • Is untrusted user input, pull requestbodies, issue bodies, commit messages, being interpolated into prompts without sanitization?
  • Is GITHUB_TOKEN write-scoped when it only needs read access?
  • Is model output being executed as shell commands without validation?
  • Are secrets accessible to the agent step or being printed to logs?

What to require before merge: least-privilege permissions in the workflow YAML (permissions: read-all is a reasonable default), sanitize and quote untrusted content before it touches a prompt, separate the “analysis” step from the “execution” step with a human approval gate for anything touching production, never eval model output.

Time Step What to do 
1–2 min Scan and classify Look at the file list and diff size. Narrow task (docs, CI, small change) or complex (multi-file, logic, performance, tests)? That classification sets your review depth for everything that follows. 
2–3 min Check CI changes first Before reading a single line of app code, look at anything touching .github/workflows, test configs, coverage settings, or build scripts. Flag anything that weakens CI. Stop sign check. 
3–5 min Scan for new utilities Search for new functions, helpers, or modules. For each one, do a quick repo search to check for duplicates. Flag anything that reinvents existing functionality. 
5–8 min Trace one critical path Pick the most important logic change. Trace it end-to-end: input → transforms → output. Check boundary conditions, permissions, unexpected branching. This is the step you can’t skip. 
8–9 min Security boundaries If this PULL REQUEST touches any workflow that calls an LLM or handles untrusted input, run through the security checklist above. 
9–10 min Require evidence For any non-trivial logic change, require a test that fails on the pre-change behavior. No rollback plan for risky changes? Ask for one. 

When to request a smaller pull request:

  1. The diff touches more than five unrelated files
  2. You can’t describe the purpose of the pull request in one sentence
  3. The agent has no implementation plan or the pull request body is empty
  4. CI is failing and the only changes in the diff are to test files

Let Copilot review it first

Use automated review for what it’s good at: catching the mechanical stuff before a human has to. Copilot code review flags style inconsistencies, obvious logic errors, missing error handling, and type mismatches. It handles the low-level scan. That frees you up for the judgment work, which is where your time actually matters.

Treat it as a prerequisite, not a replacement. Let Copilot run first. If it catches something obvious, let the author address it before you invest your review time.

You can tune this with custom instructions specific to your team: flag anything that modifies CI thresholds, surface new utilities for deduplication review, check that every external input is validated. The more specific your instructions, the more useful the automated pass.

💡 Pro tip: I recently experimented with codifying my own review checklist using the Copilot SDK. Instead of remembering to run the same security checks on every pull request, I built a workflow that takes my personal checklist—auth on admin endpoints, tests actually running, safe env variable handling—and runs it against the diff automatically. If it finds critical issues, it blocks the merge.

Judgment is the bottleneck, and that’s fine

The surface area of code is growing. pull request volume is growing. The time you spend scanning boilerplate should shrink.

What doesn’t shrink is the context you carry. The things you know about your system that aren’t written down anywhere. That’s what makes your review valuable, and it’s the part that doesn’t get automated.

Three takeaways:

  1. Any CI weakening is a hard stop.
  2. Let the agents scan first. You trace the critical path.
  3. Red flag checklist as your default on complex agent pull requests.

Read the docs >

The post Agent pull requests are everywhere. Here’s how to review them. appeared first on The GitHub Blog.

Read the whole story
alvinashcraft
30 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Seeing MCP

1 Share

I am talking to a number of folks about documenting their MCP servers. Others about discovering them. Others about governing them. Generally, we are mostly talking about being able to just see the MCP wave of API expansion that has occurred across your average enterprises. This expansion phase isn’t much different than previous waves of REST, GraphQL, gRPC, Websockets, and Kafka expansions–it just happened faster and wider that most of those.

I published a prototype API docs that generates documentation for API, MCP, and Agent Skills side by side last week. I called the POC, “See”. I am a big fan of “seeing APIs” and “seeing integrations”. I’ve been doing about an hour of research a week into what is happening when it comes to MCP documentation, discovery, and other goings on with MCP at the core and the edges. There isn’t a lot of service or tooling for the seeing of MCP that is required for governance of APIs, and general attitudes seem to be that AI will do the seeing for us.

While I am not convinced that what has helped us find and see APIs historically will translate into helping us see the next generation of APIs, but I am also not convinced that AI will help us discover and see all of our APIs, and the skills, SDKs, and clients needed to engage with those APIs. I want to clarify here–MCP is an API. I see Microsoft, Google, and others going all in on their developer education being delivered via MCP, and I suspect more of the resources that occurs within developers will be shifted to be available via MCP, with as much of the activity as we can will be driven by skills.

I don’t think we will be able see what we need to see in an API-powered chat interface. And since agent’s don’t see, I know they won’t be able to see everything we need. They will help see a lot of what exists in the cracks and shadows that we couldn’t see historically with APIs, but there will be entirely new blind spots to wrestle with. I am finding some really interesting ways of seeing APIs and their properties at scale using machine-readable artifacts, which is enabled using Claude, ChatGPT, and Gemini. I will keep pushing forward automation to help discover and document APIs, as well as visualizations that help us see all of that. I am most interested in doing it in ephemeral and evolving ways, rather than the static or even dynamic ways we’ve done historically.

Seeing APIs is a massively unsolved problems. We just expanded that problem 1000x with AI and MCP. Just as we were beginning to get a hadle on the governance of HTTP APIs, we’ve expanded our API sprawl using GraphQL, Kafka, gRPC, and now MCP. There is so much more to see. There is so much more work to be done. With most people’s strategy that AI will sort it out for us. I hope y’all are right.



Read the whole story
alvinashcraft
31 minutes ago
reply
Pennsylvania, USA
Share this story
Delete
Next Page of Stories