Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

PowerShell 7.6 release postmortem and investments


We recently released PowerShell 7.6, and we want to take a moment to share context on the delayed timing of this release, what we learned, and what we’re already changing as a result.

PowerShell releases typically align closely with the .NET release schedule. Our goal is to provide predictable and timely releases for our users. For 7.6, we planned to release earlier in the cycle, but ultimately shipped in March 2026.

What goes into a PowerShell release

Building and testing a PowerShell release is a complex process with many moving parts:

  • 3 to 4 release versions of PowerShell each month (e.g. 7.4.14, 7.5.5, 7.6.0)
  • 29 packages in 8 package formats
  • 4 architectures (x64, Arm64, x86, Arm32)
  • 8 operating systems (multiple versions each)
  • Published to 4 repositories (GitHub, PMC, winget, Microsoft Store) plus a PR to the .NET SDK image
  • 287,855 total tests run across all platforms and packages per release

What happened

The PowerShell 7.6 release was delayed beyond its original target and ultimately shipped in March 2026.

During the release cycle, we encountered a set of issues that affected packaging, validation, and release coordination. These issues emerged late in the cycle and reduced our ability to validate changes and maintain release cadence.

Combined with the standard December release pause, these factors extended the overall release timeline.

Timeline

  • October 2025 – Packaging-related changes were introduced as part of ongoing work for the 7.6 release.

    • Build changes introduced a bug in 7.6-preview.5 that caused the Alpine package to fail: the method the new build system used to build the Microsoft.PowerShell.Native library wasn’t compatible with Alpine, which required additional changes to the Alpine build.
  • November 2025 – Additional compliance requirements were imposed requiring changes to packaging tooling for non-Windows platforms.

    • Because of the additional work created by these requirements, we weren’t able to ship the fixes made in October until December.
  • December 2025 – We shipped 7.6-preview.6, but the holiday change freeze and limited availability of key personnel caused complications.

    • We weren’t able to publish to PMC during our holiday freeze window.
    • We couldn’t publish NuGet packages because the current manual process limits who can perform the task.
  • January 2026 – Packaging changes required deeper rework than initially expected and validation issues began surfacing across platforms.

    • We also discovered a compatibility issue on RHEL 8: the libpsl-native library must be built against glibc 2.28 rather than the glibc 2.33 used by RHEL 9 and later.
  • February 2026 – Ongoing fixes, validation, and backporting of packaging changes across release branches continued.
  • March 2026 – Packaging changes stabilized, validation completed, and PowerShell 7.6 was released.
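
The RHEL 8 issue in the January entry is a standard glibc backward-compatibility constraint: a binary built against a newer glibc will not load on a system with an older one, so release artifacts must target the oldest glibc they need to support. A minimal sketch of that version check (the helper is illustrative, not part of the PowerShell build system):

```python
def meets_glibc_target(required: str, target: str = "2.28") -> bool:
    """True if a binary requiring glibc `required` can run on glibc `target`.

    glibc is backward compatible: a binary built against 2.28 runs on
    2.33, but one built against 2.33 will not load on RHEL 8's 2.28.
    """
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(required) <= as_tuple(target)

print(meets_glibc_target("2.28"))  # True: loads on RHEL 8
print(meets_glibc_target("2.33"))  # False: needs RHEL 9 or later
```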

What went wrong and why

Several factors contributed to the delay beyond the initial packaging change.

  • Late-cycle packaging system changes: A compliance requirement forced us to replace the tooling used to generate non-Windows packages (RPM, DEB, PKG). We evaluated whether incremental changes could address it, but determined that the existing tooling could not be adapted to meet the requirements, so the packaging workflow had to be fully replaced. Because this change occurred late in the release cycle, we had limited time to validate the new system across all supported platforms and architectures.
  • Tight coupling to packaging dependencies: Our release pipeline relied on this tooling as a critical dependency. When it became unavailable, we had no alternate implementation ready and had to rebuild a core part of the release pipeline from scratch, under time pressure, increasing both risk and complexity.
  • Reduced validation signal from previews: Our preview cadence slowed during this period, reducing opportunities to validate changes incrementally. As a result, issues introduced by the packaging changes were discovered later in the cycle, when they were more expensive to correct.
  • Branching and backport complexity: New compliance requirements meant changes had to be backported and validated across multiple active branches, increasing coordination overhead and extending the time required to reach a stable state.
  • Release ownership and coordination gaps: Release ownership was not explicitly defined, particularly during maintainer handoffs. This made it difficult to track progress, assign responsibility for blockers, and make timely decisions during critical phases of the release.
  • Lack of early risk signals: We did not have clear signals indicating that the release timeline was at risk. Without structured tracking of release health and ownership, issues accumulated without triggering early escalation or communication.

How we responded

As the scope of the issue became clear, we shifted from attempting incremental fixes to stabilizing the packaging system as a prerequisite for release.

  • We evaluated patching the existing packaging workflow versus replacing it, and determined a full replacement was required to meet compliance requirements.
  • We rebuilt the packaging workflows for non-Windows platforms, including RPM, DEB, and PKG formats.
  • We validated the new packaging system across all supported architectures and operating systems to ensure correctness and consistency.
  • We backported the updated packaging logic across active release branches to maintain alignment between versions.
  • We coordinated across maintainers to prioritize stabilization work over continuing release progression with incomplete validation.

This shift ensured a stable and compliant release, but extended the overall timeline as we prioritized correctness and cross-platform consistency over release speed.

Detection gap

A key gap during this release cycle was the lack of early signals indicating that the packaging changes would significantly impact the release timeline.

Reduced preview cadence and late-cycle changes limited our ability to detect issues early. Additionally, the absence of clear release ownership and structured tracking made it more difficult to identify and communicate risk as it developed.

What we are doing to improve

This experience highlighted several areas where we can improve how we deliver releases. We’ve already begun implementing changes:

  • Clear release ownership: We have established explicit ownership for each release, with clear responsibility and transfer mechanisms between maintainers.
  • Improved release tracking: We are using internal tracking systems to make release status and blockers more visible across the team.
  • Consistent preview cadence: We are reinforcing a regular preview schedule to surface issues earlier in the cycle.
  • Reduced packaging complexity: We are working to simplify and consolidate packaging systems to make future updates more predictable.
  • Improved automation: We are exploring additional automation to reduce manual steps and improve reliability in the face of changing requirements.
  • Better communication signals: We are identifying clearer signals in the release process to notify the community earlier when timelines are at risk. Going forward, we will share updates through the PowerShell repository discussions.

Moving forward

We understand that many of you rely on PowerShell releases to align with your own planning and validation cycles. Improving release predictability and transparency is a priority for the team, and these changes are already in progress.

We appreciate the feedback and patience we received from the community as we worked through these changes, and we’re committed to continuing to improve how we deliver PowerShell.

— The PowerShell Team

The post PowerShell 7.6 release postmortem and investments appeared first on PowerShell Team.

Read the whole story
alvinashcraft
13 seconds ago
reply
Pennsylvania, USA
Share this story
Delete

AI Tools for Developers


Using AI tools is an important part of being a software developer.

We just posted a course on the freeCodeCamp.org YouTube channel that will teach you how to use AI tools to become more productive as a developer. I created this course!

In this course, you will master AI pair programming and agentic terminal workflows using top-tier tools like GitHub Copilot, Anthropic's Claude Code, and the Gemini CLI. The course also covers open-source automation with OpenClaw, teaching you how to set up a highly customizable, locally hosted AI assistant for your development environment. Finally, you will learn how to maintain high code quality and streamline your team's workflow by integrating CodeRabbit for automated, AI-driven pull request analysis.

Watch the full course on the freeCodeCamp.org YouTube channel (1.5 hour watch).




The Future of Tech Blogging in the Age of AI


I've been blogging on this site for almost 20 years now, and the majority of my posts are simple coding tutorials, where I share what I've learned as I explore various new technologies (my journey on this blog has taken me through Silverlight, WPF, IronPython, Mercurial, LINQ, F#, Azure, and much more).

My process has always been quite simple. First, I work through a technical challenge and eventually get something working. And then, I write some instructions for how to do it.

Benefits of tech blogging

There are many benefits to sharing your progress like this:

  1. The process of putting it into writing helps solidify what you learned
  2. Despite this, I still often forget how I achieved something, so my blog functions as a journal I can refer back to later
  3. You're supporting the wider developer community by sharing proven ways to get something working
  4. Thanks to "Cunningham's Law" ("the best way to get the right answer on the internet is not to ask a question; it's to post the wrong answer."), your post may lead you to discover a better way to achieve the same goal, or a fatal flaw in your approach
  5. And gradually it builds your personal reputation and credibility, as eventually you'll build up visitors (although you may find that your most popular post of all time is on the one topic that you most certainly aren't an expert on!)

Are LLMs going to ruin it all?

But recently I've been wondering - are LLMs going to put an end to coding tutorial blogs like mine? Do they render it all pointless?

For starters, GitHub Copilot and Claude Code have already dramatically changed the way I go about exploring a new technique or technology. Instead of slogging through Bicep documentation, and endlessly debugging why my template didn't work, I now just ask the AI model to create one for me.

Refreshingly, I notice that it gets it wrong just as frequently as I do, but it doesn't get frustrated - it just keeps battling away until eventually it gets something working.

But now it feels like a hollow victory. Is there even any point writing a tutorial about it? If you can simply ask an agent to solve the problem, why would anyone need to read my tutorial? Are developers even going to bother visiting blogs like mine in the future?

And then there's the question of who writes the tutorial? Not only is the agent much quicker than me at solving the technical challenge, it's also significantly faster at writing the tutorial, and undeniably a better writer than me too. So maybe I should just let it write the article for me? But the internet is already full of AI-generated slop...

Should you let AI write your blog posts?

This is a deeply polarizing question. There are a number of possible options:

Level 1: Human only

You could insist on hand-writing everything yourself, with strictly no AI assistance. That's what you're reading right now (if you can't already tell from the decidedly mediocre writing style!)

This mirrors a big debate going on in the world of music production at the moment. If AI tools like Suno can generate, from a single prompt, an entire song that sounds far more polished than anything I've ever managed to produce, does that spell the end of real humans writing and recording songs? And should we fight against it or just embrace it as the future?

I think tech tutorials do fall into a different category to music though. If I want to learn how to achieve X with technology Y, I just want clear, concise and correct instructions - and I'm not overly bothered whether it came 100% from a human mind or not.

Having said that, we've already identified a key benefit of writing your own tutorials: it helps solidify what you've learned. Doing your own writing will also improve your own powers of communication. For those reasons alone I have no intention of delegating all my blog writing to LLMs.

Level 2: Human writes, AI refines

On the other hand, it seems churlish to refuse to take advantage of the benefits of LLMs for proofreading, fact checking, and stylistic improvements. When I recently posted about whether code quality still matters, this is exactly what I did. I wrote the post myself, and then asked Claude Code to help me refine it, by critiquing my thoughts and providing counter-arguments.

To be honest, I ignored most of the feedback, but undoubtedly it improved the final article. This is the approach I've been taking with my Pluralsight course scripts - I first write the whole thing myself, and then ask an LLM to take me to task and tell me all the things I got wrong. (Although they're still ridiculously sycophantic and tell me it's the greatest thing they've ever read on the topic of lazy loading!)

Level 3: AI writes, human refines

But of course, my time is at a premium. A blog tutorial often takes me well over two hours to write. That's a big time investment for something that will likely barely be read by anyone.

And if all I'm producing is a tutorial, perhaps it would be better for me to get the LLM to do the leg-work of creating the structure and initial draft, and then I can edit afterwards, adapting the language to sound a bit more in my voice, and deleting some of the most egregious AI-speak.

That's exactly what I tried with a recent post on private endpoints. Claude Code not only created the Bicep and test application, but once it was done I got it to write up the instructions and even create a GitHub repo of sample code. The end result was far more thorough than I would have managed myself, and although I read the whole thing carefully and edited it a bit, I have to admit that most of the time I couldn't think of better ways to phrase each sentence, so a lot of it ended up unchanged.

That left a bad taste in my mouth to be honest. If I do that too often will I lose credibility and scare away readers? And yet I do feel like it was a genuinely valuable article that shows how to solve a problem that I'd been wanting to blog about for a long time.

Level 4: AI only

Of course, there is a level further, and now we are getting to the dark side. Could I ask Claude or ChatGPT to write me a blog post and publish it without even reading it myself? I could instruct it to mimic my writing style, and it might even do a good enough job to go unnoticed. Maybe at some point in the future, Claude can dethrone my most popular article with one it wrote entirely itself.

To be honest, I have no interest in doing that at all - it undermines the purpose of this blog which is a way for me to share the things that I have learned. So I can assure you I have no intention of filling this site up with "slop" articles where the LLM has come up with the idea, written and tested the code, and published the article all without me having to be involved at all.

But interestingly, this approach might make sense for back-filling the documentation for my open-source project NAudio. Over the years I've written close to one hundred tutorials but there are still major gaps in the documentation.

I'm thinking of experimenting with asking Claude Code to write a short tutorial for every public class in the NAudio repo, and to then check its work by following the tutorial and making sure it really works.

I expect we're going to see an explosion of this approach too, and it could be a genuine positive for the open source community, where documentation is often lacking and outdated. If LLMs are to make a positive contribution to the world of coding tutorials, this is probably one of the best ways they can be utilized.

Why tech blogging still matters

If you're still with me at this point, well done - I know I've gone on too long. Even humans can be as long-winded as LLMs sometimes. But the process of writing down my thoughts on this issue has helped me gain some clarity, and made me realise that it doesn't necessarily matter whether I take an AI-free, AI-assisted, or even an AI-first approach to my posts.

The value of sharing these coding tutorials is that the problems I'm solving are real-world problems. They are tasks that I genuinely needed to accomplish, and they came with unique constraints and requirements specific to my circumstances. That gives them an authenticity that an AI can't fake. At best it can guess at what humans might want to achieve and create tutorials about that.

So when I'm reading your tech blog (which I hope you'll share a link to), I won't really care whether or not you used ChatGPT to create the sample code, or make you sound like a Pulitzer prize winner. I'll be interested because you're sharing your experience of how you solved a problem using the tools at your disposal.


Multi-Tenancy in the Critter Stack


We put on another Critter Stack live stream today to give a highlight tour of the multi-tenancy features and support across the entire stack. Long story short, I think we have by far and away the most comprehensive feature set for multi-tenancy in the .NET ecosystem, but I’ll let you judge that for yourself:

The Critter Stack provides comprehensive multi-tenancy support across all three tools — Marten, Wolverine, and Polecat — with tenant context flowing seamlessly from HTTP requests through message handling to data persistence. Here are some links to various bits of documentation, with some older blog posts at the bottom as well.

Marten (PostgreSQL)

Marten offers three tenancy strategies for both the document database and event store:

  • Conjoined Tenancy — All tenants share tables with automatic tenant_id discrimination, cross-tenant querying via TenantIsOneOf() and AnyTenant(), and PostgreSQL LIST/HASH partitioning on tenant_id (Document Multi-Tenancy, Event Store Multi-Tenancy)
  • Database per Tenant — Four strategies ranging from static mapping to single-server auto-provisioning, master table lookup, and runtime tenant registration (Database-per-Tenant Configuration)
  • Sharded Multi-Tenancy with Database Pooling — Distributes tenants across a pool of databases using hash, smallest-database, or explicit assignment strategies, combining conjoined tenancy with database sharding for extreme scale (Database-per-Tenant Configuration)
  • Global Streams & Projections — Mix globally-scoped and tenant-specific event streams within a conjoined tenancy model (Event Store Multi-Tenancy)
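
The "conjoined" model above can be pictured with a toy example: every row carries a tenant_id discriminator column, and every query filters on it. The sketch below uses sqlite purely for illustration; Marten applies this filtering automatically against PostgreSQL, and none of these names are Marten APIs:

```python
import sqlite3

# Conjoined multi-tenancy in miniature: all tenants share one table,
# and a tenant_id column discriminates the rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER, tenant_id TEXT, body TEXT)")
conn.executemany(
    "INSERT INTO docs VALUES (?, ?, ?)",
    [(1, "tenant_a", "alpha"), (2, "tenant_b", "beta"), (3, "tenant_a", "gamma")],
)

def query_for_tenant(tenant_id: str) -> list:
    """Every query is scoped to one tenant via the discriminator column."""
    rows = conn.execute(
        "SELECT body FROM docs WHERE tenant_id = ?", (tenant_id,)
    ).fetchall()
    return [r[0] for r in rows]

print(query_for_tenant("tenant_a"))  # ['alpha', 'gamma']
```

Cross-tenant operators like TenantIsOneOf() would simply widen the WHERE clause to an IN list over several tenant IDs.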

Wolverine (Messaging, Mediator, and HTTP)

Wolverine propagates tenant context automatically through the entire message processing pipeline:

  • Handler Multi-Tenancy — Tenant IDs tracked as message metadata, automatically propagated to cascaded messages, with InvokeForTenantAsync() for explicit tenant targeting (Handler Multi-Tenancy)
  • HTTP Tenant Detection — Built-in strategies for detecting tenant from request headers, claims, query strings, route arguments, or subdomains (HTTP Multi-Tenancy)
  • Marten Integration — Database-per-tenant or conjoined tenancy with automatic IDocumentSession scoping and transactional inbox/outbox per tenant database (Marten Multi-Tenancy)
  • Polecat Integration — Same database-per-tenant and conjoined patterns for SQL Server (Polecat Multi-Tenancy)
  • EF Core Integration — Multi-tenant transactional inbox/outbox with separate databases and automatic migrations (EF Core Multi-Tenancy)
  • RabbitMQ per Tenant — Map tenants to separate virtual hosts or entirely different brokers (RabbitMQ Multi-Tenancy)
  • Azure Service Bus per Tenant — Map tenants to separate namespaces or connection strings (Azure Service Bus Multi-Tenancy)
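
The HTTP tenant-detection strategies listed above (header, claims, query string, route, subdomain) amount to a precedence chain over the incoming request. A language-neutral sketch of that idea in Python; the names and ordering here are illustrative, not Wolverine's actual API:

```python
from typing import Optional
from urllib.parse import parse_qs, urlparse

def detect_tenant(headers: dict, url: str) -> Optional[str]:
    """Resolve a tenant ID from a request, trying strategies in order."""
    # 1. An explicit header wins.
    if tenant := headers.get("X-Tenant-Id"):
        return tenant
    parsed = urlparse(url)
    # 2. Query string, e.g. ?tenant=acme
    if tenant := parse_qs(parsed.query).get("tenant", [None])[0]:
        return tenant
    # 3. Subdomain, e.g. acme.example.com
    host_parts = parsed.hostname.split(".") if parsed.hostname else []
    if len(host_parts) > 2:
        return host_parts[0]
    return None

print(detect_tenant({}, "https://acme.example.com/app"))  # acme
```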

Polecat (SQL Server)

Polecat mirrors Marten’s tenancy model for SQL Server, offering the same conjoined and database-per-tenant patterns.

Related Blog Posts

  • Feb 2024 – Dynamic Tenant Databases in Marten
  • Mar 2024 – Recent Critter Stack Multi-Tenancy Improvements
  • May 2024 – Multi-Tenancy: What is it and why do you care?
  • May 2024 – Multi-Tenancy: Marten’s “Conjoined” Model
  • Jun 2024 – Multi-Tenancy: Database per Tenant with Marten
  • Sep 2024 – Multi-Tenancy in Wolverine Messaging
  • Dec 2024 – Message Broker per Tenant with Wolverine
  • Feb 2025 – Critter Stack Roadmap Update for February
  • May 2025 – Wolverine 4 is Bringing Multi-Tenancy to EF Core
  • Oct 2025 – Wolverine 5 and Modular Monoliths
  • Mar 2026 – Announcing Polecat: Event Sourcing with SQL Server
  • Mar 2026 – Critter Stack Wide Releases — March Madness Edition



General Availability: Social Identity Providers for Native Authentication via Browser‑Delegated Flows (web-view) in Microsoft Entra External ID


We’re excited to announce the General Availability of Social Identity Provider (IdP) support for Native Authentication in Microsoft Entra External ID. This release enables developers to integrate popular social sign‑in options, such as Google, Facebook, and Apple, into native and single‑page applications that use Native Authentication. Importantly, social identity providers are supported through a browser‑delegated (web‑view) authentication flow. This approach ensures compatibility with social providers while maintaining the security posture expected of enterprise‑grade identity systems.

Clarifying native vs. browser‑delegated social authentication

Native Authentication in Entra External ID supports integrating Social Identity Providers while maintaining application‑centric user experiences.

Social sign‑in is currently supported:

  • Native app UX: App‑owned native sign‑in or sign‑up screen
  • Social IdP authentication (GA): Google, Facebook, and Apple, via a browser‑delegated (web‑view) flow
  • Post‑social authentication (GA): Entra External ID authentication steps (for example, MFA via Conditional Access), via a browser‑delegated (web‑view) flow
  • Fully native post‑social UX (future): Planned; Entra External ID authentication steps (for example, MFA) performed via a native API‑driven experience instead of a browser‑delegated flow

After a user selects a Social Identity Provider, authentication continues in a browser‑delegated (web‑view) experience to comply with provider OAuth requirements. Subsequent authentication steps, such as MFA when Conditional Access is enabled, are also completed within this delegated flow. This model enables Social IdP support in Native Authentication today. A future release will introduce native UX for post‑social authentication steps, replacing the current browser‑delegated experience where applicable.

Why Social Identity Providers matter for native apps

Consumer and external‑facing applications increasingly need to offer familiar sign‑in options such as Google, Facebook, or Apple without compromising security or standards compliance.

  • When social sign‑in is required — for example, to streamline onboarding, improve conversion, or support bring‑your‑own‑identity scenarios.
  • While preserving app‑centric experiences — the initial sign‑in or sign‑up screens remain native within the application.
  • Without handling user credentials in application code — authentication with social providers is performed using a browser‑delegated (web‑view) flow that aligns with OAuth requirements.

Native Authentication enables developers to integrate Social Identity Providers into native experiences while maintaining security boundaries enforced by the provider and Entra External ID. Subsequent authentication steps, such as MFA when Conditional Access is enabled, continue within the same browser‑delegated flow.

What’s now generally available

With this GA release, developers can now:

  • Enable Social Identity Providers (such as Google and Facebook) in native sign‑in and sign‑up experiences.
  • Allow users to authenticate with supported social providers using a browser‑delegated (web‑view) flow within the application.
  • Leverage standards‑compliant OAuth redirect flows required by social identity providers.
  • Rely on Entra External ID to issue ID and access tokens after successful social authentication—without handling user credentials in application code.
  • Present a native sign‑in or sign‑up screen within the app, after which authentication continues in a browser‑delegated (web‑view) experience for:

    • The selected social identity provider (for example, Google, Facebook, or Apple), and
    • Any subsequent Entra External ID authentication steps (such as MFA when Conditional Access is enabled).

Native Authentication continues to issue tokens only after the selected social provider has successfully completed authentication through the browser‑delegated flow.

Ready to get started?

To begin using Social Identity Providers with Native Authentication, configure the provider in your Entra External ID tenant and integrate using the Native Authentication SDKs. Social sign‑in is supported through a browser‑delegated (web‑view) authentication flow.
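
For orientation, the browser‑delegated (web‑view) step ultimately loads a standards‑compliant OAuth 2.0 authorization‑code request with PKCE, which the Native Authentication SDKs construct for you. The sketch below shows roughly what such a request looks like; the endpoint shape and parameter values are illustrative placeholders, not the Entra External ID API, and it is no substitute for using the SDKs:

```python
import base64
import hashlib
import secrets
import urllib.parse

def build_authorize_url(tenant: str, client_id: str, redirect_uri: str):
    """Build an illustrative OAuth 2.0 authorization-code + PKCE URL.

    Returns (url, code_verifier); the verifier is later exchanged along
    with the authorization code for tokens. The base URL below is a
    placeholder for the tenant's real authorization endpoint.
    """
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    challenge = base64.urlsafe_b64encode(
        hashlib.sha256(verifier.encode()).digest()
    ).rstrip(b"=").decode()
    params = {
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": redirect_uri,
        "scope": "openid profile email",
        "code_challenge": challenge,
        "code_challenge_method": "S256",
    }
    base = f"https://{tenant}.example.com/oauth2/v2.0/authorize"
    return base + "?" + urllib.parse.urlencode(params), verifier
```

The web view navigates to this URL, the social provider completes authentication, and the app's redirect URI receives the code, keeping user credentials out of application code.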

Stay connected and informed

To learn more or test out features in the Microsoft Entra suite of products, visit our developer center. Make sure you subscribe to the Identity blog for more insights and to keep up with the latest on all things Identity. And, follow us on YouTube for video overviews, tutorials, and deep dives.

The post General Availability: Social Identity Providers for Native Authentication via Browser‑Delegated Flows (web-view) in Microsoft Entra External ID appeared first on Microsoft Entra Identity Platform.


Daily Reading List – April 1, 2026 (#754)


Don’t let some of my downbeat reading items make you think I had a bad day. It was a good one! For some reason, I came across a handful of cautionary words today. Which is ok, because we should be asking questions and looking around corners.

[report] The State of Java 2025. Lots of data here, some of it surprising to me. Amazing to see so much Java in China, and from a generally young crowd. Also that more and more people are forgoing any framework.

[blog] Five techniques to reach the efficient frontier of LLM inference. Important stuff here for ML engineers and platform teams.

[blog] Some uncomfortable truths about AI coding agents. Everything isn’t awesome all the time. Maintain healthy cynicism with tech progress to ensure you’re not doing worse work.

[article] What kinds of new debt are teams accumulating with AI? Smart question to ask, and account for.

[article] Lagging cloud maturity threatens enterprise AI plans. If your cloud adoption has stalled—both usage of services and adoption of cloud-style practices—I don’t see how you’ll be widely successful deploying AI for your teams and customers.

[blog] Cloud Run Jobs vs. Cloud Batch: Choosing Your Engine for Run-to-Completion Workloads. The major hyperscalers offer more than one service to do the same task. But there are subtle differences that help you pick which service to use, as pointed out here.

[blog] Six Takeaways From KubeCon EU 2026. I liked this roundup, as the Intuit Engineering team covers a good range of topics from the event.

[blog] Developer’s Guide to Building ADK Agents with Skills. Learn these patterns, as you’ll likely see them become common in your agent framework of choice.

[blog] Building my Comic Trip agent with ADK Java 1.0. Enterprise use cases are helpful for 1:1 mapping to your day job. But we can also get work inspiration from fun examples of new technologies.

[article] What next for junior developers? Get good at communication, analyzing the world around you, and understanding the big picture.

[blog] Google Cloud: Investing in the future of PostgreSQL. The big focus lately has been around replication capabilities for active-active setups.

[blog] Cloud CISO Perspectives: RSAC ’26: AI, security, and the workforce of the future. You’ll see some RSA recaps this week. This, and a bunch of security-focused links, can be found here.

Want to get this update sent to you every day? Subscribe to my RSS feed or subscribe via email below:


