For many technologists, the Microsoft MVP Award feels inspiring - but also a little mysterious. That is exactly why MVPs Sharon Weaver and Christian Buckley host a monthly AMA (Ask Me Anything) call for aspiring MVPs. Their goal is simple: create a welcoming space where people can ask honest questions, better understand what meaningful community contribution looks like, and feel less alone on the journey. What started as a way to answer the same questions more efficiently has grown into a supportive cohort that helps future MVPs build confidence and momentum.
Sharon knows firsthand how often people ask, “How do I become an MVP?” After hearing that question again and again, she realized aspiring MVPs did not just need information - they needed community. “I kept getting lots of people asking me, and I was giving the same answers out over and over and over,” Sharon said. So she and Christian decided to create one place where people could learn together, ask questions openly, and hear practical advice from people who understood the process.
Sharon believes that people do not need to become someone else to be recognized as an MVP. “You don’t need to be anything other than who you are. You just need to understand that what you do has value, how to show that value, and then be really good at that and make that visible.” That message resonates because it replaces pressure with purpose. Instead of chasing a checklist or trying to become an influencer overnight, attendees are encouraged to focus on contributions they genuinely enjoy and can sustain over time.
“You don’t need to be anything other than who you are. You just need to understand that what you do has value.” - MVP Sharon Weaver
The monthly AMA also helps make a big goal feel more attainable. Sharon shared, “Having other people who are not there yet to support you through that journey makes as big a difference as having people who have already been awarded.” Over the past two years, that support has mattered: Sharon said the cohort has helped around 15 people who attended the calls go on to receive the MVP Award. For Sharon, the joy is not in doing the work for anyone else; it is in opening the door, answering questions, and helping others see that their efforts already have value.
“One piece of advice from the AMA calls that stayed with me was to make sure your contributions are things you enjoy doing and would do regardless of the MVP title.” - MVP Rachel Sullivan
One of the biggest myths Sharon hears is that aspiring MVPs need a huge platform to be considered. “Everybody thinks you need to be a speaker or an influencer,” she said. “Pick the things you do, do them well, and be visible.” That advice has helped attendees reframe the process around authentic contribution instead of comparison. MVP Rachel Sullivan reflected, “One piece of advice from the AMA calls that stayed with me was to make sure your contributions are things you enjoy doing and would do regardless of the MVP title.” MVP Karinne Bessette shared a similar takeaway: “The AMA calls made the MVP process feel more approachable because it gave real perspectives from other MVPs and people on the MVP path, which helped fight imposter syndrome.”
The monthly calls also help people understand that visibility matters. Sharon encourages attendees to connect with product groups, communicate their impact clearly, and advocate for their work in ways that feel genuine. The path is rarely instant - Sharon estimates many people spend two to three years on the journey - but the combination of clarity, encouragement, and community makes a real difference. Just as importantly, the calls remind people that not receiving the award the first time is not the end of the story. It is simply part of a longer journey of growth, contribution, and persistence.
“The AMA calls made the MVP process feel more approachable because it gave real perspectives from other MVPs and people on the MVP path, which helped fight imposter syndrome.” - MVP Karinne Bessette
The value of these AMA calls goes far beyond helping one person earn an award. They remind people that they are not alone, that their voice matters, and that there is space for them in this community exactly as they are. For someone who feels uncertain, overwhelmed, or unsure whether what they do is enough, that kind of encouragement can be transformative. It can spark confidence, create connection, and turn self-doubt into action. When people feel seen, supported, and inspired to keep going, the impact reaches far beyond a single moment - it deepens the sense of belonging that makes this community so special.
“If these chats help people realize one thing, I hope it’s that there is no specific checklist of tasks to complete to become an MVP. It can be a very different path for each MVP, because there are countless ways to give back to the community. You don’t need to follow someone else’s formula—you need to find a contribution path that’s authentic and sustainable for you.” - MVP Christian Buckley
If you are curious about becoming a Microsoft MVP, consider joining Sharon and Christian’s monthly AMA calls and taking the next step alongside others who are asking the same questions. And if you are already an MVP, think about how you might create a similar space in your own region or community. Sometimes the most powerful thing we can do is make the path feel more visible for someone else. Learn more from Sharon’s blog post, Navigating the Microsoft MVP Nomination Process: Tips and Insights, and meet these community leaders: Sharon Weaver, Christian Buckley, Rachel Sullivan, and Karinne Bessette.
To find an MVP and learn more about the MVP Program, visit the MVP Communities website and follow our updates on LinkedIn or #mvpbuzz.
Join us for a future live session through the Microsoft Reactor where we walk through what the MVP program is about, what we look for, and how nominations work. These sessions are designed to help you connect the dots between the work you’re already doing and the impact the MVP Program recognizes - with time for questions, examples, and real conversations.
We’re excited to introduce a new step in the Terraform on Azure experience: Open in VS Code, now available directly from Azure Copilot in the Azure Portal. This capability helps you move seamlessly from AI‑generated Terraform code to real Azure deployments - within a connected, guided workflow designed for enterprise scenarios.
Infrastructure as Code with Terraform is powerful, but moving from generated configuration to a deployed environment typically involves multiple tools and handoffs. Teams need to understand Terraform state, work with remote backends, and integrate their code into version‑controlled CI/CD pipelines - often backed by Terraform Cloud or Azure‑native backends in enterprise environments.
Open in VS Code brings these steps together. It bridges the gap between AI‑assisted authoring in the Azure Portal and the real‑world workflows required to validate, manage state, and deploy infrastructure with confidence.
With Azure Copilot, you can describe your infrastructure in natural language and generate Terraform configurations in seconds. For example:
“Create an Azure Container App using Terraform with a managed environment, Log Analytics enabled, and a system‑assigned managed identity to securely pull images from Azure Container Registry.”
Copilot generates the Terraform configuration for you. From there, you can select Open full view to enter a full‑screen Terraform editor, and then choose Open in VS Code to launch the configuration in an Azure‑hosted VS Code environment.
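Copilot’s actual output varies with the prompt and provider version, but a configuration for the request above might look roughly like the following sketch. The resource names, region, container image, and the assumption of a pre-existing registry are all illustrative, not Copilot’s literal output:

```hcl
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 4.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "rg" {
  name     = "rg-containerapp-demo" # placeholder names throughout
  location = "eastus"
}

# Log Analytics workspace backing the managed environment.
resource "azurerm_log_analytics_workspace" "logs" {
  name                = "log-containerapp-demo"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  sku                 = "PerGB2018"
  retention_in_days   = 30
}

resource "azurerm_container_app_environment" "env" {
  name                       = "cae-demo"
  location                   = azurerm_resource_group.rg.location
  resource_group_name        = azurerm_resource_group.rg.name
  log_analytics_workspace_id = azurerm_log_analytics_workspace.logs.id
}

resource "azurerm_container_app" "app" {
  name                         = "ca-demo"
  resource_group_name          = azurerm_resource_group.rg.name
  container_app_environment_id = azurerm_container_app_environment.env.id
  revision_mode                = "Single"

  # System-assigned managed identity for registry access.
  identity {
    type = "SystemAssigned"
  }

  template {
    container {
      name   = "app"
      image  = "mcr.microsoft.com/k8se/quickstart:latest"
      cpu    = 0.25
      memory = "0.5Gi"
    }
  }
}

# Assumes an existing registry; grants the app's identity pull access.
data "azurerm_container_registry" "acr" {
  name                = "acrdemo12345"
  resource_group_name = azurerm_resource_group.rg.name
}

resource "azurerm_role_assignment" "acr_pull" {
  scope                = data.azurerm_container_registry.acr.id
  role_definition_name = "AcrPull"
  principal_id         = azurerm_container_app.app.identity[0].principal_id
}
```

From the full-screen editor you can refine details like this before moving on to backend setup and deployment.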
There’s no need to download files or set up a local development environment. VS Code for the web opens with Azure authentication already configured, along with commonly used extensions, so you can immediately focus on refining, validating, and preparing your infrastructure for deployment.
Beyond editing, the VS Code experience includes built‑in, step‑by‑step guidance to help you deploy your Terraform configuration into your own Azure environment - whether you’re experimenting or preparing for production.
Because Terraform relies on state management, the workflow starts by helping you choose and configure a backend.
Option 1: Azure Storage Account as a remote backend
A natural fit for Azure‑native and enterprise environments. The experience guides you through creating or selecting a storage account and configuring Terraform to store state securely in Azure.
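For reference, the generated backend block follows the standard `azurerm` backend shape; in this sketch the resource group, storage account, container, and key names are placeholders that the guided flow replaces with your actual values:

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-tfstate"        # placeholder
    storage_account_name = "sttfstatedemo123"  # placeholder
    container_name       = "tfstate"
    key                  = "containerapp.terraform.tfstate"
  }
}
```

Running `terraform init` after this block is in place initializes (or migrates) state against the remote backend.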
Option 2: HCP Terraform (Terraform Cloud) as a remote backend
Ideal for teams already using Terraform Cloud. The guided flow helps you authenticate, connect to an existing organization and workspace, and generate the required backend configuration directly into your Terraform files.
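The configuration generated here is a standard Terraform `cloud` block; the organization and workspace names below are placeholder assumptions:

```hcl
terraform {
  cloud {
    organization = "example-org"         # your HCP Terraform organization

    workspaces {
      name = "azure-container-app"       # placeholder workspace name
    }
  }
}
```

After authenticating (for example with `terraform login`) and running `terraform init`, plans and state live in the connected workspace.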
Option 3: Temporary workspace for quick validation
Designed for learning and experimentation. You can run `terraform plan` and `terraform apply` directly in the Azure workspace with temporary state, without committing to a long‑term backend - ideal for quick validation, but not intended for production use.
Each option includes an end‑to‑end walkthrough, so you can complete backend setup and run Terraform commands without leaving the VS Code environment or searching through external documentation.
This experience connects three essential parts of the Terraform workflow: AI‑assisted authoring with Azure Copilot, editing and validation in VS Code for the web, and guided backend setup, state management, and deployment.
Together, these pieces make it easier to move from idea to infrastructure in a structured, supported way - whether you’re new to Terraform or managing production workloads with established CI/CD pipelines.
The Open in VS Code experience for Terraform is now in public preview in Azure Portal Copilot.
We’re continuing to invest in this workflow, including clearer deployment guidance, future integration with GitHub Actions and other CI/CD pipelines, and deeper enhancements to the full‑screen Terraform editor experience.
If you haven’t tried it yet, generate a Terraform configuration with Azure Copilot and open it in VS Code to go from prompt to production end to end in one connected workflow.

In this edition:
Introducing Docker AI Governance: centralized control over how agents execute, what they can reach on the network, which credentials they can use, and which MCP tools they can call, so every developer in your company can run AI agents safely, wherever they work.
Agents are the biggest productivity unlock the modern workplace has seen in a generation, and engineering is where the shift is most obvious. Developers aren’t using agents to autocomplete a function anymore. They’re using them to read whole codebases, refactor across services, and ship entire products, end to end. Vibe coding is real, it’s shipping to main, and it’s happening on laptops everywhere today.
The same shift is moving through every other function. A new class of agents called Claws is already in production, sending emails, managing calendars, booking travel, pulling CRM data, reconciling reports, and querying production systems. Marketing, finance, sales, and support are adopting them as fast as engineering is, because the productivity gains are too large to ignore and the companies that move first will out-execute the ones that don’t. Org-wide rollouts that used to take quarters are landing in weeks.
What’s more interesting than the speed of adoption is where all of this actually runs. Agents and Claws live outside the systems enterprises spent two decades hardening. They don’t sit behind your CI/CD pipeline, they don’t live inside your VPC, and they don’t follow your IAM model. They run on the developer’s machine, with the developer’s credentials, reaching into private repos, production APIs, customer records, and the open internet, often in the same session. The laptop just became the most powerful node in your enterprise, and it also became the most exposed. Laptop and agent environments are the new prod, and they need to be governed like prod.
The instinct in most enterprises is to reach for the tools that already exist, but none of them see what an agent is doing. CI/CD doesn’t see it because the agent isn’t a pipeline. The VPC doesn’t see it because the laptop is outside the perimeter. IAM doesn’t see it because the agent is acting as the developer. The result is that CISOs can’t tell what an agent touched, what it ran, or where the data went, and they also can’t tell the business to slow down. This is the bind every security leader is in right now.
Strip the problem to first principles and an agent has two paths to do significant harm. It either executes code itself, touching files and opening network connections, or it calls a tool through an MCP server to act on an external system. Govern both paths and you’ve governed the agent. Miss either one and you haven’t.
That’s the test for any AI governance solution worth taking seriously, and it has two parts. The controls have to live at the runtime layer where the agent actually executes, not as advisory rules layered on top that a clever prompt can route around. And they have to work consistently wherever the agent ends up running, because agents don’t stay on the laptop. They migrate to CI runners, to staging clusters, to production. A policy that only holds in one of those places is a gap waiting to be found.
Docker is the only company that meets both parts of that test, and the reason is structural.
Docker built the sandbox that contains the first path. Every agent session runs inside a microVM-based isolated environment where filesystem and network access are controlled by a hard boundary, which means enforcement happens at the level of the process, not as a suggestion the agent can ignore. Docker built the MCP Gateway that contains the second path. Every tool call routes through a single chokepoint where it can be authenticated, authorized, and logged before it reaches the external system. These two primitives, Docker Sandboxes and the Docker MCP Gateway, make enforcement strict rather than advisory. We own the substrate the agent is running on, so the policy isn’t a wrapper around someone else’s runtime, it’s the runtime.
The second part is what makes this durable. The same sandbox primitive runs on the developer’s laptop, inside Kubernetes, and across cloud environments, with the same policy model and the same enforcement guarantees. When an agent moves from a developer’s machine to a CI runner to a production cluster, the policy moves with it, because the runtime underneath is the same in all three places. No other vendor can say that, because no other vendor is the runtime. Endpoint security tools don’t extend to clusters. Cluster security tools don’t reach the laptop. Cloud security tools don’t run on either. Docker covers all three because Docker is what’s actually executing the agent in all three.
Docker AI Governance is the control plane that sits on top of that runtime. It turns the sandbox and the MCP Gateway into centralized policy, defined once in the admin console, enforced at every node the agent touches, and auditable from end to end.
From a single admin console, security teams define and enforce policy across four control surfaces: network, filesystem, credentials, and MCP tools. The result is one policy layer that needs no per-machine setup and works consistently across thousands of developers.
Sandbox policy for network and filesystem. Admins define allow and deny rules for domains, IPs, and CIDRs, alongside mount rules for filesystem paths with read-only or read-write scope. Every agent session runs inside an isolated sandbox where only approved endpoints are reachable and only approved directories are mountable, with enforcement happening at the proxy and mount level rather than as an advisory layer the agent can ignore.
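To make the shape of such rules concrete, here is a purely hypothetical sketch - this is illustrative only, not Docker AI Governance’s actual policy schema or syntax:

```yaml
# Hypothetical policy shape for illustration only - not Docker's real schema.
sandbox:
  network:
    allow:
      - domain: "github.com"     # agent may reach source control
      - cidr: "10.20.0.0/16"     # internal package mirror
    default: deny                # everything else is unreachable
  filesystem:
    mounts:
      - path: "~/projects"
        mode: read-write         # working tree is editable
      - path: "~/.aws"
        mode: deny               # credential directories never mount
```

However the real schema is expressed, the key property is the one described above: these rules are enforced at the proxy and mount level, not interpreted by the agent.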
Credential governance. Agents are dangerous in proportion to what they can authenticate as, so Docker AI Governance controls which credentials, tokens, and secrets an agent session can see, scopes them to the duration of that session, and blocks exfiltration to unapproved destinations. Developers stop pasting tokens into prompts, and security stops wondering where those tokens ended up.
MCP tool governance. Admins control which MCP servers and tools are available through organization-wide managed policies, with unapproved servers blocked by default. Every MCP call flows through the same policy engine as network, filesystem, and credential requests, so there’s no separate surface to configure and no bypass path.
Role-based policy assignment. Different teams need different levels of access, and security research will reasonably require broader MCP usage than finance. Create policy groups, assign users through your IdP, and layer team-specific rules on top of organization-wide guardrails that can’t be overridden. It scales to thousands of developers through existing SAML and SCIM integrations with no per-user setup.
Audit and visibility. Every policy evaluation generates a structured event with user identity, timestamp, session context, and the rule that triggered the decision, and logs export cleanly to your existing SIEM and compliance systems. This is the evidence CISOs need to approve AI usage at scale rather than tolerate it under the table.
Automatic policy propagation. When a developer authenticates, their machine pulls the latest policy, and updates reach every device automatically. Admins define policy once and Docker enforces it everywhere.
CISOs get the governance layer they’ve been missing and the confidence to approve agent usage at scale rather than block it. Platform teams get a straightforward way to operate governance: define a policy once and have it enforced everywhere, with full auditability, which removes the operational burden of scaling AI adoption across the company. Developers get what agents promised in the first place: real speed and autonomy, with governance that stays out of the way. We built Docker AI Governance with these principles in mind: agents should be autonomous and governance should be invisible.
Docker AI Governance is available now. If you’re a security leader trying to close the AI governance gap, or a platform team ready to roll out agents without compromising control, it was built for you.
Contact sales to learn more.
Welcome to our combined .NET servicing updates for May 2026. Let’s get into the latest releases of .NET and .NET Framework. Here is a quick overview of what’s new in our servicing releases:
.NET and .NET Framework have been refreshed with the latest update as of May 12, 2026. This update contains security and non-security fixes.
The following CVEs have been fixed this month:
| CVE # | Title | Applies to |
|---|---|---|
| CVE-2026-32177 | .NET Elevation of Privilege Vulnerability | .NET 10.0, .NET 9.0, .NET 8.0, .NET Framework 3.5, 4.6.2, 4.7, 4.7.2, 4.8, 4.8.1 |
| CVE-2026-35433 | .NET Elevation of Privilege Vulnerability | .NET 10.0, .NET 9.0, .NET 8.0 |
| CVE-2026-32175 | .NET Tampering Vulnerability | .NET 10.0, .NET 9.0, .NET 8.0 |
| CVE-2026-42899 | .NET Denial of Service Vulnerability | .NET 10.0, .NET 9.0, .NET 8.0 |
| | .NET 10.0 | .NET 9.0 | .NET 8.0 |
|---|---|---|---|
| Release Notes | 10.0.8 | 9.0.16 | 8.0.27 |
| Installers and binaries | 10.0.8 | 9.0.16 | 8.0.27 |
| Container Images | images | images | images |
| Linux packages | 10.0 | 9.0 | 8.0 |
| Known Issues | 10.0 | 9.0 | 8.0 |
Share feedback about this release in the Release feedback issue.
This month, there are new security updates and new non-security updates available. For recent .NET Framework servicing updates, be sure to browse our release notes for .NET Framework for more details.
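To confirm a machine is up to date, you can compare the runtimes reported by `dotnet --list-runtimes` against this month’s patched baselines (10.0.8, 9.0.16, 8.0.27). As a small sketch, the helper below (its name is ours, not part of the .NET CLI) does a version-aware comparison with `sort -V`:

```shell
# is_at_least VERSION BASELINE -> succeeds when VERSION >= BASELINE.
# Illustrative helper; version strings come from `dotnet --list-runtimes`.
is_at_least() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example: check an installed 8.0.x runtime against the May 2026 baseline.
if is_at_least "8.0.27" "8.0.27"; then
  echo "patched"
else
  echo "update needed"
fi
```

The same check works for the 9.0 and 10.0 lines by swapping in their baselines.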
That’s it for this month. Make sure you update to the latest servicing release today.
The post .NET and .NET Framework May 2026 servicing releases updates appeared first on .NET Blog.