Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
150931 stories
·
33 followers

Azure IaaS: Keep critical applications running with built-in resiliency at scale

1 Share

Disruption should not be treated as an edge case. It is a reality organizations must be prepared to navigate. That preparation starts with resiliency as a core design principle, not an afterthought. Businesses depend on a broad set of applications to run daily operations, from essential internal systems to mission-critical workloads. And across that landscape, hardware issues, maintenance events, zonal disruptions, and even regional incidents can all affect availability.

The goal of a resilient infrastructure is not to assume disruptions will never happen. It is to ensure services remain available, impacts stay contained, and recovery happens quickly when events occur. In that sense, resiliency is what helps organizations maintain continuity, protect customer trust, and operate with confidence even when conditions change.

Azure IaaS is purpose-built to offer a resilient operating environment, delivering enterprise-grade resiliency. But outcomes ultimately depend on how product features across compute, storage, and networking are brought together within customer environments to help maintain availability through disruptions. Resiliency is a shared responsibility: Azure IaaS helps organizations start from a resilient platform foundation with built-in capabilities for availability, continuity, and recovery, while customers design and configure workloads to meet their specific business and operational requirements.

Designing for resiliency is not a one-time decision, and it is rarely simple. As architectures grow more distributed and workload requirements become more demanding, the Azure IaaS Resource Center provides a centralized destination for tutorials, best practices, and guidance organizations need to build and operate resilient infrastructure with greater confidence.

Resiliency built into the foundation of mission-critical applications

When an application is truly mission critical, downtime is not just inconvenient; it can disrupt customer transactions, delay operations, interrupt employee productivity, and create real financial and reputational impact. That is why resilient design starts with one important shift in mindset: not asking whether disruption will happen but designing for how the application will behave when it does.

Azure IaaS helps customers do that with built-in capabilities that support isolation, redundancy, failover, and recovery across the infrastructure stack. The value of those capabilities is not just technical. It is operational. They help organizations reduce the blast radius of disruption, improve continuity, and recover with greater predictability when critical services are under pressure.

Keep applications available with resilient compute design

Compute resiliency starts with placement and isolation. For example, if all the virtual machines supporting an application sit too close together from an infrastructure perspective, a localized event can affect more of the workload than expected.

For applications that need both scale and availability, Virtual Machine Scale Sets help automate deployment and management while distributing instances across availability zones and fault domains. This is especially valuable for front-end tiers, application tiers, and other distributed services where maintaining enough healthy instances is key to staying online.

For broader protection, availability zones provide datacenter-level isolation within a region. Each zone has independent power, cooling, and networking, which allows organizations to architect applications across zones so that if one zone is affected, healthy instances in another zone can continue serving the workload.

Together, these capabilities help organizations reduce single points of failure and design compute architectures that are better prepared to absorb localized infrastructure events, planned maintenance, and zonal disruptions.

[Image: 3D resilient apps flowchart including the Azure portal, Azure Copilot, and PowerShell/CLI]

Build continuity and recovery on a resilient storage foundation

When disruption occurs, organizations need confidence that application data is still durable, accessible, and recoverable. Azure provides multiple storage redundancy models to support those needs. Locally redundant storage (LRS) keeps multiple copies of data within a single datacenter. Zone-redundant storage (ZRS) replicates data synchronously across availability zones within a region, helping protect against zonal failures. For broader cross-geographical resiliency scenarios, geo-redundant storage (GRS) and read-access geo-redundant storage (RA-GRS) extend protection to a secondary region.

For managed disks and virtual machine-based workloads, recovery is also shaped by capabilities such as snapshots, Azure Backup, and Azure Site Recovery. These are not just backup features in the abstract. They are mechanisms that help define how much data an organization could lose and how quickly an application can be restored after an incident.

That is why storage decisions should not be treated as only a performance or capacity conversation. For stateful applications especially, storage is central to recovery point objectives, recovery time objectives, and the broader question of how the business resumes operation after disruption.

Keep network traffic moving when conditions change

A workload is not truly available if users and dependent services cannot reach it. Even when compute and storage remain healthy, traffic disruption can still turn a manageable infrastructure event into a customer-facing outage.

That is where networking plays a distinct resiliency role. Azure networking services help maintain reachability by distributing traffic across healthy resources and redirecting around issues when conditions change. Azure Load Balancer helps spread traffic across available instances. Application Gateway adds intelligent Layer 7 routing for web applications. Traffic Manager uses DNS-based routing across endpoints, while Azure Front Door helps direct and failover internet traffic at a global level.

For customers, the value here is practical. Good networking design means that when one instance, zone, or endpoint becomes unavailable, traffic can move to a healthy path instead of stopping altogether. That can be the difference between a brief, invisible reroute and an outage your users immediately feel.

In mission-critical environments, resilient networking is what connects healthy infrastructure to real-world continuity.

Tailor resiliency to what each workload demands

Not all workloads require the same resiliency approach, and recognizing those differences is central to effective architecture and design. A stateless application tier may benefit most from autoscaling, zone distribution, and rapid instance replacement. A stateful workload may require stronger replication, backup, and failover planning because continuity depends just as much on the integrity of the data as the availability of the compute layer.

Mission-critical workloads often demand more from every layer of the stack. They may need tighter recovery targets, broader failure isolation, and more rigorously tested recovery paths than lower-priority internal systems. That does not mean every workload requires the highest possible level of redundancy. It means resiliency architecture should be guided by business impact.

Azure IaaS gives customers flexibility. The same platform can support different patterns depending on workload criticality, operational needs, and acceptable tradeoffs around cost, complexity, and recovery speed.

Make every migration a chance to build greater resiliency

Whether organizations are migrating existing applications or deploying new ones on Azure, the transition point is one of the best opportunities to build resiliency in from the start. It is the moment to reexamine architecture choices, eliminate inherited single points of failure, and design for stronger continuity across compute, storage, and networking.

Too often, a move to the cloud simply recreates existing infrastructure patterns and carries forward the same risks. But migration or new deployment can be much more valuable than that. For example, Carne Group recently shared how its move to Azure helped turn migration into a broader resiliency strategy, combining Azure Site Recovery with Terraform-based landing zones to streamline cutover while strengthening recovery readiness and operational resilience.

"With IaC in place, we could easily build a duplicate site in another region. Even in the event of a worst-case scenario, we could be back up and running more or less in the same day."

Stéphane Bebrone, Global Technology Lead at Carne Group

This is also where infrastructure as code and deployment automation play an important role. Using repeatable deployment templates and CI/CD workflows helps teams standardize resilient architectures, reduce configuration drift, and recover environments more consistently when changes or disruptions occur.

Azure Site Recovery is a foundational Azure capability for regional resilience, enabling workloads to be replicated and restarted in another Azure region on demand. Customers retain control over where and when workloads move, aligning recovery behavior with capacity, compliance, and regional availability needs.

Services such as Azure Migrate, Azure Storage Mover, and Azure Data Box support different migration scenarios. GitHub and pipeline-based deployment practices then help operationalize resiliency over time.

In that sense, this is bigger than migration alone. Whether a workload is being moved, modernized, or built new on Azure, resiliency should be part of the deployment strategy from the beginning, not added later.

Maintain resiliency after deployment as workloads evolve

Resiliency must also be maintained over time. As workloads grow and change, configuration drift, new dependencies, and evolving recovery expectations can weaken the architecture originally put in place. The most resilient organizations periodically validate readiness through testing, drills, fault simulations, and observability practices that help teams identify issues early, understand root cause, and make informed corrections. Resiliency in Azure was released in preview at Ignite to help organizations assess, improve, and validate application resiliency, with a public preview planned for Microsoft Build 2026.

Azure IaaS provides foundational capabilities across compute, storage, and networking, but resilient outcomes result from how those capabilities are combined and operationalized. By designing with disruption in mind, organizations can create architectures that stay available more consistently, protect critical data more effectively, and recover more predictably when incidents occur.

To go deeper, explore the Azure IaaS Resource Center for tutorials, best practices, and guidance across compute, storage, and networking to help you design and operate resilient infrastructure with greater confidence.

The post Azure IaaS: Keep critical applications running with built-in resiliency at scale appeared first on Microsoft Azure Blog.

Read the whole story
alvinashcraft
4 hours ago
reply
Pennsylvania, USA
Share this story
Delete

ASP.NET Core in 2026 with Daniel Roth

ASP.NET Core continues to evolve in 2026! Carl and Richard talk to Daniel Roth about all the goodness in the ASP.NET Core space, including MVC, Razor, and Blazor! Daniel talks about the publicly visible ASP.NET Core Roadmap on GitHub - where you can support ideas, add your own, and debate implementations! The conversation dives into the focus on Blazor - MVC and Razor aren't going away anytime soon, or perhaps ever. Still, the energy is definitely on Blazor, and its potential to provide a great development experience that scales effectively and provides the features your applications need. And Daniel reminds us that the teams all work closely together, including the broader .NET and language teams, so new features are in the right place and available to everyone!



Download audio: https://dts.podtrac.com/redirect.mp3/api.spreaker.com/download/episode/71055004/dotnetrocks_1996_asp_dot_net_core_in_2026.mp3

SQL Server Integration Services Projects 2022+ GA


We're pleased to announce the General Availability release of SQL Server Integration Services (SSIS) Projects 2022+, bringing full support for SQL Server 2025 and Visual Studio 2026.

With this release, SSIS Projects now supports target server versions from SQL Server 2017 through SQL Server 2025 and works with both Visual Studio 2022 and Visual Studio 2026.

Download

SQL Server Integration Services Projects 2022+ on Visual Studio Marketplace

Version: 2.2
Build Version: 17.0.1010.6
Release Date: April 1, 2026

For detailed release notes, see the Marketplace release notes. For troubleshooting and offline installation, visit the SSIS in VS2022+ troubleshooting guide.

Share your feedback


AI DevOps: Use Cases, Agents & Safe Adoption

Learn what AI DevOps means, how it differs from MLOps, where AI agents help across the lifecycle, and how to adopt tools safely.

Using LLMs and MCP in .NET


Introduction

This post picks up where this one left off. It is my second post on using LLMs and AI in general. This time I'm going to cover integrating MCP tools with the LLM's responses.

MCP stands for Model Context Protocol, and it is an open standard. In a nutshell, it is a protocol designed to help LLMs communicate with the real world: accessing a database, getting real-time weather information for a specific location, creating a ticket in some system, sending out an email, and so on.

We need an MCP host and some tools registered with it. There are many ways by which LLMs can communicate with the MCP host, always using JSON-RPC 2.0 for messaging:

  • Standard input/output, if running on the same machine
  • HTTP calls
  • Server-Sent Events (SSE)
  • Custom-defined
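Regardless of the transport, the messages exchanged all share the same JSON-RPC 2.0 shape. As a purely illustrative example (the tool name and arguments here are hypothetical), a request asking the server to invoke a tool looks roughly like this:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": { "location": "Lisbon" }
  }
}
```

The response comes back as a JSON-RPC result carrying the tool's output; the transports listed above differ only in how these messages travel.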

Now, I won't go through all of them here; I'll just pick the HTTP transport, as it's probably the most common one. Also, I will be using the OpenAI API.

Essentially, we register tools with an MCP host, give them some semantics - title, name, description, description of parameters - and the LLM, if it so wishes, can request that one of the registered tools be invoked with some parameters passed in. This process is normally manual, meaning we have to do the invocation ourselves. Let's see how. Mind you, this won't be an in-depth article, but it should be enough to get you started.

Creating an MCP Host

Let's create a simple ASP.NET Core web application, for which we will need the ModelContextProtocol NuGet package from the official Model Context Protocol maintainers:

public static void Main(string[] args)
{
    var builder = WebApplication.CreateBuilder(args);

    builder.Services.AddMcpServer()
        .WithHttpTransport()
        .WithToolsFromAssembly();

    var app = builder.Build();

    app.MapMcp();
    app.Run();
}

Let's define the port where we want it to run, say, 5000. We set it on the Properties/launchSettings.json file, for example:

{
  "$schema": "https://json.schemastore.org/launchsettings.json",
  "profiles": {
    "http": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": false,
      "applicationUrl": "http://localhost:5000",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}

Of notice:

  • AddMcpServer registers some common services
  • WithToolsFromAssembly registers all tools found in the current assembly (or we could call WithTools<MySpecificTool> for each individual tool)
  • WithHttpTransport uses HTTP for the communication (could be WithStdioServerTransport instead for standard input/output or WithStreamServerTransport for Server-Sent Events)
  • MapMcp exposes the chosen endpoint

Now, what is a tool? A tool is anything that you can call, possibly with parameters, that either performs some action or returns some value. A few examples:

[McpServerToolType]
public sealed class DayOfWeekTool
{
    [McpServerTool(Title = "Day Of Week")]
    [Description("Get the current day of the week")]
    public string GetDayOfWeek()
    {
        var today = DateTime.Now.ToString("dddd");
        return $"Today is {today}.";
    }
}

[McpServerToolType]
public sealed class CalculatorTool
{
    [McpServerTool(Title = "Calculator")]
    [Description("Returns the result of a calculation.")]
    public float Calculate(
        [Description("The first value.")] float first,
        [Description("The second value.")] float second,
        [Description("The operation.")] string operation)
    {
        return operation switch
        {
            "multiply" => first * second,
            "add" => first + second,
            "subtract" => first - second,
            "divide" => first / second,
            _ => throw new ArgumentException($"Unknown operation: {operation}", nameof(operation))
        };
    }
}

[McpServerToolType]
public sealed class EmailTool
{
    [McpServerTool(Title = "Email")]
    [Description("Sends emails.")]
    public bool SendEmail(
        [Description("The recipient's email address.")] string recipient,
        [Description("The subject of the email.")] string subject,
        [Description("The body of the email.")] string body)
    {
        // Code to send the email would go here
        return true;
    }
}

I think these are self-explanatory. As you can see, tools are essentially methods with some attributes on them to provide metadata. I gave three examples of tools:

  • DayOfWeekTool: a simple tool for returning the current day of the week (no parameters)
  • CalculatorTool: for doing basic math operations (three parameters)
  • EmailTool: for sending out an email (three parameters)

Some notes:

  • Tool classes need to be public and have the [McpServerToolType] attribute in order to be registered by WithToolsFromAssembly; alternatively, we can register them explicitly using WithTools<MySpecificTool>
  • There is no base class, interface, or whatever: all methods can be static, for example
  • Since tools are instantiated using Dependency Injection (DI), we can inject services into the tool class' constructors
  • Make sure the [McpServerTool] and [Description] attributes are well specified, otherwise the LLM won't have a clue how to use the tools, as the actual method and class names are not important

This host must be running and accessible from the main program, which we’ll see in a moment.

The beauty of this is that LLMs, knowing what tools are available, and based on the prompt, can automatically figure out what tool to call and the parameter mappings!

Let us now proceed with the implementation of the calls.

Invoking MCP Tools

We need to instantiate an MCP client (McpClient):

await using var mcpClient = await McpClient.CreateAsync(
    new HttpClientTransport(new HttpClientTransportOptions { Endpoint = new("http://localhost:5000") }));

The endpoint address and port are, of course, those of the MCP host we created earlier. To test that we can retrieve the registered tools, we call ListToolsAsync:

var tools = await mcpClient.ListToolsAsync();

And to test, for example, the day of the week tool we might use this prompt:

OpenAI.Chat.ChatMessage[] messages =
[
    OpenAI.Chat.ChatMessage.CreateUserMessage("What day of the week is today?")
];

We get back:

"Today is quarta-feira."

("quarta-feira" means "Wednesday" in Portuguese!)

Putting it all together:

var chatOptions = new ChatCompletionOptions();

foreach (var tool in tools)
{
    chatOptions.Tools.Add(tool.AsOpenAIChatTool()); //need to convert Microsoft.Extensions.AI classes to OpenAI
}

var response = await chatClient.CompleteChatAsync(messages, chatOptions);

if (response.Value.FinishReason == OpenAI.Chat.ChatFinishReason.ToolCalls)
{
    foreach (var toolCall in response.Value.ToolCalls)
    {
        var parameters = toolCall.FunctionArguments.ToObjectFromJson<IReadOnlyDictionary<string, object?>>();
        var mcpResponse = await mcpClient.CallToolAsync(toolCall.FunctionName, parameters);
    }
}

So, for each response that is a tool call (ChatToolCall), we invoke the MCP tool by passing it the tool name (FunctionName) and arguments (FunctionArguments). This is to say that the LLM does not invoke the tool itself; we must do it explicitly. If we want to, we can also provide our own arguments.
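One step the snippet above stops short of is closing the loop: typically, after invoking the MCP tool, you append the assistant's tool-call message plus a tool message carrying each result, then call the model once more so it can produce the final natural-language answer. Here is a rough sketch of that second round trip; it assumes the OpenAI .NET AssistantChatMessage(ChatCompletion) constructor and ChatMessage.CreateToolMessage factory, and that the MCP result exposes text content blocks, so treat it as a starting point rather than a definitive implementation:

```csharp
// Sketch only: return tool results to the model for a final answer.
// Assumes 'messages', 'chatOptions', 'chatClient', 'mcpClient' and
// 'response' from the snippets above.
var followUp = new List<OpenAI.Chat.ChatMessage>(messages)
{
    // Echo back the assistant message that contains the tool calls
    new OpenAI.Chat.AssistantChatMessage(response.Value)
};

foreach (var toolCall in response.Value.ToolCalls)
{
    var parameters = toolCall.FunctionArguments.ToObjectFromJson<IReadOnlyDictionary<string, object?>>();
    var mcpResponse = await mcpClient.CallToolAsync(toolCall.FunctionName, parameters);

    // Assumption: the tool returned plain text content blocks
    var text = string.Concat(mcpResponse.Content.OfType<TextContentBlock>().Select(c => c.Text));
    followUp.Add(OpenAI.Chat.ChatMessage.CreateToolMessage(toolCall.Id, text));
}

// Second call: the model can now phrase the tool output conversationally
var finalResponse = await chatClient.CompleteChatAsync(followUp, chatOptions);
Console.WriteLine(finalResponse.Value.Content[0].Text);
```

This is what lets the model answer in a full sentence rather than you printing the raw tool output directly.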

For a simple calculation, we just change the prompt:

OpenAI.Chat.ChatMessage[] messages =
[
    OpenAI.Chat.ChatMessage.CreateUserMessage("Calculate 2 x 3")
];

We get (unsurprisingly):

"6"

And for sending out an email, a new prompt might be:

OpenAI.Chat.ChatMessage[] messages =
[
    OpenAI.Chat.ChatMessage.CreateUserMessage("Send an email to rjperes@hotmail.com about MCP with content \"MCP rules\".")
];

We get back:

"true"

In any of these cases, the LLM knows which tool to select from the registered ones! The parameters are extracted from ChatToolCall.FunctionArguments and the name of the tool to call comes from ChatToolCall.FunctionName. These are all inferred by the LLM based on the prompt! How cool is that?
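To make that concrete: for the "Calculate 2 x 3" prompt, FunctionName comes back as the calculator tool's registered name, and FunctionArguments is a small JSON document whose property names match the method's parameters, along these lines:

```json
{
  "first": 2,
  "second": 3,
  "operation": "multiply"
}
```

It is exactly this JSON that the ToObjectFromJson call shown earlier turns into the dictionary handed to CallToolAsync.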

Using Microsoft.Extensions.AI API

It is also possible to use Microsoft.Extensions.AI instead of the OpenAI API. To quote from Reddit:

"Microsoft.Extensions.AI is a unified .NET abstraction layer designed to allow developers to swap AI providers (OpenAI, Ollama, Mistral) with minimal code changes. Conversely, the OpenAI .NET SDK is the dedicated, vendor-specific library for accessing OpenAI/Azure OpenAI services directly. Extensions.AI offers flexibility and middleware, while OpenAI SDK offers native, up-to-date access."

Simply put, Microsoft.Extensions.AI acts as a façade (IChatClient), allowing you to switch backends (e.g., from OpenAI to Ollama or others) without changing the API calls and code structure.

To use Microsoft.Extensions.AI, the OpenAI ChatClient needs to be converted to an IChatClient by means of the AsIChatClient extension method:

//this was introduced in the last post:
AzureOpenAIClient azureClient = new(options!.Endpoint, new AzureKeyCredential(options.ApiKey));
var azureChatClient = azureClient.GetChatClient(options.Deployment);
var chatClient = azureChatClient.AsIChatClient();

var messages = new Microsoft.Extensions.AI.ChatMessage[]
{
    new(ChatRole.User, "What day of the week is today?")
};

var chatOptions = new ChatOptions { Tools = [.. tools] };

var response = await chatClient.GetResponseAsync([.. messages], chatOptions);

if (response.FinishReason == Microsoft.Extensions.AI.ChatFinishReason.ToolCalls)
{
    foreach (var message in response.Messages.Where(x => x.Contents.First() is FunctionCallContent))
    {
        var toolCall = message.Contents.First() as FunctionCallContent;
        var mcpResponse = await mcpClient.CallToolAsync(toolCall!.Name, (IReadOnlyDictionary<string, object?>)toolCall.Arguments!);
    }
}

As you can see, there are some differences in the syntax, but, in the end, it all works the same.

Conclusion

I hope this was sufficient to catch your interest! I barely scratched the surface, will probably go a little bit deeper in future posts. For now, I advise you to play with some prompts and tools and see what you can achieve. As always, looking forward to hearing your thoughts!


Awesome GitHub Copilot just got awesommer (if that’s a word)


If you've been following the GitHub Copilot ecosystem, you've probably heard of the Awesome GitHub Copilot repo. It launched back in July 2025 with a straightforward goal: give the community a central place to share custom instructions, prompts, and chat modes for tailoring Copilot's AI responses.

A lot of people contributed. As a result, the repo now contains 175+ agents, 208+ skills, 176+ instructions, 48+ plugins, 7 agentic workflows, and 3 hooks.

And now the maintainers took it one step further and created an Awesome GitHub Copilot website and Learning hub.


A website that actually helps you find things

The new site lives at awesome-copilot.github.com and wraps the repo in a browsable interface built on GitHub Pages. The headline feature is full-text search across every resource — agents, skills, instructions, hooks, workflows, and plugins — with category filters to narrow things down.


Each resource has its own page with a modal preview, so you can see exactly what you're getting before committing.


And if you find something that you like, there's a one-click install directly into VS Code or VS Code Insiders.

The Learning Hub: making sense of a fast-moving space

One of the more notable additions is the Learning Hub. If you've felt like the GitHub Copilot customization landscape moves faster than you can keep up with, you're not imagining it.

The Learning Hub is designed to cut through that churn by focusing on fundamentals: what are agents, skills, and instructions, and how do they actually differ? What's a hook versus a plugin? And once you understand the concepts, how do you take an existing resource and adapt it for your own needs, or build something from scratch?


It's the kind of documentation that tends to get skipped in fast-growing open-source projects, so it's good to see it getting proper attention here.

Plugins and the new resource types

The plugin system is where things get practically interesting. A plugin bundles related agents, skills, and commands into a single installable package — think themed collections for frontend development, Python, Azure, or whatever your team's stack looks like. Awesome GitHub Copilot is now a default plugin marketplace for both GitHub Copilot CLI and VS Code, which means installing something is as simple as:

copilot plugin install <plugin-name>@awesome-copilot


Or search for agent plugins through the Extensions view in VS Code by typing @agentPlugins:

Check it out!

If you use GitHub Copilot regularly and haven't explored what's possible with custom agents and skills, the new website is a much friendlier starting point than diving into the raw repo. Browse at awesome-copilot.github.com, or head straight to the Learning Hub if you want the conceptual grounding first. And if you've built something useful for your own workflow, the repo is wide open for contributions.

More information

Awesome GitHub Copilot | Awesome GitHub Copilot

Learning Hub | Awesome GitHub Copilot

Awesome GitHub Copilot just got a website, and a learning hub, and plugins! - Microsoft for Developers

Finding inspiration for good custom instructions for GitHub Copilot
