Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Azure Broadens AI Options from Models to Hybrid Deployment

Microsoft is expanding Azure's AI stack with more model choices in Microsoft Foundry and more flexible hybrid and sovereign deployment paths, reinforcing a build-on-Azure-AI, deploy-where-needed approach.

Changing the physics of cyber defense


The Deputy CISO blog series is where Microsoft Deputy Chief Information Security Officers (CISOs) share their thoughts on what is most important in their respective domains. In this series, you will get practical advice, tactics to start (and stop) deploying, forward-looking commentary on where the industry is going, and more. In this article, John Lambert, Chief Technology Officer, Corporate Vice President, and Security Fellow at Microsoft, dives into the future of cyber defense.

Ten years ago, as threat actors began following our growing customer base to the Microsoft Cloud, I founded the Microsoft Threat Intelligence Center (MSTIC), which focuses deeply on addressing this type of cyberattacker. One of the first things we learned was that to find threat actors you need to think like them. That’s what led me to begin thinking in graphs. Any infrastructure you need to defend is conceptually a directed graph of credentials, dependencies, entitlements, and more. Cyberattackers find footholds, pivot within infrastructure, and abuse entitlements and secrets to expand further. Software systems and online services are built from components—many of these components have logs of what’s happening, but this results in a lot of siloed logs. To see what a threat actor is doing, you have to reconstruct that red thread of activity from logs. Then, from those logs you can create a graph. 

By adopting this same graph-based thinking, we put ourselves on more even footing with cyberattackers. But we don’t really want to be on even footing. We want to retake the advantage for ourselves. That’s why it’s also important to keep up our best practices: making sure our infrastructure is well managed, maintaining a well-educated team of analysts, and collaborating with our competitors on defense. All together, this is of course a lot of work. It’s easy to see why some security professionals out there see the physics of defense as being against them. And in some ways, it has been. So, let’s change that.

We’ve got more data and more advanced tools at our fingertips than ever before, including some very good AI. Let’s take a look at each of these best practices, as well as how we can use our new tools to reduce the cost and effort involved in maintaining the advantage against threat actors.

The defense benefits of attack graphs

Most defenders today live in a tabular, relational world of data and the databases in which that data lives. At Microsoft, this is Azure Data Explorer databases queried using Kusto Query Language (KQL). And we know that if we can represent data in other ways, like in a graph, we can suddenly look at our data in ways that are difficult to do in traditional databases. This is a chief reason why threat actors build attack graphs of their targets. The graph lets them more easily see the many ways they can break into the target’s network, pivot to the things they need, get the credentials they need, and exploit things within the blast radius those credentials give them. That’s why it’s important to build a great attack graph for all the things that you must defend and equip your defenders with it. With a graph, you can ask questions like “what’s the blast radius of this kind of access?”, “can I get from identity A to infrastructure B?”, or “if a threat actor has taken over this specific node, can they get to our crown jewels?” With an attack graph in hand, those questions become easier to answer.
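
To make that reachability question concrete, here is a minimal sketch (not from the original post) of answering "can identity A reach infrastructure B?" over a toy attack graph, assuming the graph is nothing more than an adjacency list of node IDs:

// Toy attack graph: each node maps to the nodes it can reach in one hop,
// e.g. a credential that grants access to a machine. Illustrative only.
type AttackGraph = Map<string, string[]>;

// Breadth-first search: is there any path from `start` to `target`?
function canReach(graph: AttackGraph, start: string, target: string): boolean {
  const visited = new Set<string>([start]);
  const queue: string[] = [start];
  while (queue.length > 0) {
    const node = queue.shift()!;
    if (node === target) return true;
    for (const next of graph.get(node) ?? []) {
      if (!visited.has(next)) {
        visited.add(next);
        queue.push(next);
      }
    }
  }
  return false;
}

// Example: can a compromised workstation reach the "crown jewels" database?
const graph: AttackGraph = new Map([
  ['workstation-7', ['identity-A']],
  ['identity-A', ['jump-box']],
  ['jump-box', ['crown-jewels-db']],
]);
console.log(canReach(graph, 'workstation-7', 'crown-jewels-db')); // true

A real attack graph is of course far larger and richer, but the point stands: once the data is in graph form, blast-radius and path questions become simple traversals.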

Relational tables and graphs are just two of the ways to represent security data. We’re currently working on broadening those ways to also include anomalies and vectors over time. All together, these four data representations are what I refer to as the algebras of defense. As a defender equipped with these algebras, you can easily represent security data in multiple different ways. You can ask questions in the domains each representation is specialized in answering and get the answers you need from your security data in ways that drive you very quickly to the outcomes you need. What’s really exciting about this concept is that the benefits don’t just extend to your security team. Your advanced AI can use them to similar effect, turning each algebra into a new way to detect, for instance, what constitutes an anomaly and what does not. It gives AI the ability to use the same intuitions that human experts use, but in a much higher-dimensional space.

Building difficult terrain through proper cyber defense hygiene

A well-managed target is a harder target to attack. Defenders that excel in security don’t just react to cyberthreats, they proactively shape their environments to be inhospitable to bad actors. This begins with investing in preventative controls. Rather than waiting for incidents to occur, successful defenders deploy technologies and processes that anticipate and block cyberattacks before they materialize. This includes endpoint protection, network segmentation, behavioral analytics, threat modeling, and more.

It’s also important to deprecate legacy systems as they often harbor vulnerabilities that cyberattackers exploit. By retiring outdated solutions and replacing them with modern, secure alternatives, organizations reduce their exposure and simplify their defense posture. The same goes for entitlement management. By continuously reviewing who has what access, organizations can help prevent lateral threat actor movement.

You’ll also want to make sure you’re conducting top-tier asset management. You can’t protect what you don’t know exists. Maintaining an accurate, real-time inventory of devices, applications, and identities helps defenders monitor, patch, and secure every component of the environment. Removing orphaned elements goes hand-in-hand with this concept. Unused accounts, forgotten servers, and abandoned cloud resources—all of these remnants of past projects can easily become low-hanging fruit for cyberattackers.

You should invest time and effort into creating difficult terrain for attackers, making it harder for them to traverse your networks. Phishing-resistant multifactor authentication is a way to do this. So is not just having strong identity management, but requiring it to be used from expected, well-defined places on the network. For example, forcing admin access to be used from hardened, pre-identified locations.

Layered defenses with multiple controls working in concert help quiet your network. By reducing randomness and enforcing predictability, you can eliminate much of the noise that threat actors rely on to hide, ultimately removing entire classes of threat actors from the equation.

Invest in internal expertise and collaborate with others who do the same

While preventative controls are essential for raising the cost of cyberattacks, no defense is impenetrable. That’s why remediation remains a critical pillar of cyber hygiene. Organizations must be equipped both to block threats and to detect and respond to those that slip through.

This begins with data visibility. Security teams need to be on top of their telemetry so they can spot anomalies quickly. And you’ll need a team of educated analysts who understand cyberattacker behavior and can distinguish signal from noise. With their expertise, you’ll be better equipped to identify subtle indicators of compromise and initiate swift, effective remediation efforts.

It’s also important to work on cyber defense together with organizations that you otherwise view as your competitors. And, thankfully, here’s where I get to impart a bit of good news. Over the past decade, the tech industry has undergone a profound shift in how it approaches this concept. As organizations, we’re now way better about taking news about the security events happening to us to trusted spaces and talking about them in trusted ways than we were 10 years ago. What was once taboo, like the sharing of breach details with competitors, is now a mainstay of our collective defense. This cultural shift has led to the rise of trusted security forums, cross-industry intelligence sharing, and joint incident response efforts, allowing all of our defenders to learn from each other and respond faster to emerging threats.

Optimizing the defense curve

We now operate in a world where vast, high-fidelity data sets and advanced AI systems can amplify our reach, sharpen our detection, and accelerate our response. By embracing graph-based thinking, cultivating difficult terrain, and investing in collaborative intelligence, defenders can fundamentally shift the physics of defense beneath their would-be attackers’ feet.

With the algebras of defense, defenders can interrogate their environments in ways that were previously impossible, surfacing insights that drive proactive, precision-based security. And with AI as a partner, we can turn complexity into clarity, noise into signal, and partner swift remediations with anticipation. By rewriting the physics of defense, we can reclaim the advantage and redefine what it means to be secure.

Microsoft
Deputy CISOs


Learn more

To hear more from Microsoft Deputy CISOs, check out the OCISO blog series. To stay on top of important security industry updates, explore resources specifically designed for CISOs, and learn best practices for improving your organization’s security posture, join the Microsoft CISO Digest distribution list (sent every two months).

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Changing the physics of cyber defense appeared first on Microsoft Security Blog.


Speed is nothing without control: How to keep quality high in the AI era


What’s the point of moving faster if you can’t trust the code you’re shipping?

We’ve all been using AI in our workflows for a while now, and there’s no denying how much faster everyday development has become. Tasks that once took hours now finish in minutes. Entire features come together before you’ve even finished your morning coffee.

But we’ve also experienced the other side of that speed: when AI is used without clear direction or guardrails, it can generate what’s often called AI slop—semi-functional code stitched together without context, quietly piling up bugs, broken imports, and technical debt.

In this new era, being fast isn’t enough. Precision and quality are what set teams apart.

“The best drivers aren’t the ones who simply go the fastest, but the ones who stay smooth and in control at high speed,” said Marcelo Oliveira, GitHub VP of Product, at GitHub Universe 2025. “Speed and control aren’t trade-offs. They reinforce each other.”

So how do you get the best of both? How do you move fast and keep your code clean, reliable, and firmly under your direction? Here are three essential strategies:

Tip #1: Treat speed and quality as a package deal 

It’s very easy to accept AI-generated code that appears polished but hides underlying issues. However, speed without quality doesn’t help you ship faster; it just increases the risk of issues compounding down the road. That’s why the teams and organizations that succeed are the ones that pair AI-driven velocity with real guardrails.

And that’s exactly what GitHub Code Quality (currently in public preview) helps you do. GitHub Code Quality is an AI- and CodeQL-powered analysis tool that surfaces maintainability issues, reliability risks, and technical debt across your codebase, right as you work. Here’s how to start using it:

  1. Enable with one click
    Turn it on at the repository level and GitHub will analyze your code using a combination of CodeQL and LLM-based detection. This will give you a clear view of the maintainability and reliability issues in your codebase.
  2. Get automatic fixes inside every pull request
    As soon as you open a pull request, GitHub Code Quality flags unused variables, duplicated logic, runtime errors, and more. Here’s an example of pull request code that “works,” but isn’t production-ready:
// fuelCalculator.js

export function calculateFuelUsage(laps, fuelPerLap) {
  const lastLap = laps[laps.length - 1]; // unused variable

  function totalFuel(laps, fuelPerLap) {
    return laps.length * fuelPerLap;
  }

  // duplicated function
  function totalFuel(laps, fuelPerLap) {
    return laps.length * fuelPerLap;
  }

  return totalFuel(laps, fuelPerLap);
}

GitHub Code Quality responds with AI + CodeQL-powered suggestions, including a one-click fix:

-export function calculateFuelUsage(laps, fuelPerLap) {
-  const lastLap = laps[laps.length - 1]; // unused variable
-
-  function totalFuel(laps, fuelPerLap) {
-    return laps.length * fuelPerLap;
-  }
-
-  // duplicated function
-  function totalFuel(laps, fuelPerLap) {
-    return laps.length * fuelPerLap;
-  }
-
-  return totalFuel(laps, fuelPerLap);
-}
+export function calculateFuelUsage(laps, fuelPerLap) {
+  if (!Array.isArray(laps) || typeof fuelPerLap !== "number") {
+    throw new Error("Invalid input");
+  }
+  return laps.length * fuelPerLap;
+}

No triage or slowdown, just clean, reliable code.

  3. Enforce your quality bar
    Rulesets let you block merges that don’t meet your team’s standards. This keeps quality consistent without relying on reviewer willpower and without killing your velocity.
  4. Reveal (and fix) legacy technical debt
    The AI Findings page highlights issues in files your team is already working in, helping you fix problems while they’re top of mind and reduce context switching.

Bottom line: AI gives you speed. GitHub Code Quality gives you control. Together, they let you move faster and build better without ever trading one for the other.

Learn more about GitHub Code Quality 👉

Tip #2: Be the driver, not the passenger 

AI can generate code quickly, but quality has never come from automation alone. GitHub has always believed in giving you the tools to write your best code—from Copilot in the IDE, to GitHub Copilot code review in pull requests, to GitHub Code Quality—providing visibility into long-standing issues and tech debt, along with actionable fixes to help you address them.

These features give you the power to set direction, standards, and constraints. The clearer your intent, the better AI can support you.

Here’s a simple prompting framework that helps you do just that:

  1. Set the goal, not just the action
    Think of your prompts like giving direction to another engineer: the more clarity you provide, the better the final output. 

Bad prompt:

refactor this file

Better prompt:

refactor this file to improve readability and maintainability while preserving functionality, no breaking changes allowed
  2. Establish constraints
    Examples:
    • “No third-party dependencies”
    • “Must be backwards compatible with v1.7”
    • “Follow existing naming patterns”
  3. Provide reference context
    Link to related files, docs, existing tests, or architectural decisions.
  4. Decide the format of the output
    Pull request, diff, patch, commentary, or code block.

With GitHub Copilot coding agent, you can even assign multi-step tasks like:

Create a new helper function for formatting currency across the app.
- Must handle USD and EUR
- Round up to two decimals
- Add three unit tests
- Do not modify existing price parser
- Return as a pull request

Notice how you remain accountable for the thinking and the agent becomes accountable for the doing.

Bottom line: AI accelerates execution, but your clarity—and GitHub’s guardrails—are what turn that acceleration into high-quality software.

Learn more about coding agent 👉

Tip #3: Build visible proof of your thinking, not just your output

As AI takes on more execution work, what sets effective developers apart is how clearly they communicate decisions, trade-offs, and reasoning. It’s no longer enough to write code; you need to show how you think, evaluate, and approach problems across the lifecycle of a feature.

Here’s a best practice to level up your documentation signal: 

  1. Create an issue that captures the why
    Write a brief summary of the problem, what success looks like, constraints, and any risks.
  2. Name your branch clearly and commit thoughtfully
    Use meaningful names and commit messages that narrate your reasoning, not just your keystrokes.
  3. Use Copilot and coding agent to build, then document decisions
    Include short notes on why you chose one approach over another and what alternatives you considered.
  4. Open a pull request with signal-rich context
    Add a short “Why,” “What changed,” and “Trade-offs” section, plus screenshots or test notes.

For example, instead of:

Added dark mode toggle

Try this:

- Added dark mode toggle to improve accessibility and user preference support.
- Chose localStorage for persistence to avoid server dependency.
- Kept styling changes scoped to avoid side effects on existing themes.

Bottom line: Your code shows what you did, but your documentation shows why it matters. In this new AI era, the latter is just as critical as the former.  

Learn more about effective documentation 👉

Moving forward together 

At the end of the day, quality is everything. While AI may accelerate the pace of work, it can also turn that speed on its head if the output isn’t guided with intent. But when you combine AI with clear direction, strong guardrails, and visible thinking, you help your team deliver cleaner, more reliable code at scale—and position your organization to move quickly without compromising on what matters most.

Get started with GitHub Copilot >

The post Speed is nothing without control: How to keep quality high in the AI era appeared first on The GitHub Blog.


Agent Mode in Excel is now generally available on Excel for Web


We’re excited to announce that Agent Mode in Excel is now generally available on Excel for Web, rolling out to users with a commercial Microsoft 365 Copilot license or a Microsoft 365 Premium subscription. This launch marks a major shift in how you work with Copilot in Excel—moving from basic assistance to an agentic experience with capabilities to build multi-step workflows directly in your workbook.

Agent Mode in Excel

Agent Mode delivers:

  • Multi-step workflows: Move beyond single-turn commands. Ask, refine, and build iteratively with Copilot.
  • Direct workbook manipulation: Copilot applies changes directly inside your workbook, no clicks or copy/paste needed.
  • Transparency and reasoning: See how Copilot interprets your request, the steps it takes along the way, and explanations of each output along with verification.

Hear from Carlos Otero, a member of the Excel team, who recently chatted with Excel MVP Kevin Stratvert on why Agent Mode is a game-changer for Copilot users:

What’s possible with Agent Mode

Agent Mode unlocks scenarios that go far beyond traditional chatbots. Some examples now possible include:

  • Create workbooks: Generate new content directly in Excel, grounded in both existing workbook data and web search results to bring in relevant context.
  • Scenario modeling: Run what-if analyses for revenue, budgets, or forecasts and model advanced scenarios with adjustable assumptions.
  • Data analysis: Generate analyses of large datasets, highlight anomalies, and surface trends with formula-driven analysis.
  • Formula generation: Fix broken formulas and generate dynamic formulas that connect across your workbook data, including explanations for complex calculations.
  • Data visualization: Create pivot tables, charts, and dashboards—all through natural conversation. Generate native Excel artifacts that recalculate and update based on changes to the underlying data.

What’s generally available in Agent Mode today

  • Platform: Excel for Web. Coming in January to Excel for Windows and Mac.
  • Language support: English (US), Spanish (Spain, Mexico), Japanese, French (France, Canada), German, Portuguese (Brazil), Italian, and Chinese (Simplified); additional languages to follow.
  • Web search: Outputs are grounded in web data when needed. File grounding and Work IQ support are planned for early 2026 to enable richer work context.
  • Licenses: Available for commercial Microsoft 365 Copilot licensed users and Microsoft 365 Premium subscribers. Coming in January to Microsoft 365 Personal and Family subscribers.

Ready to experience the future of Excel? Open Excel on the web with an eligible license and start using Agent Mode from the Tools menu. Learn more here.

 


Host Your Node.js MCP Server on Azure Functions in 1 Simple Step


Building AI agents with the Model Context Protocol (MCP) is powerful, but when it comes to hosting your MCP server in production, you need a solution that's reliable, scalable, and cost-effective. What if you could deploy your regular Node.js MCP server to a serverless platform that handles scaling automatically while you only pay for what you use?

Let's explore how Azure Functions now supports hosting MCP servers built with the official Anthropic MCP SDK, giving you serverless scaling with almost no changes in your code.

Grab your favorite hot beverage, and let's dive in!

TL;DR key takeaways

  • Azure Functions now supports hosting Node.js MCP servers using the official Anthropic SDK
  • Only 1 simple configuration needed: adding host.json file
  • Currently supports HTTP Streaming protocol with stateless servers
  • Serverless hosting means automatic scaling and pay-per-use pricing
  • Deploy with one command using Infrastructure as Code

What will you learn here?

  • Understand how MCP servers work on Azure Functions
  • Configure a Node.js MCP server for Azure Functions hosting
  • Test your MCP server locally and with real AI agents
  • Deploy your MCP server with Infrastructure as Code and AZD

Reference links for everything we use

Requirements

What is MCP and why does it matter?

Model Context Protocol is an open standard that enables AI models to securely interact with external tools and data sources. Instead of hardcoding tool integrations, you build an MCP server that exposes capabilities (like browsing a menu, placing orders, or querying a database) as tools that any MCP-compatible AI agent can discover and use. MCP is model-agnostic, meaning it can work with any LLM that supports the protocol, including models from Anthropic, OpenAI, and others. It's also worth noting that MCP supports more than just tool calls, though that's its most common use case.

Schema showing MCP interfacing with different tool servers

The challenge? Running MCP servers in production requires infrastructure. You need to handle scaling, monitoring, and costs. That's where Azure Functions comes in.

🚨 Free course alert! If you're new to MCP, check out the MCP for Beginners course to get up to speed quickly.

Why Azure Functions for MCP servers?

Azure Functions is a serverless compute platform that's perfect for MCP servers:

  • Zero infrastructure management: No servers to maintain
  • Automatic scaling: Handles traffic spikes seamlessly
  • Cost-effective: Pay only for actual execution time (with a generous free grant)
  • Built-in monitoring: Application Insights integration out of the box
  • Global distribution: Deploy to regions worldwide

The new Azure Functions support means you can take your existing Node.js MCP server and deploy it to a production-ready serverless environment with minimal changes. This adds another option for hosting native Node.js MCP servers; you can still use the Azure Functions MCP bindings that were available before.

1 simple step to enable Functions hosting

Let's break down what you need to add to your existing Node.js MCP server to run it on Azure Functions. I'll use a real-world example from our burger ordering system.

If you already have a working Node.js MCP server, you can just follow this to make it compatible with Azure Functions hosting.

Step 1: Add the host.json configuration

Create a host.json file at the root of your Node.js project:

{
  "version": "2.0",
  "extensions": {
    "http": {
      "routePrefix": ""
    }
  },
  "customHandler": {
    "description": {
      "defaultExecutablePath": "node",
      "workingDirectory": "",
      "arguments": ["lib/server.js"]
    },
    "enableForwardingHttpRequest": true,
    "enableHttpProxyingRequest": true
  }
}

Note: Adjust the arguments array to point to your compiled server file (e.g., lib/server.js or dist/server.js), depending on your build setup.

The host.json file holds metadata configuration for the Functions runtime. The most important part here is the customHandler section. It configures the Azure Functions runtime to run your Node.js MCP server as a custom handler, which allows you to use any HTTP server framework (like Express, Fastify, etc.) without modification (tip: it can do more than MCP servers! 😉).

There's no step 2 or 3. That's it! 😎

Note: We're not covering the authentication and authorization aspects of Azure Functions here, but you can easily add those later if needed.

Automated setup with GitHub Copilot

While this change is pretty straightforward, you might want to automate this (boring) process. That’s what we have AI tools for, right?

My friend Anthony Chu created an awesome GitHub Copilot prompt that automates this entire setup process. Just ask Copilot to use the prompt from create-functions-mcp-server and it will:

  • Add the necessary configuration file
  • Set up the Infrastructure as Code

If you're not using Copilot, you can also copy the prompt instructions from the repo into your favorite AI coding assistant.

Real-world example: Burger MCP Server

Let's look at how this works in practice with a burger ordering MCP server. This server exposes 9 tools for AI agents to interact with a burger API:

  • get_burgers - Browse the menu
  • get_burger_by_id - Get burger details
  • place_order - Place an order
  • get_orders - View order history
  • And more...

Here's the complete server implementation using Express and the MCP SDK:

import express, { Request, Response } from 'express';
import { StreamableHTTPServerTransport } from '@modelcontextprotocol/sdk/server/streamableHttp.js';
import { getMcpServer } from './mcp.js';

const app = express();
app.use(express.json());

// Handle all MCP Streamable HTTP requests
app.all('/mcp', async (request: Request, response: Response) => {
  const transport = new StreamableHTTPServerTransport({
    sessionIdGenerator: undefined,
  });

  // Connect the transport to the MCP server
  const server = getMcpServer();
  await server.connect(transport);

  // Handle the request with the transport
  await transport.handleRequest(request, response, request.body);

  // Clean up when the response is closed
  response.on('close', async () => {
    await transport.close();
    await server.close();
  });

  // Note: error handling not shown for brevity
});

// The port configuration
const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
  console.log(`Burger MCP server listening on port ${PORT}`);
});

The MCP tools are defined using the official SDK:

import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { z } from 'zod';

// Base URL of the burger API; assumed here to come from an environment variable
const burgerApiUrl = process.env.BURGER_API_URL ?? 'http://localhost:3001';

export function getMcpServer() {
  const server = new McpServer({
    name: 'burger-mcp',
    version: '1.0.0',
  });

  server.registerTool(
    'get_burgers',
    { description: 'Get a list of all burgers in the menu' },
    async () => {
      const response = await fetch(`${burgerApiUrl}/burgers`);
      const burgers = await response.json();
      return {
        content: [{
          type: 'text',
          text: JSON.stringify(burgers, null, 2)
        }]
      };
    }
  );

  // ... more tools
  return server;
}

As you can see, the actual implementation of the tool forwards an HTTP request to the burger API and returns the result in the MCP response format. This is a common pattern for MCP tools in enterprise contexts, where they act as wrappers around one or more existing APIs.

Current limitations

Note that this Azure Functions MCP hosting currently has some limitations: it only supports stateless servers using the HTTP Streaming protocol. The legacy SSE protocol is not supported as it requires stateful connections, so you'll either have to migrate your client to use HTTP Streaming or use another hosting option, such as containers.

For most use cases, HTTP Streaming is the recommended approach anyway, as it's more scalable and doesn't require persistent connections. Stateful MCP servers come with additional complexity and limited scalability if you need to handle many concurrent connections.

Testing the MCP server locally

First, let's run the MCP server locally and play with it a bit.

If you don't want to bother with setting up a local environment, you can use the following link or open it in a new tab to launch a GitHub Codespace:

This will open a VS Code environment in your browser with the repo already cloned and all the tools installed and ready to go. Otherwise you can just clone the repo.

Once you have the code ready, open a terminal and run:

# Install dependencies
npm install

# Start the burger MCP server and API
npm start

This will start multiple services locally, including the Burger API and the MCP server, which will be available at http://localhost:3000/mcp. This may take a few seconds; wait until you see this message in the terminal:

🚀 All services ready 🚀

We're only interested in the MCP server for now, so let's focus on that.

Using MCP Inspector

The easiest way to test the MCP server is with the MCP Inspector tool:

$ npx -y @modelcontextprotocol/inspector

Open the URL shown in the console in your browser, then:

  1. Set transport type to Streamable HTTP
  2. Enter your local server URL: http://localhost:3000/mcp
  3. Click Connect

After you're connected, go to the Tools tab to list available tools. You can then try the get_burgers tool to see the burger menu.

MCP Inspector Screenshot

Using GitHub Copilot (with remote MCP)

Configure GitHub Copilot to use your MCP server by adding this to your project's .vscode/mcp.json:

{
  "servers": {
    "burger-mcp": {
      "type": "http",
      "url": "http://localhost:3000/mcp"
    }
  }
}

Click on "Start" button that will appear in the JSON file to activate the MCP server connection.

Now you can use Copilot in agent mode and ask things like:

  • "What spicy burgers do you have?"
  • "Place an order for two cheeseburgers"
  • "Show my recent orders"

Copilot will automatically discover and use the MCP tools! 🎉

Tip: If Copilot doesn't call the burger MCP tools, try checking if it's enabled by clicking on the tool icon in the chat input box and ensuring that "burger-mcp" is selected. You can also force tool usage by adding #burger-mcp in your prompt.

(Bonus) Deploying to Azure with Infrastructure as Code

Deploying an application to Azure is usually not the fun part, especially when it involves multiple resources and configurations.
With the Azure Developer CLI (AZD), you can define your entire application infrastructure and deployment process as code, and deploy everything with a single command.

If you've used the automated setup with GitHub Copilot, you should already have the necessary files. Our burger example also comes with these files pre-configured. The MCP server is defined as a service in azure.yaml, and the files under the infra folder define the Azure Functions app and related resources.

Here's the relevant part of azure.yaml that defines the burger MCP service:

name: mcp-agent-langchainjs

services:
  burger-mcp:
    project: ./packages/burger-mcp
    language: ts
    host: function

While the infrastructure files can look intimidating at first, you don't need to understand all the details to get started. There are tons of templates and examples available to help you get going quickly; the important part is that everything is defined as code, so you can version control it and reuse it.

Now let's deploy:

# Login to Azure
azd auth login

# Provision resources and deploy
azd up

Pick your preferred Azure region when prompted (if you're not sure, choose East US2), and voilà! In a few minutes, you'll have a fully deployed MCP server running on Azure Functions.

Once the deployment is finished, the CLI will show you the URL of the deployed resources, including the MCP server endpoint.

AZD deployment output for the burger MCP example app

Example projects

The burger MCP server is actually part of a larger example project that demonstrates building an AI agent with LangChain.js that uses the burger MCP server to place orders. If you're interested in the next steps of building an AI agent on top of MCP, this is a great resource as it includes:

  • AI agent web API using LangChain.js
  • Web app interface built with Lit web components
  • MCP server on Functions (the one we just saw)
  • Burger ordering API (used by the MCP server)
  • Live order visualization
  • Complete Infrastructure as Code, to deploy everything with one command

But if you're only interested in the MCP server part, then you might want to look at this simpler example that you can use as a starting point for your own MCP servers: mcp-sdk-functions-hosting-node is a server template for a Node.js MCP server using TypeScript and MCP SDK.

What about the cost?

Azure Functions Flex Consumption pricing is attractive for MCP servers:

  • Free grant: 1 million requests and 400,000 GB-s execution time per month
  • After free grant: Pay only for actual execution time
  • Automatic scaling: From zero to hundreds of instances

The free grant is generous enough to allow running a typical MCP server with moderate usage, and all the experimentation you might need. It's easy to configure the scaling limits to control costs as needed, with an option to scale down to zero when idle. This flexibility is why Functions is my personal go-to choice for TypeScript projects on Azure.
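
For a rough sense of scale (illustrative numbers, not official pricing guidance): an instance with 2 GB of memory that averages 0.2 seconds per request consumes about 0.4 GB-s per request, so roughly one million such requests per month would land right at the 400,000 GB-s free grant.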

Wrap up

Hosting MCP servers on Azure Functions gives you the best of both worlds: the simplicity of serverless infrastructure and the power of the official Anthropic SDK. With just one simple configuration step, you can take your existing Node.js MCP server and deploy it to a production-ready, auto-scaling platform.

The combination of MCP's standardized protocol and Azure's serverless platform means you can focus on building amazing AI experiences instead of managing infrastructure. Boom. 😎

Star the repos ⭐️ if you found this helpful! Try deploying your own MCP server and share your experience in the comments. If you run into any issues or have questions, you can reach out for help on the Azure AI community on Discord.


Using an API Gateway with Fine-Grained Authorization

In this blog post you’ll learn about different API Gateways: Zuplo, Kong, and Apache APISIX, and how each one could work best for a service that uses Fine-Grained Authorization.
