In my earlier Microsoft Agent Framework series of posts, we explored how AI agents can be extended using function tools and plugins.
In this post, we look at the Model Context Protocol (MCP) – the emerging standard that lets AI assistants discover and call tools exposed by your AI applications.
Specifically, we cover the following:
- What is MCP
- Why MCP matters for AI applications
- Setting up an MCP server in ASP.NET Core
- Transport options: SSE vs STDIO
- Testing and debugging your MCP server
- Configuring popular MCP clients
A practical walkthrough is also included that shows how to set up a local MCP server in Visual Studio, then reference and call it from VS Code.
Let’s dig in.
~
What Is MCP
The Model Context Protocol (MCP) is an open standard that enables AI assistants to discover and invoke tools exposed by external servers.
You can think of it as a universal adapter that lets any MCP-compatible client communicate with any MCP-compatible server.
Rather than building custom integrations for each AI tool, you build one MCP server and multiple clients can consume it.
In some ways, it reminds me of older web services. ASMX? WCF? Remember those?
The protocol uses JSON-RPC 2.0 over different transport mechanisms, making it both standardised and flexible.
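To make that concrete, a tool invocation is just a JSON-RPC request/response pair. The tool name and payloads below are illustrative, but the message shapes follow the MCP specification:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_docs",
    "arguments": { "query": "authentication examples" }
  }
}
```

The server replies with a result carrying the tool's output as content blocks:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [
      { "type": "text", "text": "To authenticate with the API, include your API key..." }
    ]
  }
}
```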
~
Why MCP Matters
Traditional AI integrations require custom code for each platform. If you want your documentation search to work in Claude Desktop, Cursor, or VS Code Continue, you’d typically need three separate integrations.
MCP changes this. Build once, connect everywhere.
Key benefits include:
- Tool Discovery: Clients automatically discover what tools your server offers
- Standardised Communication: JSON-RPC 2.0 provides a well-defined message format
- Multiple Transports: Choose HTTP/SSE for development, STDIO for production
- Growing Ecosystem: Major AI tools are adopting MCP support
It's ideal if you or your organisation owns unique IP and you want to increase the footprint of your solution across AI agents and AI-enabled tools, services, and IDEs.
~
Setting Up an MCP Server in ASP.NET Core
Let’s build an MCP server that exposes a documentation search tool. We’ll use Microsoft’s MCP SDK.
First, add the required NuGet packages:
dotnet add package ModelContextProtocol --prerelease
dotnet add package ModelContextProtocol.AspNetCore --prerelease
Next, configure the MCP server in Program.cs:
var builder = WebApplication.CreateBuilder(args);

// Configure MCP based on environment
if (builder.Environment.IsDevelopment())
{
    builder.Services.AddMcpServer()
        .WithHttpTransport() // SSE for dev testing
        .WithToolsFromAssembly();
}
else
{
    builder.Services.AddMcpServer()
        .WithStdioServerTransport() // STDIO for production
        .WithToolsFromAssembly();
}

var app = builder.Build();

// Map the MCP endpoint
app.MapMcp();

app.Run();
The key decision here is the transport mechanism. This depends on your deployment model.
During development, HTTP/SSE is easier to debug.
HTTP/SSE is ideal when your MCP server runs as a web API that multiple clients can connect to.
STDIO is used when a client like Claude Desktop spawns and manages the MCP server process directly via stdin/stdout – common for simple local-only tools.
For production deployments to Claude Desktop, STDIO is preferred.
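With STDIO, the client owns the server process. In Claude Desktop, for example, you'd register the server in claude_desktop_config.json with the command used to launch it. A minimal sketch (the server name and project path are placeholders):

```json
{
  "mcpServers": {
    "my-docs": {
      "command": "dotnet",
      "args": ["run", "--project", "C:\\source\\MyApp.MCP"]
    }
  }
}
```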
~
Creating Your First MCP Tool
MCP tools are similar to function tools in the Microsoft Agent Framework: you annotate methods with descriptive attributes to help AI clients understand when to invoke them.
For example, here’s a documentation search tool:
using ModelContextProtocol.Server;
using System.ComponentModel;

namespace MyApp.MCP.Tools;

[McpServerToolType]
public class SearchDocs
{
    private readonly IDocumentSearchService _searchService;
    private readonly ILlmService _llmService;

    public SearchDocs(
        IDocumentSearchService searchService,
        ILlmService llmService)
    {
        _searchService = searchService;
        _llmService = llmService;
    }

    [McpServerTool]
    [Description("Searches documentation and returns a synthesised answer")]
    public async Task<string> Search(
        [Description("The natural language search query")]
        string query)
    {
        // 1. Vector search for relevant chunks
        var chunks = await _searchService.SearchAsync(query);

        if (!chunks.Any())
        {
            return "No relevant documentation found for your query.";
        }

        // 2. Fetch full documents
        var documents = await _searchService.GetFullDocumentsAsync(chunks);

        // 3. Synthesise answer using LLM
        var context = string.Join("\n\n", documents.Select(d => d.Content));
        var answer = await _llmService.SynthesiseAsync(query, context);

        // 4. Return with source attribution
        var sources = string.Join("\n", documents.Select(d => $"- {d.Url}"));
        return $"{answer}\n\n**Sources:**\n{sources}";
    }
}
The [Description] attributes are crucial. They tell the AI client what the tool does and how to use it.
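To see why, it helps to look at what the client actually receives. When a client calls tools/list, the server serialises those attributes into the tool's JSON schema. A response for the tool above would look roughly like this (the exact tool name depends on how the SDK derives it from the method name):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "tools": [
      {
        "name": "search",
        "description": "Searches documentation and returns a synthesised answer",
        "inputSchema": {
          "type": "object",
          "properties": {
            "query": {
              "type": "string",
              "description": "The natural language search query"
            }
          },
          "required": ["query"]
        }
      }
    ]
  }
}
```

Without meaningful descriptions here, the LLM has very little to go on when deciding whether your tool matches a user's prompt.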
~
Testing with MCP Explorer
You can test your MCP tool using MCP Explorer on Windows. Download it from https://mcp-explorer.com and install.

Key features include:
- No Node.js Required: A standalone Windows application
- Auto-Detection: Automatically discovers servers from your Claude Desktop config
- Visual Tool Execution: Test tools with a clean UI
- JSON-RPC Monitoring: See the raw protocol messages for debugging
To connect manually, enter the SSE endpoint URL http://localhost:5125/sse. Once connected, the Search MCP tool is discovered from the Visual Studio instance:

You click this and supply the relevant parameter with the value Getting Started.

Clicking Execute Tool invokes the MCP tool and returns data:

MCP Explorer is useful when you want to inspect the JSON-RPC traffic between client and server. Ideal for understanding exactly what’s happening under the hood.
~
Configuring VS Code and the Continue Extension
Once your server is running, you can connect various AI tools. I used VS Code and the Continue extension.
To configure the Continue extension, perform the following steps:
- Install the Continue extension in VS Code.
- Next, create a file at ~/.continue/mcpServers/my-docs.yaml
Note: You may need to create the .continue and mcpServers folders if they don’t already exist.
Next, add the following content and save:
name: MCP-Demo-Tools
version: 0.0.1
schema: v1
mcpServers:
  - name: mcp-docs
    type: sse
    url: http://localhost:5125/sse
With the configuration defined, it can now be tested.
~
Running Your MCP Server in Visual Studio and Testing with VS Code
A common development workflow is to run your MCP server in Visual Studio while consuming it from VS Code with the Continue extension.
This gives you the best of both worlds: full debugging capabilities in VS and AI-assisted coding in VS Code.
Step 1: Configure Visual Studio
Open your solution in Visual Studio 2022. Ensure your launchSettings.json has the HTTP profile configured:
{
  "profiles": {
    "http": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": false,
      "applicationUrl": "http://localhost:5125",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}
Run the application and start debugging. You'll see console output confirming the server is listening:

Step 2: Configure VS Code Continue
In VS Code, install the Continue extension if you haven't already. Ensure you have the MCP server configuration from the earlier setup and that it contains the following content:
name: MCP-Demo-Tools
version: 0.0.1
schema: v1
mcpServers:
  - name: mcp-docs
    type: sse
    url: http://localhost:5125/sse
Restart VS Code or reload the Continue extension for the changes to take effect.
Step 3: Configure Continue’s LLM
Continue needs its own LLM to decide when to use tools. Open Continue’s settings and configure your preferred model.

You can use OpenAI, Anthropic, or a local model via Ollama.
Example config.yaml for OpenAI:
models:
  - title: GPT-4
    provider: openai
    model: gpt-4
    apiKey: your-api-key-here
Step 4: Verify the Connection
Back in Visual Studio, watch the Output window or console. When Continue connects, you should see requests like: Request: GET /sse and Request: POST /message

The initialize and tools/list JSON-RPC calls confirm Continue has discovered your tools.
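If you're inspecting the raw traffic, the handshake starts with an initialize request from Continue along these lines (the field values here are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 0,
  "method": "initialize",
  "params": {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": { "name": "continue", "version": "1.0.0" }
  }
}
```

Once the server responds with its capabilities, the tools/list call follows and your tools become available in Continue.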

Step 5: Use Agent Mode
In VS Code, open the Continue panel and ensure you’re in Agent Mode (not Chat mode).

The mode selector is near the input box.
Now type a prompt that should trigger your tool:

Search the documentation for authentication examples
If everything is configured correctly, you’ll see tools/call requests in Visual Studio’s console, and your breakpoints in the tool method will be hit:

The data is returned from your MCP tool in Visual Studio, back to VS Code:

We can further verify this against the mock data in our Visual Studio MCP server instance:
// Sample documents for demo purposes
// In production, this would query a vector database like Elasticsearch
private static readonly List<Document> _sampleDocs = new()
{
    new Document(
        "1",
        "https://docs.example.com/api/authentication",
        "Authentication API",
        "To authenticate with the API, include your API key in the Authorization header. Use the format: Authorization: Bearer YOUR_API_KEY. API keys can be generated from the dashboard."
    ),
    new Document(
        "2",
        "https://docs.example.com/tutorials/getting-started",
        "Getting Started Tutorial",
        "Welcome to our platform! This tutorial will guide you through the initial setup. First, install the SDK using npm install @example/sdk. Then, initialise the client with your credentials."
    ),
    new Document(
        "3",
        "https://docs.example.com/examples/basic-usage",
        "Basic Usage Examples",
        "Here are some common usage patterns. To create a new resource, use client.create(). To fetch existing resources, use client.get(id). To update, use client.update(id, data)."
    ),
    new Document(
        "4",
        "https://docs.example.com/reference/errors",
        "Error Reference",
        "Common error codes: 400 Bad Request - Invalid input parameters. 401 Unauthorized - Missing or invalid API key. 404 Not Found - Resource does not exist. 429 Too Many Requests - Rate limit exceeded."
    )
};
From the above, we can see the content in the document with ID 2 maps to the response in VS Code.
Troubleshooting
If Continue doesn’t discover your tools:
- Check that Visual Studio is running and the server is listening on the correct port
- Verify the YAML file is in the correct location: C:\Users\<username>\.continue\mcpServers\
- Ensure the URL matches exactly: http://localhost:5125/sse
- Restart VS Code after making configuration changes
- Check for port conflicts – only one process can bind to port 5125
~
Debugging Connection Issues
When things go wrong, here are some other things you can try.
Add Request Logging
Add middleware to see what requests are hitting your server:
app.Use(async (context, next) =>
{
    Console.WriteLine($">>> Request: {context.Request.Method} {context.Request.Path}");
    await next();
});
Check for HTTPS Redirect Issues
A common gotcha is HTTPS redirection breaking SSE connections. This caught me out at first. Disable it for development:
if (!app.Environment.IsDevelopment())
{
    app.UseHttpsRedirection();
}
Verify Port Configuration
Check your launchSettings.json to confirm the correct ports:
{
  "profiles": {
    "http": {
      "applicationUrl": "http://localhost:5125"
    },
    "https": {
      "applicationUrl": "https://localhost:7073;http://localhost:5125"
    }
  }
}
Watch for Multiple Instances
If you’re getting unexpected behaviour, check Task Manager for stray dotnet.exe processes. Multiple instances can cause port conflicts.
~
Tips for Triggering Tool Usage
Sometimes you'll find your tool doesn't get called because the user's intent isn't detected. This caught me out initially.
The LLM decides whether to use your tools based on your prompt, so it's important to use precise language. To improve tool invocation, here are some tips.
Be Explicit in Prompts
Use prompts like “search the docs for authentication examples” or “use the documentation tool to find API reference info” rather than “how do I authenticate?” (which might be answered from general knowledge).
Use Clear Tool Descriptions
Your [Description] attributes should be specific and action-oriented. The LLM reads these to decide when to invoke your tool.
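As a rough illustration, compare a vague description with a specific, action-oriented one (both attributes below are hypothetical examples, not from the demo project):

```csharp
// Vague – gives the LLM almost nothing to match a user prompt against
[Description("Gets data")]

// Specific and action-oriented – states the domain, the action, and the output
[Description("Searches the product documentation and returns a synthesised answer with source links")]
```

The second version tells the LLM exactly which prompts the tool is relevant for, which makes invocation far more reliable.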
Check Agent Mode
In tools like Continue, ensure you’re in Agent mode, not Chat mode. Tools only work in Agent mode.
~
Summary
In this post, we’ve covered the fundamentals of building and debugging an MCP server.
In future posts, we’ll explore:
- Query logging for training data collection
- Caching with vector similarity matching
- Authentication and authorisation for MCP endpoints
- Deploying MCP servers to Azure
MCP represents a significant step forward in AI tool interoperability. By adopting this standard, you’re building integrations that will work across an expanding ecosystem of AI assistants.
~
Further Reading and Resources
Some additional resources you might find helpful.
~