Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Build AI Agents with Microsoft Agent Framework in C#


Learn how to build production-ready AI agents in C# using Microsoft Agent Framework. Covers setup, memory management, tools, and multi-agent workflows.

Last Updated: January 2, 2026

I spent the better part of last month trying to figure out which Microsoft AI framework I should actually be using for AI orchestration. Semantic Kernel? AutoGen? Microsoft.Extensions.AI? The answer turned out to be all of them, sort of.

Microsoft Agent Framework is the new kid on the block. It launched in public preview a few months back, and it's basically what happens when the teams behind AutoGen and Semantic Kernel decide to stop maintaining two separate frameworks and build one that doesn't make you choose.

What Is Microsoft Agent Framework?

It's what Microsoft is building to replace both AutoGen and Semantic Kernel. Same teams, one framework.

You get agents that can remember conversations, call C# methods as tools, and coordinate with other agents. The underlying abstraction layer works with OpenAI, Azure OpenAI, Ollama, whatever.

Thread-based state management is built in. So are telemetry, filters, and all the production stuff you'd have to bolt on yourself with the older frameworks.

It's in public preview right now. GA is expected in early 2026.

That means breaking changes could happen. I've already hit a couple while testing. The team removed NotifyThreadOfNewMessagesAsync in one release. Added a breaking change to how you create threads in another. Nothing catastrophic, but worth knowing if you're planning to ship this to production next week.

Why You'd Use This Instead of Semantic Kernel

I asked myself the same question.

Semantic Kernel works fine for prompt chains and function calling. But if you need agents that maintain context across a dozen conversation turns, or coordinate with other agents, Semantic Kernel starts fighting you.

Agent Framework handles that natively. Graph-based execution, conditional routing, persistent threads. The stuff that requires custom plumbing in Semantic Kernel just works here.

A migration path exists if you're already using the older frameworks. They're not going away, just not getting new features.

Setting Up Your First Agent

You'll need .NET 8 or later. I'm using .NET 10, which has Agent Framework baked in with better integration.

Install the packages:

dotnet add package Azure.AI.OpenAI --version 2.1.0
dotnet add package Azure.Identity --version 1.17.1
dotnet add package Microsoft.Extensions.AI.OpenAI --version 10.1.1-preview.1.25612.2
dotnet add package Microsoft.Agents.AI.OpenAI --version 1.0.0-preview.251219.1

The Microsoft.Extensions.AI.OpenAI adapter package is still in preview (the core Microsoft.Extensions.AI abstractions are GA). The Agent Framework packages (Microsoft.Agents.AI.OpenAI) are also preview as of January 2026.

Here's the simplest possible agent:

using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;
using OpenAI;

AIAgent agent = new OpenAIClient("your-api-key")
  .GetChatClient("gpt-4o-mini")
  .AsIChatClient()
  .CreateAIAgent(instructions: "You help developers find accurate technical information.");

var response = await agent.RunAsync("What is C#?");
Console.WriteLine(response);

That's it. You've got an agent.

It won't do much yet. But it exists, it has a personality (defined by the instructions), and it knows how to talk to OpenAI. You can also use Azure OpenAI by swapping OpenAIClient with AzureOpenAIClient and providing your Azure endpoint.
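Here's what that swap looks like as a sketch. The endpoint and deployment name are placeholders for your own Azure resource values, and DefaultAzureCredential comes from the Azure.Identity package installed earlier:

```csharp
using Azure.AI.OpenAI;
using Azure.Identity;
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;

// Endpoint and deployment name are placeholders; substitute your own.
AIAgent agent = new AzureOpenAIClient(
        new Uri("https://your-resource.openai.azure.com"),
        new DefaultAzureCredential())      // keyless auth via Azure.Identity
    .GetChatClient("gpt-4o-mini")          // your Azure deployment name
    .AsIChatClient()
    .CreateAIAgent(instructions: "You help developers find accurate technical information.");

var response = await agent.RunAsync("What is C#?");
Console.WriteLine(response);
```

Everything after the client construction is identical to the OpenAI version, which is the point of the abstraction.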

Adding Memory with Thread Management

Agents need memory. Otherwise every conversation starts from scratch.

Agent Framework handles this with threads. Each thread maintains its own conversation history and context.

AIAgent agent = new OpenAIClient("your-api-key")
  .GetChatClient("gpt-4o-mini")
  .AsIChatClient()
  .CreateAIAgent(instructions: "You are a helpful technical assistant.");

AgentThread thread = agent.GetNewThread();

// First turn
var response1 = await agent.RunAsync(
    "What's the difference between IAsyncEnumerable<T> and Task<List<T>>?",
    thread
);
Console.WriteLine(response1);

// Second turn - agent remembers the context
var response2 = await agent.RunAsync(
    "Which one should I use for streaming large datasets?",
    thread
);
Console.WriteLine(response2);

The thread persists state. Next time you call RunAsync with the same thread, the agent remembers what you talked about.

I tested this with a five-turn conversation about SQL Server indexing. The agent referenced earlier points in the conversation without me having to repeat context. Worked exactly how you'd hope.

Giving Your Agent Tools

Tools are where this framework earned my respect.

You write normal C# methods. Slap some attributes on them. The agent figures out when to call them.

using System.ComponentModel;
using Microsoft.Extensions.AI;

[Description("Gets the current weather for a location")]
async Task<string> GetWeather([Description("City name")] string city)
{
    // Simulate API call
    await Task.Delay(500);
    return $"Sunny, 72°F in {city}";
}

var chatClient = new OpenAIClient("your-api-key")
  .GetChatClient("gpt-4o-mini")
  .AsIChatClient();

AIAgent weatherAgent = chatClient.CreateAIAgent(
    name: "WeatherAgent",
    instructions: "You provide weather information.",
    tools: [AIFunctionFactory.Create(GetWeather)]
);

var response = await weatherAgent.RunAsync("What's the weather in Seattle?");
Console.WriteLine(response);

The agent sees the question, recognizes it needs weather data, calls your GetWeather method, and incorporates the result into its response. You don't write any of that orchestration logic.

You can give an agent multiple tools. The model figures out which ones to use.
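Extending the weather example, here's a sketch of an agent with two tools. GetTime is a made-up second tool for illustration; the registration pattern is the same AIFunctionFactory.Create call as above:

```csharp
using System.ComponentModel;
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;
using OpenAI;

[Description("Gets the current weather for a location")]
async Task<string> GetWeather([Description("City name")] string city)
{
    await Task.Delay(500); // simulate API call
    return $"Sunny, 72°F in {city}";
}

[Description("Gets the current UTC time")]
string GetTime() => DateTime.UtcNow.ToString("u");

AIAgent assistant = new OpenAIClient("your-api-key")
    .GetChatClient("gpt-4o-mini")
    .AsIChatClient()
    .CreateAIAgent(
        name: "Assistant",
        instructions: "You answer questions about weather and time.",
        tools: [AIFunctionFactory.Create(GetWeather), AIFunctionFactory.Create(GetTime)]);

// The model decides which tool (or tools) the question requires.
var response = await assistant.RunAsync("What's the weather in Seattle, and what time is it in UTC?");
Console.WriteLine(response);
```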

I built a documentation agent that could search GitHub, read file contents, and query Stack Overflow. Gave it six different tools. It figured out which ones to use based on the question. Still feels like magic even after testing it fifty times.

Multi-Agent Workflows

Single agents are fine for simple tasks. But some problems need specialization.

You can coordinate multiple agents. Give each one a specific job:

var openAIClient = new OpenAIClient("your-api-key");

var researchAgent = openAIClient
    .GetChatClient("gpt-4o-mini")
    .AsIChatClient()
    .CreateAIAgent(instructions: "You find and verify technical information. Be concise.");

var writerAgent = openAIClient
    .GetChatClient("gpt-4o-mini")
    .AsIChatClient()
    .CreateAIAgent(instructions: "You write clear, concise documentation based on research.");

// Research phase
var researchThread = researchAgent.GetNewThread();
var researchResult = await researchAgent.RunAsync(
    "Provide key technical facts about: async/await in C#",
    researchThread
);
Console.WriteLine($"Research: {researchResult}");

// Writing phase - pass research results to writer
var writerThread = writerAgent.GetNewThread();
var documentation = await writerAgent.RunAsync(
    $"Based on this research, write a brief explanation:\n\n{researchResult}",
    writerThread
);
Console.WriteLine($"Documentation: {documentation}");

You pass a question to the research agent. It does its work. Then you take those results and feed them to the writer agent, which produces documentation.

That's the simple version. You can also build conditional routing, shared state, graph-based patterns. Whatever the workflow needs.

I built a code review workflow with four agents: one that analyzed performance, one that checked security, one that looked for maintainability issues, and one that synthesized everything into actionable feedback. Worked better than I expected.

What About Microsoft.Extensions.AI?

You'll see both names floating around. Here's the distinction.

Microsoft.Extensions.AI is the abstraction layer. It's what lets you write code against IChatClient and swap between OpenAI, Azure OpenAI, or Ollama without changing anything.

Agent Framework sits on top of that. It gives you the agent primitives, thread management, tool orchestration. The actual agentic stuff.

You'll use both. Extensions.AI for the client, Agent Framework for everything else.

Things That Tripped Me Up

Breaking changes. Preview means the API surface can shift. Check the release notes before updating.

Token costs. Agents with memory accumulate conversation history. Long threads mean big token counts. You'll want to implement some kind of summarization or truncation strategy.
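The framework side of history management is still evolving, so here's a framework-agnostic sketch of one simple truncation strategy over a Microsoft.Extensions.AI message list: keep the system message, drop the oldest turns once the list grows past a cap. A real implementation would count tokens rather than messages.

```csharp
using Microsoft.Extensions.AI;

// Keep the system message plus the most recent maxMessages entries.
// A production version would measure tokens, not message counts.
static List<ChatMessage> Truncate(List<ChatMessage> history, int maxMessages)
{
    var system = history.Where(m => m.Role == ChatRole.System).Take(1);
    var recent = history
        .Where(m => m.Role != ChatRole.System)
        .TakeLast(maxMessages);
    return system.Concat(recent).ToList();
}
```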

Error handling. If a tool throws an exception, you need to catch it and return something the agent can understand. Otherwise the conversation just stops.
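A pattern that works: catch inside the tool and return a string the model can reason about. Here's a hypothetical sketch based on the weather tool above:

```csharp
using System.ComponentModel;

[Description("Gets the current weather for a location")]
async Task<string> GetWeather([Description("City name")] string city)
{
    try
    {
        // A real HTTP call to a weather API would go here.
        await Task.Delay(500);
        return $"Sunny, 72°F in {city}";
    }
    catch (HttpRequestException ex)
    {
        // Return something the model can relay or work around,
        // instead of letting the exception kill the conversation.
        return $"Weather service unavailable: {ex.Message}. Suggest the user try again later.";
    }
}
```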

Testing. I'm still figuring out the best way to test agent behavior. Unit testing individual tools is straightforward. Testing multi-turn conversations with nondeterministic responses? Harder.

Is It Ready for Production?

Depends on your risk tolerance.

The underlying Microsoft.Extensions.AI layer is GA. Stable. Supported.

Agent Framework is still in preview with GA expected soon. Microsoft says existing workloads on AutoGen or Semantic Kernel are safe. No breaking changes planned for migration paths. But "no breaking changes planned" isn't the same as "no breaking changes will happen."

If you're building something new, the framework is stable enough for most use cases. Just pin your package versions and watch for updates as it approaches GA.

I've been running Agent Framework in a side project for the last month. Zero production traffic, but enough testing to get a feel for it. It's stable enough that I'm not worried. Just keeping an eye on the GitHub releases.

Frequently Asked Questions

What's the difference between Agent Framework and Semantic Kernel?

Agent Framework is the replacement. Microsoft's consolidating both AutoGen and Semantic Kernel into this.

Main difference is state management. Semantic Kernel doesn't have built-in conversation persistence. Agent Framework does. If you're building anything that needs to remember context beyond a single turn, this is the easier path.

Is Microsoft Agent Framework production-ready?

Depends on your definition of production-ready.

The underlying Microsoft.Extensions.AI layer is GA. That part's stable and supported. Agent Framework itself is still in preview as of January 2026, but it's close to GA.

I've been using it for side projects. Haven't hit anything catastrophic. Just pin your package versions and keep an eye on the release notes. Breaking changes are possible until GA, but Microsoft says the migration paths won't break.

Can I migrate from AutoGen or Semantic Kernel to Agent Framework?

Yes. That's exactly what Microsoft designed this for.

I migrated a Semantic Kernel project last month. Thread management replaced some of my orchestration patterns. Agent definitions replaced others. Took about a day for a medium-sized codebase.

The core abstractions are similar enough that you're not rewriting everything from scratch. And both AutoGen and Semantic Kernel still get security updates, so you're not on a hard deadline.

What AI models does Agent Framework support?

Anything that implements IChatClient from Microsoft.Extensions.AI.

I've tested it with Azure OpenAI, OpenAI, and Ollama. All worked without changing agent logic. That's the whole point of the abstraction layer. Write once, swap providers when your budget or requirements change.
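For example, pointing the agent at a local Ollama model only changes how the IChatClient is constructed; the agent code stays the same. This sketch assumes the Microsoft.Extensions.AI.Ollama preview package (OllamaSharp's client also implements IChatClient) and a local Ollama instance with llama3 pulled:

```csharp
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;

// Assumes a local Ollama server on the default port and the
// Microsoft.Extensions.AI.Ollama preview package.
IChatClient chatClient = new OllamaChatClient(
    new Uri("http://localhost:11434"), "llama3");

AIAgent agent = chatClient.CreateAIAgent(
    instructions: "You are a helpful technical assistant.");

Console.WriteLine(await agent.RunAsync("What is C#?"));
```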

Final Thoughts

Microsoft Agent Framework finally gives .NET developers a first-class way to build AI agents without duct-taping together three different libraries.

If you've been waiting for the AutoGen and Semantic Kernel teams to pick a direction, this is it. Start here. The documentation is solid, the patterns are clear, and the migration path from older frameworks exists.

Just remember it's preview. Pin your versions. Watch for breaking changes. Test your tools thoroughly.

The future of AI in .NET looks like this. You might as well get familiar with it now.

About the Author

I'm Mashrul Haque, a Systems Architect with over 15 years of experience building enterprise applications with .NET, Blazor, ASP.NET Core, and SQL Server. I specialize in Azure cloud architecture, AI integration, and performance optimization.

When production catches fire at 2 AM, I'm the one they call.

- Twitter/X: @mashrulthunder

Read the whole story
alvinashcraft
38 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

A beginner’s guide to Mastodon, the open source Twitter alternative

Unless you're really in the know about nascent platforms, you probably didn't know what Mastodon was until Elon Musk bought Twitter and renamed it X. In the initial aftermath of the acquisition, as users fretted over what direction Twitter would take, millions of users hopped over to Mastodon, a fellow microblogging site. As time […]

DHS Says REAL ID, Which DHS Certifies, Is Too Unreliable To Confirm US Citizenship

An anonymous reader shares a report: Only the government could spend 20 years creating a national ID that no one wanted and that apparently doesn't even work as a national ID. But that's what the federal government has accomplished with the REAL ID, which the Department of Homeland Security (DHS) now considers unreliable, even though getting one requires providing proof of citizenship or lawful status in the country. In a December 11 court filing [PDF], Philip Lavoie, the acting assistant special agent in charge of DHS' Mobile, Alabama, office, stated that, "REAL ID can be unreliable to confirm U.S. citizenship." Lavoie's declaration was in response to a federal civil rights lawsuit filed in October by the Institute for Justice, a public-interest law firm, on behalf of Leo Garcia Venegas, an Alabama construction worker. Venegas was detained twice in May and June during immigration raids on private construction sites, despite being a U.S. citizen. In both instances, Venegas' lawsuit says, masked federal immigration officers entered the private sites without a warrant and began detaining workers based solely on their apparent ethnicity. And in both instances officers allegedly retrieved Venegas' Alabama-issued REAL ID from his pocket but claimed it could be fake. Venegas was kept handcuffed and detained for an hour the first time and "between 20 and 30 minutes" the second time before officers ran his information and released him.

Read more of this story at Slashdot.


How To Build a Developer Career When the First Rung Is Gone


As AI tools become increasingly integrated across many industries and at all levels, there has been a quiet but undeniable shift. The tasks that used to define the start of a developer's journey are disappearing because human involvement in them is becoming increasingly unnecessary.

AI systems now write HTML and CSS layouts practically in an instant. They can configure basic server infrastructure faster than any entry-level flesh-and-blood specialist. They generate boilerplate tests, documentation and repetitive scripts with perfect consistency.

What was once a six-month learning path for a junior engineer has now become something an AI assistant accomplishes in seconds. And it forces a difficult question to come into the spotlight: If the bottom rung of the ladder is gone, how are new developers supposed to climb at all?

The Work That Trained Juniors Is Now AI’s Domain

To put things in perspective, the arrival of Claude 4 Opus changed the conversation among engineering leads overnight. AI wasn't just helping with tedious tasks anymore; it started assisting in architectural thinking and solution design.

Senior developers suddenly gained a partner capable of exploring trade-offs, generating alternative patterns and evaluating system behaviors. But juniors ended up losing the very tasks that once helped them get their footing.

Today, AI already confidently handles:

  • HTML/CSS layout and basic frontend scaffolding
  • Repetitive tasks like simple API handlers or routine refactoring
  • Basic documentation and test generation
  • Simple infrastructure configuration

And it performs all these tasks quickly, consistently, without fuss or losing focus. The honest truth here is that AI’s ability to perform as a junior developer is actually quite impressive. But here’s the problem: If these tasks are no longer done by humans, how can humans learn and grow their own skills?

Learning To Code Feels Like Being Thrown in the Deep End

For a long time, the path into engineering was very hands-on. You just had to do things, plain and simple: Write enough code and eventually you’d build the intuition needed to think like an architect.

Today, with AI taking over a lion’s share of the work, beginners are pretty much asked to skip this part and go straight to senior-level thinking. This is a structural shift, and not at all an easy one.

The entry barrier into this field has suddenly jumped up because AI excels at exactly the kind of tasks juniors were traditionally supposed to struggle through and learn from. Struggle used to be part of the process, but now AI has removed the opportunity for this growth, making things a lot more challenging for newcomers.

And this is where I’d like to give my strongest advice to anyone learning right now: Don’t rely on AI assistants during your training. Yes, it’s tempting. Yes, it feels efficient. But it also prevents you from building the foundation and the instincts you will absolutely need later: understanding how systems behave and how errors emerge. Without personal practice, you won’t develop the thinking patterns that make a good engineer.

What Skills Will Actually Matter?

Looking more broadly, we can already see that the entire profession is shifting. In the future, developers won’t be valued for their coding skills; they’ll be valued for their ability to break down problems into tasks and then guide AI through those tasks and toward a functional solution.

Several qualities will become essential:

  • Structural and architectural thinking: AI can generate implementations, but it still needs human thinking to define the boundaries, constraints and the overall purpose of what it’s supposed to be doing.
  • Product sense: As I see it, future developers will increasingly look more like product managers with a technical background — someone who understands user needs, business value and how to translate that into precise instructions for an AI.
  • Curiosity and resilience: If junior tasks vanish, learning to be a good developer becomes a mostly self-driven process. It will require newcomers to push forward entirely through their own determination.

These are the new criteria — or character traits, even — that I suspect will become the new baseline for hiring developers. In five to seven years, the word “developer” itself will likely mean something very different from today.

The role will likely hybridize: part engineer, part product thinker, part AI systems operator. The main responsibilities for these people will be guiding AI, validating its output and ensuring the end outcomes align with real-world business needs.

Coding will still matter, but mostly as a way to refine and debug what AI produces.

Not the End of Development, but the End of How We Start

It’s easy to look at this change and see it as a loss. But depending on how you view it, it could also be called a type of evolution. Junior coding isn’t vanishing because we no longer “need” developers. It’s vanishing because the definition of “developer” is transforming faster than our current education systems and job ladders can keep up.

The big challenge now is to rethink what an early-career profile in this field should look like, and which skills truly matter for an up-and-coming professional in a world where AI already handles the code.

If we get this right, the next generation of engineers won't be weaker. It will simply be different, shaped by new conditions to which they will have learned to adapt. The introduction of AI into workflows doesn't mean that issues will disappear entirely. New problems will require new solutions, and coming up with new solutions is, ultimately, still a human job.

The post How To Build a Developer Career When the First Rung Is Gone appeared first on The New Stack.


Windows 95 Special Edition - 100 Years of Microsoft Stories

From: Microsoft Developer
Duration: 1:23
Views: 954

Raymond Chen shares a story about the Windows 95 Special Edition.

Go to https://aka.ms/100Years for more stories


2026 predictions from LP-lead investments to IPO mania: Equity crossover


We're bringing you a special TechCrunch podcast crossover episode. Isabelle joins Equity Hosts Kirsten Korosec, Anthony Ha, and Rebecca Bellan to dissect the year's biggest tech developments, from mega AI funding rounds that defied expectations to the rise of "physical AI," and make their calls for 2026. 

The group tackled everything from why AI agents didn't live up to the hype in 2025 (but probably will in 2026), to how Hollywood will push back against AI-generated content, to why VCs are facing a serious liquidity crisis.  

Listen to the full episode to hear: 

  • Why world models are the next big thing in AI and how they're different from large language models 
  • The death of "stealth mode" for AI startups and the rise of alternative funding sources 
  • Predictions on regulatory chaos around AI policy and what Trump's recent executive order means for startups 
  • Hot takes on IPOs: Will OpenAI and Anthropic actually go public in 2026? 
  • Rapid-fire predictions including Johnny Ive and Sam Altman's inevitable public breakup, the return of dumb phones, and why everyone will be calling themselves "AI native" 
  • What's coming in Build Mode season 2: A deep dive into team building, hiring, and finding co-founders 

Chapters:

00:00 Intro - TechCrunch Build Mode & Equity Crossover Episode

00:27 Meet the Hosts - Predictions Episode Introduction

02:49 Reviewing 2024 Predictions - The Mega Funding Rounds

05:40 AI Startup Funding Challenges and Alternative Capital Sources

08:05 2026 AI Predictions - World Models and the Next Evolution

12:41 Physical AI - The Intersection of Robotics and Intelligence

14:07 AI in Media and Content Creation

18:48 Netflix-Warner Brothers Deal and FTC Predictions

21:09 The LP Direct Investment Trend

23:26 IPOs and Deep Tech Capital Challenges

25:49 Startup Battlefield Trends - Verticalized AI Across Industries

28:08 Rapid Fire Predictions - Fashion, Self-Driving Cars, and More

30:25 The Dumb Phone Comeback and Foldable iPhones

32:51 Build Mode Season 2 Preview - People and Team Building

New episodes of Build Mode drop every Thursday. Isabelle Johannessen is our host. Build Mode is produced and edited by Maggie Nye. Audience Development is led by Morgan Little. And a special thanks to the Foundry and Cheddar video teams. 





Download audio: https://traffic.megaphone.fm/TCML5550429529.mp3