Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Microsoft Agent Framework Version 1.0

1 Share

Today, we’re thrilled to announce that Microsoft Agent Framework has reached version 1.0 for both .NET and Python. This is the production-ready release: stable APIs and a commitment to long-term support. Whether you’re building a single assistant or orchestrating a fleet of specialized agents, Agent Framework 1.0 gives you enterprise-grade multi-agent orchestration, multi-provider model support, and cross-runtime interoperability via A2A and MCP.

When we introduced Microsoft Agent Framework last October, we set out to unify the enterprise-ready foundations of Semantic Kernel with the innovative orchestrations of AutoGen into a single, open-source SDK. When we hit Release Candidate in February, we locked the feature surface and invited the community to put it through its paces. Today, after months of feedback, hardening, and real-world validation with customers and partners, Agent Framework 1.0 is ready for production.

Create Your First Agent

Getting started takes just a few lines of code. Here’s how to create a simple agent in both languages.

Python

# pip install agent-framework
# Use `az login` to authenticate with Azure CLI

import asyncio

from agent_framework import Agent
from agent_framework.foundry import FoundryChatClient
from azure.identity import AzureCliCredential

agent = Agent(
    client=FoundryChatClient(
        project_endpoint="https://your-project.services.ai.azure.com",
        model="gpt-5.3",
        credential=AzureCliCredential(),
    ),
    name="HelloAgent",
    instructions="You are a friendly assistant.",
)

print(asyncio.run(agent.run("Write a haiku about shipping 1.0.")))

.NET

// dotnet add package Microsoft.Agents.AI
using Microsoft.Agents.AI;
using Microsoft.Agents.AI.Foundry;
using Azure.Identity;

// Authenticate with `az login` before running.
var agent = new AIProjectClient(
        new Uri("https://your-project.services.ai.azure.com"),
        new AzureCliCredential())
    .GetResponsesClient("gpt-5.3")
    .AsAIAgent(
        name: "HaikuBot", 
        instructions: "You are an upbeat assistant that writes beautifully."
    );

Console.WriteLine(await agent.RunAsync("Write a haiku about shipping 1.0."));

That’s it – a working AI agent in a handful of lines. From here you can add function tools, sessions for multi-turn conversations, streaming responses, and more.
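To make the “function tools” mention concrete, here is a minimal, hedged sketch: a plain Python function that an agent could call as a tool. The `tools=[...]` wiring shown in the comment is an assumption based on common agent-SDK conventions, not a verbatim Agent Framework 1.0 signature.

```python
# Hypothetical example of a function tool: an ordinary Python function
# with type hints and a docstring that the framework can expose to the model.

def get_weather(city: str) -> str:
    """Return a canned weather report for `city` (stand-in for a real API call)."""
    reports = {"Seattle": "rainy, 54F", "Phoenix": "sunny, 98F"}
    return reports.get(city, "no data")

# Wiring it into an agent might look like this (the `tools` parameter
# name is an assumption for illustration):
# agent = Agent(
#     client=client,
#     name="WeatherAgent",
#     instructions="Answer weather questions using your tools.",
#     tools=[get_weather],
# )

print(get_weather("Seattle"))  # → rainy, 54F
```

The point is that a tool is just a well-typed, well-documented function; the framework handles describing it to the model and invoking it when the model asks.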

Multi-Agent Workflows

Single agents are powerful, but real-world applications often need multiple agents working together. Here’s a sequential workflow where a copywriter drafts a tagline and a reviewer provides feedback:

Python

import asyncio
from typing import cast

from agent_framework import Agent, Message
from agent_framework.foundry import FoundryChatClient
from agent_framework.orchestrations import SequentialBuilder
from azure.identity import AzureCliCredential


async def main() -> None:
    # endpoint and model are read from environment variables; authenticate via `az login`
    client = FoundryChatClient(credential=AzureCliCredential())

    writer = Agent(
        client=client,
        instructions="You are a concise copywriter. Provide a single, punchy marketing sentence.",
        name="writer",
    )

    reviewer = Agent(
        client=client,
        instructions="You are a thoughtful reviewer. Give brief feedback on the previous message.",
        name="reviewer",
    )


    workflow = SequentialBuilder(participants=[writer, reviewer]).build()
    outputs: list[list[Message]] = []
    async for event in workflow.run("Write a tagline for Microsoft Agent Framework 1.0.", stream=True):
        if event.type == "output":
            outputs.append(cast(list[Message], event.data))
    if outputs:
        for msg in outputs[-1]:
            print(f"[{msg.author_name or 'user'}]: {msg.text}")

if __name__ == "__main__":
    asyncio.run(main())

Check out the .NET version here: Getting Started with Workflows

What’s in version 1.0

Version 1.0 represents the features we’ve battle-tested, stabilized, and committed to supporting with full backward compatibility going forward.

  • Single Agent and Service Connectors – The core agent abstraction is stable and production-ready across both .NET and Python. Agent Framework ships with first-party service connectors for Microsoft Foundry, Azure OpenAI, OpenAI, Anthropic Claude, Amazon Bedrock, Google Gemini and Ollama. [Learn more… ]
  • Middleware Hooks – The middleware pipeline lets you intercept, transform, and extend agent behavior at every stage of execution: content safety filters, logging, compliance policies, custom logic – all without modifying agent prompts. [Learn more… ]
  • Agent Memory and Context Providers – Pluggable memory architecture supporting conversational history, persistent key-value state, and vector-based retrieval. Choose your backend: Memory in Foundry Agent Service, Mem0, Redis, Neo4j or use a custom store. [Learn more…]
  • Agent Workflows – The graph-based workflow engine for composing agents and functions into deterministic, repeatable processes is now stable. Build workflows that combine agent reasoning with business logic, branch on conditions, fan out to parallel steps, and converge results. Checkpointing and hydration ensure long-running processes survive interruptions. [Learn more… ]
  • Multi-Agent Orchestration – Stable support for the orchestration patterns that emerged from Microsoft Research and AutoGen: sequential, concurrent, handoff, group chat, and Magentic-One. All patterns support streaming, checkpointing, human-in-the-loop approvals, and pause/resume for long-running workflows. [Learn more…]
  • Declarative Agents and Workflows (YAML) – Define agents’ instructions, tools, memory configuration, and orchestration topology in version-controlled YAML files, then load and run them with a single API call. [Learn more… ]
  • A2A and MCP – MCP (Model Context Protocol) support lets agents dynamically discover and invoke external tools exposed by MCP-compliant servers. A2A (Agent-to-Agent) protocol support enables cross-runtime agent collaboration – your agents can coordinate with agents running in other frameworks using structured, protocol-driven messaging (A2A 1.0 support coming soon). [Learn more…]
  • Migration Assistants (Semantic Kernel and AutoGen) – For teams coming from Semantic Kernel or AutoGen, migration assistants analyze your existing code and generate step-by-step migration plans. The Semantic Kernel migration guide and AutoGen migration guide provide detailed walkthroughs.
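The middleware idea above can be sketched in plain Python. This is a hedged illustration of the interception pattern only – the names here (`Handler`, the chained-wrapper shape) are invented for the example and are not the Agent Framework middleware API:

```python
# Illustrative-only sketch of middleware: a wrapper that intercepts the
# prompt on the way in and could equally inspect the response on the way out.
import asyncio
from typing import Awaitable, Callable

Handler = Callable[[str], Awaitable[str]]

def redaction_middleware(next_handler: Handler) -> Handler:
    """Wrap a handler so sensitive strings are scrubbed before the model sees them."""
    async def handler(prompt: str) -> str:
        cleaned = prompt.replace("SECRET_TOKEN", "[redacted]")  # content-safety step
        return await next_handler(cleaned)
    return handler

async def fake_model(prompt: str) -> str:
    # Stand-in for the actual model call at the end of the pipeline.
    return f"echo: {prompt}"

async def main() -> None:
    pipeline = redaction_middleware(fake_model)
    print(await pipeline("Deploy with SECRET_TOKEN now"))  # → echo: Deploy with [redacted] now

asyncio.run(main())
```

Because each middleware wraps the next handler, filters, loggers, and compliance checks compose without touching agent prompts – which is the property the stable middleware pipeline is designed around.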

What’s new since the October release

We’re also shipping with preview features that are functional and available for early adoption. These APIs may evolve based on community feedback before reaching ‘stable’.

  • DevUI – browser-based local debugger for visualizing agent execution, message flows, tool calls, and orchestration decisions in real time. [Learn more… ]
  • Foundry Hosted Agent Integration – run Agent Framework agents as managed services on Microsoft Foundry or as Azure Durable Functions. [Learn more… ]
  • Foundry Tools, Memory, Observability and Evaluations – deep integration with Foundry’s managed tool ecosystem, memory, and OpenTelemetry-powered observability and evaluations dashboards. [Learn more… ]
  • AG-UI / CopilotKit / ChatKit – stream agent output to frontend surfaces with adapters for CopilotKit and ChatKit, including tool execution status and human-in-the-loop flows. [Learn more… ]
  • Skills – reusable domain capability packages (instructions + scripts + resources) that give agents structured capabilities out of the box. [Learn more… ]
  • GitHub Copilot SDK and Claude Code SDK — use GitHub Copilot SDK or Claude Code as an agent harness directly from your Agent Framework orchestration code. These SDKs handle the autonomous agent loop – planning, tool execution, file edits, and session management – and Agent Framework wraps them, letting you compose a coding-capable agent alongside other agents (Azure OpenAI, Anthropic, custom) in the same multi-agent workflow. [Learn more… ]
  • Agent Harness — customizable Microsoft Agent Framework harness and local runtime giving agents access to shell, file system, and messaging loop for coding agents, automation, and personal assistant patterns. [Learn more… ]

Microsoft Agent Framework DevUI

Getting Started

If you’ve been running on the RC packages, upgrading to 1.0 is a package version bump. If you’re new to Agent Framework, install and go:

Python

pip install agent-framework

.NET

dotnet add package Microsoft.Agents.AI

Check out the quickstart guide for walkthroughs in both languages, or jump straight to the samples on GitHub.

Coming from AutoGen or Semantic Kernel? Now is the time to migrate to Microsoft Agent Framework. The Semantic Kernel migration guide and AutoGen migration guide provide detailed walkthroughs.

What’s Next

Version 1.0 is a beginning, not a destination. We’re continuing to invest in graduating preview features, expanding the connector and skills ecosystem, deepening Foundry integration, and incorporating the latest orchestration research from Microsoft Research. The framework is 100% open source – we build in the open, and your feedback and contributions shape what comes next.

Thank you to everyone who filed issues, submitted PRs, tested release candidates, and pushed us to make Agent Framework better. This milestone belongs to the community as much as it does to the team.

Get started today:

The post Microsoft Agent Framework Version 1.0 appeared first on Microsoft Agent Framework.

Read the whole story
alvinashcraft
16 seconds ago
reply
Pennsylvania, USA
Share this story
Delete

Claude Code’s Source Leaks, OpenAI Exits Video Generation, Gemini Adds Music Generation, LLMs Learn at Inference

The Batch AI News and Insights: Voice-based AI that you can talk to is improving rapidly, yet most people still don’t appreciate how pervasive voice UIs (user interfaces) will become.

Perplexity's 'Incognito Mode' Is a 'Sham,' Lawsuit Says

An anonymous reader quotes a report from Ars Technica: Perplexity's AI search engine encourages users to go deeper with their prompts by engaging in chat sessions that a lawsuit has alleged are often shared in their entirety with Google and Meta without users' knowledge or consent. "This happened to every user regardless of whether or not they signed up for a Perplexity account," the lawsuit alleged, while stressing that "enormous volumes of sensitive information from both subscribed and non-subscribed users" are shared. Using developer tools, the lawsuit found that opening prompts are always shared, as are any follow-up questions the search engine asks that a user clicks on. Privacy concerns are seemingly worse for non-subscribed users, the complaint alleged. Their initial prompts are shared with "a URL through which the entire conversation may be accessed by third parties like Meta and Google." Disturbingly, the lawsuit alleged, chats are also shared with personally identifiable information (PII), even when users who want to stay anonymous opt to use Perplexity's "Incognito Mode." That mode, the lawsuit charged, is a "sham." "'Incognito' mode does nothing to protect users from having their conversations shared with Meta and Google," the complaint said. "Even paid users who turned on the 'Incognito' feature still had their conversations shared with Meta and Google, along with their email addresses and other identifiers that allowed Meta and Google to personally identify them." "Perplexity's failure to inform its users that their personal information has been disclosed to Meta and Google or to take any steps to halt the continued disclosure of users' information is malicious, oppressive, and in reckless disregard" of users' rights, the lawsuit alleged. "Nothing on Perplexity's website warns users that their conversations with its AI Machine will be shared with Meta and Google," Doe alleged. 
"Much less does Perplexity warn subscribed users that its 'Incognito Mode' does not function to protect users' private conversations from disclosure to companies like Meta and Google."

Read more of this story at Slashdot.


Some Engineering Teams Won’t Be Ready for AI Orchestration – and It Will Cost Them


It’s a question many engineering teams aren’t ready to answer honestly.

Partly because the answer changes depending on who you ask, and partly because the two emerging answers point in completely opposite directions.

Iain Bishop, CEO of Damala Technology and a former CTO with over two decades of experience, believes that “there are uneven gains with AI at the moment.”

Some teams are moving fast – shipping more, experimenting, shaping decisions, and owning outcomes. Others are still treating AI like a smarter autocomplete, focusing on infrastructure and reliability. The gap between these groups, Iain believes, is only going to grow.

Soon, devs won’t just use AI – they will coordinate it

Most teams today are still operating in what Iain describes as the copilot phase.

AI sits alongside developers, helping them generate code, suggest improvements, or speed up repetitive tasks. It’s useful, but it doesn’t fundamentally change how work is structured, though that could change soon, Iain believes.

What we’ll see over time is a move from a copilot model to an orchestration model.

In that world, developers don’t just use AI, they coordinate it. Instead of writing everything themselves, they manage multiple AI agents, assign tasks, validate outputs, and connect everything into a working system. The role shifts from execution to direction.

You’re still accountable, no matter how smart AI becomes

As tools become more powerful, there’s a growing temptation to push more responsibility onto them. Iain sees that as a dangerous path:

If AI is just like a co-worker, it isn’t truly autonomous and we remain accountable no matter how powerful the tools are.

The risk isn’t that AI will take control. It’s that teams will give it up too easily: 

If we allow AI tools to operate completely autonomously, we lose that accountability. And that’s the wrong approach.

This means developers aren’t becoming less responsible; they’re becoming more so. They’re accountable not just for what they write, but for what they orchestrate.

AI’s first impact won’t be mass layoffs, it will be role compression

AI’s first big impact won’t be mass layoffs, it will be role compression. “In the coming years, teams will shrink, and people will need to wear multiple hats,” Iain says.

The lines between traditional roles are starting to blur: you’ll see more product engineers building AI-driven solutions. At the same time, deep technical expertise won’t disappear; if anything, it becomes even more critical, Iain explains.

There will always be a need for systems engineers who understand what good code looks like.

As AI generates more code, someone still needs to ensure the architecture makes sense.

Iain sees two clear paths:

  1. Toward product – understanding users, business needs, and delivering end-to-end solutions;
  2. Deeper into systems – architecture, design, and scalability.

“The risk is for engineers who stay in the middle,” he says. “With AI handling more execution, being just kind of technical and kind of product-aware may no longer be enough.”

Structuring AI lets teams move fast without losing control

Most companies aren’t struggling with what AI can do, they’re struggling with how to manage it, Iain says: “There’s a rapid pace of change, and companies need to get control of what’s happening.”

The instinct is to lock things down (limit tools, restrict access, add heavy governance) but engineers will find ways around it.

A more sustainable path is to structure how AI is used. Iain points to orchestration platforms, where standards, design systems, and governance are built into AI workflows. This lets teams move fast without losing control, and ensures organisations don’t have to choose between speed and consistency. Control comes not just from systems, but from people understanding the tools they’re using.

Knowing how to use new models won’t come automatically

With all the focus on automation, one skill is quietly becoming critical: communication.

Iain says that for teams new to AI, it’s about more than prompts – it’s understanding models, structuring context, and guiding outputs into something usable.

Prompt engineering is really about creating the right context to get the best response.

This changes how developers work. Instead of writing everything, they guide systems, shape inputs, and validate outputs. Models will keep improving, that’s inevitable, but knowing how to use them well won’t be automatic.

The post Some Engineering Teams Won’t Be Ready for AI Orchestration – and It Will Cost Them appeared first on ShiftMag.


The Download: LiteLLM hacked, Pretext layout engine, OpenAI news & more

From: GitHub
Duration: 5:08
Views: 156

Welcome back to The Download! This week, we cover the serious supply chain attack on the LiteLLM Python package and OpenAI's intent to acquire Astral. We also look at Pretext, a new layout engine designed to help your browser handle complex tasks with ease. Plus, learn how to turn your GitHub contributions into a 3D pixel art city. Drop a comment and let us know which update is your favorite!

#DevNews #OpenAI #GitHub

— CHAPTERS —

00:00 Welcome to the Download
00:30 Pretext high performance layout engine
01:00 LiteLLM Python package supply chain attack
02:09 OpenAI to acquire Astral
02:49 GitHub Actions native timezone support
03:18 Agentevals for AI system reliability
04:05 Turn your GitHub profile into Git City
04:53 Outro

— RESOURCES —

Pretext
https://chenglou.me/pretext/
https://x.com/_chenglou/status/2037713766205608234

LiteLLM
https://futuresearch.ai/blog/no-prompt-injection-required/
https://snyk.io/articles/poisoned-security-scanner-backdooring-litellm/

Astral Acquisition
https://openai.com/index/openai-to-acquire-astral/

GitHub Actions Crons support
https://github.blog/changelog/2026-03-19-github-actions-late-march-2026-updates/#github-actions-timezone-support-for-scheduled-workflows

https://www.linkedin.com/posts/bengotch_you-know-what-feels-good-on-a-friday-morning-activity-7440808005568258048-KXSL?utm_source=share&utm_medium=member_desktop&rcm=ACoAABuOjfwBvMcYGBdcy1lJ550ifkI_DwoPEYc

Agent Evals:
https://www.solo.io/press-releases/introducing-new-agentic-open-source-project-agentevals

Open Source Project:
https://github.com/srizzon/git-city


About GitHub
It’s where over 180 million developers create, share, and ship the best code possible. It’s a place for anyone, from anywhere, to build anything—it’s where the world builds software. https://github.com


Random.Code() - Managing Properties From Records in C#, Part 6

From: Jason Bock
Duration: 1:11:13
Views: 17

I hope to finish the majority of the work I've been doing to customize equality operations on a C# record in this stream.

https://github.com/JasonBock/Transpire/issues/44

#dotnet #csharp
