







Learn the steps to upgrade older versions of SQL Server Developer edition to SQL Server 2025 with our easy-to-follow guide.
The post Upgrade to SQL Server 2025 Enterprise Developer Edition appeared first on MSSQLTips.com.
We’re thrilled to announce that Microsoft Agent Framework has reached Release Candidate status for both .NET and Python. Release Candidate is an important milestone on the road to General Availability — it means the API surface is stable, and all features that we intend to release with version 1.0 are complete. Now is the time to move your Semantic Kernel project to Microsoft Agent Framework and give us your feedback before final release. Whether you’re building a single helpful assistant or orchestrating a team of specialized agents, Agent Framework gives you a consistent, multi-language foundation to do it.
Microsoft Agent Framework is a comprehensive, open-source framework for building, orchestrating, and deploying AI agents. It’s the successor to Semantic Kernel and AutoGen, and it provides a unified programming model across .NET and Python.
If you’ve been building agents with Semantic Kernel or AutoGen, Agent Framework is the natural next step. We’ve published detailed migration guides to help you transition.
Getting started takes just a few lines of code. Here’s how to create a simple agent in both languages.
Python
pip install agent-framework --pre
import asyncio
from agent_framework.azure import AzureOpenAIResponsesClient
from azure.identity import AzureCliCredential
async def main():
    agent = AzureOpenAIResponsesClient(
        credential=AzureCliCredential(),
    ).as_agent(
        name="HaikuBot",
        instructions="You are an upbeat assistant that writes beautifully.",
    )
    print(await agent.run("Write a haiku about Microsoft Agent Framework."))

if __name__ == "__main__":
    asyncio.run(main())
.NET
dotnet add package Microsoft.Agents.AI.OpenAI --prerelease
dotnet add package Azure.Identity
using System.ClientModel.Primitives;
using Azure.Identity;
using Microsoft.Agents.AI;
using OpenAI;
using OpenAI.Responses;
// Replace <resource> and gpt-4.1 with your Azure OpenAI resource name and deployment name.
var agent = new OpenAIClient(
    new BearerTokenPolicy(new AzureCliCredential(), "https://ai.azure.com/.default"),
    new OpenAIClientOptions() { Endpoint = new Uri("https://<resource>.openai.azure.com/openai/v1") })
    .GetResponsesClient("gpt-4.1")
    .AsAIAgent(name: "HaikuBot", instructions: "You are an upbeat assistant that writes beautifully.");
Console.WriteLine(await agent.RunAsync("Write a haiku about Microsoft Agent Framework."));
That’s it — a working AI agent in a handful of lines. From here you can add function tools, sessions for multi-turn conversations, streaming responses, and more.
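For example, a function tool can be an ordinary Python function. The sketch below assumes as_agent accepts a tools parameter that takes annotated callables; the parameter name and tool-declaration details are assumptions, so check the current docs for the exact signature:

import asyncio
from typing import Annotated
from agent_framework.azure import AzureOpenAIResponsesClient
from azure.identity import AzureCliCredential

# Illustrative tool: an ordinary Python function the model can call.
def count_words(text: Annotated[str, "The text to count words in"]) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

async def main():
    agent = AzureOpenAIResponsesClient(
        credential=AzureCliCredential(),
    ).as_agent(
        name="HaikuBot",
        instructions="You are an upbeat assistant that writes beautifully.",
        tools=[count_words],  # assumed parameter name; see the docs for the exact signature
    )
    print(await agent.run("Write a haiku and report its word count."))

if __name__ == "__main__":
    asyncio.run(main())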
Single agents are powerful, but real-world applications often need multiple agents working together. Agent Framework ships with a workflow engine that lets you compose agents into orchestration patterns — sequential, concurrent, handoff, and group chat — all with streaming support built in.
Here’s a sequential workflow where a copywriter agent drafts a tagline and a reviewer agent provides feedback:
Python
pip install agent-framework-orchestrations --pre
import asyncio
from typing import cast
from agent_framework import Message
from agent_framework.azure import AzureOpenAIChatClient
from agent_framework.orchestrations import SequentialBuilder
from azure.identity import AzureCliCredential
async def main() -> None:
    client = AzureOpenAIChatClient(credential=AzureCliCredential())
    writer = client.as_agent(
        instructions="You are a concise copywriter. Provide a single, punchy marketing sentence based on the prompt.",
        name="writer",
    )
    reviewer = client.as_agent(
        instructions="You are a thoughtful reviewer. Give brief feedback on the previous assistant message.",
        name="reviewer",
    )
    # Build sequential workflow: writer -> reviewer
    workflow = SequentialBuilder(participants=[writer, reviewer]).build()
    # Run and collect outputs
    outputs: list[list[Message]] = []
    async for event in workflow.run("Write a tagline for a budget-friendly eBike.", stream=True):
        if event.type == "output":
            outputs.append(cast(list[Message], event.data))
    if outputs:
        for msg in outputs[-1]:
            name = msg.author_name or "user"
            print(f"[{name}]: {msg.text}")

if __name__ == "__main__":
    asyncio.run(main())
.NET
dotnet add package Microsoft.Agents.AI.Workflows --prerelease
using System.ClientModel.Primitives;
using Azure.Identity;
using Microsoft.Agents.AI;
using Microsoft.Agents.AI.Workflows;
using Microsoft.Extensions.AI;
using OpenAI;
// Replace <resource> and gpt-4.1 with your Azure OpenAI resource name and deployment name.
var chatClient = new OpenAIClient(
    new BearerTokenPolicy(new AzureCliCredential(), "https://ai.azure.com/.default"),
    new OpenAIClientOptions() { Endpoint = new Uri("https://<resource>.openai.azure.com/openai/v1") })
    .GetChatClient("gpt-4.1")
    .AsIChatClient();
ChatClientAgent writer = new(chatClient,
    "You are a concise copywriter. Provide a single, punchy marketing sentence based on the prompt.",
    "writer");
ChatClientAgent reviewer = new(chatClient,
    "You are a thoughtful reviewer. Give brief feedback on the previous assistant message.",
    "reviewer");
// Build sequential workflow: writer -> reviewer
Workflow workflow = AgentWorkflowBuilder.BuildSequential(writer, reviewer);
List<ChatMessage> messages = [new(ChatRole.User, "Write a tagline for a budget-friendly eBike.")];
await using StreamingRun run = await InProcessExecution.RunStreamingAsync(workflow, messages);
await run.TrySendMessageAsync(new TurnToken(emitEvents: true));
await foreach (WorkflowEvent evt in run.WatchStreamAsync())
{
    if (evt is AgentResponseUpdateEvent e)
    {
        Console.Write(e.Update.Text);
    }
}
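Sequential is only one of the patterns. As a rough sketch, a concurrent fan-out in Python might look like the following, assuming the orchestrations package also exposes a ConcurrentBuilder with the same constructor and streaming surface as SequentialBuilder (the class name, import path, and event shape here are assumptions, not confirmed API):

import asyncio
from agent_framework.azure import AzureOpenAIChatClient
from agent_framework.orchestrations import ConcurrentBuilder  # assumed name and import path
from azure.identity import AzureCliCredential

async def main() -> None:
    client = AzureOpenAIChatClient(credential=AzureCliCredential())
    optimist = client.as_agent(instructions="Argue briefly for the idea.", name="optimist")
    skeptic = client.as_agent(instructions="Argue briefly against the idea.", name="skeptic")
    # Fan the same prompt out to both agents in parallel and print whatever each produces.
    workflow = ConcurrentBuilder(participants=[optimist, skeptic]).build()
    async for event in workflow.run("Should we ship a budget-friendly eBike?", stream=True):
        if event.type == "output":  # assumed event shape, mirroring the sequential example above
            print(event.data)

if __name__ == "__main__":
    asyncio.run(main())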
This Release Candidate represents an important step toward General Availability. We encourage you to try the framework and share your feedback — your input is invaluable as we finalize the release in the coming weeks. Reach out to us on GitHub or on Discord.
For more information, check out our documentation and examples on GitHub, and install the latest packages from NuGet (.NET) or PyPI (Python).
The post Migrate your Semantic Kernel and AutoGen projects to Microsoft Agent Framework Release Candidate appeared first on Semantic Kernel.

On New Year's Day 2026, while many were recovering from the night before, a different kind of hangover took hold of every AI-pilled, chronically online software engineer. Steve Yegge published a new blog post: "Welcome to Gas Town." Some walked away inspired to finally use their agents optimally; others were just plain confused. If you're like me, you felt a bit of both.
Yegge's 34-minute post is a sprawling vision filled with futuristic ideas, playful characters, and enough side tangents to make your head spin. But underneath the lore is a massive architectural shift. I want to take a step back and simplify the "Big Idea" for everyone: Gas Town is a philosophy and a proof of concept to help people coordinate multiple agents working together.
Most people use AI agents sequentially. The workflow looks something like this: give the agent a task, wait while it works, review the output, then hand it the next task.
You've built a project in 30 minutes, which is fast, but you spent most of that time just watching a progress bar. Some engineers realized that if you can run one agent, you can just as easily run another five at the same time.
For example, while Agent A builds the API, Agent B can start the frontend, Agent C can write tests, and Agent D can investigate a bug in that legacy codebase you've been avoiding.
This is how people are buying their time back. They're getting entire sprints done in an hour by running parallel threads. (Just don't tell your boss because the reward for finishing work is always more work.)
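Mechanically there's nothing exotic about the parallel part; it's just kicking off independent runs at the same time. Here's a toy Python sketch of the idea, with placeholder coroutines standing in for real agent sessions:

import asyncio

# Placeholder for "hand this task to an agent and wait for the result".
async def run_agent(name: str, task: str) -> str:
    await asyncio.sleep(1)  # stand-in for a long-running agent session
    return f"{name} finished: {task}"

async def main() -> None:
    # Four independent work streams, all running at the same time.
    results = await asyncio.gather(
        run_agent("Agent A", "build the API"),
        run_agent("Agent B", "start the frontend"),
        run_agent("Agent C", "write tests"),
        run_agent("Agent D", "investigate the legacy bug"),
    )
    for line in results:
        print(line)

asyncio.run(main())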
However, since the agents don't communicate with each other, this approach introduces new problems; chief among them, you become the coordinator, babysitting every thread yourself.
Gas Town is designed to stop the babysitting. It coordinates the distribution of tasks among parallel agents so you don't have to, and it introduces a cast of characters to do it.
I won't list every single character here (it gets deep), but the takeaway is: Gas Town creates a chain of command with a shared way to communicate.
This is exactly the kind of futuristic thinking we're building toward at goose. So the goose team, specifically Tyler Longwell, built our own take on this called Goosetown.
Goosetown is a multi-agent orchestration layer built on top of goose. Like Gas Town, it coordinates parallel agents. Unlike Gas Town, it's deliberately minimal and built for research-first parallel work.
When you give Goosetown a task, the main agent acts as an Orchestrator, breaking the job into phases: research, build, and review. Then, it spawns parallel delegates to get it done. Each delegate communicates via a shared Town Wall, an append-only log where every agent posts what they're doing and what they've found.
In one real session, parallel researchers posting to the Town Wall converged on a pivot quickly.
Goosetown is built on four components: skills, subagents, beads, and the gtwall.
Skills are Markdown files that describe how to do something like "how to deploy to production." Goosetown uses these to tell each Delegate how to do its specific job. When a Delegate spawns, it's "pre-loaded" with the skill for its role (Orchestrator, Researcher, Writer, Reviewer).
Instead of doing everything in one long conversation that eventually hits a "context cliff," Goosetown uses subagents: ephemeral agent instances triggered by the summon extension, which uses delegate() to hand off work to a fresh agent. Each subagent does the work in its own clean context and returns a summary, keeping your main session fast and focused.
Goosetown uses Beads to track progress so work survives crashes. It's a local issue tracker based on Git. The Orchestrator creates issues, delegates update them, and if a session fails, the next agent picks up the "bead" and continues the work.
The gtwall is an append-only log that delegates use to communicate and coordinate: every delegate posts its activity there and reads everyone else's.
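To make that concrete, here is a toy sketch of the pattern, not Goosetown's actual implementation: an append-only file that every agent writes to and reads from, so coordination happens through a shared, ordered history rather than direct messages.

from datetime import datetime, timezone
from pathlib import Path

WALL = Path("town_wall.log")  # illustrative path, not Goosetown's real file layout

def post(author: str, message: str) -> None:
    """Append a timestamped entry; nothing is ever edited or deleted."""
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    with WALL.open("a", encoding="utf-8") as f:
        f.write(f"{stamp} [{author}] {message}\n")

def read_wall() -> list[str]:
    """Every delegate reads the same ordered history before acting."""
    return WALL.read_text(encoding="utf-8").splitlines() if WALL.exists() else []

post("researcher-1", "Auth uses OAuth2 client credentials, not API keys. Pivoting.")
post("researcher-2", "Confirmed: token endpoint is per-tenant. Updating the plan.")
for entry in read_wall():
    print(entry)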
I mostly use Goosetown when I'm trying to answer something that has a lot of independent angles, where missing a constraint is more expensive than spending extra tokens. For example: integration research ("how does system X actually authenticate?") or migration planning ("what would break first if we moved?").
There's a real tax to running multiple agents. If I can describe the task in one paragraph and I already know where to start, I don't need Goosetown. Parallelism improves throughput, but it adds coordination overhead.
The next improvements I care about are mostly about making failure modes cheaper. The system already has turn limits and a flat hierarchy, but it doesn't yet have good cost controls beyond that. Token budgets and basic circuit breakers would make it harder for a delegate to burn a surprising amount of compute in a tight loop.
On the coordination side, I'm interested in adding a little more structure without turning it into a framework. Even lightweight conventions, like consistent prefixes on wall messages or a clearer artifact layout, can reduce synthesis work.
Longer term, goose has roadmap work around more structured agent-to-agent communication. If that lands, it might replace parts of the wall. The tradeoff is the one we've been making all along: structure buys you scale and tooling; simplicity buys you debuggability and the ability to change policy by editing a markdown file.
— Tyler Longwell
Ready to try parallel agentic engineering for yourself? Goosetown is open source and available on GitHub. Clone the repo, follow the setup instructions in the README, and you'll be orchestrating multiple agents in no time. If you're new to this workflow, watching the video below is a great way to see what a real session looks like before diving in.
$PSDefaultParameterValues leak causing tests to skip unexpectedly (#26705)
Microsoft.PowerShell.PSResourceGet version to 1.2.0-rc3 (#26767)
Microsoft.PowerShell.Native package version (#26748)
buildinfo.json uploading for preview, LTS, and stable releases (#26715)
metadata.json to update the Latest attribute with a better name (#26708)
runCodesignValidationInjection variable from pipeline templates (#26707)
Get-ChangeLog to handle backport PRs correctly (#26706)
v7.6.0-preview.6 release (#26626)
AfterAll cleanup if the initial setup in BeforeAll failed (#26622)