Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

What’s new with GitHub Copilot coding agent


You open an issue before lunch. By the time you’re back, there’s a pull request waiting.

That’s what GitHub Copilot coding agent is built for. It works in the background, fixing bugs, adding tests, cleaning up debt, and comes back with a pull request when it’s done. While you’re writing code in your editor with Copilot in real time, the coding agent is handling the work you’ve delegated.

A few recent updates make that handoff more useful. Here’s what shipped and how to start using it.

Visual learner? Watch the video above! ☝️

Choose the right model for each task

The Agents panel now includes a model picker.

Before, every background task ran on the same default model. You couldn’t opt for a more capable model on harder work or prioritize speed on routine tasks.

Now you can. Use a faster model for straightforward work like adding unit tests. Upgrade your model for a gnarly refactor or integration tests with real edge cases. If you’d rather not think about it, leave it on auto.

To get started:

  • Open the Agents panel (top-right in GitHub), select your repo, and pick a model.
  • Write a clear prompt and kick off the task.
  • Leave the model on auto if you’d rather let GitHub choose.

Model selection is available for Copilot Pro and Pro+ users now, with support for Business and Enterprise coming soon.

Learn more about model selection with Copilot coding agent. 👉

Pull requests that arrive in better shape

The painful part of reviewing agent output has always been the cleanup. You open the diff and there it is: logic that technically works, but nobody would write it that way.

Copilot coding agent now reviews its own changes using Copilot code review before it opens the pull request. It gets feedback, iterates, and improves the patch. By the time you’re tagged for review, the changes have already been through a review pass.

In one session, the agent caught that its own string concatenation was overly complex and fixed it before the pull request landed. That kind of thing used to be your problem.

To get started:

  • Assign an issue to Copilot or create a task from the Agents panel.
  • Click into the task to view the logs.
  • See the moments where the agent ran Copilot code review and applied feedback.

Review the pull request when prompted. Copilot requests your review only after it has iterated.

Learn more about Copilot code review + Copilot coding agent. 👉

Security checks that run while the agent works

Just like human-written code, AI-generated code can introduce real risks: vulnerable patterns, accidentally committed secrets, dependencies with known CVEs. The difference is that it can introduce them faster. And you really don’t want to be the one catching that in review.

Copilot coding agent now runs code scanning, secret scanning, and dependency vulnerability checks directly inside its workflow. If a dependency has a known issue, or something looks like a committed API key, it gets flagged before the pull request opens.

Code scanning is normally part of GitHub Advanced Security. With Copilot coding agent, you get it for free.

To get started:

  • Run any task through the Agents panel.
  • Check the session logs as it runs. You’ll see scanning entries as the agent works.
  • Review the pull request. It’s already been through the security filter.

Learn more about security scanning in Copilot coding agent. 👉

Custom agents that follow your team’s process

A short prompt leaves a lot to judgment. And that judgment isn’t always consistent with how your team actually works.

Custom agents let you codify it. Create a file under .github/agents/ and define a specific approach. A performance optimizer agent, for example, can be wired to benchmark first, make the change, then measure the difference before opening a pull request.

In a recent GitHub Checkout demo, that’s exactly what happened. The agent benchmarked a lookup, made a targeted fix, and came back with a 99% improvement on that one function. Small scope, real data, no guessing.

You can share custom agents across your org or enterprise too, so the same process applies everywhere teams are using the coding agent.

To get started:

  • Create an agent file under .github/agents/ in your repo.
  • Open the Agents panel and start a new task.
  • Select your custom agent from the options.
  • Write a prompt scoped to what that agent does.

Learn more about creating custom agents. 👉

Move between cloud and local without losing context

Sometimes you start something in the cloud and want to finish it locally. Sometimes you’re deep in your terminal and want to hand something off without losing your flow. Either way, switching contexts used to mean starting the conversation over.

Now it doesn’t. Pull a cloud session into your terminal and you get the branch, the logs, and the full context. Or press & in the CLI to push work back to the cloud and keep going on your end.

To get started:

  • Start a task with Copilot coding agent and wait for the session to appear.
  • Click “Continue in Copilot CLI” and copy the command.
  • Paste it in your terminal to load the session locally with branch, logs, and context intact.
  • Press the ampersand symbol (&) in the CLI to delegate work back to the cloud and keep going locally.

Learn more about Copilot coding agent + CLI handoff. 👉

What this adds up to

Copilot coding agent has come a long way. Model selection, self-review, security scanning, custom agents, CLI handoff—and that’s just what shipped recently. The team is actively working on private mode, planning before coding, and using the agent for things that don’t even need a pull request, like summarizing issues or generating reports. There’s a lot more coming. Stay tuned.

Share feedback on what ships next in GitHub Community discussions.

Get started with GitHub Copilot coding agent >

The post What’s new with GitHub Copilot coding agent appeared first on The GitHub Blog.

Read the whole story
alvinashcraft
4 hours ago
reply
Pennsylvania, USA
Share this story
Delete

Building AI Agents That Wait For Humans


The Problem

You have an AI agent that's brilliant at analyzing production incidents. It can examine logs, correlate metrics, identify root causes, and propose remediation steps in seconds. But here's the thing -- you don't want it executing changes to production without human oversight.

So you build a workflow:

  1. Agent analyzes the incident
  2. Human reviews the analysis
  3. Human approves or rejects the remediation plan
  4. Agent executes the approved steps

Simple enough. Until reality hits:

  • What if the human isn't available right now? The incident fires at 2 AM. The on-call engineer sees the notification on their phone, but wants to review it properly at their desk in the morning.
  • What if the process restarts? Azure Functions can scale to zero. Containers get recycled. Deployments happen.
  • What if the approval takes days? Maybe it needs sign-off from the security team. Maybe it's a Friday evening and the change window is Monday.

With a normal async workflow, you'd need to persist state manually, build polling mechanisms, handle timeouts, and wire up a way to resume execution. That's a lot of plumbing for what should be a simple "wait for human input" step.

The Solution

This sample combines three Azure technologies that solve this elegantly:

1. Azure Durable Functions -- Stateful Orchestration

Durable Functions let you write workflows as plain Python generator functions. The runtime automatically checkpoints execution after each yield, so the workflow can survive process restarts, scale-downs, and crashes. When you call wait_for_external_event, the orchestrator goes dormant -- consuming zero compute -- until the event arrives.

2. Microsoft Agent Framework -- AI Agent Abstraction

The Microsoft Agent Framework provides a clean abstraction for building AI agents. You define agents with instructions and a chat client, register them with AgentFunctionApp, and use them in orchestrations via app.get_agent(context, "AgentName"). The framework handles durable entity state, conversation threading, and tool execution.

3. Azure SignalR Service -- Real-Time Streaming

SignalR pushes events from the server to the browser in real time -- no polling needed. As the AI agent works through its analysis, each step is streamed to the frontend instantly. The operator sees a live feed of what the agent is doing, building trust and enabling faster decisions.

The Full Flow

Code Walkthrough

Setting Up the Agents

 

The entry point is `function_app.py`. We create two agents using the Microsoft Agent Framework:

 

```python
import os

from agent_framework.azure import AgentFunctionApp, AzureOpenAIChatClient

# AzureOpenAIChatClient reads AZURE_OPENAI_ENDPOINT and
# AZURE_OPENAI_CHAT_DEPLOYMENT_NAME from env vars automatically
chat_client = AzureOpenAIChatClient(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
)

analyzer_agent = chat_client.as_agent(
    name="IncidentAnalyzer",
    instructions=(
        "You are an expert production incident response analyst. "
        "When given an incident report, you analyze logs, metrics, and system state "
        "to diagnose the root cause and propose a remediation plan..."
    ),
)

remediator_agent = chat_client.as_agent(
    name="RemediationExecutor",
    instructions=(
        "You are a remediation execution specialist for production systems. "
        "When given a remediation step, you describe how you would execute it..."
    ),
)

app = AgentFunctionApp(
    agents=[analyzer_agent, remediator_agent],
    enable_health_check=True,
)
```

`AgentFunctionApp` (imported from `agent_framework.azure`) extends the Azure Durable Functions app class. When you pass agents to it, it automatically:

 

- Creates a **durable entity** for each agent (managing conversation state)

- Creates an **HTTP endpoint** at `/api/agents/{name}/run` (for direct invocation)

- Exposes a `get_agent(context, name)` method for use inside orchestrations

The Orchestrator -- Where the Magic Happens

 

The orchestrator is a generator function that yields tasks. Each `yield` is a checkpoint.

 

```python
import json  # needed for json.loads below

@app.orchestration_trigger(context_name="context")
def incident_response_orchestrator(context):
    input_data = context.get_input()
    instance_id = context.instance_id
    user_id = input_data.get("user_id", "anonymous")

    # Step 1: Acknowledge receipt (via SignalR)
    yield context.call_activity("notify_user", {
        "user_id": user_id,
        "instance_id": instance_id,
        "event": "incident_received",
        "data": {"title": input_data["title"]},
    })

    # Step 2: Run AI analysis via Agent Framework
    analyzer = app.get_agent(context, "IncidentAnalyzer")
    analyzer_session = analyzer.create_session()
    analysis_response = yield analyzer.run(
        messages=f"Analyze this incident: {input_data['description']}",
        session=analyzer_session,
    )  # Checkpoints here!
    analysis = json.loads(analysis_response.text)
```

Notice the pattern:

  1. `app.get_agent(context, "IncidentAnalyzer")` returns a `DurableAIAgent` -- a proxy that delegates to a durable entity
  2. `analyzer.create_session()` creates a conversation session for the agent
  3. `yield analyzer.run(messages=..., session=...)` checkpoints the orchestrator and waits for the agent to complete

The HITL Pause -- The Star of the Show

 

After the AI analysis completes, the orchestrator sends results to the frontend and then **waits**:

```python
# (requires: from datetime import timedelta)

# Step 3: Send results to frontend
yield context.call_activity("notify_user", {
    "user_id": user_id,
    "instance_id": instance_id,
    "event": "approval_required",
    "data": {"remediation_steps": analysis["remediation_steps"]},
})

# Step 4: THE HITL PAUSE
approval_event = context.wait_for_external_event("ApprovalDecision")
timeout = context.create_timer(
    context.current_utc_datetime + timedelta(hours=72)
)
winner = yield context.task_any([approval_event, timeout])

if winner == timeout:
    # No response in 72 hours -- expire
    return {"status": "expired"}

timeout.cancel()
decision = approval_event.result
```

This is the key pattern:

 

- `wait_for_external_event("ApprovalDecision")` creates a task that completes when someone calls `raise_event` with that event name

- `create_timer(datetime)` creates a task that completes after the specified duration

- `task_any([event, timer])` waits for whichever completes first (like `Promise.race`)

- The `yield` checkpoints the entire orchestrator state
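If `Promise.race` is the JavaScript analogy, plain `asyncio` is the Python one. The sketch below mirrors the approval-vs-timeout race using standard-library asyncio only (an analogy, not Durable Functions code; the function name and delays are ours):

```python
import asyncio

async def race_approval_vs_timeout():
    # Two competing tasks, like [approval_event, timeout] in the orchestrator
    approval = asyncio.create_task(asyncio.sleep(0.01, result="approved"))
    timeout = asyncio.create_task(asyncio.sleep(10.0, result="expired"))

    # FIRST_COMPLETED is the asyncio analogue of context.task_any([...])
    done, pending = await asyncio.wait(
        {approval, timeout}, return_when=asyncio.FIRST_COMPLETED
    )
    for task in pending:
        task.cancel()  # mirrors timeout.cancel() once the approval wins
    return done.pop().result()

print(asyncio.run(race_approval_vs_timeout()))  # approved
```

The crucial difference: an asyncio race dies with the process, while the durable version checkpoints its state and can wait across restarts for days.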

 

The orchestrator is now dormant. The Azure Functions runtime can scale to zero. The container can be recycled. A deployment can happen. Nothing is holding resources.

 

When the human eventually clicks "Approve" (minutes, hours, or days later), the HTTP endpoint calls:

```python
await client.raise_event(instance_id, "ApprovalDecision", {"decision": "approve"})
```

The Durable Functions runtime:

 

  1. Reads the execution history from storage
  2. Replays the orchestrator from the beginning (but cached results are returned instantly for completed activities)
  3. Reaches the `wait_for_external_event` -- the event is now available
  4. Continues execution with the approval data

SignalR -- Real-Time Streaming

 

Every significant step in the orchestrator sends a SignalR notification:

```python
def notify(event, data):
    return context.call_activity("notify_user", {
        "user_id": user_id,
        "instance_id": instance_id,
        "event": event,
        "data": data,
    })
```

 

The `notify_user` activity uses Azure Functions' SignalR output binding to push messages:

 

```python
@app.activity_trigger(input_name="payload")
@app.generic_output_binding(
    arg_name="signalRMessages",
    type="signalR",
    hub_name="incidenthub",
    connection_string_setting="AzureSignalRConnectionString",
)
def notify_user(payload, signalRMessages):
    message = {
        "userId": payload["user_id"],
        "target": payload["event"],
        "arguments": [{"instance_id": payload["instance_id"], **payload["data"]}],
    }
    signalRMessages.set(json.dumps([message]))
```

 

The frontend connects via the `/api/negotiate` endpoint and listens for events:

 

```javascript
connection.on('analysis_complete', (data) => {
  // Render the diagnosis card with AI findings
  showAnalysisResults(data);
});

connection.on('approval_required', (data) => {
  // Show the Approve/Reject buttons
  showApprovalPanel(data.remediation_steps);
});

connection.on('remediation_step', (data) => {
  // Update the progress bar for this step
  updateStepProgress(data.step_number, data.status);
});
```

The Frontend -- Session Persistence

One important detail: the browser needs to handle page refreshes and reconnections. The frontend stores the `instanceId` in `sessionStorage` and restores state on reload:

 

```javascript
// Save session on incident creation
sessionStorage.setItem('session', JSON.stringify({
  instanceId: data.instance_id,
  userId: userId,
}));

// On page load, check for saved session
const session = JSON.parse(sessionStorage.getItem('session'));
if (session?.instanceId) {
  const res = await fetch(`/api/incident/${session.instanceId}/status`);
  const status = await res.json();
  if (status.customStatus?.stage === 'awaiting_approval') {
    // Re-show the approval panel
    showApprovalPanel();
  }
}
```

How Replay Works -- The Bookmark Analogy

The Durable Functions replay mechanism is the key to everything. Think of it like reading a book with a bookmark:

 

  1. **First read**: You read page by page (executing activities, calling agents). After each page, you move the bookmark.
  2. **Interrupted**: Someone takes the book away (process restart, scale-down).
  3. **Resume**: You get the book back. You don't re-read from page 1 -- you flip to the bookmark and your notes tell you what happened on each page (cached activity results).
  4. **Continue**: You pick up reading from where you left off.

 

In code terms:

 

- Each `yield` is a "page turn" (checkpoint)

- Activity results are "notes in the margin" (cached in the execution history)

- `wait_for_external_event` is a "bookmark with a question mark" (paused until answered)

- The orchestrator function itself is deterministic -- same inputs always produce the same sequence of yields
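That replay loop can be modeled with a plain Python generator (a toy model of the analogy above, not the real runtime; all names are ours): the history is the list of cached task results, and resuming means re-running the function from the top while feeding those results back in.

```python
def orchestrator():
    # Each yield is a checkpoint; the string names the task being awaited.
    a = yield "call_activity:analyze"
    b = yield "wait_for_external_event:ApprovalDecision"
    return f"{a} -> {b}"

def replay(history):
    """Re-run the generator from the start, feeding cached results back in.

    Returns ("pending", awaited_task) if the next task has no result yet,
    or ("done", return_value) once the orchestrator completes.
    """
    gen = orchestrator()
    task = next(gen)                 # run until the first yield
    for result in history:           # replay: completed tasks "return" instantly
        try:
            task = gen.send(result)
        except StopIteration as stop:
            return ("done", stop.value)
    return ("pending", task)         # dormant: waiting on this task

# First run: analysis completes, then we go dormant waiting for the human.
print(replay(["analysis-ok"]))
# ('pending', 'wait_for_external_event:ApprovalDecision')

# Days later, after any number of restarts, the event arrives; replay finishes.
print(replay(["analysis-ok", "approved"]))
# ('done', 'analysis-ok -> approved')
```

Note that `replay` restarts the generator from scratch every time, which is exactly why the determinism rule below matters: the code must yield the same sequence of tasks on every run.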

 

**Important rule**: Never use `datetime.now()`, `random()`, or direct I/O in orchestrator code. Use `context.current_utc_datetime` for time, `context.new_uuid()` for unique IDs, and activities for all side effects.

Watch Out: The 16 KB Payload Limit

One gotcha that will bite you in production: **Durable Functions enforces a 16 KB limit on JSON-serialized payloads** for `custom_status`, return values, and activity inputs/outputs. AI agents tend to produce verbose responses -- detailed diagnoses, multi-step remediation plans, execution logs -- and it's easy to exceed this limit.

 

You'll see an error like:

```
Orchestrator function 'incident_response_orchestrator' failed: The UTF-16 size of the JSON-serialized payload must not exceed 16 KB. The current payload size is 20 KB.
```

 

This happens when you store the full AI analysis, remediation steps, and completed step results in `context.set_custom_status()` or the orchestrator's return value. Each remediation step adds output text, and the payload grows with every iteration.
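You can also fail fast by measuring the serialized size yourself before setting status. A pure-Python sketch (the helper names are ours; only the 16 KB UTF-16 limit comes from the error above):

```python
import json

MAX_STATUS_BYTES = 16 * 1024  # Durable Functions measures the UTF-16 size

def utf16_size(payload) -> int:
    """UTF-16 byte size of the JSON-serialized payload (without a BOM)."""
    return len(json.dumps(payload).encode("utf-16-le"))

def fits_in_custom_status(payload) -> bool:
    return utf16_size(payload) <= MAX_STATUS_BYTES

small = {"stage": "remediating", "completed_steps": 3, "total_steps": 5}
big = {"analysis": "x" * 20_000}  # ~40 KB in UTF-16 -- would be rejected

print(fits_in_custom_status(small))  # True
print(fits_in_custom_status(big))    # False
```

A guard like this turns a runtime orchestration failure into an explicit decision about what to trim or offload.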

The Fix: Store Large Data in Azure Table Storage

The pattern is straightforward: keep only **summary data** in the orchestrator state, and offload full results to **Azure Table Storage** (or Blob Storage for very large payloads). Use the `instance_id` as the partition key so you can easily retrieve everything for a given incident.

 

```python
# ❌ Don't do this -- payload grows with every step and will exceed 16 KB
context.set_custom_status({
    "stage": "remediating",
    "analysis": analysis,                    # Full AI response -- could be 5+ KB alone
    "remediation_steps": remediation_steps,  # Detailed step list
    "completed_steps": completed_steps,      # Grows with each iteration
})

# ✅ Do this instead -- keep custom_status small
context.set_custom_status({
    "stage": "remediating",
    "analysis_summary": {
        "root_cause": analysis.get("root_cause", ""),
        "severity_assessment": analysis.get("severity_assessment", ""),
        "confidence": analysis.get("confidence", 0),
    },
    "completed_steps": len(completed_steps),
    "total_steps": len(remediation_steps),
})

# ✅ Persist full details via an activity that writes to Table Storage
yield context.call_activity("save_incident_data", {
    "instance_id": instance_id,
    "analysis": analysis,
    "completed_steps": completed_steps,
})
```

 

The storage activity is simple:

```python
from azure.data.tables import TableServiceClient

@app.activity_trigger(input_name="payload")
def save_incident_data(payload):
    """Persist full incident data to Azure Table Storage."""
    table_client = TableServiceClient.from_connection_string(
        os.getenv("AzureWebJobsStorage")
    ).get_table_client("IncidentData")

    table_client.upsert_entity({
        "PartitionKey": payload["instance_id"],
        "RowKey": "analysis",
        "data": json.dumps(payload["analysis"]),
    })

    for step in payload.get("completed_steps", []):
        table_client.upsert_entity({
            "PartitionKey": payload["instance_id"],
            "RowKey": f"step-{step['step_number']}",
            "data": json.dumps(step),
        })
```

 

This gives you the best of both worlds:

 

- **Orchestrator state stays small** -- `custom_status` and return values contain only summary fields, well under 16 KB

- **Full details are queryable** -- the frontend or an admin API can read from Table Storage using the `instance_id`

- **SignalR still delivers real-time details** -- the `notify()` calls in the orchestrator push full analysis and step results to the browser, which has no size limit

- **Audit trail** -- Table Storage gives you a durable, queryable record of every incident and its remediation history

 

 **Tip**: For payloads that could exceed Table Storage's 64 KB entity property limit (e.g., very long agent conversations), use Azure Blob Storage instead and store a blob reference in the orchestrator state.

Running the Sample

Prerequisites

- Python 3.10+

- Azure Functions Core Tools v4+

- Azure SignalR Service (Serverless mode)

- Azure OpenAI with a deployed model

Setup

```shell
git clone https://github.com/lordlinus/durable-agents-hitl-sample.git
cd durable-agents-hitl-sample

python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

# Copy and edit the config
cp local.settings.json.template local.settings.json
# Fill in your Azure OpenAI and SignalR values

# Start the Durable Task Scheduler emulator
docker run -p 8080:8080 mcr.microsoft.com/durable-task/scheduler:latest

# Start the function app
func start
```

 

Open `http://localhost:7071/api/index` in your browser and submit an incident.

Beyond HTTP: Auto-Triggering From Events

In this sample, incident response starts with an HTTP POST from the UI. But in production, you likely want orchestrations to kick off **automatically** when your monitoring picks up a problem. Azure Functions supports a rich set of event-driven triggers -- and since the orchestrator is decoupled from the trigger, you can start the same `incident_response_orchestrator` from any of them.

Azure Monitor / Alert Rules

The most natural fit for incident response. When an Azure Monitor alert fires (high error rate, latency spike, health check failure), it can invoke an Azure Function via an Action Group. You parse the alert payload and start the orchestration:

```python
@app.route(route="alert/monitor", methods=["POST"])
@app.durable_client_input(client_name="client")
async def monitor_alert_trigger(req: func.HttpRequest, client):
    """Auto-start incident response from an Azure Monitor alert."""
    alert = req.get_json()
    alert_context = alert.get("data", {}).get("alertContext", {})

    instance_id = await client.start_new(
        "incident_response_orchestrator",
        client_input={
            "user_id": "on-call-team",
            "title": alert.get("data", {}).get("essentials", {}).get("alertRule", "Monitor Alert"),
            "description": json.dumps(alert_context.get("condition", {})),
            "severity": alert.get("data", {}).get("essentials", {}).get("severity", "high"),
            "affected_service": alert_context.get("conditionType", "unknown"),
        },
    )
    return func.HttpResponse(json.dumps({"instance_id": instance_id}), status_code=202)
```

 

Now a P99 latency spike in Azure Monitor automatically triggers AI-powered analysis and queues up a remediation plan for human approval -- all without anyone manually filing the incident.

Event Grid

Azure Event Grid gives you reactive access to events across your entire Azure estate -- resource provisioning failures, security alerts from Microsoft Defender for Cloud, storage events, custom app events, and more. Use an Event Grid trigger to start orchestrations:

 

```python
@app.function_name("EventGridIncidentTrigger")
@app.event_grid_trigger(arg_name="event")
@app.durable_client_input(client_name="client")
async def eventgrid_incident_trigger(event: func.EventGridEvent, client):
    """Auto-start incident response from Event Grid events."""
    event_data = event.get_json()

    instance_id = await client.start_new(
        "incident_response_orchestrator",
        client_input={
            "user_id": "on-call-team",
            "title": f"Event Grid: {event.event_type}",
            "description": json.dumps(event_data),
            "severity": "high" if "security" in event.event_type.lower() else "medium",
            "affected_service": event.subject or "unknown",
        },
    )
```

 

This is powerful for scenarios like: a Defender for Cloud alert fires about a suspicious login, and the AI agent immediately starts correlating identity logs and preparing a lockdown plan.

Service Bus / Event Hubs

For organizations with existing observability or event-streaming pipelines, Azure Service Bus queues and Event Hubs are common trigger sources. Your monitoring platform (Datadog, Grafana, custom) pushes events into a queue, and a Function picks them up:

 

```python
@app.function_name("ServiceBusIncidentTrigger")
@app.service_bus_queue_trigger(
    arg_name="msg",
    queue_name="incident-events",
    connection="ServiceBusConnection",
)
@app.durable_client_input(client_name="client")
async def servicebus_incident_trigger(msg: func.ServiceBusMessage, client):
    """Auto-start incident response from Service Bus queue messages."""
    body = json.loads(msg.get_body().decode("utf-8"))

    instance_id = await client.start_new(
        "incident_response_orchestrator",
        client_input={
            "user_id": body.get("team", "on-call-team"),
            "title": body.get("title", "Pipeline Event"),
            "description": body.get("description", msg.get_body().decode("utf-8")),
            "severity": body.get("severity", "medium"),
            "affected_service": body.get("service", "unknown"),
        },
    )
```

 

The key takeaway: the orchestrator doesn't care how it was started. Whether a human clicks a button, Azure Monitor fires an alert at 2 AM, Event Grid reacts to a security event, or a Service Bus message arrives from your observability pipeline, the same durable, HITL-enabled workflow executes. You pick the trigger that fits your ops workflow and wire it up.

Real-World Scenarios

 

This HITL + Durable Functions + SignalR pattern applies far beyond incident response:

 

- Code Review Automation - AI reviews a PR, suggests changes, waits for developer approval before auto-merging

- Content Moderation - AI flags content, streams reasoning to moderators, waits for human judgment

- Online Campaigns - AI Agents create a targeted campaign and wait for human approval before launch. Sample repo

- Medical Triage - AI analyzes symptoms, proposes treatment plan, waits for physician sign-off

- Financial Compliance - AI detects suspicious transactions, builds a case, waits for compliance officer review

- Infrastructure Changes - AI proposes scaling decisions, waits for SRE approval before executing

- Intelligent Document Processing - AI extracts data from documents, presents findings, waits for human validation

 

The common thread: an AI agent does the heavy lifting, but a human stays in the loop for high-stakes decisions. The orchestration is durable enough to bridge the gap between machine speed and human schedules.

Why Agent Framework?

The Microsoft Agent Framework durable functions extension provides several advantages:

  1. Durable execution -- Agents run inside durable entities, so conversation state persists across function invocations
  2. Clean abstraction -- Define agents with `chat_client.as_agent(name, instructions)` and use them with `app.get_agent(context, name)` in orchestrations
  3. Automatic infrastructure -- `AgentFunctionApp` auto-creates HTTP endpoints, durable entities, and health checks for each agent
  4. Orchestration-native -- `yield agent.run(messages=..., session=...)` checkpoints and resumes in orchestrations, fitting naturally into the Durable Functions programming model
  5. Tool support -- Agents can have tools (functions) that the framework automatically invokes during conversation

Summary

Building AI workflows that involve human decisions doesn't have to be complex. With Azure Durable Functions for stateful orchestration, Microsoft Agent Framework for AI agents, and Azure SignalR for real-time streaming, you get:

- Durability -- Workflows survive restarts and can wait for days

- Human-in-the-Loop -- One-line pause with `wait_for_external_event`

- Real-time UX -- Live progress streaming via SignalR

- Clean code -- Orchestrator reads like a sequential script, not a state machine

 

The full source code is available in the sample repository. Clone it, plug in your Azure OpenAI and SignalR credentials, and see it in action.

References

- Azure Functions Python Developer Guide -- Getting started with Python on Azure Functions

- Agent Framework -- Azure Functions hosting sample -- Running agents using Azure Functions

- Agent Framework -- Durable Task hosting sample -- Running agents with durable orchestration

- Multi-agent Workflow with Human Approval using Agent Framework


Announcing ReFS Boot for Windows Server Insiders

We’re excited to announce that Resilient File System (ReFS) boot support is now available for Windows Server Insiders in Insider Preview builds. For the first time, you can install and boot Windows Server on an ReFS-formatted boot volume directly through the setup UI. With ReFS boot, you can finally bring modern resilience, scalability, and performance to your server’s most critical volume — the OS boot volume.

Why ReFS Boot?

Modern workloads demand more from the boot volume than NTFS can provide. ReFS was designed from the ground up to protect data integrity at scale. By enabling ReFS for the OS boot volume we ensure that even the most critical system data benefits from advanced resilience, future-proof scalability, and improved performance.

In short, ReFS boot means a more robust server right from startup with several benefits:

  • Resilient OS disk: ReFS improves boot‑volume reliability by detecting corruption early and handling many file‑system issues online without requiring chkdsk. Its integrity‑first, copy‑on‑write design reduces the risk of crash‑induced corruption to help keep your system running smoothly.
  • Massive scalability: ReFS supports volumes up to 35 petabytes (35,000 TB) — vastly beyond NTFS’s typical limit of 256 TB. That means your boot volume can grow with future hardware, eliminating capacity ceilings.
  • Performance optimizations: ReFS uses block cloning and sparse provisioning to accelerate I/O‑heavy scenarios — enabling dramatically faster creation or expansion of large fixed‑size VHD(X) files and speeding up large file copy operations by copying data via metadata references rather than full data movement.

 

Maximum Boot Volume Size: NTFS vs. ReFS

| File system | Maximum boot volume size |
| --- | --- |
| NTFS | 256 TB (typical) |
| ReFS | 35 PB (35,000 TB) |

Resiliency Enhancements with ReFS Boot

| Feature | ReFS boot volume | NTFS boot volume |
| --- | --- | --- |
| Metadata checksums | ✅ Yes | ❌ No |
| Integrity streams (optional) | ✅ Yes | ❌ No |
| Proactive error detection (scrubber) | ✅ Yes | ❌ No |
| Online integrity (no chkdsk) | ✅ Yes | ❌ No |

Check out Microsoft Learn for more information on ReFS resiliency enhancements.

 

Performance Enhancements with ReFS Boot

| Operation | ReFS boot volume | NTFS boot volume |
| --- | --- | --- |
| Fixed-size VHD creation | Seconds | Minutes |
| Large file copy operations | Milliseconds to seconds (independent of file size) | Seconds to minutes (linear with file size) |
| Sparse provisioning | | |

Check out Microsoft Learn for more information on ReFS performance enhancements.

 

Getting Started with ReFS Boot

Ready to try it out? Here’s how to get started with ReFS boot on Windows Server Insider Preview:

1. Update to the latest Insider build: Ensure you’re running the most recent Windows Server vNext Insider Preview (Join Windows Server Insiders if you haven’t already). Builds from 2/11/26 or later (minimum build number 29531.1000.260206-1841) include ReFS boot in setup. 

2. Choose ReFS during setup: When installing Windows Server, format the system (C:) partition as ReFS in the installation UI. 
Note: ReFS boot requires UEFI firmware and does not support legacy BIOS boot; as a result, ReFS boot is not supported on Generation 1 VMs.

Screenshot of the Windows Server Setup UI showing ReFS as a File System format option.

3. Complete installation & verify: Finish the Windows Server installation as usual. Once it boots, confirm that your C: drive is using ReFS (for example, by running fsutil fsinfo volumeInfo C: or checking the drive properties). That’s it – your server is now running with an ReFS boot volume.

Screenshot of PowerShell output showing the C: drive formatted as ReFS.

 

A step-by-step demo video shows how to install Windows Server on an ReFS-formatted boot volume, including UEFI setup, disk formatting, and post-install verification.

 

Call to Action

In summary, ReFS boot brings future-proof resiliency, scalability, and performance improvements to the Windows Server boot volume — reducing downtime, removing scalability limits, and accelerating large storage operations from day one.

We encourage you to try ReFS boot on your servers and experience the difference for yourself. As always, we value your feedback. Please share your feedback and questions on the Windows Server Insiders Forum.

Christina Curlette (and the Windows Server team)

Read the whole story
alvinashcraft
4 hours ago
reply
Pennsylvania, USA
Share this story
Delete

Word, Excel and PowerPoint Agents in Microsoft 365 Copilot: Overview + live demo - February 2026 M365 Champions Community call

1 Share

Hello Champions!

Here’s a recap and top Q+A from our February M365 Champions monthly call of 2026, featuring Word, Excel and PowerPoint Agents in Microsoft 365 Copilot with Alex Yang, Chelsea Fesik and Sangeeta Kulkarni of the Office AI product team.

 We kicked off the call with two major community event announcements. First, the SharePoint 25th Birthday Celebration is happening on March 2, 2026—a free digital event highlighting SharePoint’s most defining innovations and offering a forward‑looking view into its AI‑driven future, with insights from Microsoft leaders. We also shared that the Microsoft 365 Community Conference returns April 21–23, 2026 in Orlando, where attendees can connect with Microsoft experts, explore hands‑on labs, and dive into the latest advancements across AI and Microsoft 365.

Alex and Chelsea introduced the new Word, Excel, and PowerPoint agents coming to Microsoft 365 Copilot, explaining that they are designed to solve the “blank page” problem by turning natural‑language intent into finished, professional artifacts, not just drafts. They positioned these agents as task‑specific “thought partners” that help users move from conversation to completed documents, spreadsheets, and presentations, grounded in enterprise data and the web while respecting permissions. The agents ask clarifying questions when needed (such as audience, tone, or format) and support multi‑turn iteration so users can refine content conversationally rather than starting from scratch.

During his demo, Alex focused on the Excel (and PowerPoint) agent experience, showing how Copilot can create a fully built Excel workbook directly from a natural‑language request. In his demo, the Excel agent generated a multi‑tab spreadsheet complete with formulas, structured models, and sourced data, then seamlessly handed the file off to Excel for further refinement. He explained that the agent intelligently decides when to use enterprise data versus web research and highlighted the workflow distinction: using agents in Copilot chat for creation from scratch, then switching to in‑app agent mode for precise, iterative edits once the file exists.

Next, Chelsea demonstrated the Word agent, emphasizing long‑form, structured writing scenarios such as guides, reports, and proposals. Her demo showed how Word agent asks clarifying questions (like audience, tone, and theme), then generates a polished document that users can refine through conversation—adding sections, changing tone, or expanding details. She highlighted multi‑turn iteration, visible references, and how the agent acts as a thought partner that helps users move past the blank page while still keeping them in control of the final output.

For more information on Word, Excel, and PowerPoint agents coming to Microsoft 365 Copilot, read the official release blog by Sangeeta Kulkarni here!

Q+A from this month's session:

1. Can Word Agent use organizational templates?

Answer: Not fully today. Users can reference existing documents or templates and the agent will try to replicate them, but native support for organizational template libraries is still in progress.

 

2. Will Word, Excel, and PowerPoint Agents be available without Copilot Premium?

Answer: Rollout is currently prioritized for Microsoft 365 Copilot Premium customers, with broader availability planned after the initial rollout completes.

 

3. When will these agents be available worldwide (including Australia)?

Answer: Rollout started roughly two weeks before the call, with Microsoft aiming to saturate most worldwide tenants during March.

 

4. Will agents support corporate brand kits and branding guidelines?

Answer: This is planned but not fully supported yet. Agents can reuse referenced documents today, but formal brand kit and template integration is under active development.

 

5. Will these agents be embedded directly inside Word, Excel, and PowerPoint?

Answer: They are accessed via Copilot Chat and the Tools/Agents menu. Inside the apps, users interact through Agent Mode, which is the preferred in-app experience.

 

6. When is the best time to use these agents in the Copilot app vs. agent mode in Word/PPT/Excel?

Answer: The Word/Excel/PowerPoint Agents in the M365 Copilot app are best suited for creating new documents. When you already have a document and want to iterate on it with Copilot while the document is open side by side, Agent Mode is best suited for that.

 

7. How do Word/Excel/PowerPoint Agents differ from Researcher Agent?

Answer: Researcher is optimized for deep, long-running research. App agents are optimized for creating polished, app-native documents with layouts, charts, and formatting.

 

8. How does Microsoft reduce hallucinations in generated content?

Answer: Newer models have significantly lower hallucination rates, and agents surface citations, flag missing data, and provide templates when data can’t be verified.

 

9. Are sources and citations included in generated documents?

Answer: Yes. Agents generate explicit source lists for facts and numbers, especially for Excel and research-heavy outputs.

 

10. What happens if the agent can’t find reliable data?

Answer: Instead of fabricating information, the agent will flag missing data or provide a structured template for the user to complete.

 

11. Are these agents user-created or personal agents?

Answer: No. Word, Excel, PowerPoint, Researcher, and Analyst are first‑party Microsoft agents, not user-created agents.

 

12. What happens to agents when a user leaves the organization?

Answer: Agents remain available. Files created by the user follow standard OneDrive offboarding and governance processes.

 

13. Do these agents live in a user’s OneDrive or SharePoint?

Answer: No. The agents themselves do not live in user storage; only the generated documents do.

 

14. Why am I seeing “Frontier agent availability is restricted” errors?

Answer: This indicates the tenant is not enrolled in the Copilot Frontier program and the GA version has not yet rolled out to that tenant.

 

15. Do I need to enable Anthropic as an AI provider for these agents?

Answer: Anthropic enablement is required for Frontier access, but many users simply need to wait for GA rollout to their tenant.

 

16. Will the Frontier agent disappear once GA is available?

Answer: Yes. The Frontier version will be replaced by the general availability agent when rollout completes.

 

17. Will these agents appear in the Copilot Tools menu?

Answer: Yes. Word, Excel, and PowerPoint Agents will appear alongside Researcher and Analyst in the Tools menu.

 

18. Is there any difference between accessing the agent from the M365 Copilot app or within the app itself?

Answer: The agents in the M365 Copilot app and from within the app itself are similarly capable. The M365 Copilot app agents are tuned towards creating documents, whereas if you want to do lots of iterative work, we recommend using the Agent Modes inside of the apps.

 

19. Can Excel Agent generate full financial models?

Answer: Yes. Excel Agent can generate multi-sheet workbooks with formulas, projections, dashboards, and scenarios.

 

20. Can Word Agent pull from enterprise data like meetings and files?

Answer: Yes. Word Agent can pull from Microsoft Graph sources such as emails, chats, meetings, and documents—respecting permissions.

 

21. Can agents generate charts, graphs, and visuals?

Answer: Yes. Word and Excel agents generate charts and tables, and PowerPoint Agent generates slides with visuals.

 

22. Can users iterate on generated content after creation?

Answer: Yes. Users can continue refining content either in Copilot Chat or directly in the app using Agent Mode.

 

23. Can prompts and agent behavior be version-controlled in SharePoint?

Answer: This is not applicable because these agents are not user-created; they are global first‑party agents.

 

24. Are governance controls the same as other Microsoft 365 files?

Answer: Yes. Generated files follow the same sensitivity labels, permissions, and compliance controls as any other file.  

 

25. Are Word/Excel/PowerPoint Agents the same as Copilot Studio agents?

Answer: No. Copilot Studio agents are custom and user-created, while Office app agents are first‑party Microsoft features.

 

26. We have a managed property "Sensitivity" applied to OneDrive and SharePoint. These labels include "Corporate", "Sensitive", "Confidential", etc., to assign to documents and data. Can Copilot agents be made to respect these managed properties so specific content is excluded from their creations?

Answer: Yes, all of the agents support sensitivity labeling and Azure Information Protection.

 

27. Is it possible to limit the data source to our tenant’s content, or is it always assumed to look at/use the web?

Answer: Select Work (at top) then go to Settings and turn off Web search.

 

28. So, what’s the best way to describe the difference between Analyst and Excel Agent, or Researcher and Word Agent?

Answer: The Word and Excel agents are scoped specifically to functionality within those applications. Researcher will use content from a variety of sources in your enterprise and is meant for deeper analysis.

 

29. I’m in a GCC tenant; are these available there now?

Answer: We are currently rolling out the agents to users in Worldwide (non-GCC) environments. We will roll them out to GCC at a later date.

 

30. The game changer I’m waiting for is calling the WXP agents from an agent flow so they can create documents in a deterministic way. When will that be possible?

Answer: No timelines to share just yet, but this is something we really want as well. ;)

 

Join us next month on March 24th for our next community call.

VS Code for the Web

1 Share
From: Microsoft Azure Developers
Duration: 19:12
Views: 27

Get started with VS Code for the Web - Azure to seamlessly run, debug, and deploy your applications with no setup! This browser-based VS Code environment allows you to work as you would locally, but wherever you are. Watch this video to see us create an enterprise-grade application, run it, and deploy it to Azure within minutes!

🌮 Chapter Markers:
0:30 – Introduction
2:26 – VS Code for the Web - Azure Overview
5:21 - AI Template Entry Point Scenario
6:00 – Microsoft Foundry Entry Point Scenario
8:42 - RAG Chat Application - Enterprise-level Application Scenario
13:52 - GitHub Copilot in /azure
16:25 - RAG Chat Application Deployed - Testing Scenario
18:10 - Go to vscode.dev/azure to try it out today!

🌮 Resources
VS Code​ Docs: https://code.visualstudio.com/docs/azure/vscodeforweb

🌮 Follow us on social:
Scott Hanselman | @SHanselman – https://x.com/SHanselman
Meera Haridasa | @meeraharidasa - https://www.linkedin.com/in/meeraharidasa/

Blog - https://aka.ms/azuredevelopers/blog
Twitter - https://aka.ms/azuredevelopers/twitter
LinkedIn - https://aka.ms/azuredevelopers/linkedin
Twitch - https://aka.ms/azuredevelopers/twitch

#azuredeveloper #azure


We Built an App That Runs 10 Copilot Agents at Once — Here's How

1 Share
From: Gerald Versluis
Duration: 29:45
Views: 116

What if you could run dozens of GitHub Copilot coding agents in parallel, and control them all from one app? That's PolyPilot.

In this video we walk through what PolyPilot is, how it works, and why we built it. PolyPilot is a cross-platform app built with .NET MAUI and Blazor that lets you spin up, orchestrate, and monitor multiple Copilot agents — each with its own model, repo, and conversation — from a single dashboard. Or from your phone.

We cover:
⚡ Running parallel agents across repos with different models (Claude, GPT, Gemini)
📱 Remote access — monitor your agent fleet from your phone via DevTunnel
🌿 Worktree management for zero-conflict parallel work
🤖 Squad presets — launch pre-configured agent teams with one click
🔁 Reflection cycles for goal-based iterative refinement
💾 Session persistence — agents never lose context across restarts

The best part? Most of PolyPilot was built by Copilot agents, orchestrated from within PolyPilot itself.

💝 Join this channel to get access to perks:
https://www.youtube.com/channel/GeraldVersluis/join

🛑 Don't forget to subscribe to my channel for more cool content: https://www.youtube.com/GeraldVersluis/?sub_confirmation=1

🔗 Links
PolyPilot repo: https://github.com/PureWeen/PolyPilot
MauiDevFlow: https://github.com/Redth/MauiDevFlow
@GoneDotNet: https://www.youtube.com/@GoneDotNet

⏱ Timestamps
00:00 - PolyPilot - Run an army of GitHub Copilot agents from a single app
00:27 - What is PolyPilot?
03:05 - PolyPilot in Action
04:02 - Enhance PolyPilot from within PolyPilot
05:36 - Create a new Copilot orchestration/fleet
08:55 - Manage GitHub Copilot sessions
11:11 - Demo: Implement new PolyPilot feature using PolyPilot
16:08 - PolyPilot Mobile app: command Copilot from remote
19:42 - MauiDevFlow
25:17 - Chat, Plan, Autopilot Modes
28:36 - Use PolyPilot for everything!

#githubcopilot #ai #copilot #githubcopilotcli
