Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

🏆 Agents League Winner Spotlight – Reasoning Agents Track

1 Share

Agents League was designed to showcase what agentic AI can look like when developers move beyond single‑prompt interactions and start building systems that plan, reason, verify, and collaborate.

Across three competitive tracks—Creative Apps, Reasoning Agents, and Enterprise Agents—participants had two weeks to design and ship real AI agents using production‑ready Microsoft and GitHub tools, supported by live coding battles, community AMAs, and async builds on GitHub.

Today, we’re excited to spotlight the winning project for the Reasoning Agents track, built on Microsoft Foundry: CertPrep Multi‑Agent System — Personalised Microsoft Exam Preparation by Athiq Ahmed.

 

The Reasoning Agents Challenge Scenario

The goal of the Reasoning Agents track challenge was to design a multi‑agent system capable of effectively assisting students in preparing for Microsoft certification exams. Participants were asked to build an agentic workflow that could understand certification syllabi, generate personalized study plans, assess learner readiness, and continuously adapt based on performance and feedback. The suggested reference architecture modeled a realistic learning journey: starting from free‑form student input, a sequence of specialized reasoning agents collaboratively curated Microsoft Learn resources, produced structured study plans with timelines and milestones, and maintained learner engagement through reminders. Once preparation was complete, the system shifted into an assessment phase to evaluate readiness and either recommend the appropriate Microsoft certification exam or loop back into targeted remediation—emphasizing reasoning, decision‑making, and human‑in‑the‑loop validation at every step.

All details are available here: agentsleague/starter-kits/2-reasoning-agents at main · microsoft/agentsleague.

 

The Winning Project: CertPrep Multi‑Agent System

The CertPrep Multi‑Agent System is an AI solution for personalized Microsoft certification exam preparation, supporting nine certification exam families.

At a high level, the system turns free‑form learner input into a structured certification plan, measurable progress signals, and actionable recommendations—demonstrating exactly the kind of reasoned orchestration this track was designed to surface.

 

 

Inside the Multi‑Agent Architecture

At its core, the system is designed as a multi‑agent pipeline that combines sequential reasoning, parallel execution, and human‑in‑the‑loop gates, with traceability and responsible AI guardrails.

The solution is composed of eight specialized reasoning agents, each focused on a specific stage of the learning journey. Six of them are highlighted below:

  • LearnerProfilingAgent – Converts free‑text background information into a structured learner profile using Microsoft Foundry SDK (with deterministic fallbacks).
  • StudyPlanAgent – Generates a week‑by‑week study plan using a constrained allocation algorithm to respect the learner’s available time.
  • LearningPathCuratorAgent – Maps exam domains to curated Microsoft Learn resources with trusted URLs and estimated effort.
  • ProgressAgent – Computes a weighted readiness score based on domain coverage, time utilization, and practice performance.
  • AssessmentAgent – Generates and evaluates domain‑proportional mock exams.
  • CertificationRecommendationAgent – Issues a clear “GO / CONDITIONAL GO / NOT YET” decision with remediation steps and next‑cert suggestions.

Throughout the pipeline, a 17‑rule Guardrails Pipeline enforces validation checks at every agent boundary, and two explicit human‑in‑the‑loop gates ensure that decisions are made only when sufficient learner confirmation or data is present.
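The 17 rules themselves aren't published in the article, but the shape of a boundary check plus an explicit human-in-the-loop gate can be sketched in a few lines (the rule names, thresholds, and payload fields below are hypothetical, stdlib-only Python):

```python
from dataclasses import dataclass, field

@dataclass
class GuardrailResult:
    passed: bool
    violations: list = field(default_factory=list)

def check_boundary(payload: dict, rules) -> GuardrailResult:
    """Run every rule against one agent's output before the next agent sees it."""
    violations = [name for name, rule in rules if not rule(payload)]
    return GuardrailResult(passed=not violations, violations=violations)

# Hypothetical rules; the real pipeline enforces 17 of these.
RULES = [
    ("non_empty_profile", lambda p: bool(p.get("profile"))),
    ("weeks_in_range",    lambda p: 1 <= p.get("weeks", 0) <= 52),
    ("trusted_urls_only", lambda p: all(u.startswith("https://learn.microsoft.com")
                                        for u in p.get("resources", []))),
]

def human_gate(payload: dict, confirmed: bool) -> dict:
    """Explicit human-in-the-loop gate: block progression without confirmation."""
    if not confirmed:
        raise RuntimeError("Awaiting learner confirmation before continuing")
    return payload
```

In a pipeline like CertPrep's, a check like this would run at every agent boundary, with the two human gates interposed at the decision points.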

CertPrep leverages Microsoft Foundry Agent Service and related tooling to run this reasoning pipeline reliably and observably:

  • Managed agents via Foundry SDK
  • Structured JSON outputs using GPT‑4o (JSON mode) with conservative temperature settings
  • Guardrails enforced through Azure Content Safety
  • Parallel agent fan‑out using concurrent execution
  • Typed contracts with Pydantic for every agent boundary
  • AI-assisted development with GitHub Copilot, used throughout for code generation, refactoring, and test scaffolding

Notably, the full pipeline is designed to run in under one second in mock mode, enabling reliable demos without live credentials.
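The parallel agent fan-out is a standard asyncio pattern; a minimal mock-mode sketch might look like this (the agent names echo the list above, but the bodies and fields are invented for illustration):

```python
import asyncio

async def learning_path_curator(profile: dict) -> dict:
    # Mock mode: return canned output instead of calling the Foundry SDK.
    return {"resources": [f"learn-module-for-{d}" for d in profile["domains"]]}

async def study_plan(profile: dict) -> dict:
    # Mock mode: derive a trivial plan from the structured learner profile.
    return {"weeks": profile["weeks"], "milestones": len(profile["domains"])}

async def fan_out(profile: dict) -> dict:
    # Independent agents run concurrently; results merge at the next boundary.
    curated, plan = await asyncio.gather(
        learning_path_curator(profile), study_plan(profile)
    )
    return {**curated, **plan}

profile = {"domains": ["identity", "networking"], "weeks": 6}
result = asyncio.run(fan_out(profile))
```

Because the mock agents return instantly, a run like this completes in milliseconds, which is what makes credential-free sub-second demos possible.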

 

User Experience: From Onboarding to Exam Readiness

Beyond its backend architecture, CertPrep places strong emphasis on clarity, transparency, and user trust through a well‑structured front‑end experience. The application is built with Streamlit and organized as a 7‑tab interactive interface, guiding learners step‑by‑step through their preparation journey.

From a user’s perspective, the flow looks like this:

  1. Profile & Goals Input
    Learners start by describing their background, experience level, and certification goals in natural language. The system immediately reflects how this input is interpreted by displaying the structured learner profile produced by the profiling agent.
  2. Learning Path & Study Plan Visualization
    Once generated, the study plan is presented using visual aids such as Gantt‑style timelines and domain breakdowns, making it easy to understand weekly milestones, expected effort, and progress over time.
  3. Progress Tracking & Readiness Scoring
    As learners move forward, the UI surfaces an exam‑weighted readiness score, combining domain coverage, study plan adherence, and assessment performance—helping users understand why the system considers them ready (or not yet).
  4. Assessments and Feedback
    Practice assessments are generated dynamically, and results are reported alongside actionable feedback rather than just raw scores.
  5. Transparent Recommendations
    Final recommendations are presented clearly, supported by reasoning traces and visual summaries, reinforcing trust and explainability in the agent’s decision‑making.

The UI also includes an Admin Dashboard and demo‑friendly modes, enabling judges, reviewers, or instructors to inspect reasoning traces, switch between live and mock execution, and demonstrate the system reliably without external dependencies.

 

Why This Project Stood Out

This project embodies the spirit of the Reasoning Agents track in several ways:

  • ✅ Clear separation of reasoning roles, instead of prompt‑heavy monoliths
  • ✅ Deterministic fallbacks and guardrails, critical for educational and decision‑support systems
  • ✅ Observable, debuggable workflows, aligned with Foundry’s production goals
  • ✅ Explainable outputs, surfaced directly in the UX

It demonstrates how agentic patterns translate cleanly into maintainable architectures when supported by the right platform abstractions.

 

Try It Yourself

Explore the project, architecture, and demo here:


⚡Foundry Toolkit for VS Code: A Deep Dive on GA


As we shared in the announcement, Microsoft Foundry Toolkit for Visual Studio Code is now generally available. In this deep dive, we walk through everything that’s in the GA release — from the rebrand and extension consolidation, to model experimentation, agent development, evaluations, and on-device AI for scientists and engineers pushing the boundaries of edge hardware.

Whether you’re exploring your first model, shipping a production agent, or squeezing performance from edge hardware, Foundry Toolkit meets you where you are.

🧪The Fastest Way to Start Experimenting with AI Models

You’ve heard about a new model and want to try it right now — not after spinning up infrastructure or writing boilerplate API code. That’s exactly what Microsoft Foundry Toolkit is built to deliver.

With a Model Catalog spanning 100+ models — cloud-hosted from GitHub, Microsoft Foundry, OpenAI, Anthropic, and Google, plus local models via ONNX, Foundry Local, or Ollama — you go from curiosity to testing in minutes.

The Model Playground is where experimentation lives: compare two models side by side, attach files for multimodal testing, enable web search, adjust system prompts, and watch streaming responses come in.

When something works, View Code generates ready-to-use snippets in Python, JavaScript, C#, or Java — the exact API call you just tested, translated into your language of choice and ready to paste.

🤖Building AI Agents: From Prototype to Production

Foundry Toolkit supports the full agent development journey with two distinct paths and a clean bridge between them.

Path A: The Prototyper: No Code Required

Agent Builder is a low-code interface that lets you take an idea, define instructions, attach tools, and start a conversation — all without writing a line of code. It’s the fastest way to validate whether an agent concept actually works. You can:

  • Write and refine instructions with the built-in Prompt Optimizer, which analyzes your instructions and suggests improvements
  • Connect tools from the Tool Catalog — browse tools from the Foundry public catalog or local MCP servers, configure them with a few clicks, and wire them into your agent
  • Configure MCP tool approval — decide whether tool calls need your sign-off or can run automatically
  • Switch between agents instantly with the quick switcher, and manage multiple agent drafts without losing work (auto-save has you covered)
  • Save to Foundry with a single click and manage your agents from there.

The result is a working, testable agent in minutes — perfect for validating use cases or prototyping features before investing in a full codebase.

Path B: The Professional Team: Code-First, Production-Ready

For teams building complex systems — multi-agent workflows, domain-specific orchestration, production deployments — code gives you control. Foundry Toolkit scaffolds production-ready code structures for Microsoft Agent Framework, LangGraph, and other popular orchestration frameworks. You’re not starting from scratch; you’re starting from a solid foundation.

Once your agent is running, Agent Inspector turns debugging from guesswork into real engineering:

  • Hit F5 to launch your agent with full VS Code debugger support — breakpoints, variable inspection, step-through execution
  • Watch real-time streaming responses, tool calls, and workflow graphs visualize as your agent runs
  • Double-click any node in the workflow visualization to jump straight to the source code behind it
  • Local tracing captures the full execution span tree across tool calls and delegation chains — no external infrastructure needed

When you’re ready to ship, one-click deployment packages your agent and deploys it to a production-grade runtime on Microsoft Foundry Agent Service as a hosted agent. The Hosted Agent Playground lets you test it directly from the VS Code sidebar, keeping the feedback loop tight.

The Bridge: From Prototype to Code, Seamlessly

These paths aren’t silos — they’re stages. When your Agent Builder prototype is ready to grow, export it directly to code with a single click. The generated project includes the agent’s instructions, tool configurations, and scaffolding — giving your engineering team a real starting point rather than a rewrite.

GitHub Copilot with the Microsoft Foundry Skill keeps momentum going once you’re in code. The skill knows the Agent Framework patterns, evaluation APIs, and Foundry deployment model. Ask it to generate an agent, write an evaluation, or scaffold a multi-agent workflow, and it produces code that works with the rest of the toolkit.

🎯Evaluations: Quality Built Into the Workflow

At every stage — prototype or production — integrated evaluations let you measure agent quality without switching tools. Define evaluations using familiar pytest syntax, run them from VS Code Test Explorer alongside your unit tests, and analyze results in a tabular view with Data Wrangler integration. When you need scale, submit the same definitions to run in Microsoft Foundry. Evaluations become versioned, repeatable, and CI-friendly — not one-off scripts you hope to remember.
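The article doesn't show the evaluation API itself, but "familiar pytest syntax" suggests evaluations in roughly this shape (the agent call and the checks below are stand-ins, not the toolkit's real API):

```python
# test_agent_quality.py: discovered by pytest or VS Code Test Explorer.

def my_agent(question: str) -> str:
    # Stand-in for a call to your deployed agent.
    return "Paris is the capital of France."

def test_answer_mentions_expected_fact():
    # A simple grounding check: the answer must contain the known fact.
    answer = my_agent("What is the capital of France?")
    assert "Paris" in answer

def test_answer_is_concise():
    # A cheap quality gate: keep answers under a word budget.
    answer = my_agent("What is the capital of France?")
    assert len(answer.split()) < 50
```

Because these are ordinary test functions, the same definitions can run locally, in CI, or (per the article) be submitted to Microsoft Foundry at scale.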

💻Unlock AI's Full Capabilities on Edge Devices

AI running on your device — at your pace, without data leaving your machine

 Cloud-hosted AI is convenient — but it's not always the right fit. Local models offer: 

  • Privacy and Compliance: Your data stays on your machine. No round-trips to a server. 
  • Cost control: Run as many inferences as you want — no per-token billing. 
  • Offline capability: Works anywhere, even without internet access. 
  • Hardware leverage: Modern Windows devices are built for local AI.

That's why we're bringing a complete end-to-end workflow for discovering, running, converting, profiling, and fine-tuning AI models directly on Windows. Whether you're a developer exploring what models can do, an engineer optimizing models for production, or a researcher training domain-specific model adapters, Foundry Toolkit gives you the tools to work with local AI without compromise. 

Model Playground: Try Any Local Model, Instantly 

As we mentioned at the beginning of this article, the Model Playground is your starting point — not only for cloud models but also for local models. It includes Microsoft’s full catalog of models, including the Phi open model family and Phi Silica — Microsoft’s local language model optimized for Windows. As you go deeper, the Playground also supports any model you’ve converted locally through the Conversion workflow — add it to My Resources and try it immediately in the same chat experience.

Model Conversion: From Hugging Face to Hardware-Ready on Windows 

Getting a model from a research checkpoint to something that runs efficiently on your specific hardware is non-trivial. Foundry Toolkit’s conversion pipeline handles the full transformation for a growing selection of popular Hugging Face models: Hugging Face → Conversion → Quantization → Evaluation → ONNX

The result: a model optimized for Windows ML — Microsoft’s unified runtime for local AI on Windows.

All supported hardware targets are aligned with Windows ML's execution provider ecosystem: 

  • MIGraphX (AMD) 
  • NvTensorRtRtx (NVIDIA) 
  • OpenVINO (Intel) 
  • QNN (Qualcomm) 
  • VitisAI (AMD) 

 

Why Windows ML matters for you: Windows ML lets your app automatically acquire and use hardware-specific EPs at runtime — no device-specific code required. Your converted model runs across the full range of supported Windows hardware.

Once your model has been converted successfully, Foundry Toolkit gives you everything you need to validate, share, and ship: 

  • Benchmark results: Every conversion run is automatically tracked in the History Board — giving you an easy way to validate accuracy, latency, and throughput across model variants before you ship. 
  • Sample code with Windows ML: Get ready-to-use code showing how to load and run inference on your converted model with the Windows ML runtime — no boilerplate hunting, just copy and go.
  • Quick Playground via GitHub Copilot: Ask GitHub Copilot to generate a playground web demo for your converted model. Instantly get an interactive experience to validate behavior before integrating into your application. 
  • Package as MSIX: Package your converted model into an MSIX installer. Share it with teammates or incorporate into your application. 

Profiling: See Exactly What Your Model Is Doing 

Converting a local model is one thing. Understanding how it uses your hardware is another. Foundry Toolkit’s profiling tools give you real-time visibility into CPU, GPU, NPU, and memory consumption — with per-second granularity and a 10-minute rolling window. 

Three profiling modes cover different workflows: 

  • Attach at startup — profile a model from the moment it loads 
  • Connect to a running process — attach to an already-running inference session 
  • Profile an ONNX model directly — the Toolkit feeds data to the model and runs performance measurements itself, no application or process needed

For example, when you run a local model in the Playground, you get detailed visibility into what's happening under the hood during inference — far beyond basic resource usage. Windows ML Event Breakdown surfaces how execution time is spent: a single model execution is broken down into phases — such as session initialization versus active inference — so you know whether slowness is a one-time startup cost or a per-request bottleneck. 

When you profile any ONNX model directly, operator-level tracing shows exactly which graph nodes and operators are dispatched to the NPU, CPU, or GPU, and how long each one takes. This makes it straightforward to identify which parts of your model are underutilizing available hardware — and where quantization, graph optimization, or EP changes will have the most impact. 

Fine-Tuning: Make Phi Silica Yours 

Generic models are capable; domain-specific models are precise. Foundry Toolkit's fine-tuning workflow uses LoRA (Low-Rank Adaptation) to let you train adapters for Phi Silica with your own data — no ML infrastructure required.

Bring your data, customize your LoRA parameters, and submit a job to the cloud. Foundry Toolkit spins up Azure Container Apps in your own subscription to train your adapter. To validate fine-tuning quality, the workflow tracks training and evaluation loss curves for your LoRA adapter, and cloud inference is available to validate the adapter’s behavior — helping you confirm learning progress and output quality before shipping.

Once satisfied, download the adapter and incorporate it into your app for use at runtime. 

This is the full loop: train in the cloud → run at the edge. Domain adaptation for local AI, without standing up your own training infrastructure. 
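The core idea behind a LoRA adapter fits in a few lines: the base weight matrix W stays frozen, and you train two small matrices B and A so the effective weight at runtime is W + BA. A pure-Python toy with tiny matrices, purely to show the shape of the update:

```python
def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def apply_lora(W, B, A, scale=1.0):
    """Effective weight = W + scale * (B @ A); only A and B were trained."""
    BA = matmul(B, A)
    return [[w + scale * d for w, d in zip(wr, dr)] for wr, dr in zip(W, BA)]

# The d x k base weights stay frozen; the adapter pair is d x r and r x k
# with rank r much smaller than d and k, so it is cheap to train and ship.
W = [[1.0, 0.0], [0.0, 1.0]]   # 2x2 frozen base weights
B = [[1.0], [0.0]]             # 2x1 adapter half (rank r = 1)
A = [[0.5, 0.5]]               # 1x2 adapter half
W_eff = apply_lora(W, B, A)    # -> [[1.5, 0.5], [0.0, 1.0]]
```

Shipping only B and A (a few megabytes instead of a full model) is why the train-in-the-cloud, run-at-the-edge loop is practical.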

🚀One Toolkit for Every Stage

Foundry Toolkit for VS Code GA supports every stage of serious AI development:

  • Explore 100+ models without commitment
  • Prototype agents in minutes with no code
  • Build production agents with real debugging, popular frameworks, and coding agent assistance
  • Deploy to Microsoft Foundry with one click and test without leaving VS Code
  • Measure quality with evaluations that fit your existing test workflows
  • Optimize models for specific hardware and use cases

All of it, inside VS Code. All of it, now generally available. Install Foundry Toolkit from the VS Code Marketplace →

Get Started with Hands-on Labs and Samples:

We'd love to hear what you build. Share feedback and file issues on GitHub, and join the broader conversation in the Microsoft Foundry Community.


Context Is Everything: Getting the Most from GitHub Copilot with Joydip Kanjilal


Strategic Technology Consultation Services

This episode of The Modern .NET Show is supported, in part, by RJJ Software's Strategic Technology Consultation Services. If you're an SME (Small to Medium Enterprise) leader wondering why your technology investments aren't delivering, or you're facing critical decisions about AI, modernization, or team productivity, let's talk.

Show Notes

"Artificial intelligence is nothing new. It enables machines to simulate human cognitive functions such as reasoning, learning, problem solving and all, using algorithms and vast data sets to recognise patterns. And then it makes predictions and performs, you know, language processing, image recognition, and all those stuff."— Joydip Kanjilal

Hey everyone, and welcome back to The Modern .NET Show; the premier .NET podcast, focusing entirely on the knowledge, tools, and frameworks that all .NET developers should have in their toolbox. I'm your host Jamie Taylor, bringing you conversations with the brightest minds in the .NET ecosystem.

Today, we're joined by Joydip Kanjilal to talk about GitHub Copilot, agentic workflows for developers, and the benefits (and drawbacks) of having an AI agent help you write code.

Note that I didn't say, "write all the code for you," because an AI agent is simply helping you to be more productive.

"You want to you know, convert, I mean uh migrate a legacy application to a modern-day enterprise application, there will be a lot of redundant code that you will otherwise have to write. So that all that code can be automatically generated by Copilot, provided you have provided the right context."— Joydip Kanjilal

Along the way, we talked about the importance of the context that you give to an AI agent, security best practises (spoiler: you wouldn't give a new junior the keys to the castle on day one, so do the same with your AI agents), and the most important things to remember when using AI agents.

Before we jump in, a quick reminder: if The Modern .NET Show has become part of your learning journey, please consider supporting us through Patreon or Buy Me A Coffee. Every contribution helps us continue bringing you these in-depth conversations with industry experts. You'll find all the links in the show notes.

Anyway, without further ado, let's sit back, open up a terminal, type in `dotnet new podcast` and we'll dive into the core of Modern .NET.

Full Show Notes

The full show notes, including links to some of the things we discussed and a full transcription of this episode, can be found at: https://dotnetcore.show/season-8/context-is-everything-getting-the-most-from-github-copilot-with-joydip-kanjilal/

Useful Links:

Supporting the show:

Getting in Touch:

Remember to rate and review the show on Apple Podcasts, Podchaser, or wherever you find your podcasts; this will help the show's audience grow. Or you can just share the show with a friend.

And don't forget to reach out via our Contact page. We're very interested in your opinion of the show, so please get in touch.

You can support the show by making a monthly donation on the show's Patreon page at: https://www.patreon.com/TheDotNetCorePodcast.

Music created by Mono Memory Music, licensed to RJJ Software for use in The Modern .NET Show.

Editing and post-production services for this episode were provided by MB Podcast Services.





Download audio: https://traffic.libsyn.com/clean/secure/thedotnetcorepodcast/817-JoydipKanjilal.mp3?dest-id=767916

Episode 568: Claude Code, OpenAI Drama, and Is Anyone Still Using Backstage?


This week, we discuss Claude's harness, OpenAI's ongoing drama, and whether Backstage is still a thing. Plus, Coté gives up on the green screen.

Watch the YouTube Live Recording of Episode 568

Runner-up Titles

  • It’s clearly an AI chair.
  • I’ve given up on the green screen
  • No animals, only buttholes
  • I don’t know if I can think on my own
  • Make your pecans really pop
  • Did they shut it down? I wasn’t paying attention
  • Always take the first offer for a billion dollars
  • Bad Parking
  • The Apple IIE had this problem
  • I was paying attention but I might have had a mouthful of cereal.

Rundown

Relevant to your Interests

Sponsors

Nonsense

Listener Feedback

Conferences

SDT News & Community

Recommendations





Download audio: https://aphid.fireside.fm/d/1437767933/9b74150b-3553-49dc-8332-f89bbbba9f92/2b72c9d0-76c5-4788-af76-fd58a5763e39.mp3

What’s New and Coming Next for Copilot and Teams


Microsoft is lining up a new wave of Copilot and Teams capabilities—features that are in preview, targeted release, or scheduled rollout over the coming weeks and months. Here’s what’s on the way, when to expect it, and why it matters.

Across Copilot and Teams, Microsoft is pushing from “AI assistant” toward “AI execution”—with more model options, more in-flow actions, smarter agents, richer meeting recaps, stronger controls for external participants, and better support for multilingual collaboration. Some of these experiences are already in limited rings, but many organizations will see them arrive as rollouts progress.

Here’s a selection of what’s coming. Keep in mind that Copilot and Teams (yes, Teams too) are evolving all the time, and there are many more updates coming to both.

  1. Claude Sonnet and Opus 4.7 Join the Copilot Model Lineup
    1. Claude Opus 4.7 in Microsoft 365 Copilot
  2. Draft and Send Outlook Emails Without Leaving the Copilot Chat
  3. Declarative Agents Upgraded to GPT-5.2
  4. Teams Will Identify External Bots Joining Your Meetings
  5. Teams Meetings Video Recaps
  6. Teams Is Adding Turn-by-Turn Interpretation
  7. Channel Agents Are Getting a Level-Up
  8. Copilot is Just Getting Started

Claude Sonnet and Opus 4.7 Join the Copilot Model Lineup

Microsoft is expanding model selection in Microsoft 365 Copilot by adding Anthropic Claude Sonnet alongside OpenAI’s GPT models. If you have a Copilot license, you’ll be able to choose Claude Sonnet from the model selector in Copilot Chat as this rolls out. I can already see it in my demo tenant.

Microsoft has indicated this is starting in Frontier and then expanding across web, desktop, macOS, and mobile.

If you don’t see it yet and want to try Claude in the meantime, you can use it in the Researcher Agent (provided Anthropic models are allowed in your environment).

Claude Opus 4.7 in Microsoft 365 Copilot

But this gets even better with Claude Opus 4.7! Microsoft is expanding model choice further: Anthropic’s Claude Opus 4.7 is now available in Microsoft 365 Copilot — specifically in Copilot Cowork (Frontier) and in Copilot Studio early-release environments — and it is also rolling out to Copilot in Excel.

Opus 4.7 is designed to be faster and more precise than the earlier Opus generation. It follows instructions more closely, checks its own outputs before responding, and reads images at higher resolution — so Copilot can interpret visual content with more detail and use Work IQ context to take action more precisely. It is also better at picking the right tool for the task, which matters a lot when you are running multi-step, agentic work.

You will find Opus 4.7 in the model selector in Copilot Cowork and Copilot Studio, and in the near future in Copilot in Excel. This is exactly the kind of cutting-edge, enterprise-grade model choice that makes Microsoft 365 Copilot — and Copilot Cowork especially — so powerful for real work.

Why this matters:

This isn’t just about having options—it’s about matching the right model to the right task. Claude Sonnet is known for strong performance in document analysis, nuanced reasoning, and content generation. Giving users the ability to pick their model based on the work they’re doing is a clear signal that Microsoft sees Copilot as a multi-model platform, not a single-model product.

There are governance implications too. Anthropic operates as a Microsoft subprocessor, and the models are excluded from EU Data Boundary commitments. In regions where Anthropic is set to “off by default,” admins will need to opt in. This is a good reminder that AI governance is now part of everyday IT administration.


Draft and Send Outlook Emails Without Leaving the Copilot Chat

One of the most practical capabilities on the roadmap: Copilot Chat will be able to draft, edit, and send Outlook emails without leaving the chat interface as this feature rolls out.

In tenants where it’s enabled, when Copilot detects email-writing intent it can open an embedded Outlook compose experience inside Copilot Chat. You can review and edit the content, modify recipients, and send—or open the draft in Outlook to continue there.

Microsoft has described this as a desktop-first (“big screen”) rollout with completion targets in early April. Availability depends on rollout ring, tenant configuration, and prerequisites (Microsoft 365 Copilot license and an Exchange Online mailbox).

Why this matters:

This is one of the clearest examples yet of Copilot shifting from assistant to execution surface. Instead of generating a draft that you copy and paste into Outlook, Copilot can now complete the entire workflow. Less context switching. More doing.

It’s also a sign of where this is going: Copilot as the place where work happens, not just where you ask questions about work.

Did you know you can already use Copilot Chat to book meetings?


Declarative Agents Upgraded to GPT-5.2

Microsoft 365 Copilot Declarative Agents have been upgraded to the GPT-5.2 model, which brings improvements in reasoning, multi-step workflows, tool calling, structured output generation, and document analysis.

Users may notice improvements in quality, accuracy, and formatting—along with slight behavioural differences due to the model change.

Why this matters:

If you’ve built Declarative Agents, this is a good time to test your top prompts and workflows before the change reaches all users. The upgrade is automatic, and there are no new admin controls, but Microsoft recommends validating key scenarios and using the thumbs-up/thumbs-down feedback with the tag #GPT52 to help improve detection of any issues.

More broadly, this update reinforces that agents are a core part of Microsoft’s Copilot strategy—and the platform is evolving quickly to support more complex, reliable agent behaviors.


Teams Will Identify External Bots Joining Your Meetings

Microsoft Teams is introducing a new capability to detect external meeting assistant bots as they attempt to join meetings hosted by your organization.

The feature specifically targets bots used for transcription, summarization, and other meeting assistance services. When detected, these bots will be clearly labelled in the meeting lobby experience.

The rollout begins in mid-May for Targeted Release and completes in mid-June for general availability (including GCC tenants).

Why this matters:

This is about visibility and control. Microsoft notes that some external bots can access meetings without the knowledge or consent of the organizer or hosting tenant, creating data security, privacy, and compliance risks.

This update gives organizers greater awareness and gives admins clear controls over how detected bots are handled. It’s a timely response to the growing ecosystem of third-party meeting bots—and a reminder that AI governance extends to who (and what) participates in your meetings.

There may still be bots that go undetected, so Microsoft is encouraging users to report them directly from the meeting to help improve detection over time.


Teams Meetings Video Recaps

Intelligent meeting recap in Teams is adding video-based recaps—narrated video highlights that showcase key takeaways and important moments from recorded meetings.

This feature is expected to reach general availability in April 2026.

Why this matters:

Text-based recaps are helpful, but video recaps are more consumable and more engaging, especially for people who couldn’t attend the meeting. This is a natural evolution of Teams’ intelligent recap capabilities toward multimodal summary experiences.

It’s also another example of how AI is making meetings more accessible after the fact—reducing the pressure to attend every meeting live and making it easier to stay aligned across time zones and schedules.


Teams Is Adding Turn-by-Turn Interpretation

Microsoft is planning a Consecutive Interpretation mode for the existing Interpreter agent in Teams.

Unlike simultaneous interpretation, this mode uses turn-by-turn interpretation: participants speak one at a time, and each speaker’s words are interpreted before the next speaker begins. It’s designed for structured, interactive multilingual meetings like working sessions, negotiations, and cross-functional collaboration.

The feature rolled out to Targeted Release in early March and will reach general availability between late April and early May.

Why this matters:

This is one of the most distinctive “future of AI meetings” stories in this set. It shows that Microsoft is thinking beyond captions and transcription and moving toward real-time, structured multilingual collaboration.

For global teams, this reduces overlap, improves interpretation accuracy, and helps everyone stay aligned—even when they don’t share a common language. It’s a practical example of AI enabling collaboration that would otherwise be slow, expensive, or impossible.


Channel Agents Are Getting a Level-Up

Microsoft is rolling out several updates to Channel Agent in Teams to make it more collaborative and more practical.

The updates include:

  • Customized welcome messages based on channel context
  • The ability to disable automatic Channel Agent creation during team or channel setup
  • Improved scheduling suggestions using updated calendar logic
  • The ability to add members to a channel with prompts like “@agent add John Doe to this channel”
  • The ability for Channel Agents to post messages directly to a channel

This feature has been rolling out to Targeted Release since early March and should be complete on every tenant around this time. A Microsoft 365 Copilot license is required.

Why this matters:

This is a strong example of agents becoming operational and collaborative, not just informational. Channel Agents are moving into the flow of teamwork—helping with scheduling, adding people, and posting updates without needing a human to do it manually.

It’s another signal that agents are becoming teammates, not just tools.


Copilot is Just Getting Started

If you step back and look at what’s in the rollout pipeline, a clear pattern emerges:

Copilot is becoming more powerful, more actionable, and more configurable. You can now choose your model, execute work directly in Copilot Chat, and rely on smarter, more capable agents.

Teams is becoming more intelligent, more secure, and more multilingual. You get richer recaps, better visibility into who’s joining your meetings, and real-time interpretation that makes global collaboration feel effortless.

And across both products, Microsoft is making it clear that AI isn't just about answering questions: it's about doing work, enabling collaboration, and making decisions. This article covered only upcoming updates and improvements to Copilot and AI in Teams, but I suggest you read my earlier article about Digital Workers to see where we're heading.

If you're managing Copilot or Teams in your organization, now is a good time to:

  • Review your governance settings, especially around model choice and external meeting bots
  • Test your Declarative Agents to make sure they’re performing well on GPT-5.2
  • Start thinking about how features like Copilot email drafting and Channel Agents change the way your teams work

The future of work is being built in real time. And it’s moving faster than most of us expected.



Read the whole story
alvinashcraft
1 minute ago
reply
Pennsylvania, USA
Share this story
Delete

LINQ Max and nullable value types


While working on a project for a customer, we came across a slight oddity of LINQ's Max operator when you use it with a value type. In some cases Max returns null when supplied with an empty list, and this can work even with value types (this overload returns int?, for example). But in some cases it will not do this, and will instead throw an exception if its input is empty. The reasons behind this are non-obvious and somewhat subtle, so I thought I'd write about it.

The Max operator offers a projection-based overload with this signature:

public static TResult? Max<TSource,TResult>(
    this IEnumerable<TSource> source,
    Func<TSource,TResult> selector);

This will iterate through the source, pass each item to the selector callback, and then return the highest of the values the callback returns.

Notice how although the selector function returns a TResult, the return type of Max itself is TResult?. That nullability is there to handle the case where the source enumerable is empty: in that case there is no maximum value (because there are no values at all) and that TResult? return type means Max can return null to indicate that.

But it goes a bit weird if the selector returns a value type. Suppose you've got this type (which is a reference type, but crucially, two of its properties use value types):

public record WithValues(string Label, int Number, DateTimeOffset Date);

First, let's verify that Max does what I've said with an empty list when the projection retrieves a reference type:

WithValues[] empty = [];
string? maxLabel = empty.Max(x => x.Label);
Console.WriteLine(maxLabel is null);

This prints out True, confirming that Max here does indeed return null to let us know that there was no maximum value. (The notion of a maximum string value raises the awkward fact that Max doesn't let you pass an IComparer<T> here, but let's ignore that for now.)

With that in mind, what type do you suppose maxDate has in this example?

WithValues[] empty = [];
var maxDate = empty.Max(d => d.Date);

If you look at the definition of Max you could correctly conclude that TSource here becomes WithValues and that TResult is DateTimeOffset. (And as we're doing all this type inference in our heads, we might reflect on whether using var here has really saved us any time and effort.) And since Max returns TResult? you might conclude that maxDate must be of type DateTimeOffset? (which is an alias for Nullable<DateTimeOffset>).

But that would be wrong. Here's exactly equivalent code using an explicit type declaration instead of var:

WithValues[] empty = [];
DateTimeOffset maxDate = empty.Max(d => d.Date);

It is now clear that maxDate's type is DateTimeOffset. If we were to try to declare it as a DateTimeOffset?, that would actually compile, but it wouldn't be equivalent to the var example: in the case where we use var, maxDate really does have the non-nullable DateTimeOffset type.

And if we do try to use DateTimeOffset?, it goes wrong. This compiles:

DateTimeOffset? maxDate = empty.Max(d => d.Date);
if (maxDate.HasValue)
{
    Console.WriteLine(maxDate.Value);
}
else
{
    Console.WriteLine("No dates found.");
}

but it only compiles without error because an implicit conversion is available from Max's return type of DateTimeOffset to the variable's type of DateTimeOffset?.

The most important thing to know about this code is that it will actually fail at runtime with an InvalidOperationException complaining "Sequence contains no elements"!

Earlier I linked to a non-generic overload of Max that returns an int? so you might think that this would work:

WithValues[] empty = [];
int? maxNumber = empty.Max(x => x.Number);

but this will also fail with an exception at runtime instead of returning null. In fact it ends up using a different overload that returns an int, and not the one that returns an int?.

So that's weird. The first two examples call the same single overload of Max, and yet it handles an empty list completely differently depending on whether our selector returns the Label or Date. (When it returns Number, we end up using the int-specific overload, but the fact remains that an empty list causes an exception when we select a value-typed property, but the method returns null when selecting a reference-typed property.) What's going on?

Well it turns out that this particular Max method actually has two different code paths, and it effectively uses this test to choose between them:

TResult val = default;
if (val == null)
...

If val == null, then it goes down the code path that returns null if the list is empty. If not, it goes down the path that throws an exception if the list is empty.

This is a deliberate design choice. If default(TResult) is something other than null—e.g. default(int) is 0—then there might be no way to tell the difference between an empty list, and a list where default(TResult) really was the maximum value. For example, in the list [-3,-2,-1,0], the maximum value is 0, so how could we distinguish between that case and the empty list case if we were getting back 0 in either case?
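That ambiguity is easy to see directly. In this sketch (the variable names are mine), a genuine maximum of 0 would be indistinguishable from an empty-list result of 0, which is exactly the gap the nullable-selector route closes:

```csharp
using System;
using System.Linq;

int[] numbers = [-3, -2, -1, 0];
int max = numbers.Max();  // a genuine maximum of 0

// If Max also returned 0 for an empty int[], callers couldn't tell
// these two situations apart. Lifting to int? makes "empty" distinct:
int[] empty = [];
int? maxOrNull = empty.Max(x => (int?)x);

Console.WriteLine(max);                // 0
Console.WriteLine(maxOrNull is null);  // True
```

With the int? selector, null unambiguously means "no elements", and 0 unambiguously means "the maximum really was 0".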

So there's a rationale for this behaviour, but it's not obvious that this one method can behave in two quite different ways. The documentation doesn't mention that this particular overload may throw an InvalidOperationException.

We can explore what that test will do with various types:

static void ShowNull<T>()
{
    T? val = default;
    Console.WriteLine(val == null);
}

ShowNull<string>();
ShowNull<string?>();
ShowNull<int>();
ShowNull<DateTimeOffset>();
ShowNull<int?>();
ShowNull<DateTimeOffset?>();

This prints out:

True
True
False
False
True
True

So this tells us that Max will consider TResult to be potentially nullable if it's a reference type like string, or if it's a nullable value type like int? or DateTimeOffset?. But plain value types like int and DateTimeOffset are considered not to be nullable.

That explains why using the x => x.Label lambda makes Max return null when the list is empty, while with d => d.Date or d => d.Number, it throws an exception. The first has a return type of string (a reference type) while the other two have non-nullable value-typed return types (DateTimeOffset and int).
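Putting both behaviours side by side makes the split obvious. This sketch replays the earlier examples in one program, catching the exception on the value-typed path:

```csharp
using System;
using System.Linq;

WithValues[] empty = [];

// Reference-typed selector: the nullable code path, returns null for empty input.
string? maxLabel = empty.Max(x => x.Label);
Console.WriteLine(maxLabel is null);  // True

// Value-typed selector: the throwing code path.
bool threw = false;
try
{
    empty.Max(d => d.Date);
}
catch (InvalidOperationException)
{
    threw = true;
}
Console.WriteLine(threw);  // True

public record WithValues(string Label, int Number, DateTimeOffset Date);
```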

But why does Max even have these two different code paths? It's perfectly possible for a method to return a DateTimeOffset?, so why does Max not do that here? If the argument for the type parameter TResult is DateTimeOffset, and the method declares a return type of TResult?, shouldn't that make the return type DateTimeOffset??

The reason it doesn't work out that way is because nullability handling for reference types was a bit of an afterthought in C#. (See my extensive series on nullable reference types for (a lot) more detail.)

In the beginning (C# 1.0) there were value types, which could not be null, and reference types, which were always capable of being null. There simply wasn't any concept of a value type being nullable, and nor was there any way to constrain a reference type to be non-null. This reflected the underlying reality of the .NET runtime's type system.

Then C# 2.0 added support for nullable value types, enabling us to write int?. But this was an entirely different way of representing nullability: an int? is really a Nullable<int>, and Nullable<T> essentially combines a value with a bool indicating whether the value is present. This is fundamentally different from how reference types like string represent null. (This is more of a library feature than a runtime feature. OK, strictly speaking there's some special handling for Nullable<T> when it comes to boxing and unboxing, but otherwise, this is mainly a language feature that doesn't directly reflect how the underlying runtime type system really works.) Although C# lets us write code that works with int? in ways that are (sometimes) similar to how we might work with a reference, the generated code is really quite different, and that causes challenges for generic code.

And finally, C# 8.0 introduced nullability annotations for reference types, so that now, we write string? if we mean a reference that might be null whereas string is (in theory) never null, in a way that is conceptually similar to the fact that an int can never be null.

But although we've ended up in a place where there are apparently two dimensions—value vs reference, and nullable vs non-nullable—the history of how we got here means these aren't truly independent. Nullability works very differently for values vs references in practice. And two of the four combinations (nullable values, and non-nullable references) aren't really first class citizens in the .NET type system. (A nullable value in a null state looks different from the null reference value. And a 'non-nullable' reference might in fact be null.)

And this difference tends to poke out from time to time with surprising behaviour like we're seeing with this Max operator. It would be completely reasonable to expect it to deal with the Label and Date properties in exactly the same way. But the history of nullability in .NET means it doesn't work in practice.

So how do we fix this? We can use this slightly ugly hack:

DateTimeOffset? maxDate = empty.Max(d => (DateTimeOffset?)d.Date);

That cast means that the lambda's type is now Func<WithValues, DateTimeOffset?>. (Before it was Func<WithValues, DateTimeOffset>, with a non-nullable return type.) Since default(DateTimeOffset?) == null, Max will select the code path that returns null when the input collection is empty. (It doesn't do that without this cast, because default(DateTimeOffset) is not null. It's a value representing midnight on the 1st January in the year 1, with a zero time zone offset.)

But what about that specialized (non-generic) overload of Max I linked to earlier that returns an int?? Well it turns out that it only comes into play when the selector also returns an int?. You get a different overload when the selector returns a plain int. So it ends up looking similar to the DateTimeOffset case (which used the generic overload). We need to cast to a nullable value:

int? maxNumber = empty.Max(x => (int?)x.Number);

So we can make it work how we want; it's just slightly messy. That's the reality of a 25+ year old language that has made two major changes to the nature of what it means to be null.
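If you hit this often, the cast trick can be packaged into a small extension method so call sites stay clean. This is my own sketch, not part of the BCL, and the name MaxOrNull is mine:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

WithValues[] empty = [];

// Returns null instead of throwing, with no cast needed at the call site.
DateTimeOffset? maxDate = empty.MaxOrNull(d => d.Date);
Console.WriteLine(maxDate is null);  // True

public record WithValues(string Label, int Number, DateTimeOffset Date);

public static class MaxExtensions
{
    // Lifts the selector's value-typed result to Nullable<TResult>,
    // steering Max onto its null-returning code path for empty input.
    public static TResult? MaxOrNull<TSource, TResult>(
        this IEnumerable<TSource> source,
        Func<TSource, TResult> selector)
        where TResult : struct
        => source.Max(x => (TResult?)selector(x));
}
```

The struct constraint restricts this helper to value-typed results, which is the only case where the cast trick is needed; reference-typed selectors already get the null-returning behaviour for free.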


