Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

The Microsoft Azure Outage Shows the Harsh Reality of Cloud Failures

As the second major cloud outage in less than two weeks, Azure's downtime highlights the “brittleness” of a digital ecosystem that depends on a few companies never making mistakes.

Building Multiagent Workflows With Microsoft AutoGen


Most AI implementations can feel like having a single, really smart intern: helpful, but limited to one perspective and prone to confidently wrong answers. But what if instead you could assemble a small team of AI specialists who actually debate with each other before giving you their final recommendation?

That’s the industry change happening right now. Organizations are moving beyond basic chatbots toward AI systems that can tackle real, complex problems. Microsoft’s AutoGen caught my attention because it makes this kind of multiagent collaboration surprisingly approachable.

I’ve been experimenting with AutoGen for a few months, and the difference is striking. Instead of hoping one model gets it right, you’re watching agents challenge each other’s reasoning, catch mistakes and build on ideas — kind of like overhearing a really productive brainstorming session.

This approach becomes essential when you’re dealing with messy, real-world problems: processing complex documents, synthesizing research from multiple sources or generating code that needs to actually work. Single agents often miss nuances or make assumptions that seem reasonable in isolation but fall apart under scrutiny.

Here’s how to build a simple but effective multiagent system. You’ll set up agents with different roles — one focused on generating ideas, another on critiquing them — and watch them work together to produce better results than either could achieve alone.

By the end, you’ll understand not just how to implement this technically, but why it represents a fundamental shift in the way we should think about AI in business contexts.

Why Multiagent Collaboration Matters

Traditional large language model (LLM) systems use one model to answer a query. But what if that model:

  • Needs context from prior answers?
  • Misses an important fact?
  • Could benefit from structured review?

Multiagent systems let you solve these problems by assigning roles:

  • One agent plans.
  • Another generates output.
  • A third critiques.
  • A user proxy collects and routes inputs.

Real-World Examples

  • Research copilot: Analyst agent searches documents, summarizer agent condenses, QA agent verifies facts.
  • Coding agent: User defines specs, builder agent generates code, critic agent tests or reviews it.

Step-By-Step Tutorial With AutoGen

1. Install Dependencies

pip install pyautogen openai

Note: It’s recommended to use a virtual environment such as venv or conda when installing Python packages. This helps manage dependencies effectively and prevents conflicts with other projects.

You’ll need an OpenAI API key and access to GPT-4 or GPT-3.5 Turbo. Store your key safely:

export OPENAI_API_KEY="your-openai-key"
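
If you want to verify the key from Python before wiring up any agents, here’s a minimal sketch (assuming the variable was exported as above):

import os

# Fail fast if the key isn't set before any agent is created.
api_key = os.environ.get("OPENAI_API_KEY")
if not api_key:
    raise RuntimeError("Set OPENAI_API_KEY before running the agents.")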

2. Create the Agent Configuration

AutoGen agents are configured using JSON-like dictionaries. You define:

  • Role (such as assistant, user, critic)
  • LLM settings
  • Behavioral flags (like auto-reply, enable feedback)

import os

llm_config = {
    "model": "gpt-4",
    "api_key": os.environ["OPENAI_API_KEY"],
    "temperature": 0.5
}
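
Note: Depending on your pyautogen version, the model settings may need to be wrapped in a config_list rather than passed at the top level. A minimal sketch of that variant:

# Equivalent configuration using the config_list convention.
llm_config = {
    "config_list": [
        {"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}
    ],
    "temperature": 0.5
}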

3. Create a User Proxy Agent

The user proxy acts as the bridge between a human user and the LLM agents. It routes messages and optionally injects prompts.

from autogen import UserProxyAgent

user_proxy = UserProxyAgent(
    name="user",
    human_input_mode="NEVER",  # run autonomously; never pause for human input
    code_execution_config={"work_dir": "./autogen_output"},
    llm_config=llm_config
)


Set human_input_mode="ALWAYS" if you want the user to be involved at each step. Otherwise, agents will operate automatically.
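
For example, a minimal sketch of an interactive variant that pauses for human input at each turn:

interactive_proxy = UserProxyAgent(
    name="user",
    human_input_mode="ALWAYS",  # prompt the human before each agent reply
    code_execution_config={"work_dir": "./autogen_output"},
    llm_config=llm_config
)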

4. Create a Task Assistant Agent

This agent handles the actual task (answer generation, coding, summarization).

from autogen import AssistantAgent

assistant = AssistantAgent(
    name="assistant",
    llm_config=llm_config
)

5. Create a Critic Agent (Optional but Powerful)

To improve quality, introduce a second agent to evaluate and refine assistant outputs.

critic = AssistantAgent(
    name="critic",
    system_message="You are a critic agent. Improve and verify the responses from other agents.",
    llm_config=llm_config
)

6. Set Up a Collaborative Workflow

AutoGen allows you to define group chats (multiturn dialogues between agents) or static message passing. Here’s a simple user-assistant dialogue:

user_proxy.initiate_chat(assistant, message="Explain the concept of transfer learning in machine learning.")


For critic involvement, chain multiple agents:

def critic_loop(user_proxy, assistant, critic):
    # Round 1: the user proxy asks the assistant to complete the task.
    user_proxy.initiate_chat(
        recipient=assistant,
        message="Write a Python function to calculate the Fibonacci sequence.",
        summary_method="last_msg"
    )
    # Round 2: the assistant hands its answer to the critic for review.
    assistant.initiate_chat(critic, message="Please review my previous response.")

critic_loop(user_proxy, assistant, critic)


This mimics real-world collaboration: One agent completes a task, another reviews and optionally routes output back to the user.
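
To close the loop, you can route the critic’s feedback back to the assistant for a revision pass. A minimal sketch, assuming the agents defined above (the revision prompt is illustrative):

def critic_loop_with_revision(user_proxy, assistant, critic, task):
    # Round 1: the assistant attempts the task.
    user_proxy.initiate_chat(recipient=assistant, message=task, summary_method="last_msg")
    # Round 2: the critic reviews the assistant's answer.
    review = assistant.initiate_chat(
        critic,
        message="Please review my previous response.",
        summary_method="last_msg"
    )
    # Round 3: the assistant revises its answer using the critique.
    user_proxy.initiate_chat(
        recipient=assistant,
        message=f"Revise your answer using this feedback: {review.summary}"
    )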

Optional: Multiagent Group Chat

AutoGen supports GroupChatManager, which orchestrates many agents in turn-based settings:

from autogen import GroupChat, GroupChatManager

groupchat = GroupChat(
    agents=[user_proxy, assistant, critic],
    messages=[],
    max_round=5
)

# The manager needs an llm_config so it can pick the next speaker each round.
manager = GroupChatManager(groupchat=groupchat, llm_config=llm_config)
user_proxy.initiate_chat(manager, message="Design a plan to teach AI in high schools.")


This structure lets each agent add value over multiple rounds until a consensus is formed.
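
If you’d rather not have an LLM choose who speaks next, GroupChat also accepts a speaker_selection_method parameter. A minimal sketch using deterministic turn-taking:

# Agents speak in listed order instead of being picked by the manager's LLM.
groupchat = GroupChat(
    agents=[user_proxy, assistant, critic],
    messages=[],
    max_round=6,
    speaker_selection_method="round_robin"
)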

What This Looks Like in Action

  • Prompt: “Design a marketing campaign for an eco-friendly toothpaste.”
  • Assistant agent response: Suggests a three-phase plan with social media and influencer outreach.
  • Critic agent: Flags that budget constraints weren’t considered.
  • Final answer: Refined with realistic costs and A/B testing.

You just built a self-improving, autonomous thought partnership.

Advanced Extensions

  • Tool-using agents: Configure agents to run Python code, query APIs or search the web.
  • Self-reflection: Let assistants review their own responses before sending.
  • Custom roles: Define agents like “planner,” “strategist,” “coder” or “tester.”

Example of a tool-invoking assistant:

AssistantAgent(..., code_execution_config={"use_docker": True})
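
Building on the custom-roles idea, here’s a minimal sketch of a planner/coder/tester trio, assuming the same llm_config from earlier (the system messages are illustrative):

planner = AssistantAgent(
    name="planner",
    system_message="You are a planner. Break the task into small, verifiable steps.",
    llm_config=llm_config
)
coder = AssistantAgent(
    name="coder",
    system_message="You are a coder. Implement the planner's steps in Python.",
    llm_config=llm_config
)
tester = AssistantAgent(
    name="tester",
    system_message="You are a tester. Write checks that expose bugs in the coder's output.",
    llm_config=llm_config
)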

Business and Technical Takeaways

Benefit | For Executives | For Developers
Modular agents | Delegate responsibilities to improve interpretability and trust. | Test and debug roles independently.
Dialogue-driven architecture | Mirrors human workflows for review and feedback. | Easy to replicate agile-style processes.
Production-ready | Works with OpenAI APIs and pluggable backends. | Great for experimentation and later deployment.

Final Thoughts

Working with a single AI model often feels like having a brilliant but isolated consultant: you get one perspective, take it or leave it. But what if you could assemble a whole team of AI specialists who actually talk to each other, debate ideas and catch each other’s mistakes?

That’s the promise of multiagent systems like Microsoft AutoGen. Instead of crossing your fingers and hoping one model gets it right, you’re orchestrating conversations between different AI agents, each bringing their own strengths to the table.

I’ve seen this approach solve problems that stumped single models. When agents can push back on each other’s reasoning, question assumptions and build on ideas collaboratively, the results are noticeably better. Less hallucination, more nuanced thinking and outputs that feel like they came from an actual team discussion.

The practical benefits are hard to ignore:

  • Decisions happen faster when you have multiple perspectives working in parallel.
  • Agents catch each other’s errors before they reach you.
  • You can create specialist agents for different domains, including legal, finance and marketing, and let them combine their expertise.

The organizations that figure this out early are going to have a serious advantage. We’re moving from “using AI” to “managing AI teams.” The companies that nail this transition will be the ones setting the pace.

The post Building Multiagent Workflows With Microsoft AutoGen appeared first on The New Stack.


Building the future together: Microsoft and NVIDIA announce AI advancements at GTC DC


Microsoft and NVIDIA are deepening our partnership to power the next wave of AI industrial innovation. For years, our companies have helped fuel the AI revolution, bringing the world’s most advanced supercomputing to the cloud, enabling breakthrough frontier models, and making AI more accessible to organizations everywhere. Today, we’re building on that foundation with new advancements that deliver greater performance, capability, and flexibility.

With added support for NVIDIA RTX PRO 6000 Blackwell Server Edition on Azure Local, customers can deploy AI and visual computing workloads in distributed and edge environments with the seamless orchestration and management you use in the cloud. New NVIDIA Nemotron and NVIDIA Cosmos models in Azure AI Foundry give businesses an enterprise-grade platform to build, deploy, and scale AI applications and agents. With NVIDIA Run:ai on Azure, enterprises can get more from every GPU to streamline operations and accelerate AI. Finally, Microsoft is redefining AI infrastructure with the world’s first deployment of NVIDIA GB300 NVL72.

Today’s announcements mark the next chapter in our full-stack AI collaboration with NVIDIA, empowering customers to build the future faster.

Expanding GPU support to Azure Local

Microsoft and NVIDIA continue to drive advancements in artificial intelligence, offering innovative solutions that span the public and private cloud, the edge, and sovereign environments.

As highlighted in the March blog post for NVIDIA GTC, Microsoft will offer NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs on Azure. Now, with expanded availability of NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs on Azure Local, organizations can optimize their AI workloads, regardless of location, to provide customers with greater flexibility and more options than ever. Azure Local leverages Azure Arc to empower organizations to run advanced AI workloads on-premises while retaining the management simplicity of the cloud or operating in fully disconnected environments. 

NVIDIA RTX PRO 6000 Blackwell GPUs provide the performance and flexibility needed to accelerate a broad range of use cases, from agentic AI, physical AI, and scientific computing to rendering, 3D graphics, digital twins, simulation, and visual computing. This expanded GPU support unlocks a range of edge use cases that fulfill the stringent requirements of critical infrastructure for our healthcare, retail, manufacturing, government, defense, and intelligence customers. This may include real-time video analytics for public safety, predictive maintenance in industrial settings, rapid medical diagnostics, and secure, low-latency inferencing for essential services such as energy production and critical infrastructure. The NVIDIA RTX PRO 6000 Blackwell enables improved virtual desktop support by leveraging NVIDIA vGPU technology and Multi-Instance GPU (MIG) capabilities. This can not only accommodate a higher user density, but also power AI-enhanced graphics and visual compute capabilities, offering an efficient solution for demanding virtual environments.

Earlier this year, Microsoft announced a multitude of AI capabilities at the edge, all enriched with NVIDIA accelerated computing:

  • Edge Retrieval Augmented Generation (RAG): Empower sovereign AI deployments with fast, secure, and scalable inferencing on local data—supporting mission-critical use cases across government, healthcare, and industrial automation.
  • Azure AI Video Indexer enabled by Azure Arc: Enables real-time and recorded video analytics in disconnected environments—ideal for public safety and critical infrastructure monitoring or post-event analysis.

With Azure Local, customers can meet strict regulatory, data residency, and privacy requirements while harnessing the latest AI innovations powered by NVIDIA.

Whether you need ultra-low latency for business continuity, robust local inferencing, or compliance with industry regulations, we’re dedicated to delivering cutting-edge AI performance wherever your data resides. Customers can now access the breakthrough performance of the NVIDIA RTX PRO 6000 Blackwell GPUs in new Azure Local solutions—including Dell AX-770, HPE ProLiant DL380 Gen12, and Lenovo ThinkAgile MX650a V4.

To find out more about upcoming availability and sign up for early ordering, visit: 

Powering the future of AI with new models on Azure AI Foundry

At Microsoft, we’re committed to bringing the most advanced AI capabilities to our customers, wherever they need them. Through our partnership with NVIDIA, Azure AI Foundry now brings world-class multimodal reasoning models directly to enterprises, deployable anywhere as secure, scalable NVIDIA NIM™ microservices. The portfolio spans a range of different use cases:

NVIDIA Nemotron Family: High accuracy open models and datasets for agentic AI

  • Llama Nemotron Nano VL 8B is available now and is tailored for multimodal vision-language tasks, document intelligence and understanding, and mobile and edge AI agents. 
  • NVIDIA Nemotron Nano 9B is available now and supports enterprise agents, scientific reasoning, advanced math, and coding for software engineering and tool calling. 
  • NVIDIA Llama 3.3 Nemotron Super 49B 1.5 is coming soon and is designed for enterprise agents, scientific reasoning, advanced math, and coding for software engineering and tool calling.

NVIDIA Cosmos Family: Open world foundation models for physical AI

  • Cosmos Reason-1 7B is available now and supports robotics planning and decision making, training data curation and annotation for autonomous vehicles, and video analytics AI agents extracting insights and performing root-cause analysis from video data.
  • NVIDIA Cosmos Predict 2.5 is coming soon and is a generalist model for world state generation and prediction. 
  • NVIDIA Cosmos Transfer 2.5 is coming soon and is designed for structural conditioning and physical AI.

Microsoft TRELLIS by Microsoft Research: High-quality 3D asset generation 

  • Microsoft TRELLIS by Microsoft Research is available now and enables digital twins by generating accurate 3D assets from simple prompts, immersive retail experiences with photorealistic product models for AR and virtual try-ons, and game and simulation development by turning creative ideas into production-ready 3D content.

Together, these open models reflect the depth of the Azure and NVIDIA partnership: combining Microsoft’s adaptive cloud with NVIDIA’s leadership in accelerated computing to power the next generation of agentic AI for every industry. Learn more about the models here.

Maximizing GPU utilization for enterprise AI with NVIDIA Run:ai on Azure

As an AI workload and GPU orchestration platform, NVIDIA Run:ai helps organizations make the most of their compute investments, accelerating AI development cycles and driving faster time-to-market for new insights and capabilities. By bringing NVIDIA Run:ai to Azure, we’re giving enterprises the ability to dynamically allocate, share, and manage GPU resources across teams and workloads, helping them get more from every GPU.

NVIDIA Run:ai on Azure integrates seamlessly with core Azure services, including Azure NC and ND series instances, Azure Kubernetes Service (AKS), and Azure Identity Management, and offers compatibility with Azure Machine Learning and Azure AI Foundry for unified, enterprise-ready AI orchestration. We’re bringing hybrid scale to life to help customers transform static infrastructure into a flexible, shared resource for AI innovation.

With smarter orchestration and cloud-ready GPU pooling, teams can drive faster innovation, reduce costs, and unleash the power of AI across their organizations with confidence. NVIDIA Run:ai on Azure enhances AKS with GPU-aware scheduling, helping teams allocate, share, and prioritize GPU resources more efficiently. Operations are streamlined with one-click job submission, automated queueing, and built in governance. This ensures teams spend less time managing infrastructure and more time focused on building what’s next. 

This impact spans industries, supporting the infrastructure and orchestration behind transformative AI workloads at every stage of enterprise growth: 

  • Healthcare organizations can use NVIDIA Run:ai on Azure to advance medical imaging analysis and drug discovery workloads across hybrid environments. 
  • Financial services organizations can orchestrate and scale GPU clusters for complex risk simulations and fraud detection models. 
  • Manufacturers can accelerate computer vision training models for improved quality control and predictive maintenance in their factories. 
  • Retail companies can power real-time recommendation systems for more personalized experiences through efficient GPU allocation and scaling, ultimately better serving their customers.

Powered by Microsoft Azure and NVIDIA, Run:ai is purpose-built for scale, helping enterprises move from isolated AI experimentation to production-grade innovation.

Reimagining AI at scale: First to deploy NVIDIA GB300 NVL72 supercomputing cluster

Microsoft is redefining AI infrastructure with the new NDv6 GB300 VM series, delivering the first at-scale production cluster of NVIDIA GB300 NVL72 systems, featuring over 4600 NVIDIA Blackwell Ultra GPUs connected via NVIDIA Quantum-X800 InfiniBand networking. Each NVIDIA GB300 NVL72 rack integrates 72 NVIDIA Blackwell Ultra GPUs and 36 NVIDIA Grace™ CPUs, delivering over 130 TB/s of NVLink bandwidth and up to 136 kW of compute power in a single cabinet. Designed for the most demanding workloads—reasoning models, agentic systems, and multimodal AI—GB300 NVL72 combines ultra-dense compute, direct liquid cooling, and smart rack-scale management to deliver breakthrough efficiency and performance within a standard datacenter footprint. 

Azure’s co-engineered infrastructure enhances GB300 NVL72 with technologies like Azure Boost for accelerated I/O and integrated hardware security modules (HSM) for enterprise-grade protection. Each rack arrives pre-integrated and self-managed, enabling rapid, repeatable deployment across Azure’s global fleet. As the first cloud provider to deploy NVIDIA GB300 NVL72 at scale, Microsoft is setting a new standard for AI supercomputing—empowering organizations to train and deploy frontier models faster, more efficiently, and more securely than ever before. Together, Azure and NVIDIA are powering the future of AI. 

Learn more about Microsoft’s systems approach in delivering GB300 NVL72 on Azure.

Unleashing the performance of ND GB200-v6 VMs with NVIDIA Dynamo 

Our collaboration with NVIDIA focuses on optimizing every layer of the computing stack to help customers maximize the value of their existing AI infrastructure investments. 

To deliver high-performance inference for compute-intensive reasoning models at scale, we’re bringing together a solution that combines the open-source NVIDIA Dynamo framework, our ND GB200-v6 VMs with NVIDIA GB200 NVL72, and Azure Kubernetes Service (AKS). We’ve demonstrated the performance this combined solution delivers at scale, with the gpt-oss 120b model processing 1.2 million tokens per second in a production-ready, managed AKS cluster, and have published a deployment guide for developers to get started today.

Dynamo is an open-source, distributed inference framework designed for multi-node environments and rack-scale accelerated compute architectures. By enabling disaggregated serving, LLM-aware routing and KV caching, Dynamo significantly boosts performance for reasoning models on Blackwell, unlocking up to 15x more throughput compared to the prior Hopper generation, opening new revenue opportunities for AI service providers. 

These efforts enable AKS production customers to take full advantage of NVIDIA Dynamo’s inference optimizations when deploying frontier reasoning models at scale. We’re dedicated to bringing the latest open-source software innovations to our customers, helping them fully realize the potential of the NVIDIA Blackwell platform on Azure.

Learn more about Dynamo on AKS.

Get more AI resources

The post Building the future together: Microsoft and NVIDIA announce AI advancements at GTC DC appeared first on Microsoft Azure Blog.


Introducing Agent HQ: Any agent, any way you work


The current AI landscape presents a challenge we’re all too familiar with: incredible power fragmented across different tools and interfaces. At GitHub, we’ve always worked to solve these kinds of systemic challenges—by making Git accessible, code review systematic with pull requests, and automating deployment with Actions.

With 180 million developers, GitHub is growing at its fastest rate ever—a new developer joining every second. What’s more, 80% of new developers are using Copilot in their first week. AI isn’t just a tool anymore; it’s an integral part of the development experience. Our responsibility is to ensure this new era of collaboration is powerful, secure, and seamlessly integrated into the workflow you already trust.

At GitHub Universe, we’re announcing Agent HQ, GitHub’s vision for the next evolution of our platform. Agents shouldn’t be bolted on. They should work the way you already work. That’s why we’re making agents native to the GitHub flow.

Agent HQ transforms GitHub into an open ecosystem that unites every agent on a single platform. Over the coming months, coding agents from Anthropic, OpenAI, Google, Cognition, xAI, and more will become available directly within GitHub as part of your paid GitHub Copilot subscription.

To bring this vision to life, we’re shipping a suite of new capabilities built on the primitives you trust. This starts with mission control, a single command center to assign, steer, and track the work of multiple agents from anywhere. It extends to VS Code with new ways to plan and customize agent behavior. And it is backed by enterprise-grade functionality: a new generation of agentic code review, a dedicated control plane to govern AI access and agent behavior, and a metrics dashboard to understand the impact of AI on your work.

We are also deeply committed to investing in our platform and strengthening the primitives you rely on every day. This new world of development is powered by that foundational work, and we look forward to sharing more updates.

Let’s dive in.

In this post:

  • GitHub is your Agent HQ: An open ecosystem for all agents
  • Mission control: Your command center, wherever you build
  • New in VS Code: Plan, customize, and connect
  • Increased confidence and control for your team

GitHub is your Agent HQ: An open ecosystem for all agents
The future is about giving you the power to orchestrate a fleet of specialized agents to perform complex tasks in parallel, not juggling a patchwork of disconnected tools or relying on a single agent. As the pioneer of asynchronous collaboration, we believe it’s our responsibility to make sure these next-generation async tools just work.

With Agent HQ, what’s not changing is just as important as what is. You’re still working with the primitives you know—Git, pull requests, issues—and using your preferred compute, whether that’s GitHub Actions or self-hosted runners. You’re accessing agents through your existing paid Copilot subscription.

On top of that foundation, we’re opening the doors to a new world of capability. Over the coming months, coding agents from Anthropic, OpenAI, Google, Cognition, and xAI will be available on GitHub as part of your paid GitHub Copilot subscription.

Don’t want to wait? Starting this week, Copilot Pro+ users can begin working with OpenAI Codex in VS Code Insiders, the first of our partner agents to extend beyond its native surfaces and directly into the editor.

‘Our collaboration with GitHub has always pushed the frontier of how developers build software. The first Codex model helped power Copilot and inspired a new generation of AI-assisted coding. We share GitHub’s vision of meeting developers wherever they work, and we’re excited to bring Codex to millions more developers who use GitHub and VS Code, extending the power of Codex everywhere code gets written.’

  • Alexander Embiricos, Codex Product Lead, OpenAI

‘We’re partnering with GitHub to bring Claude even closer to how teams build software. With Agent HQ, Claude can pick up issues, create branches, commit code, and respond to pull requests, working alongside your team like any other collaborator. This is how we think the future of development works: agents and developers building together, on the infrastructure you already trust.’

  • Mike Krieger, Chief Product Officer, Anthropic

‘The best developer tools fit seamlessly into your workflow, helping you stay focused and move faster. With Agent HQ, Jules becomes a native assignee, streamlining manual steps and reducing friction in everyday development. This deeper integration with GitHub brings agents closer to where developers already work, making collaboration more natural and efficient.’

  • Kathy Korevec, Director of Product at Google Labs

Mission control: Your command center, wherever you build
The power of Agent HQ comes from mission control, a unified command center that follows you wherever you work. It’s not a single destination; it’s a consistent interface across GitHub, VS Code, mobile, and the CLI that lets you direct, monitor, and manage every AI-driven task. With mission control, you can choose from a fleet of agents, assign them work in parallel, and track their progress from any device.

We’re also providing:

  • New branch controls that give you granular oversight over when to run CI and other checks for agent-created code.
  • Identity features to control which agent is building a task, and to manage access and policies just like you would for any other developer on your team.
  • One-click merge conflict resolution, improved file navigation, and better code commenting capabilities.
  • New integrations for Slack and Linear, on top of our recently announced connections for Atlassian Jira, Microsoft Teams and Azure Boards, and Raycast.
Try mission control today.

New in VS Code: Plan, customize, and connect
Mission control is in VS Code, too, so you’ve got a single view of all your agents running in VS Code, in the Copilot CLI, or on GitHub.

Today’s brand new release in VS Code is all about working alongside agents on projects, and it’s not surprising that great results start with a great plan. Getting the context right before a project is critical, but that same context needs to carry through into the work. Copilot already adapts to the way your team works by learning from your files and your project’s culture, but sometimes you need more pointed context.

So today, we’re introducing Plan Mode, which works with Copilot and asks clarifying questions along the way to help you build a step-by-step approach for your task. Providing the context upfront improves what Copilot can do and helps you find gaps, missing decisions, or project deficiencies early in the process—before any code is written. Once you approve, your plan goes to Copilot to start implementing, whether that’s locally in VS Code or using an agent in the cloud.

For even finer control, you can now create custom agents in VS Code with AGENTS.md files, source-controlled documents that let you set clear rules and guardrails such as “prefer this logger” or “use table-driven tests for all handlers.” This shapes Copilot’s behavior without you re-prompting it every time.

Now you can rely on the new GitHub MCP Registry, available directly in VS Code. VS Code is the only editor that supports the full MCP specification. Discover, install, and enable MCP servers like Stripe, Figma, Sentry, and others, with a single click. When your task calls for a specialist, create custom agents in GitHub Copilot with their own system prompt and tools to help you define the ways you want Copilot to work.

Increased confidence and control for your team
Agent HQ doesn’t just give you more power—it gives you confidence. Ensuring code quality, understanding AI’s influence on your workflow, and maintaining control over how AI interacts with your codebase and organization are essential for your team’s success, and we’re tackling these challenges head-on.

When it comes to code quality, the core problem is that “LGTM” doesn’t always mean “the code is healthy.” A review can pass, but can still degrade the codebase and quickly become long-term technical debt. With GitHub Code Quality, in public preview today, you’ve got org-wide visibility, governance, and reporting to systematically improve code maintainability, reliability, and test coverage across every repository. Enabling it extends Copilot’s security checks to look at the maintainability and reliability impact of the code that’s been changed.

And we’ve added a code review step into the Copilot coding agent’s workflow, too, so Copilot gets an initial first-line review and addresses problems (before you even see the code).

[Screenshot: GitHub Code Quality showing the results of Copilot’s review.]
As an organization, you need to know how Copilot is being used. So today, we’re announcing the public preview of the Copilot metrics dashboard, showing Copilot’s impact and critical usage metrics across your entire organization.

For enterprise administrators who are managing AI access, including AI agents and MCP, we’re focused on providing consistent AI controls for teams with the control plane—your agent governance layer. Set security policies, audit logging, and manage access all in one place. Enterprise admins can also control which agents are allowed, define access to models, and obtain metrics about the Copilot usage in your organization.

For developers, by developers
We built Agent HQ because we’re developers, too. We know what it’s like when it feels like your tools are fighting you instead of helping you. When “AI-powered” ends up meaning more context-switching, more babysitting, more subscriptions, and more time explaining what you need to get the value you were promised.

That ends today.

Agent HQ isn’t about the hype of AI. It’s about the reality of shipping code. It’s about bringing order and governance to this new era without compromising choice. It’s about giving you the power to build faster, with more confidence, and on your terms.

Welcome home. Let’s build.

The post Introducing Agent HQ: Any agent, any way you work appeared first on The GitHub Blog.


Episode 120: The Zero Trust Workshop (and so much more!)

1 Share

In this episode Michael talks with guest Merill Fernando about the Zero Trust Workshop, but we also spend time talking about all things identity! Merill's final thought is pure gold, too!

The only bit of news is about Azure SQL DB and TDE key management during restore.

Download audio: https://content.rss.com/episodes/8411/2297382/azsecpodcast/2025_10_29_15_58_22_0c9eb163-916f-4513-aee0-a3393c93ee57.mp3

Migrating from IDCRL authentication to modern authentication in SharePoint


In the next few months, Microsoft will be removing the legacy authentication protocol known as IDCRL (Identity Client Run Time Library) in SharePoint, forcing calls to rely on the OpenID Connect and OAuth protocols. Since 2018, regular sign-in for services such as OneDrive and SharePoint has relied on the more secure OpenID Connect and OAuth protocols; going forward, we will enforce this for all authentication calls.

How to access telemetry to check whether your organization has made any IDCRL calls

To access telemetry, log into the Microsoft Purview portal. Typically, users with the following roles/permissions are allowed access to this portal:

  • Global Administrator
  • Compliance Administrator
  • Purview Compliance Administrator
  • Role Management Role (to manage role groups)
  • Collection Admin, Data Curator, or Data Reader for governance tasks.

Go to https://compliance.microsoft.com or https://purview.microsoft.com depending on your organization’s configuration. Use your organizational account (not a personal Microsoft account).

Once logged in, navigate to “Audit” in the left-hand panel. Then, under “Activities – operations name,” select “IDCRLSuccessSignIn” as shown in screenshot 1. Set other filters as required and click Search.

[Screenshot 1: selecting “IDCRLSuccessSignIn” in the Audit search filters]

You will get telemetry output similar to screenshots 2 and 3, depending upon the information available.

[Screenshot 2: sample telemetry output]

[Screenshot 3: sample telemetry output]

How to mitigate IDCRL calls and move to modern authentication

Definition of IDCRL calls: There are two categories of calls referred to as IDCRL calls:

  1. Category 1: Any use of the SharePointOnlineCredentials library within your codebase is an indication of IDCRL calls. This library leverages the IDCRL protocol under the hood for authentication with SharePoint Online; if your application or script is calling SharePointOnlineCredentials, it is using IDCRL authentication.
  2. Category 2: All calls to the following endpoints use the IDCRL protocol:

https://login.microsoftonline.com/rst2.srf (used to obtain the SAML BinarySecurityToken)
https://TENANT.sharepoint.com/_vti_bin/idcrl.svc (used to exchange the BinarySecurityToken for the SPOIDCRL cookie)

Users should consider moving to modern auth protocols by using the Microsoft Authentication Library (MSAL) for OAuth, as this will ensure safe and secure continuity for both Category 1 and Category 2 calls mentioned above. MSAL provides methods to acquire security tokens that can be used to authenticate apps and scripts.
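
As an illustration, here is a minimal sketch of acquiring an app-only token with the MSAL library for Python. All identifiers are placeholders for your own Entra app registration, and note that SharePoint app-only access generally requires a certificate credential rather than a client secret:

import msal

# Placeholder values: replace with your Entra app registration details.
app = msal.ConfidentialClientApplication(
    client_id="YOUR-CLIENT-ID",
    authority="https://login.microsoftonline.com/YOUR-TENANT-ID",
    # SharePoint app-only calls typically require certificate credentials.
    client_credential={
        "private_key": open("key.pem").read(),
        "thumbprint": "YOUR-CERT-THUMBPRINT",
    },
)

# Request an app-only token scoped to your SharePoint tenant.
result = app.acquire_token_for_client(
    scopes=["https://TENANT.sharepoint.com/.default"]
)

if "access_token" in result:
    # Send this as a Bearer token on SharePoint calls instead of an SPOIDCRL cookie.
    token = result["access_token"]
else:
    raise RuntimeError(result.get("error_description", "Token acquisition failed"))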

For comprehensive technical guidance on migrating to modern authentication, please consult this resource: Using modern authentication with CSOM for .NET Standard.

Additionally, your application must be registered in Microsoft Entra to acquire access tokens. For detailed steps on app registration, see: Configuring an application in Azure AD.

For more details and context on MSAL and OAuth, please visit the following links:

 

Alternative fix for Category 1 calls (mentioned above): If your application or script leverages the “SharePointOnlineCredentials” library (from the Microsoft.SharePointOnline.CSOM NuGet package), an upcoming release of the package will give the option of transitioning from IDCRL to the modern authentication protocol. Organizations can use the overloaded constructor SharePointOnlineCredentials(username, password, useModernAuth: true) to upgrade to OAuth. The upgraded NuGet package will be released soon and will be available from NuGet Gallery | Microsoft.SharePointOnline.CSOM

In some rare cases, calls are made directly to the GetAuthenticationCookie method to acquire cookies. This method is being deprecated in the newer version of the NuGet package and will be replaced by the AcquireTokenAsync method, which acquires an OAuth token.

Additionally, if you have enabled multifactor authentication in your tenant, you need to pass a fourth parameter to the SharePointOnlineCredentials constructor, i.e., (username, password, useModernAuth: true, interactiveAuth: true). The fourth parameter, interactiveAuth: true, allows for user intervention and enables interactive auth for the tenant.

If admins don’t want user intervention, they can use app-only authentication when registering their app in Entra, as mentioned above.

 

The post Migrating from IDCRL authentication to modern authentication in SharePoint appeared first on Microsoft 365 Developer Blog.
