Over the past year, I have had conversations with security leaders across a variety of disciplines, and the energy around AI is undeniable. Organizations are moving fast, and security teams are rising to meet the moment. Time and again, the question comes back to the same thing: “We’re adopting AI fast; how do we make sure our security keeps pace?”
It’s the right question, and it’s the one we’ve been working to answer by updating the tools and guidance you already rely on. We’re announcing Microsoft’s approach to Zero Trust for AI (ZT4AI). Zero Trust for AI extends proven Zero Trust principles to the full AI lifecycle—from data ingestion and model training to deployment and agent behavior. Today, we’re releasing a new set of tools and guidance to help you move forward with confidence:
- A new AI pillar in the Zero Trust Workshop.
- Updated Data and Networking pillars in the Zero Trust Assessment tool.
- A new Zero Trust reference architecture for AI.
- Practical patterns and practices for securing AI at scale.
Here’s what’s new and how to use it.
Why Zero Trust principles must extend to AI
AI systems don’t fit neatly into traditional security models. They introduce new trust boundaries—between users and agents, models and data, and humans and automated decision-making. As organizations adopt autonomous and semi-autonomous AI agents, a new class of risk emerges: agents that are overprivileged, manipulated, or misaligned can act like “double agents,” working against the very outcomes they were built to support.
We apply three foundational principles of Zero Trust to AI:
- Verify explicitly—Continuously evaluate the identity and behavior of AI agents, workloads, and users.
- Apply least privilege—Restrict access to models, prompts, plugins, and data sources to only what’s needed.
- Assume breach—Design AI systems to be resilient to prompt injection, data poisoning, and lateral movement.
These aren’t new principles. What’s new is how we apply them systematically to AI environments.
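As a concrete illustration, the three principles above can be sketched as a gatekeeper in front of an agent's tool calls. This is a minimal, hypothetical example (the function and claim names are assumptions, not a Microsoft API): every call re-verifies the agent's identity claims, tool access is deny-by-default against an explicit allowlist, and failures raise rather than proceed.

```python
# Hypothetical sketch of Zero Trust checks on an AI agent's tool access.
# Names (ALLOWED_TOOLS, claim fields) are illustrative assumptions.

ALLOWED_TOOLS = {"search_docs", "summarize"}  # least privilege: explicit allowlist


def verify_agent(agent_id: str, token_claims: dict) -> bool:
    """Verify explicitly: check identity claims on every call, not once at startup."""
    return (token_claims.get("sub") == agent_id
            and token_claims.get("scope") == "agent.invoke")


def invoke_tool(agent_id: str, token_claims: dict, tool: str, args: dict) -> dict:
    if not verify_agent(agent_id, token_claims):
        raise PermissionError("agent identity could not be verified")
    if tool not in ALLOWED_TOOLS:
        # Assume breach: deny anything outside the grant by default.
        raise PermissionError(f"tool {tool!r} is outside the agent's grant")
    return {"tool": tool, "args": args, "status": "dispatched"}
```

The key design choice is that authorization is evaluated per call, so a manipulated or misaligned agent cannot quietly expand its reach between checks.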
A unified journey: Strategy → assessment → implementation
The most common challenge we hear from security leaders and practitioners is a lack of a clear, structured path from knowing what to do to doing it. That’s what Microsoft’s approach to Zero Trust for AI is designed to solve—to help you get to next steps and actions, quickly.
Zero Trust Workshop—now with an AI pillar
Building on last year’s announcement, the Zero Trust Workshop has been updated with a dedicated AI pillar, now covering 700 security controls across 116 logical groups and 33 functional swim lanes. It is scenario-based and prescriptive, designed to move teams from assessment to execution with clarity and speed.
The workshop helps organizations:
- Align security, IT, and business stakeholders on shared outcomes.
- Apply Zero Trust principles across all pillars, including AI.
- Explore real-world AI scenarios and the specific risks they introduce.
- Identify cross-product integrations that break down silos and drive measurable progress.

The new AI pillar specifically evaluates how organizations secure AI access and agent identities, protect sensitive data used by and generated through AI, monitor AI usage and behavior across the enterprise, and govern AI responsibly in alignment with risk and compliance objectives.
Zero Trust Assessment—expanded to Data and Networking
As AI agents become more capable, the stakes around data and network security have never been higher. Agents that are insufficiently governed can expose sensitive data, act on malicious prompts, or leak information in ways that are difficult to detect and costly to remediate. Data classification, labeling, governance, and loss prevention are essential controls. So are network-layer defenses that inspect agent behavior, block prompt injections, and prevent unauthorized data exposure.
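To make the prompt-injection control concrete, here is a deliberately simple heuristic filter. It is an illustrative sketch only (the patterns and function name are assumptions): production defenses layer classifiers, content policies, and network-layer inspection rather than relying on regular expressions.

```python
import re

# Illustrative patterns only; real defenses use trained classifiers
# and policy engines, not a short regex list.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal (the )?system prompt",
    r"exfiltrate",
]


def looks_like_injection(text: str) -> bool:
    """Flag text that resembles a known prompt-injection phrasing."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)
```

A filter like this would sit at the trust boundary between untrusted input (documents, web content, user messages) and the agent's instruction stream.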
Yet, manually evaluating security configurations across identity, endpoints, data, and network controls is time consuming and error prone. That is why we built the Zero Trust Assessment to automate it. The Zero Trust Assessment evaluates hundreds of controls aligned to Zero Trust principles, informed by learnings from Microsoft’s Secure Future Initiative (SFI). Today, we are adding Data and Network as new pillars alongside the existing Identity and Devices coverage.

Zero Trust Assessment tests are derived from trusted industry sources including:
- Industry standards such as the National Institute of Standards and Technology (NIST), the Cybersecurity and Infrastructure Security Agency (CISA), and the Center for Internet Security (CIS).
- Microsoft’s own learnings from SFI.
- Real-world customer insights from thousands of security implementations.
And we are not stopping here. A Zero Trust Assessment for AI pillar is currently in development and will be available in summer 2026, extending automated evaluation to AI-specific scenarios and controls.
Overall, the redesigned experience delivers:
- Clearer insights—Simplified views that help teams quickly identify strengths, gaps, and next steps.
- Deeper alignment with the Workshop—Assessment insights directly inform workshop discussions, exercises, and deployment paths.
- Actionable, prioritized recommendations—Concrete implementation steps mapped to maturity levels, so you can sequence improvements over time.
Zero Trust for AI reference architecture
Our new Zero Trust for AI reference architecture (which extends our existing Zero Trust reference architecture) shows how policy-driven access controls, continuous verification, monitoring, and governance work together to secure AI systems, while increasing resilience when incidents occur.

The architecture gives security, IT, and engineering teams a shared mental model by clarifying where controls apply, how trust boundaries shift with AI, and why defense-in-depth remains essential for agentic workloads.
Practical patterns and practices for AI security
Knowing what to do is one thing. Knowing how to operationalize it at scale is another. Our patterns and practices provide repeatable, proven approaches to the most complex AI security challenges, much like software design patterns offer reusable solutions to common engineering problems.
| Pattern | What it helps you do |
|---|---|
| Threat modeling for AI | Understand why traditional threat modeling breaks down for AI—and how to redesign it for real-world risk at AI scale. |
| AI observability | End-to-end logging, traceability, and monitoring to enable oversight, incident response, and trust at scale. |
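The AI observability pattern above can be sketched as one structured trace record per agent action, so incidents can be reconstructed end to end. This is a minimal assumption-laden example (field names and the helper are illustrative, not a product schema):

```python
import json
import time
import uuid


def trace_agent_action(agent_id: str, action: str, inputs: dict, output: str) -> str:
    """Emit one structured, correlatable trace record per agent action.

    Field names here are illustrative; real deployments would align them
    with their logging pipeline's schema.
    """
    record = {
        "trace_id": str(uuid.uuid4()),   # correlation ID across the call chain
        "timestamp": time.time(),
        "agent_id": agent_id,
        "action": action,
        "inputs": inputs,
        "output_preview": output[:200],  # truncate to limit sensitive spill
    }
    return json.dumps(record)
```

Truncating outputs and keying every record to a correlation ID are the two choices that matter most here: the first limits what a log leak exposes, the second makes cross-system incident reconstruction tractable.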
See it live at RSAC 2026
If you’re attending RSAC™ 2026 Conference, join us for three sessions focused on Zero Trust for AI—from expanding attack surfaces to hands-on, actionable guidance.
| When | Session | Title |
|---|---|---|
| Wednesday, March 25, 2026, 11:00 AM PT-11:20 AM PT | Zero Trust Theatre Session, by Tarek Dawoud (Principal Group Product Manager, Microsoft Security) and Hammad Rajjoub (Director, Microsoft Secure Future Initiative and Zero Trust) | Zero Trust for AI: Securing the Expanding Attack Surface |
| Wednesday, March 25, 2026, 12:00 PM PT-1:00 PM PT | Ancillary Executive Session, by Travis Gross (Principal Group Product Manager, Microsoft Security), Eric Sachs (Corporate Vice President, Microsoft Security), and Marco Pietro (Executive Vice President, Global Head of Cybersecurity, Capgemini), moderated by Mia Reyes (Director of Security, Microsoft) | Building Trust for a Secure Future: From Zero Trust to AI Confidence |
| Thursday, March 26, 2026, 11:00 AM PT-12:00 PM PT | RSAC Post-Day Workshop, by Travis Gross, Tarek Dawoud, and Hammad Rajjoub | Zero Trust, SFI, and ZT4AI: Practical, actionable guidance for CISOs |
Get started with Zero Trust for AI
Zero Trust for AI brings proven security principles to the realities of modern AI. Whether you’re governing agents, protecting models and data, or scaling AI without introducing new risk, the tools, architecture, and guidance are ready for you today.
Get started:
- Explore Microsoft’s unique approach to Zero Trust (including Zero Trust for AI).
- Implement your Zero Trust architecture for AI your way—self-service, through a trusted partner, or directly with Microsoft.
- Execute the Zero Trust Workshop to move from strategy to execution with scenario-based, prescriptive guidance.
- Assess your Zero Trust posture and explore the new Data and Network pillars with the Zero Trust Assessment tool.
To continue the conversation, join the Microsoft Security Community, where security practitioners and Microsoft experts share insights, guidance, and real-world experiences across Zero Trust and AI security.
Learn more about Microsoft Security solutions on our website and bookmark the Microsoft Security blog for expert insights on security matters. Follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest cybersecurity news and updates.
The post New tools and guidance: Announcing Zero Trust for AI appeared first on Microsoft Security Blog.

