Why Use Microsoft’s Zero Trust Assessment?


If you’ve been working on “doing Zero Trust” for a while, you’ve probably hit the same wall I see everywhere: lots of guidance and checklists, but very little that tells you how your tenant is actually configured today.

That’s precisely where Microsoft’s Zero Trust Assessment comes in.

Below, I’ll break it down into two parts, in a practical, admin-friendly way:

  • What the Zero Trust Assessment is.
  • Why you would run it.

What is the Microsoft Zero Trust Assessment?

At a simple level, the Zero Trust Assessment is a PowerShell-based, automated posture scan for your Microsoft cloud environment. It checks hundreds of configuration items across Microsoft Entra and Intune and compares them to Microsoft’s recommended security baselines, aligned with:

  • The Zero Trust pillars (identities, devices, data, apps, infrastructure, networks, visibility/automation)
  • Microsoft’s Secure Future Initiative (SFI), the company’s internal push to raise the security bar across products and operations

The key points:

  • It runs as a PowerShell 7 module called ZeroTrustAssessment.
  • It connects to your tenant using Microsoft Graph and (optionally) Azure.
  • It is read-only: no changes are made to your tenant configuration.
  • It generates a local HTML report that summarizes your Zero Trust posture, including detailed tests, risk levels, and remediation guidance.

Think of it as a repeatable health check that sits between the “marketing deck” version of Zero Trust and the “click every blade in the portal” reality.

Instead of manually walking through every Intune setting, Conditional Access policy, or identity protection control, the assessment automates that review and presents the findings in a structured report, mapped back to Zero Trust concepts.

Why would you run a Zero Trust Assessment?

You don’t run this just to tick a box. You run it to get clarity. Here’s how I’d frame the “why” when talking to stakeholders.

1. Establish a real baseline for your Zero Trust journey

Most organizations say they’re “on the Zero Trust journey,” but when you ask, “What’s our current maturity?” the answers are vague.

Microsoft provides several assessment and progress-tracking resources for Zero Trust, including posture assessments, workshops, and trackers that help you understand where you are and how you’re improving over time.

The Zero Trust Assessment gives you that missing piece:

A defensible, evidence-based baseline of your current configuration.

That baseline is what you’ll use to:

  • Prioritize which gaps to close first
  • Show progress to leadership over time
  • Align technical work with Zero Trust adoption frameworks and business scenarios

2. Reduce manual, error-prone config reviews

Microsoft publishes extensive guidance on configuring Entra ID and Intune securely, but manually validating every recommendation against your tenant isn’t realistic at scale. The tool’s own overview states that manual checks are time-consuming and error-prone, and that the assessment automates that process.

Instead of:

  • Clicking through one Conditional Access policy after another
  • Exporting device compliance reports
  • Manually checking MFA, passwordless, sign-in risk, etc.

the assessment does that heavy lifting and maps the findings back to the Zero Trust and SFI pillars.

3. Turn Zero Trust from vague strategy into concrete work

Zero Trust guidance is great for strategy decks, but engineers need something far more concrete:

  • Which settings are wrong or missing
  • Why they matter in a Zero Trust model
  • Exactly what to change

The Zero Trust Assessment report includes:

  • A high-level Overview
  • Detailed Identity and Devices tabs listing each test, risk level, and status
  • Per-test details with descriptions and recommended remediation actions

That is the bridge between architecture and operations: you can hand specific findings to specific teams and say, “Fix these 15 items in this sprint.”

4. Support audits, compliance, and executive reporting

Many organizations are using Zero Trust not just as a technical model, but also to meet regulatory and compliance expectations (e.g., data protection regulations, government guidance, or internal policies).

Running this assessment helps you:

  • Show evidence of due diligence and continuous improvement.
  • Provide before/after posture snapshots for audits.
  • Give leadership a clear, visual story instead of a pile of portal screenshots.

In other words, it’s not just for the SOC or identity team—it’s a tool you can use across security, IT, and governance.

Wrapping up

The main goal of this first part is simple: take Zero Trust out of the abstract and connect it to something concrete you can actually run in your tenant. The Zero Trust Assessment isn’t a slide, a maturity model, or another “future state” diagram; it’s a practical way to see how your current identity and device configuration stacks up against Microsoft’s modern security baseline.

Once you understand what the assessment is and why it matters, every technical step you take afterward carries more weight. You’re not just installing a PowerShell module for the sake of it; you’re putting in place a repeatable way to baseline your posture, have better conversations with leadership, and prioritize the work that actually reduces risk.

Think of this as laying the foundation. You’ve got the context, you know why this matters, and you know what you’re aiming to measure. In part two, we’ll walk through installing and running the Zero Trust Assessment so you can put all of this into practice in your own environment.


What’s New This Month in Microsoft AI Security (November 2025)


As AI systems become more embedded across the enterprise, the security surface expands with them. Microsoft’s November 2025 updates reflect a significant shift toward treating AI agents as fully governed, identity-aware, and risk-assessed components of the modern environment. This month’s releases focus heavily on centralizing control, strengthening identity, improving data governance, and enhancing threat protection for AI-driven workloads.

Below is an overview of what’s new and why it matters.

Unified Agent Governance: Microsoft Agent 365

One of the most significant announcements this month is the preview release of Microsoft Agent 365, a unified control plane for managing and securing AI agents across your organization.
Agent 365 allows you to:

  • Track and manage all AI agents (internal or third-party) from a single place.
  • Control how agents authenticate, what they access, and how they interact with data.
  • Apply consistent governance, auditing, and policy enforcement across the entire agent ecosystem.

This clearly signals Microsoft’s long-term vision:

AI agents are no longer applications. They are identities and must be governed as such.


Strengthened Identity and Access Controls for AI Agents

Microsoft Entra received several key updates to support this new agent-centric model:

  • Entra Agent ID — a new identity type designed explicitly for AI agents, giving them a managed identity similar to users or apps.
  • Conditional Access for Agent ID (Preview) — bringing Zero Trust enforcement to AI agents, ensuring agents only operate under compliant conditions.
  • Agent Registry and Role Enhancements — providing centralized visibility into all registered agents, along with new roles for proper segregation of duties.

This brings much-needed maturity to the security of AI-driven workflows, especially for organizations handling regulated or sensitive data.


Governance, Compliance, and Data Protection Updates in Microsoft Purview

Purview introduced several enhancements to manage the data lifecycle and the compliance posture for AI-generated and AI-accessed content. The updates include:

  • Expanded Data Security Posture Management (DSPM) tailored for AI workloads, helping identify where sensitive data may be exposed to agents.
  • Improved policy enforcement for classification, retention, deletion, and DLP actions on AI-generated content.
  • Advanced compliance reporting and monitoring for agent activity, risky prompt behavior, and output handling.
  • Better storage hygiene for AI-related artifacts within Microsoft 365.

These features make it easier to bring AI into compliance-sensitive environments without increasing operational risk.


AI Threat Protection and Security Posture Enhancements

This month also includes new capabilities across Defender and Microsoft’s cloud-security stack to monitor, secure, and control agent behavior:

  • Security Posture Management for AI Applications and Agents provides insights into vulnerabilities, exposure pathways, and misconfigurations in agent-driven solutions.
  • AI Agent Protection in Copilot Studio (Preview) adds runtime safeguards to help prevent misuse, harmful actions, or unintentional behavior from custom agents.
  • Additional monitoring and risk assessment integrations for organizations building AI solutions through Microsoft Foundry.

These capabilities help unify observability and protection across the entire AI application lifecycle.


New Documentation, Guidance, and Learning Resources

Microsoft also released new architectural guidance, scenario-based documentation, and implementation best practices focusing on:

  • How to adopt Agent 365 as the governance backbone for enterprise AI.
  • Security principles for the “agentic era,” including identity-first design and containment models.
  • Best practices for securing AI agents built in Foundry, Copilot Studio, and other AI development environments.
  • Updated learning paths that walk organizations through adopting secure-by-default AI patterns.

These resources make it easier for security teams to adapt governance strategies as AI becomes more autonomous and integrated.


Why These Updates Matter

The November 2025 updates formalize a significant shift: AI agents are now treated as distinct security subjects with identities, roles, rules, and monitoring. For organizations integrating generative AI into operational systems:

  • You gain clearer visibility into agent actions and data access.
  • You can enforce Zero Trust principles directly on AI entities.
  • You can govern AI-generated content with the same rigor as traditional data workflows.
  • You can detect and mitigate threats or misuse arising from agent behavior.

This is a foundational change, not an incremental one. The security model for AI is becoming more mature, structured, and measurable: exactly what organizations have needed.


Final Thoughts

Microsoft’s November 2025 updates reinforce a simple reality: the “agentic era” is here. AI agents can make decisions, access sensitive data, and interact autonomously with internal systems. Treating them like traditional applications is no longer sufficient.

With new capabilities across Agent 365, Entra, Purview, and Defender, organizations now have the tools to secure AI at scale with identity-first controls, consistent governance, and robust risk mitigation built directly into the platform.


Amazon previews 3 AI agents, including ‘Kiro’ that can code on its own for days

Amazon Web Services on Tuesday announced three new AI agents it calls "Frontier agents" for coding, security, and DevOps.

Agent vs Agentic


From Copilot:

The term “agent” typically refers to an entity that can act independently and make decisions based on its programming or objectives. In contrast, “agentic” describes the quality or characteristic of being an agent, often emphasizing the capacity for self-directed action and autonomy.

In practice, though, the term “agent” is highly overloaded. Over the years, “agent” has been used to describe software programs, types of microservices, human roles, and now some aspects of AI systems.

Less common, until now, was the term “agentic”. What seems to be emerging is the idea that an “agentic” system has one or more “agents” that are more advanced than simple “agents”. Yeah, that makes everything more clear!


What is an Agent?

There are many ways people build agents today, and many types of software are called agents.

For example, inside some AI chat software, an agent might be defined as a pre-built set of prompts, plus access to specific tools via MCP (Model Context Protocol). Such an agent might be focused on building a specific type of software, or on finding good flights and hotels for a trip. In any case, these types of agents are built into a chat experience and are triggered by user requests.
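
To make that concrete, here’s a minimal Python sketch of that kind of user-triggered agent: a named bundle of a system prompt and the tools it may call. Every name here is hypothetical; real chat platforms and MCP frameworks define their own types.

    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class Tool:
        name: str          # e.g., an MCP tool identifier
        description: str
        run: Callable[[str], str]

    @dataclass
    class Agent:
        name: str
        system_prompt: str               # the pre-built instructions
        tools: list[Tool] = field(default_factory=list)

        def handle(self, user_request: str) -> str:
            # A real implementation would send the system prompt, the
            # request, and the tool schemas to an LLM; this just shows
            # the shape of the data a chat agent carries.
            return f"[{self.name}] handling: {user_request}"

    travel = Agent(
        name="travel-planner",
        system_prompt="You help users find good flights and hotels.",
        tools=[Tool("search_flights", "Find flights via an MCP server",
                    lambda query: "flight results for " + query)],
    )
    print(travel.handle("Find a flight to Seattle"))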

For a recent client project, we wrote an agent that was triggered by user requests and routed those requests to other agents. Those other agents (subagents?) varied quite a lot in implementation and complexity. One of them, for example, was itself an agent that routed user requests to various APIs to find information. Another exposed a set of commands over an existing REST API. What they all have in common is that they are triggered by user requests, and they have no autonomy or ability to act without user input.
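
Here’s a rough sketch of that routing pattern in Python. The matching logic and subagent names are invented for illustration; the real project’s agents were far more involved.

    from typing import Callable

    def info_lookup_agent(request: str) -> str:
        # Stand-in for the subagent that routed requests to various
        # APIs to find information.
        return f"info: looked up {request!r}"

    def command_agent(request: str) -> str:
        # Stand-in for the subagent that exposed commands over an
        # existing REST API.
        return f"command: executed {request!r}"

    SUBAGENTS: dict[str, Callable[[str], str]] = {
        "find": info_lookup_agent,
        "run": command_agent,
    }

    def router_agent(request: str) -> str:
        # Trivial keyword matching stands in for whatever classifier
        # (rules, embeddings, or an LLM call) a real router would use.
        for keyword, subagent in SUBAGENTS.items():
            if keyword in request.lower():
                return subagent(request)
        return "no subagent handles that request"

    print(router_agent("find the weather in Pittsburgh"))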

Sometimes people talk about an agent as being able to act autonomously: to trigger on events other than direct user requests. I think this is where the term “agentic” starts to make more sense.

What is Agentic?

In my mind, I differentiate between agents that are triggered by user requests, and those that can act autonomously. The latter I would call “agentic”. Or at least autonomous agents enable the creation of agentic systems.

And I guess that’s the key here: there’s a difference between simple agents that respond to user requests, and more complex agents that can act on their own, make decisions, and pursue goals without direct user input.

An agentic system has at least one, but probably many, agents that can operate autonomously. These agents can perceive their environment, make decisions based on their programming and objectives, and take actions to achieve specific goals.

This is not to say that there won’t also be simpler agents and tools within an agentic system. In fact, an agentic system might be composed of many different types of agents, some of which are simple and user-triggered, and others that are more complex and autonomous. And others that are just tools used by the agents, probably exposed via MCP.
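
One way to see the distinction in code: an autonomous agent runs a perceive/decide/act loop driven by events in its environment rather than by a user prompt. This is a toy sketch, with every detail (the event source, the decision rule) invented for illustration.

    import time

    def poll_events() -> list[str]:
        # Perceive: a real agent might watch a queue, a log stream,
        # a filesystem, or a schedule.
        return ["disk_usage_high"]

    def decide(event: str) -> str | None:
        # Decide: based on the agent's programming and objectives.
        if event == "disk_usage_high":
            return "clean_temp_files"
        return None

    def act(action: str) -> None:
        # Act: take a concrete step toward the goal.
        print(f"taking action: {action}")

    def autonomous_agent(cycles: int = 3) -> None:
        # A real agent would loop indefinitely; we cap it for the demo.
        for _ in range(cycles):
            for event in poll_events():
                action = decide(event)
                if action:
                    act(action)
            time.sleep(0.1)

    autonomous_agent()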

How Big is an Autonomous Agent?

Given all that, the next question to consider is the “size” or scope of an autonomous agent.

Here I think we can draw on things like the Single Responsibility Principle, Domain-Driven Design (DDD), and microservices architecture. An autonomous agent should probably have a well-defined scope or bounded context, where it has clear responsibilities and can operate independently of other agents.

I tend to think about it in terms of a human “role”. Most human workers have many different roles or responsibilities as part of their job. Some of those roles are things that could be automated with conventional software. Others require a level of judgement to go along with any automation. Still others require empathy, creativity, or other human qualities that are hard to replicate with software.

Building Automation

One good use of AI is to have it build software that automates specific tasks. In this case, an autonomous agent might be responsible for understanding a specific domain or task, and then generating code to automate that task. The AI is not involved in the actual task, just in understanding the task and building the automation.
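
A sketch of that division of labor: the model is consulted once at build time to produce an ordinary script, and only the generated script runs afterward. The generate_code function below is a hypothetical stand-in for a real LLM call, and its canned output is just for the demo.

    from pathlib import Path
    import subprocess
    import sys

    def generate_code(task_description: str) -> str:
        # Hypothetical stand-in for an LLM call (OpenAI, Azure OpenAI,
        # etc.). The model is involved only here, at build time.
        return 'print("renamed 42 report files to ISO dates")\n'

    def build_automation(task: str, out_path: Path) -> Path:
        # Persist the generated tool so it can run without any AI.
        out_path.write_text(generate_code(task))
        return out_path

    tool = build_automation(
        "rename monthly report files to ISO dates",
        Path("rename_reports.py"),
    )
    # From here on, running the tool costs no model inference at all.
    subprocess.run([sys.executable, str(tool)], check=True)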

This is, in my view, a good use of AI, because it leverages the strengths of AI (pattern recognition, code generation, etc.) to create tools. The tools are almost certainly more cost-effective to operate than AI itself, not just in terms of money, but also in terms of the overall ethical concerns around AI usage (power, water, training data).

Decision Making

As I mentioned earlier though, some roles require judgement and decision making. In these cases, an autonomous agent might be responsible for gathering information, analyzing options, and making decisions based on its programming and objectives.

This is probably done in combination with automation. So AI might be used to create automation for parts of the task that are repetitive and well-defined, while the autonomous agent focuses on the more complex aspects that require judgement.

Earlier I discussed the ambiguity around the term agent, and you can imagine how this scenario involves different types of agents:

  • Simple agents that are triggered by user requests to gather information or perform specific tasks.
  • Autonomous agents that can analyze the gathered information and make decisions based on predefined criteria.
  • Automation tools that are created by AI to handle repetitive tasks.

What we’ve created here is an agentic system that leverages different types of agents and automation to achieve a specific goal. That goal is a single role or responsibility, one that should have clear boundaries and scope.

Science Fiction Inspiration

The idea of autonomous agents is not new. It has been explored by Isaac Asimov in his Robot series, where robots are designed to act autonomously and make decisions based on the Three Laws of Robotics.

More recent examples come from the works of Iain M. Banks, Neal Asher, Alastair Reynolds, and many others. In these stories, autonomous agents (often called AIs or Minds) are capable of complex decision making, self-improvement, and even creativity. These fictional portrayals often explore the ethical and societal implications of autonomous agents, which is an important consideration as we move towards more advanced AI systems in the real world.

Some of these authors explore utopian visions of AI, while others focus on dystopian outcomes. Both perspectives are valuable, as they highlight the potential benefits and risks associated with autonomous agents.

I think there’s real value in looking at these materials for terminology that can help us better understand and communicate about the evolving landscape of AI and autonomous systems. Yes, we’ll end up creating new terms because that’s how language works, but a lot of the concepts like agent, sub-agent, mind, sub-mind, and more are already out there.

Conclusion

Today the term “agent” is overloaded, overused, and ambiguous. Collectively we need to think about how to better define and communicate about different types of agents, especially as AI systems become more complex and capable.

The term “agentic” seems to be less overloaded, and is useful for describing systems that have one or more autonomous agents in the mix. These autonomous agents can perceive their environment, make decisions, and take actions to achieve specific goals.

We are at the beginning of this process, and this phase of every new technology involves chaos. It will be fun to learn lessons and see how the industry and terminology evolves over time.


Introducing One-Click MCP Install

Warp now supports one-click MCP install for a simplified installation flow for MCP. You can install MCP servers from:

  • Shared MCPs from your team: Anyone on your team can publish an MCP server, and Warp automatically redacts secrets before sharing. Teammates (or your second laptop) can install it instantly, only entering the values they personally need.
  • Warp’s curated list: A built-in selection of popular MCP servers chosen based on what Warp users actually use. Install them directly—no config h...


python-1.0.0b251120


[1.0.0b251120] - 2025-11-20

Added

  • agent-framework-core: Introducing support for declarative YAML spec (#2002)
  • agent-framework-core: Use AI Foundry evaluators for self-reflection (#2250)
  • agent-framework-core: Propagate as_tool() kwargs and add runtime context + middleware sample (#2311)
  • agent-framework-anthropic: Anthropic Foundry integration (#2302)
  • samples: M365 Agent SDK Hosting sample (#2292)
  • samples: Foundry Sample for A2A + SharePoint Samples (#2313)

Changed

  • agent-framework-azurefunctions: [BREAKING] Schema changes for Azure Functions package (#2151)
  • agent-framework-core: Move evaluation folders under evaluations (#2355)
  • agent-framework-core: Move red teaming files to their own folder (#2333)
  • agent-framework-core: "fix all" task now single source of truth (#2303)
  • agent-framework-core: Improve and clean up exception handling (#2337, #2319)
  • agent-framework-core: Clean up imports (#2318)

Fixed

  • agent-framework-azure-ai: Fix for Azure AI client (#2358)
  • agent-framework-core: Fix tool execution bleed-over in aiohttp/Bot Framework scenarios (#2314)
  • agent-framework-core: @ai_function now correctly handles self parameter (#2266)
  • agent-framework-core: Resolve string annotations in FunctionExecutor (#2308)
  • agent-framework-core: Langfuse observability captures ChatAgent system instructions (#2316)
  • agent-framework-core: Incomplete URL substring sanitization fix (#2274)
  • observability: Handle datetime serialization in tool results (#2248)