Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

The PHP Foundation Is Seeking a New Executive Director

New submitter benramsey writes: The PHP Foundation has launched a search for its next executive director. The Executive Director serves as the operational leader of the PHP Foundation, defining its strategic vision and translating it into reality while managing day-to-day operations and serving as the primary bridge between the Board, staff, community, and sponsors. While the programming language PHP is over 30 years old, the PHP Foundation was only created in 2021. The Executive Director will be responsible for maturing the foundation's internal structure and will play a crucial role in ensuring the foundation can effectively support this vital ecosystem. Interested parties are encouraged to submit a cover letter describing their interest and relevant experience, resume or CV, and a brief vision statement detailing the applicant's understanding of the position, key opportunities and challenges they see for the foundation, and their approach to the role.

Read more of this story at Slashdot.


Study debunks viral claim that AI filters out most job applicants

Enhancv has released a new study challenging the long-standing myth that Applicant Tracking Systems (ATS) or AI automatically reject most resumes before a human review. According to its interviews with 25 U.S. recruiters and HR professionals, 92 percent said their systems do not auto-reject resumes based on formatting or content. Instead, recruiters point to overwhelming application volume and the timing of submissions as the real reasons many candidates never make it as far as the interview stage. The study, titled The ATS Rejection Myth, offers a clearer look at how hiring processes work in practice. It finds that modern ATS… [Continue Reading]

Microsoft is updating Windows 11’s Snipping Tool with option to add text to screenshots

Over the last few years, Microsoft has gradually evolved the Snipping Tool from a simple screen-grabbing tool into something that is much more advanced and sophisticated than anyone could have first imagined. Having added features such as the ability to create video and grab text from images using OCR, the company is now adding new text options to the app. While not officially available – or even announced – new capabilities have been spotted in the app that show how Microsoft is developing this increasingly essential tool. While the addition of text capabilities may not seem particularly groundbreaking, it shows how… [Continue Reading]

Securing our future: November 2025 progress report on Microsoft’s Secure Future Initiative


When we launched the Secure Future Initiative (SFI), our mission was clear: accelerate innovation, strengthen resilience, and lead the industry toward a safer digital future. Today, we’re sharing our latest progress report that reflects steady progress in every area and engineering pillar, underscoring our commitment to security above all else. We also highlight new innovations delivered to better protect customers, and share how we use some of those same capabilities to protect Microsoft. Through SFI, we have improved the security of our platforms and services and our ability to detect and respond to cyberthreats.

Fostering a security-first mindset 

Engineering sentiment around security has improved by nine points since early 2024. To increase security awareness, 95% of employees have completed the latest training on guarding against AI-powered cyberattacks, which remains one of our highest-rated courses. Finally, we developed security awareness resources for employees and, for the first time, made them available to customers.

Governance that scales globally 

The Cybersecurity Governance Council now includes three additional Deputy Chief Information Security Officer (CISO) functions covering European regulations, internal operations, and engagement with our ecosystem of partners and suppliers. We launched the Microsoft European Security Program to deepen partnerships and better inform European governments about the cyberthreat landscape, and we are collaborating with industry partners to better align cybersecurity regulations, advance responsible state behavior in cyberspace, and build cybersecurity capacity through the Advancing Regional Cybersecurity Initiative in the Global South. You can read more about our cybersecurity policy and diplomacy work.

Secure by Design, Secure by Default, Secure Operations

Microsoft Azure, Microsoft 365, Windows, Microsoft Surface, and Microsoft Security engineering teams continue to deliver innovations to better protect customers. Azure enforced secure defaults, expanded hardware-based trust, and updated security benchmarks to improve cloud security. Microsoft 365 introduced a dedicated AI Administrator role, and enhanced agent lifecycle governance and data security transparency to give organizations more control and visibility. Windows and Surface advanced Zero Trust principles with expanded passkeys, automatic recovery capabilities, and memory-safe improvements to firmware and drivers. Microsoft Security introduced data security posture management for AI and evolved Microsoft Sentinel into an AI-first platform with data lake, graph, and Model Context Protocol capabilities.

Engineering progress that sets the benchmark

We’re making steady progress across all engineering pillars. Key achievements include enforcing phishing-resistant multifactor authentication (MFA) for 99.6% of Microsoft employees and devices, migrating higher-risk users to locked-down Azure Virtual Desktop environments, completing network device inventory and lifecycle management, and achieving 99.5% detection and remediation of live secrets in code. We’ve also deployed more than 50 new detections across Microsoft infrastructure, with applicable detections to be added to Microsoft Defender, and awarded $17 million to promote responsible vulnerability disclosure.

Actionable guidance 

To help customers improve their security, we highlight 10 SFI patterns and practices customers can follow to reduce their risk. We also share additional best practices and guidance throughout the report. Customers can do a deeper assessment of their security posture by using our Zero Trust Workshops, which incorporate SFI-based assessments and actionable learnings to help customers on their own security journeys.

Security as the foundation of trust 

Cybersecurity is no longer a feature—it’s the foundation of trust in a connected world.

With the equivalent of 35,000 engineers working full time on security, SFI remains the largest cybersecurity effort in digital history. Looking ahead, we will continue to prioritize the highest risks, accelerate delivery of security innovations, and harness AI to increase engineering efficiency and enable rapid anomaly detection and automated remediation.

The cyberthreat landscape will continue to evolve. Technology will continue to advance. And Microsoft will continue to prioritize security above all else. Our progress reflects a simple truth: trust is earned through action and accountability.

We are grateful for the partnership of our customers, industry peers, and security researchers. Together, we will innovate for a safer future.

Learn more with Microsoft Security

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity. 

The post Securing our future: November 2025 progress report on Microsoft’s Secure Future Initiative appeared first on Microsoft Security Blog.


Secure AI agent deployment to GKE


Building AI agents is exciting, but deploying them securely to production shouldn't be complicated. In this tutorial, you will learn how GitLab's native Google Cloud integration makes it straightforward to deploy AI agents to Google Kubernetes Engine (GKE) — with built-in scanning and zero service account keys.

Why choose GKE to deploy your AI agents?

GKE provides enterprise-grade orchestration that connects seamlessly with GitLab CI/CD pipelines through OIDC authentication. Your development team can deploy AI agents while maintaining complete visibility, compliance, and control over your cloud infrastructure. This guide uses Google's Agent Development Kit (ADK) to build the app, and GitLab's native integration keeps its deployment just as seamless.

Three key advantages to this approach:

Full infrastructure control - Your data, your rules, your environment. You maintain complete control over where your AI agents run and how they're configured.

Native GitLab integration - No complex workarounds. Your existing pipelines work right out of the box thanks to GitLab's native integration with Google Cloud.

Production-grade scaling - GKE automatically handles the heavy lifting of scaling and internal orchestration as your AI workloads grow.

The key point is that GitLab with GKE provides the enterprise reliability your AI deployments demand without sacrificing the developer experience your teams expect.

Prerequisites

Before you start, make sure you have these APIs enabled:

  • GKE API

  • Artifact Registry API

  • Vertex AI API

Also make sure you have:

  • GitLab project created

  • GKE cluster provisioned

  • Artifact Registry repository created
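If the Google Cloud pieces aren't in place yet, here is a minimal setup sketch, assuming bash and gcloud. The repository name ai-agent-repo is illustrative; your-project-id, your-cluster, and us-central1 are the same placeholders used in the pipeline later on.

# Enable the required APIs (GKE, Artifact Registry, Vertex AI).
gcloud services enable \
    container.googleapis.com \
    artifactregistry.googleapis.com \
    aiplatform.googleapis.com \
    --project=your-project-id

# Create a Docker-format Artifact Registry repository.
gcloud artifacts repositories create ai-agent-repo \
    --repository-format=docker \
    --location=us-central1 \
    --project=your-project-id

# Create an Autopilot GKE cluster (Workload Identity is enabled by default).
gcloud container clusters create-auto your-cluster \
    --region=us-central1 \
    --project=your-project-id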

The deployment process

1. Set up IAM and permissions on GitLab

Navigate to your GitLab integrations to configure Google Cloud authentication (IAM).

Go to Settings > Integrations and configure the Google Cloud integration. If you're using a group-level integration, the default settings are already inherited by projects: configure once at the group level, and every project benefits.

To set this up from scratch, provide:

  • Project ID

  • Project Number

  • Workload Identity Pool ID

  • Provider ID

Once configured, GitLab provides a script to run in Google Cloud Console, via Cloud Shell. The outcome of running this script is a Workload Identity Federation pool with the necessary service principal to enable the proper access.
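GitLab generates that script with your exact values, so treat it as the source of truth. As a sketch of what it does under the hood, the commands look roughly like this (the pool and provider names below are illustrative):

# Create a Workload Identity Federation pool for GitLab.
gcloud iam workload-identity-pools create gitlab-pool \
    --project=your-project-id \
    --location=global \
    --display-name="GitLab pool"

# Add an OIDC provider that trusts tokens issued by gitlab.com.
gcloud iam workload-identity-pools providers create-oidc gitlab-provider \
    --project=your-project-id \
    --location=global \
    --workload-identity-pool=gitlab-pool \
    --issuer-uri="https://gitlab.com" \
    --attribute-mapping="google.subject=assertion.sub,attribute.project_id=assertion.project_id"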

2. Configure Artifact Registry integration

Still in GitLab's integration settings, configure Artifact Management:

  1. Click Artifact Management.

  2. Select Google Artifact Registry.

  3. Provide:

    • Project ID
    • Repository Name (created beforehand)
    • Repository Location

GitLab provides another script to run in Google Cloud Console.

Important: Before proceeding, add these extra roles to the Workload Identity Federation pool:

  • Service Account User

  • Kubernetes Developer

  • Kubernetes Cluster Viewer

These permissions allow GitLab to deploy to GKE in subsequent steps.
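The generated script covers the Artifact Registry permissions; for the three extra roles above, the grants look something like the sketch below, assuming they map to the roles/iam.serviceAccountUser, roles/container.developer, and roles/container.clusterViewer role IDs and are granted to every identity in the pool created earlier (replace 123456789012 with your project number and gitlab-pool with your pool ID):

# Every identity in the GitLab Workload Identity Federation pool.
MEMBER="principalSet://iam.googleapis.com/projects/123456789012/locations/global/workloadIdentityPools/gitlab-pool/*"

for ROLE in roles/iam.serviceAccountUser \
            roles/container.developer \
            roles/container.clusterViewer; do
  gcloud projects add-iam-policy-binding your-project-id \
      --member="$MEMBER" \
      --role="$ROLE"
done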

3. Create the CI/CD pipeline

Now for the key part — creating the CI/CD pipeline for deployment.

Head to Build > Pipeline Editor and define your pipeline with four stages:

  • Build - Docker creates the container image.

  • Test - GitLab Auto DevOps provides built-in security scans to ensure there are no vulnerabilities.

  • Upload - Uses GitLab's built-in CI/CD component to push to Google Artifact Registry.

  • Deploy - Uses Kubernetes configuration to deploy to GKE.

Here's the complete .gitlab-ci.yml:



default:
  tags: [ saas-linux-2xlarge-amd64 ]

stages:
  - build
  - test
  - upload
  - deploy

variables:
  GITLAB_IMAGE: $CI_REGISTRY_IMAGE/main:$CI_COMMIT_SHORT_SHA
  AR_IMAGE: $GOOGLE_ARTIFACT_REGISTRY_REPOSITORY_LOCATION-docker.pkg.dev/$GOOGLE_ARTIFACT_REGISTRY_PROJECT_ID/$GOOGLE_ARTIFACT_REGISTRY_REPOSITORY_NAME/main:$CI_COMMIT_SHORT_SHA
  GCP_PROJECT_ID: "your-project-id"
  GKE_CLUSTER: "your-cluster"
  GKE_REGION: "us-central1"
  KSA_NAME: "ai-agent-ksa"

build:
  image: docker:24.0.5
  stage: build
  services:
    - docker:24.0.5-dind
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build -t $GITLAB_IMAGE .
    - docker push $GITLAB_IMAGE

include:
  - template: Jobs/Dependency-Scanning.gitlab-ci.yml
  - template: Jobs/Container-Scanning.gitlab-ci.yml
  - template: Jobs/Secret-Detection.gitlab-ci.yml
  - component: gitlab.com/google-gitlab-components/artifact-registry/upload-artifact-registry@main
    inputs:
      stage: upload
      source: $GITLAB_IMAGE
      target: $AR_IMAGE

deploy:
  stage: deploy
  image: google/cloud-sdk:slim
  identity: google_cloud
  before_script:
    - apt-get update && apt-get install -y kubectl google-cloud-sdk-gke-gcloud-auth-plugin
    - gcloud container clusters get-credentials $GKE_CLUSTER --region $GKE_REGION --project $GCP_PROJECT_ID
  script:
    - |
      kubectl apply -f - <<EOF
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: ai-agent
        namespace: default
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: ai-agent
        template:
          metadata:
            labels:
              app: ai-agent
          spec:
            serviceAccountName: $KSA_NAME
            containers:
            - name: ai-agent
              image: $AR_IMAGE
              ports:
              - containerPort: 8080
              resources:
                requests: {cpu: 500m, memory: 1Gi}
                limits: {cpu: 2000m, memory: 4Gi}
              livenessProbe:
                httpGet: {path: /health, port: 8080}
                initialDelaySeconds: 60
              readinessProbe:
                httpGet: {path: /health, port: 8080}
                initialDelaySeconds: 30
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: ai-agent-service
        namespace: default
      spec:
        type: LoadBalancer
        ports:
        - port: 80
          targetPort: 8080
        selector:
          app: ai-agent
      ---
      apiVersion: autoscaling/v2
      kind: HorizontalPodAutoscaler
      metadata:
        name: ai-agent-hpa
        namespace: default
      spec:
        scaleTargetRef:
          apiVersion: apps/v1
          kind: Deployment
          name: ai-agent
        minReplicas: 2
        maxReplicas: 10
        metrics:
        - type: Resource
          resource:
            name: cpu
            target: {type: Utilization, averageUtilization: 70}
      EOF
      
      kubectl rollout status deployment/ai-agent -n default --timeout=5m
      EXTERNAL_IP=$(kubectl get service ai-agent-service -n default -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
      echo "Deployed at: http://$EXTERNAL_IP"
  only:
    - main

The critical configuration for GKE

What makes this work — and why we need this extra configuration for GKE — is a Kubernetes Service Account in the cluster that can work with Vertex AI. That service account must be permitted to access the AI capabilities of Google Cloud.

Without it, we can deploy the application, but the AI agent won't work.

Run this one-time setup:



#!/bin/bash

PROJECT_ID="your-project-id"
GSA_NAME="ai-agent-vertex"
GSA_EMAIL="${GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
KSA_NAME="ai-agent-ksa"
CLUSTER_NAME="your-cluster"
REGION="us-central1"

# Create GCP Service Account
gcloud iam service-accounts create $GSA_NAME \
    --display-name="AI Agent Vertex AI" \
    --project=$PROJECT_ID

# Grant Vertex AI permissions
gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member="serviceAccount:${GSA_EMAIL}" \
    --role="roles/aiplatform.user"

# Get cluster credentials
gcloud container clusters get-credentials $CLUSTER_NAME \
    --region $REGION --project $PROJECT_ID

# Create Kubernetes Service Account
kubectl create serviceaccount $KSA_NAME -n default

# Link accounts
kubectl annotate serviceaccount $KSA_NAME -n default \
    iam.gke.io/gcp-service-account=${GSA_EMAIL}

gcloud iam service-accounts add-iam-policy-binding ${GSA_EMAIL} \
    --role=roles/iam.workloadIdentityUser \
    --member="serviceAccount:${PROJECT_ID}.svc.id.goog[default/${KSA_NAME}]" \
    --project=$PROJECT_ID
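
Before moving on, you can sanity-check the Workload Identity wiring with the same variable values as the script above: the Kubernetes Service Account should carry the annotation, and the Google Cloud service account should list the workloadIdentityUser binding.

# The KSA should be annotated with the GCP service account email.
kubectl get serviceaccount $KSA_NAME -n default \
    -o jsonpath='{.metadata.annotations.iam\.gke\.io/gcp-service-account}'

# The GCP service account's IAM policy should include roles/iam.workloadIdentityUser.
gcloud iam service-accounts get-iam-policy ${GSA_EMAIL} --project=$PROJECT_ID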

4. Deploy to GKE

Once you're done, push the change and the pipeline takes it from there.

Go to CI/CD > Pipelines and you'll see the four stages run (a quick cluster-side check is sketched after the list):

  • Build

  • Test (with all defined security scans)

  • Upload to Artifact Registry (successful)

  • Deploy to Kubernetes in GKE (successful)
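
To double-check the rollout from the cluster side, a couple of kubectl commands (using the same names as the manifests in the pipeline) confirm that the pods are healthy and the LoadBalancer received an external IP:

# Pods for the deployment should be Running and Ready.
kubectl get pods -n default -l app=ai-agent

# The service should show an EXTERNAL-IP once the load balancer is provisioned.
kubectl get service ai-agent-service -n default

# Tail the agent logs if anything looks off.
kubectl logs -n default -l app=ai-agent --tail=50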

Summary

With GitLab and Google Cloud together, you can deploy your AI agent to GKE easily and securely. It took only a few steps, thanks to GitLab's native integration with Google Cloud.

Watch this demo:

https://www.youtube.com/embed/mc2pCL5Qjus?si=QoH02lvz5KH5Ku9O

Use this tutorial's complete code example to get started now. Not a GitLab customer yet? Explore the DevSecOps platform with a free trial. Startups hosted on Google Cloud get a special perk for trying GitLab.


Riding the AI Wave: How Microsoft Entra is Evolving for the Agentic Era

The session “Secure access for AI agents with Microsoft Entra” takes place Tuesday, November 18, 2025, at 3:45 PM Pacific Time.

This blog is based on my keynote at The Experts Conference in October 2025.

I’m in my 34th year at Microsoft—hard to believe, considering I was too young to drink and had no gray hairs when I joined. I remember the day, about 15 years ago, when my phone rang. Back then, we had actual desk phones you had to pick up, with little LCD displays that showed who was calling. The display lit up: "Satya N." Whoa!

"Hey Alex," he said, "I have this new role as director of the team for identity. I think you'd be perfect."

First, I couldn't believe Satya Nadella was calling me directly. Second, I was thinking: "Really? You want me to work on identity?" But then I spent two hours with him as he explained how identity is the cornerstone of Microsoft's enterprise strategy and that he needed someone to help drive its transition to the cloud.

Three weeks later, I started what's become the best job I've ever had, helping take Active Directory from on-premises to the cloud as Azure Active Directory, then evolving it into Microsoft Entra as a full identity and access management solution. And while that voyage has been really cool, it hasn’t fundamentally changed how identity works.

A much bigger wave of innovation has started.

The AI Wave

A year ago, my boss, Joy Chik told me, "I want you to focus on AI and how we get ready for the future."

Since then, I've been trying out AI tools, building my own agents, putting our strategy together, and talking with customers, industry analysts, and AI experts around the world. That’s why I’m convinced this next wave will fundamentally change how humans work and how our economy functions, just as the Industrial, Electrical, Internet, and Mobile Revolutions did.

Around 12,000 years ago, humans began domesticating plants and animals, transitioning from hunting and gathering to agriculture. During the next big wave, we developed complex societies and created governments. With the Industrial Revolution and power from coal, steam, and eventually electricity, we could start building big factories and large cities. Before the Electrical Revolution, most humans went to sleep when it got dark. Now we all stay up watching late night TV while scrolling on our devices.

Today, we’re living through the Internet and Mobile Revolution. This has been a big, big shift, but AI represents a much bigger wave of innovation.

The Reality of AI Agent Adoption

It seems a lot of our customers feel the same way. A KPMG survey of large organizations revealed that 42% have active AI agents working for them today, up from 11% just six months ago.1 And we're just getting started.

A lot of great agents that can help your business are already available in Microsoft 365 and the Security Copilot platform, as well as through our partners like Workday and ServiceNow.

IDC projects that by 2028, there’ll be 1.3 billion AI agents in large companies in the developed world.2 As just one real world example of this coming wave, at Microsoft, we’re already using agents in development, security, research, and analytics. From what I see, it won’t take long before we have more agents than human employees. My bet? Within two years, we'll have over a million active AI agents in the Microsoft tenant.

Not only are there lots of agents, but they’re also ephemeral. They get spun up, run for a couple of weeks, then get turned off. You need the ability to deal with agents coming and going at a massive scale.

What Exactly Is an Agent?

Let me step back for a moment and clarify how we think about agents at Microsoft.

An AI agent is an LLM with domain knowledge, an ability to act, and guardrails.

An agent is an LLM with a set of instructions telling it what its job is, plus supporting code that provides plumbing, enforces guardrails, and verifies results. But what makes an agent special are two things:

  1. Domain knowledge: It knows about your business, a specific process, or technology areas, ideally with memory so it can learn and improve.
  2. Ability to act: It can make API calls to something like MS Graph, Azure resources, or other systems to take action, and then send input or data back to whoever called it.

And since they can act, agents need proper access controls. My assertion is that every single agent needs an identity.

Think of it this way. An auto manufacturer would never ship a car without a VIN. For our part, all agents from Microsoft will come with a built-in agent identity, so you can track them, control their access, and manage their entire lifecycle.

An agent to optimize Conditional Access policies

One of the agents I'm most excited about is the Conditional Access Optimization Agent in Microsoft Entra. This agent inspects your Conditional Access policies, helps you understand what they do and where security gaps exist, then keeps them updated.

Using this agent, customers are already uncovering an average of 26 policy gaps per month that might otherwise be missed and/or maliciously exploited. 73% of customers who use it have made meaningful improvements in their security posture based on its recommendations.3

 

Microsoft Teams alerts for the Conditional Access Optimization Agent help your team act quickly on the agent’s suggestions.

This isn't just ChatGPT pointed at APIs. It's a purpose-built agent loaded with all the knowledge and expertise that my team and our customer-facing teams have learned about building effective Conditional Access policies. It continuously analyzes your tenant and all your Conditional Access policies, detects policy overlap, finds unprotected users and applications, and then helps remediate gaps with a single click. It can also identify risky configurations, like break-glass accounts that aren't excluded or policies with too many exclusions.

The agent even creates tickets in ServiceNow for proposed policy updates, ensuring compliance with your change management requirements. It can design phased rollout plans to gradually enable policies while minimizing user disruption. It can even deploy new policies for you in pilot mode. Then on an ongoing basis, it checks for new users and apps so you can make sure they're protected by policy correctly.

🎯 Read Alex’s Oct 14, 2025 post: The Conditional Access Optimization Agent keeps getting better—and making your life easier

Think of Conditional Access Optimization Agent as evolving into your Zero Trust consultant and advisor, saving you time and money while improving your security posture. You’ll see more agents, from us and across the industry, that bring deep expertise and analysis to you more quickly and conveniently.

Joy Chik will be announcing more capabilities of agents in Microsoft Entra in her Microsoft Ignite session on Tues, Nov 18: Microsoft Entra: What's New in Secure Access on the AI Frontier.

Managing at Scale: Cattle, Not Pets

In addition to the permissions model, the other big challenge is managing agents at scale. I’m breaking my boats and waves analogy with a better, land-based metaphor. AI agents are cattle, not pets. To care for and feed a pet, you need time that you just won’t have in the agent space. With so many agents coming and going, you’ll need to govern a sprawling, diverse herd across all your ranches—with lifecycle automation at scale.

Automated identity lifecycle management will be necessary to handle the scale of agents being deployed, updated, and deprovisioned to reduce risk.

You'll need strategies for:

  • How agents get deployed and tracked
  • Knowing who's responsible for them
  • How to provision and approve access for agents
  • Running ongoing access campaigns
  • Automatically revoking permissions
  • Proving which version of an agent is running
  • Deprovisioning agents you no longer need

Any holes in your governance or lifecycle management will overwhelm everyone, especially with access reviews. So we’re working hard on this challenge, creating a framework for your agent identity governance strategies.

Secure Access for agents: Microsoft Entra Agent ID

Soon, you’ll be able to manage, protect, and govern agents as first-class enterprise identities.

At Microsoft BUILD, we announced support for AI agents in Microsoft Entra. Our goal is simple: bring the same protections and controls you rely on for workforce identities to AI agents. We've added agents as an identity type in Microsoft Entra and are developing access management, security, and identity governance capabilities for them. These agent identities will be pre-configured in the tools that you use to create agents, including Foundry, Copilot Studio, Security Copilot, and third-party platforms.

I’ll be revealing many more details during my Ignite session “Secure access for AI agents with Microsoft Entra” so you can get started.

Evolving Standards for the Agent Era

Microsoft Entra enables a huge ecosystem of partners and applications, and we collaborate across the industry to ensure all agentic identity systems can adopt common standards that enable integration, reduce security risks, and follow best practices agreed upon by expert practitioners, academia, and standards bodies.

To enable this, we're working in the OAuth 2.0 standards groups on critical changes to:

  • Represent agents as first-class actors (not clients, not users, but something new)
  • Make sure agents can have independent permissions
  • Make agent actions traceable (so logs show "agent acting on behalf of Alex," not just "Alex")
  • Enable fine-grained permissions, discovery, and delegation for agents. 

The changes we’re proposing to OAuth will make it possible to see whether an agent is acting on its own behalf, on behalf of a user, or on behalf of another agent.

We're also thinking about the full chain of responsibility. How does an agent down the chain discover what permissions it needs and then come back and ask you for them? These capabilities will be incorporated into MCP to enable agent-to-agent protocols.

We're also working on how to give agents fine-grained access to specific resources, like a folder, rather than broad access, like to an entire cloud. Adhering to this Zero Trust principle of agents only having the access they need is a great way to reduce your risk surface.

Extending SCIM Standards for Agent Lifecycle Management

Along with the OAuth work, we're suggesting a set of changes to the open SCIM standard to enable support for agents.

Think about how SCIM works today: We use it to put a user from your HR system into Entra, which creates a user record in Entra, then we send it out to other apps through your governance system.

We need the same thing for agents. Any kind of agent builder (something that creates agents) needs the ability to use SCIM to put agent records into Entra. Then Entra can retrieve those agent records and configure them correctly into all of your SaaS apps.

Standardized schema extensions to open standards make it possible to provision agents with the rich context that applications need.

We think this is the right way to ensure you have great governance and automation around agents. It's easy for an agent to show up and get registered, but much harder to manage it on an ongoing basis. That's why we need these updates to SCIM.
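For readers less familiar with SCIM, this is what a plain SCIM 2.0 provisioning call (RFC 7644) looks like today; the proposal described above would follow the same pattern with an agent-specific resource schema. The endpoint, token, and attribute values below are purely illustrative and are not a published Entra or standards-body API.

# Illustrative only: a standard SCIM 2.0 create request.
curl -X POST "https://scim.example.com/scim/v2/Users" \
  -H "Authorization: Bearer $SCIM_TOKEN" \
  -H "Content-Type: application/scim+json" \
  -d '{
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": "expense-report-agent",
        "displayName": "Expense Report Agent",
        "active": true
      }'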

What You Should Do Now

First, get started experimenting with AI agents. See what works for you. Try our agents in Microsoft Entra, or build your own with Copilot Studio—you can do amazing things. I’ve noticed that what used to take my product management team three to four weeks to get done now takes two or three days with agents. Make sure your company has a plan to start piloting a few.

Second, think about your agent taxonomy, starting with the three kinds of agents I discussed above. What kinds of agents will you build? What data will they need? How will you govern access and ensure high-privilege data only gets accessed by proven, trusted agents versus ones somebody spins up on their desktop? The Copilot Studio team offers some good resources as you consider building, administering, and governing agents.

Finally, tune into our live-streamed session “Secure Access for AI Agents with Microsoft Entra” on November 18 at Microsoft Ignite 2025. We’ll explain in detail how all of this works.

We're committed to doing the hard work for you and with you. Microsoft Entra support for AI agents will give you great tools for harnessing AI agents safely in your corporation. As Satya said at BUILD: "Our goal is simple—bring the same protections and controls you rely on for employee identities to AI agents."

The agentic wave is coming. We can all ride the top of the wave, or we can drift below and get splashed—it’s our choice.

 

Surf's up!

Alex

 

1 KPMG AI Quarterly Pulse Survey | Q3 2025. September 2025. n= 130 U.S.-based C-suite and business leaders representing organizations with annual revenue of $1 billion or more.

2 IDC Info Snapshot, 1.3 Billion AI Agents by 2028, May 2025.

3 James Bono, Beibe Cheng, and Joaquin Lozano, Randomized Controlled Trials for Conditional Access Optimization Agent, October 2025, Microsoft Corporation.

 

⏰ Tune in for “Secure access for AI agents with Microsoft Entra” on Tuesday, Nov. 18, 2025, at 3:45 PM Pacific Time.

 

 

Learn more about Microsoft Entra 

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds.
