Organizations are rapidly adopting Copilot Studio agents, but threat actors are equally fast at exploiting misconfigured AI workflows. Mis-sharing, unsafe orchestration, and weak authentication create new identity and data‑access paths that traditional controls don’t monitor. As AI agents become integrated into operational systems, exposure becomes both easier and more dangerous. Understanding and detecting these misconfigurations early is now a core part of AI security posture.
Copilot Studio agents are becoming a core part of business workflows: automating tasks, accessing data, and interacting with systems at scale.
That power cuts both ways. In real environments, we repeatedly see small, well-intentioned configuration choices turn into security gaps: agents shared too broadly, exposed without authentication, running risky actions, or operating with excessive privileges. These issues rarely look dangerous until they are abused.
If you want to find and stop these risks before they turn into incidents, this post is for you. We break down ten common Copilot Studio agent misconfigurations we observe in the wild and show how to detect them using Microsoft Defender and Advanced Hunting via the relevant Community Hunting Queries.
Short on time? Start with the table below. It gives you a one‑page view of the risks, their impact, and the exact detections that surface them. If something looks familiar, jump straight to the relevant scenario and mitigation.
Each section then dives deeper into a specific risk and recommended mitigations, so you can move from awareness to action fast.
| # | Misconfiguration & Risk | Security Impact | Advanced Hunting Community Queries (Security portal > Advanced hunting > Queries > Community queries > AI Agent folder) |
| --- | --- | --- | --- |
| 1 | Agent shared with entire organization or broad groups | Unintended access, misuse, expanded attack surface | • AI Agents – Organization or Multi‑tenant Shared |
| 2 | Agents that do not require authentication | Public exposure, unauthorized access, data leakage | • AI Agents – No Authentication Required |
| 3 | Agents with HTTP Request actions using risky configurations | Governance bypass, insecure communications, unintended API access | • AI Agents – HTTP Requests to connector endpoints • AI Agents – HTTP Requests to non‑HTTPS endpoints • AI Agents – HTTP Requests to non‑standard ports |
| 4 | Agents capable of email‑based data exfiltration | Data exfiltration via prompt injection or misconfiguration | • AI Agents – Sending email to AI‑controlled input values • AI Agents – Sending email to external mailboxes |
| 5 | Dormant connections, actions, or agents | Hidden attack surface, stale privileged access | • AI Agents – Published Dormant (30d) • AI Agents – Unpublished Unmodified (30d) • AI Agents – Unused Actions • AI Agents – Dormant Author Authentication Connection |
| 6 | Agents using author (maker) authentication | Privilege escalation, separation-of-duties bypass | • AI Agents – Published Agents with Author Authentication • AI Agents – MCP Tool with Maker Credentials |
| 7 | Agents containing hard‑coded credentials | Credential leakage, unauthorized system access | • AI Agents – Hard‑coded Credentials in Topics or Actions |
| 8 | Agents with Model Context Protocol (MCP) tools configured | Undocumented access paths, unintended system interactions | • AI Agents – MCP Tool Configured |
| 9 | Agents with generative orchestration lacking instructions | Prompt abuse, behavior drift, unintended actions | • AI Agents – Published Generative Orchestration without Instructions |
| 10 | Orphaned agents (no active owner) | Lack of governance, outdated logic, unmanaged access | • AI Agents – Orphaned Agents with Disabled Owners |
Imagine this scenario: A help desk agent is created in your organization with simple instructions.
The maker, someone from the support team, connects it to an organizational Dataverse using an MCP tool, so it can pull relevant customer information from internal tables and provide better answers. So far, so good.
Then the maker decides, on their own, that the agent doesn’t need authentication. After all, it’s only shared internally, and the data belongs to employees anyway (See example in Figure 1). That might already sound suspicious to you. But it doesn’t to everyone.
You might be surprised how often agents like this exist in real environments and how rarely security teams get an active signal when they’re created. No alert. No review. Just another helpful agent quietly going live.
Now here’s the question: Out of the 10 risks described in this article, how many do you think are already present in this simple agent?
The answer comes at the end of the blog.

Sharing an agent with your entire organization or broad security groups exposes its capabilities without proper access boundaries. While convenient, this practice expands the attack surface. Users unfamiliar with the agent’s purpose might unintentionally trigger sensitive actions, and threat actors with minimal access could use the agent as an entry point.
In many organizations, this risk occurs because broad sharing is fast and easy, often lacking controls to ensure only the right users have access. This results in agents being visible to everyone, including users with unrelated roles or inappropriate permissions. This visibility increases the risk of data exposure, misuse, and unintended activation of sensitive connectors or actions.
Agents that you can access without authentication, or that only prompt for authentication on demand, create a significant exposure point. When an agent is publicly reachable or unauthenticated, anyone with the link can use its capabilities. Even if the agent appears harmless, its topics, actions, or knowledge sources might unintentionally reveal internal information or allow interactions that were never intended for public access.
This gap appears because authentication was deactivated for testing, left in its default state, or misunderstood as optional. The result is an agent that behaves like a public entry point into organizational data or logic. Without proper controls, this creates a risk of data leakage, unintended actions, and misuse by external or anonymous users.
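If you keep an inventory of published agent URLs (for example, the demo-website links makers share for testing), a quick anonymous probe can show which ones answer without any sign-in. The sketch below is only a starting point under that assumption: the `AGENT_URLS` list is a placeholder for your own inventory, and an HTTP 200 response is a signal to review the agent, not proof of misconfiguration.

```python
import requests

# Placeholder: populate from your own agent inventory or export.
AGENT_URLS = [
    "https://example.org/copilot-agent-demo-1",
    "https://example.org/copilot-agent-demo-2",
]

def probe_unauthenticated(urls, timeout=10):
    """Flag endpoints that answer an anonymous request with HTTP 200."""
    exposed = []
    for url in urls:
        try:
            # No auth header, no cookies: simulates an anonymous visitor.
            resp = requests.get(url, timeout=timeout, allow_redirects=True)
        except requests.RequestException as err:
            print(f"[skip] {url}: {err}")
            continue
        if resp.status_code == 200:
            exposed.append(url)
            print(f"[review] {url} responded 200 without authentication")
    return exposed

if __name__ == "__main__":
    probe_unauthenticated(AGENT_URLS)
```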
Agents that perform direct HTTP requests introduce unique risks, especially when those requests target non-standard ports, insecure schemes, or sensitive services that already have built-in Power Platform connectors. These patterns often bypass the governance, validation, throttling, and identity controls that connectors provide. As a result, they can expose the organization to misconfigurations, information disclosure, or unintended privilege escalation.
These configurations often appear unintentionally. A maker might copy a sample request, test an internal endpoint, or use HTTP actions for flexibility and convenience during testing. Without proper review, this can lead to agents issuing unsecured calls over HTTP or invoking critical Microsoft APIs directly through URLs instead of secured connectors. Each of these behaviors represents an opportunity for misuse or accidental exposure of organizational data.
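As a rough illustration of the kind of check the Advanced Hunting queries automate, the sketch below reviews a list of endpoint URLs taken from agents' HTTP Request actions and flags insecure schemes, non-standard ports, and direct calls to an API that already has a governed connector. The `action_endpoints` values are hypothetical; in practice you would extract them from your agent definitions or from the hunting query results.

```python
from urllib.parse import urlparse

# Hypothetical sample of URLs extracted from agents' HTTP Request actions.
action_endpoints = [
    "http://intranet.contoso.local/api/tickets",      # insecure scheme
    "https://internal-tool.contoso.com:8443/export",  # non-standard port
    "https://graph.microsoft.com/v1.0/users",         # has a built-in connector
]

STANDARD_PORTS = {None, 443}  # None means the default port for the scheme

def review_endpoint(url):
    """Return a list of findings for a single endpoint URL."""
    findings = []
    parsed = urlparse(url)
    if parsed.scheme != "https":
        findings.append("non-HTTPS scheme")
    if parsed.port not in STANDARD_PORTS:
        findings.append(f"non-standard port {parsed.port}")
    if parsed.hostname and parsed.hostname.endswith("graph.microsoft.com"):
        findings.append("direct call to an API that has a governed connector")
    return findings

for url in action_endpoints:
    issues = review_endpoint(url)
    if issues:
        print(f"[review] {url}: {', '.join(issues)}")
```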
Agents that send emails using dynamic or externally controlled inputs present a significant risk. When an agent uses generative orchestration to send email, the orchestrator determines the recipient and message content at runtime. In a successful cross-prompt injection (XPIA) attack, a threat actor could instruct the agent to send internal data to external recipients.
A similar risk exists when an agent is explicitly configured to send emails to external domains. Even for legitimate business scenarios, unaudited outbound email can allow sensitive information to leave the organization. Because email is an immediate outbound channel, any misconfiguration can lead to unmonitored data exposure.
Many organizations create this gap unintentionally. Makers often use email actions for testing, notifications, or workflow automation without restricting recipient fields. Without safeguards, these agents can become exfiltration channels for any user who triggers them or for a threat actor exploiting generative orchestration paths.
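One simple safeguard is to validate recipient addresses against an allowlist of internal domains before an email action runs, and to log anything outside it. The sketch below is illustrative only: `INTERNAL_DOMAINS` and the example recipient values are assumptions you would replace with your own tenant domains and runtime inputs.

```python
# Assumption: these are your organization's approved internal domains.
INTERNAL_DOMAINS = {"contoso.com", "contoso.onmicrosoft.com"}

def is_external(recipient: str) -> bool:
    """Treat anything not ending in an approved internal domain as external."""
    domain = recipient.rsplit("@", 1)[-1].lower()
    return domain not in INTERNAL_DOMAINS

def review_recipients(recipients):
    """Return the recipients that would send data outside the organization."""
    flagged = [r for r in recipients if is_external(r)]
    for r in flagged:
        print(f"[review] external recipient: {r}")
    return flagged

# Example values an agent might resolve at runtime (hypothetical).
review_recipients(["helpdesk@contoso.com", "attacker@example.net"])
```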
Dormant agents and unused components might seem harmless, but they can create significant organizational risk. Unmonitored entry points often lack active ownership. These include agents that haven't been invoked for weeks, unpublished drafts, or actions using maker authentication. When these elements stay in your environment without oversight, they might contain outdated logic or sensitive connections that don't meet current security standards.
Dormant assets are especially risky because they often fall outside normal operational visibility. While teams focus on active agents, older configurations are easily forgotten, and threat actors frequently target exactly these blind spots.
Without proper governance, these artifacts can expose sensitive connectors if they are activated.
When agents use the maker's personal authentication, they act on behalf of the creator rather than the end user. In this configuration, every user of the agent inherits the maker's permissions. If those permissions include access to sensitive data, privileged operations, or high-impact connectors, the agent becomes a path for privilege escalation.
This exposure often happens unintentionally. Makers might allow author authentication for convenience during development or testing because it is the default setting of certain tools. However, once published, the agent continues to run with elevated permissions even when invoked by regular users. In more severe cases, Model Context Protocol (MCP) tools configured with maker credentials allow threat actors to trigger operations that rely directly on the creator’s identity.
Author authentication weakens separation of duties and bypasses the principle of least privilege. It also increases the risk of credential misuse, unauthorized data access, and unintended lateral movement.
Agents that contain hard-coded credentials inside topics or actions introduce a severe security risk. Clear-text secrets embedded directly in agent logic can be read, copied, or extracted by unintended users or automated systems. This often occurs when makers paste API keys, authentication tokens, or connection strings during development or debugging, and the values remain embedded in the production configuration. Such credentials can expose access to external services, internal systems, or sensitive APIs, enabling unauthorized access or lateral movement.
Beyond the immediate leakage risk, hard-coded credentials bypass the standard enterprise controls normally applied to secure secret storage. They are not rotated, not governed by Key Vault policies, and not protected by environment variable isolation. As a result, even basic visibility into agent definitions may expose valuable secrets.
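Exported agent definitions (for example, solution files pulled for review) can be scanned offline for obvious secret patterns before they reach production. The regexes below are deliberately coarse examples rather than a complete secret-scanning ruleset, and the export directory path is a placeholder.

```python
import re
from pathlib import Path

# Coarse, illustrative patterns only; a real secret scanner covers far more.
SECRET_PATTERNS = {
    "connection string": re.compile(r"(?i)(password|pwd)\s*=\s*[^;\s]+"),
    "bearer token": re.compile(r"(?i)bearer\s+[a-z0-9\-_\.]{20,}"),
    "generic api key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"]?[a-z0-9\-_]{16,}"),
}

def scan_export(export_dir: str):
    """Walk exported agent files and report lines matching secret-like patterns."""
    findings = []
    for path in Path(export_dir).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            for label, pattern in SECRET_PATTERNS.items():
                if pattern.search(line):
                    findings.append((path, lineno, label))
                    print(f"[review] {path}:{lineno} looks like a {label}")
    return findings

if __name__ == "__main__":
    scan_export("./agent-solution-export")  # placeholder path
```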
AI agents that include Model Context Protocol (MCP) tools provide a powerful way to integrate with external systems or run custom logic. However, if these MCP tools aren’t actively maintained or reviewed, they can introduce undocumented access patterns into the environment.
This risk grows when MCP configurations are left unmaintained, unreviewed, or undocumented.
Unmonitored MCP tools might expose capabilities that exceed the agent's intended purpose. This is especially true if they allow access to privileged operations or sensitive data sources. Without regular oversight, these tools can become hidden entry points through which users or threat actors can trigger unintended system interactions.
AI agents that use generative orchestration without defined instructions face a high risk of unintended behavior. Instructions are the primary way to align a generative model with its intended purpose. If instructions are missing, incomplete, or misconfigured, the orchestrator lacks the context needed to limit its output. This makes the agent more vulnerable to influence from user inputs or hostile prompts.
A lack of guidance can leave an agent open to prompt abuse, behavior drift, and unintended actions.
For organizations that need predictable and safe behavior, missing instructions are a significant configuration gap.
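If you export agent definitions for review, a quick pass can flag generatively orchestrated agents whose instructions are blank. This is only a sketch: the field names `orchestrationMode` and `instructions` are hypothetical stand-ins for whatever your export format actually uses, and the directory path is a placeholder.

```python
import json
from pathlib import Path

def flag_missing_instructions(export_dir: str):
    """Flag exported agent definitions that use generative orchestration but have no instructions.

    Note: 'orchestrationMode' and 'instructions' are hypothetical field names;
    adjust them to match your actual export schema.
    """
    flagged = []
    for path in Path(export_dir).glob("*.json"):
        try:
            agent = json.loads(path.read_text())
        except (OSError, json.JSONDecodeError):
            continue
        generative = agent.get("orchestrationMode") == "generative"
        instructions = (agent.get("instructions") or "").strip()
        if generative and not instructions:
            flagged.append(path.name)
            print(f"[review] {path.name}: generative orchestration with no instructions")
    return flagged

if __name__ == "__main__":
    flag_missing_instructions("./agent-definitions")  # placeholder path
```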
Orphaned agents are agents whose owners have left the organization or whose accounts have been deactivated. Without a valid owner, no one is responsible for oversight, maintenance, updates, or lifecycle management. These agents might continue to run, interact with users, or access data without an accountable individual ensuring the configuration remains secure.
Because ownerless agents bypass standard review cycles, they often contain outdated logic, deprecated connections, or sensitive access patterns that don’t align with current organizational requirements.
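Outside the built-in query, one way to spot orphaned agents is to cross-reference owner IDs from your agent inventory with Microsoft Graph and flag owners whose accounts are disabled or gone. The sketch below assumes you already have a Graph access token with User.Read.All and a list of owner object IDs; both values are placeholders.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<token with User.Read.All>"           # placeholder
OWNER_IDS = ["00000000-0000-0000-0000-000000000000"]  # placeholder: owner object IDs from your inventory

def find_orphaned_owners(owner_ids):
    """Return owner IDs whose Entra ID account is disabled or no longer exists."""
    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
    orphaned = []
    for owner_id in owner_ids:
        url = f"{GRAPH}/users/{owner_id}?$select=displayName,accountEnabled"
        resp = requests.get(url, headers=headers, timeout=30)
        if resp.status_code == 404:
            orphaned.append(owner_id)  # account deleted
            print(f"[review] owner {owner_id} no longer exists")
        elif resp.ok and not resp.json().get("accountEnabled", True):
            orphaned.append(owner_id)  # account disabled
            print(f"[review] owner {owner_id} is disabled")
    return orphaned

if __name__ == "__main__":
    find_orphaned_owners(OWNER_IDS)
```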
Remember the help desk agent we started with? That simple agent setup quietly checked off more than half of the risks in this list.
Keep reading, then run the Advanced Hunting queries in the AI Agent folder to find agents carrying these risks in your own environment before it's too late.

The 10 risks described above manifest in different ways, but they consistently stem from a small set of underlying security gaps: over‑exposure, weak authentication boundaries, unsafe orchestration, and missing lifecycle governance.

Damage doesn’t begin with the attack. It starts when risks are left untreated.
The section below is a practical checklist of validations and actions that help close common agent security gaps before they’re exploited. Read it once, apply it consistently, and save yourself the cost of cleaning up later. Fixing security debt is always more expensive than preventing it.
Before changing configurations, confirm whether the agent’s behavior is intentional and still aligned with business needs.
Most Copilot Studio agent risks are amplified by unnecessary exposure. Reducing who can reach the agent, and what it can reach, significantly lowers risk.
Agents must not inherit more privilege than necessary, especially through development shortcuts.
Replace author (maker) authentication with user‑based or system‑based authentication wherever possible. For more information, see Control maker-provided credentials for authentication – Microsoft Copilot Studio | Microsoft Learn and Configure user authentication for actions.
Generative agents require explicit guardrails to prevent unintended or unsafe behavior.
Unused capabilities and embedded secrets quietly expand the attack surface.

Effective posture management is essential for maintaining a secure and predictable Copilot Studio environment. As agents grow in capability and integrate with increasingly sensitive systems, organizations must adopt structured governance practices that identify risks early and enforce consistent configuration standards.
The scenarios and detection rules presented in this blog provide a foundation to help you identify risky configurations early and enforce consistent standards across your agents.
By combining automated detection with clear operational policies, you can ensure that your Copilot Studio agents remain secure, aligned, and resilient.
This research is provided by Microsoft Defender Security Research with contributions from Dor Edry and Uri Oren.
The post Copilot Studio agent security: Top 10 risks you can detect and prevent appeared first on Microsoft Security Blog.
Would you like to take your search in SQL Server to the next level? In this episode, check out how to use vector search, built securely into SQL Server 2025. #sqlserver2025 #sqlai
🌮 Chapter Markers:
00:00 – Introduction
01:10 – Evolution of SQL
02:44 – Intelligent searching with SQL and AI
06:16 – Model definition
08:55 – Generate embeddings
13:19 – Vector index
15:25 – Vector Search
17:50 – Azure OpenAI
19:36 – Wrap up
🌮 Resources
Announcement: https://aka.ms/sqlserver2025blog
Sign-up: https://aka.ms/getsqlserver2025
Learn Docs: https://aka.ms/sqlserver2025docs
Product page: https://aka.ms/sqlserver2025
🌮 Follow us on social:
Scott Hanselman | @SHanselman – https://x.com/SHanselman
Azure Friday | @AzureFriday – https://x.com/AzureFriday
Bob Ward | @bobwardms – https://x.com/bobwardms
Blog - https://aka.ms/azuredevelopers/blog
Twitter - https://aka.ms/azuredevelopers/twitter
LinkedIn - https://aka.ms/azuredevelopers/linkedin
Twitch - https://aka.ms/azuredevelopers/twitch
#azuredeveloper #azure
"The way that AI is changing software engineering is a bigger shift than object-oriented programming, the internet, and Agile together.", says Dave Farley, author of Continuous Delivery and Modern Software Engineering.
Dave also shares why programming languages were designed to help engineers decompose problems into smaller chunks, the three fundamental problems of AI coding, why verification becomes the bottleneck in AI-assisted coding, and why engineering discipline, test-driven development, and behavior-driven development matter even more in this new era.
00:00 Introduction to Developer Productivity and Experience
02:13 Dave Farley's Journey in Software Engineering
08:23 The Impact of AI on Software Development
11:00 AI Tools and Their Role in Coding
16:39 The Importance of TDD and BDD in AI Development
20:37 Testing and Feedback Loops in AI Programming
25:30 Navigating Ambiguity in Specifications
29:29 Future of Software Architecture with AI
34:55 Adapting to AI in Software Engineering Practices
37:28 Conclusion and Future Perspectives
About Dave Farley
Dave is a pioneer of continuous delivery, a thought leader and expert practitioner in CD, DevOps, TDD, and software design, and shares his expertise through his consultancy, YouTube channel @ModernSoftwareEngineeringYT, books, and training courses. Dave co-authored the definitive book on Continuous Delivery and has published Modern Software Engineering.
About Hangar DX (https://dx.community/)
The Hangar is a community of senior DevOps and senior software engineers focused on developer experience. This is a space where vetted, experienced professionals can exchange ideas, share hard-earned wisdom, troubleshoot issues, and ultimately help each other in their projects and careers.
We invite developers who work in DX and platform teams at their respective companies or who are interested in developer productivity.
More: https://dx.community/
It takes a village to build something great. In Season 2 of Build Mode, we go deep on how to assemble a founding team that signals ambition, execution, and long-term success. Founders and investors share candid lessons on hiring, structuring, and scaling teams that actually win. New episodes coming February 19.
In this episode, Andy talks with Richard Carson, author of The Book of Change. If you feel like you barely finish one change before the next one hits, this conversation is for you. Richard shares his deeply researched and battle-tested framework called People Sustained Organizational Change Management, or PSOCM. Unlike many change management books, this is not about certifications or slogans. It is about building a repeatable system to diagnose problems, distinguish adaptive from transformational change, and gain executive traction when support is not automatic.
You will hear why so many change efforts fail before they even begin, how to craft a clear problem statement, and what leaders often misunderstand about the type of change they are facing. Richard also explains why he chose the phrase "People Sustained" and how thinking structurally about change can even help at home.
If you're looking for practical, grounded insights on leading through continuous change, this episode is for you!
You can learn more about Richard and his work at RichardCarson.org. Make sure to get the free ebook download.
For more learning on this topic, check out:
If you or someone you know is thinking about getting PMP certified, we've put together a helpful guide called The 5 Best Resources to Help You Pass the PMP Exam on Your First Try. We've helped thousands of people earn their certification, and we'd love to help you too. It's totally free, and it's a great way to get a head start.
Just go to 5BestResources.PeopleAndProjectsPodcast.com to grab your copy. I'd love to help you get your PMP this year!
I know you want to be a more confident leader—that's why you listen to this podcast. LEAD52 is a global community of people like you who are committed to transforming their ability to lead and deliver. It's 52 weeks of leadership learning, delivered right to your inbox, taking less than 5 minutes a week. And it's all for free. Learn more and sign up at GetLEAD52.com. Thanks!
Talent Triangle: Business Acumen
Topics: Change Management, Organizational Change, Leadership, Executive Sponsorship, Problem Identification, Adaptive Change, Transformational Change, Strategic Thinking, Organizational Culture, Project Leadership, Continuous Improvement, Stakeholder Engagement
The following music was used for this episode:
Music: Lullaby of Light feat Cory Friesenhan by Sascha Ende
License (CC BY 4.0): https://filmmusic.io/standard-license
Music: Tropical Vibe by WinnieTheMoog
License (CC BY 4.0): https://filmmusic.io/standard-license
Thank you for joining me for this episode of The People and Projects Podcast!