Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Leak: Nvidia is about to challenge ‘Intel Inside’ with as many as eight Arm laptops

1 Share
This is not an Nvidia Arm laptop, but the old image seemed thematically appropriate.

Intel and AMD have split the Windows laptop market for years, but the x86 players may be getting outnumbered. It's not just Apple MacBooks and MediaTek-based Chromebooks using Arm chips anymore. There are finally competent Qualcomm Snapdragon laptops running Windows, and - as soon as this spring - Nvidia will finally power Windows consumer laptops with Arm chips all by itself.

They won't have an Nvidia graphics chip next to an Intel CPU, but rather an Nvidia N1 system-on-chip at the helm - and overnight, a Lenovo leak revealed that the company has built six laptops on the upcoming N1 and N1X processors, including a 15-inch gaming machine.

Read the full story at The Verge.

Read the whole story
alvinashcraft
7 hours ago
reply
Pennsylvania, USA
Share this story
Delete

PowerShell Architect Retires After Decades At the Prompt

Jeffrey Snover, the driving force behind PowerShell, has retired after a career that reshaped Windows administration. The Register reports: Snover's retirement comes after a brief sojourn at Google as a Distinguished Engineer, following a lengthy stint at Microsoft, during which he pulled the company back from imposing a graphical user interface (GUI) on administrators who really just wanted a command line from which to run their scripts. Snover joined Microsoft as the 20th century drew to a close. The company was all about its Windows operating system and user interface in those days -- great for end users, but not so good for administrators managing fleets of servers. Snover correctly predicted a shift to server datacenters, which would require automated management. A powerful shell... a PowerShell, if you will. [...] Over the years, Snover has dropped the occasional pearl of wisdom or shared memories from his time getting PowerShell off the ground. A recent favorite concerns the naming of Cmdlets and their original name in Monad: Function Units, or FUs. Snover wrote: "This abbreviation reflected the Unix smart-ass culture I was embracing at the time. Plus I was developing this in a hostile environment, and my sense of diplomacy was not yet fully operational." Snover doubtless has many more war stories to share. In the meantime, however, we wish him well. Many admins owe Snover thanks for persuading Microsoft that its GUI obsession did not translate to the datacenter, and for lengthy careers in gluing enterprise systems together with some scripted automation.

Read more of this story at Slashdot.


Data Leak Exposes 149M Logins, Including Gmail, Facebook


A massive unsecured database exposed 149 million logins, raising concerns over infostealer malware and credential theft.

The post Data Leak Exposes 149M Logins, Including Gmail, Facebook appeared first on TechRepublic.


From runtime risk to real‑time defense: Securing AI agents


AI agents, whether developed in Microsoft Copilot Studio or on alternative platforms, are becoming a powerful means for organizations to create custom solutions designed to enhance productivity and automate organizational processes by seamlessly integrating with internal data and systems. 

From a security research perspective, this shift introduces a fundamental change in the threat landscape. As Microsoft Defender researchers evaluate how agents behave under adversarial pressure, one risk stands out: once deployed, agents can access sensitive data and execute privileged actions based on natural language input alone. If a threat actor can influence how an agent plans or sequences those actions, the result may be unintended behavior that operates entirely within the agent’s allowed permissions, which makes it difficult to detect using traditional controls.

To address this, it is important to have a mechanism for verifying and controlling agent behavior during runtime, not just at build time. 

By inspecting agent behavior as it executes, defenders can evaluate whether individual actions align with intended use and policy. In Microsoft Copilot Studio, this is supported through real-time protection during tool invocation, where Microsoft Defender performs security checks that determine whether each action should be allowed or blocked before execution. This approach provides security teams with runtime oversight into agent behavior while preserving the flexibility that makes agents valuable. 

In this article, we examine three scenarios inspired by observed and emerging AI attack techniques, where threat actors attempt to manipulate agent tool invocation to produce unsafe outcomes, often without the agent creator’s awareness. For each scenario, we show how webhook-based runtime checks, implemented through Defender integration with Copilot Studio, can detect and stop these risky actions in real time, giving security teams the observability and control needed to deploy agents with confidence. 

Topics, tools, and knowledge sources: How AI agents execute actions and why attackers target them

Figure 1: A visual representation of the three elements Copilot Studio agents rely on to respond to user prompts.

Microsoft Copilot Studio agents are composed of multiple components that work together to interpret input, plan actions, and execute tasks. From a security perspective, these same components (topics, tools, and knowledge sources) also define the agent’s effective attack surface. Understanding how they interact is essential to recognizing how attackers may attempt to influence agent behavior, particularly in environments that rely on generative orchestration to chain actions at runtime. Because these components determine how the agent responds to user prompts and autonomous triggers, crafted input becomes a primary vector for steering the agent toward unintended or unsafe execution paths. 

When using generative orchestration, each user input or trigger can cause the orchestrator to dynamically build and execute a multi-step plan, leveraging all three components to deliver accurate and context-aware results. 

  1. Topics are modular conversation flows triggered by specific user phrases. Each topic is made up of nodes that guide the conversation step-by-step, and can include actions, questions, or conditions.
  2. Tools are the capabilities the copilot can call during a conversation, such as connector actions, AI Builder models, or generative answers. These can be embedded within topics or executed independently, giving the agent flexibility in how it handles requests.
  3. Knowledge sources enhance generative answers by grounding them in reliable enterprise content. When configured, they allow the copilot to access information from Power Platform, Dynamics 365, websites, and other external systems, ensuring responses are accurate and contextually relevant.

Read more about Microsoft Copilot Studio agents here.

Understanding and mitigating potential risks with real-time protection in Microsoft Defender 

In the model above, the agent’s capabilities are effectively equivalent to code execution in the environment. When a tool is invoked, it can perform real-world actions – read or write data, send emails, update records, or trigger workflows – just like executing a command inside a sandbox, where the sandbox is the set of all the agent’s capabilities. This means that if an attacker can influence the agent’s plan, they can indirectly cause the execution of unintended operations within that sandbox.

From a security lens:

  • The risk is that the agent’s orchestrator depends on natural language input to determine which tools to use and how to use them. This creates exposure to prompt injection and reprogramming failures, where malicious prompts, embedded instructions, or crafted documents can manipulate the decision-making process. 
  • The exploit occurs when these manipulated instructions lead the agent to perform unauthorized tool use, such as exfiltrating data, carrying out unintended actions, or accessing sensitive resources, without directly compromising the underlying systems. 

Because of this, Microsoft Defender treats every tool invocation as a high-value, high-risk event, and monitors it in real time. Before any tool, topic, or knowledge action is executed, the Copilot Studio generative orchestrator initiates a webhook call to Defender. This call transmits all relevant context for the planned invocation, including the current component’s parameters, outputs from previous steps in the orchestration chain, user context, and other metadata.

Defender analyzes this information, evaluating both the intent and destination of every action, and decides in real time whether to allow or block the action, providing precise runtime control without requiring any changes to the agent’s internal orchestration logic.  
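The Defender side of this exchange is proprietary, but the shape of such a runtime check is easy to illustrate. Below is a minimal sketch assuming a hypothetical payload schema and toy policy rules – none of these field names or rules come from the actual Defender API:

```python
# Illustrative sketch of a webhook-style runtime check for an agent tool
# invocation. All field names and policy rules here are hypothetical toy
# examples, not the actual Defender integration schema.

BLOCKED_DOMAINS = {"attacker.example"}  # destinations never allowed

def evaluate_invocation(payload: dict) -> str:
    """Decide whether a planned tool invocation is allowed or blocked."""
    tool = payload.get("tool", "")
    params = payload.get("parameters", {})

    # Check the destination of outbound actions against policy.
    if tool == "send_email":
        recipient = params.get("to", "")
        domain = recipient.rsplit("@", 1)[-1].lower()
        if domain in BLOCKED_DOMAINS:
            return "block"

    # Check inputs inherited from earlier steps for injected instructions.
    combined_input = " ".join(str(v) for v in params.values()).lower()
    if "ignore previous instructions" in combined_input:
        return "block"

    return "allow"

print(evaluate_invocation({
    "tool": "send_email",
    "parameters": {"to": "drop@attacker.example", "body": "report"},
}))  # block
print(evaluate_invocation({
    "tool": "send_email",
    "parameters": {"to": "billing@contoso.com", "body": "invoice received"},
}))  # allow
```

A real deployment layers threat intelligence and cross-step context on top of simple rules like these; the point is only that every invocation passes through an allow/block decision before it executes.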

By viewing tools as privileged execution points and inspecting them with the same rigor we apply to traditional code execution, we can give organizations the confidence to deploy agents at scale – without opening the door to exploitation. 

Below are three realistic scenarios where our webhook-based security checks step in to protect against unsafe actions. 

Malicious instruction injection in an event-triggered workflow 

Consider the following business scenario: a finance agent is tasked with generating invoice records and responding to finance-related inquiries regarding the company. The agent is configured to automatically process all messages sent to the invoice@contoso.com mailbox using an event trigger. The agent uses the generative orchestrator, which enables it to dynamically combine tools, topics, and knowledge in a single execution plan.

In this setup: 

  • Trigger: An incoming email to invoice@contoso.com starts the workflow. 
  • Tool: The CRM connector is used to create or update a record with extracted payment details. 
  • Tool: The email sending tool sends confirmation back to the sender. 
  • Knowledge: A company-provided finance policy file was uploaded to the agent so it can answer questions about payment terms, refund procedures, and invoice handling rules. 

The agent’s instructions tell it to handle only invoice data and basic finance-related FAQs, but because generative orchestration can freely chain together tools, topics, and knowledge, its plan can, under certain conditions, be adapted or bypassed based on the content of the incoming email.

A malicious external sender could craft an email that appears to contain invoice data but also includes hidden instructions telling the agent to search for unrelated sensitive information from its knowledge base and send it to the attacker’s mailbox. Without safeguards, the orchestrator could interpret this as a valid request and insert a knowledge search step into its multi-component plan, followed by an email sent to the attacker’s address with the results. 

Before the knowledge component is invoked, Microsoft Copilot Studio sends a webhook request to Defender containing:

  • The target action (knowledge search). 
  • Search query parameters derived from the orchestrator’s plan. 
  • Outputs from previous orchestration steps. 
  • Context from the triggering email. 

Agent Runtime Protection analyzes the request and blocks the invocation before it executes, ensuring that the agent’s knowledge base is never queried with the attacker’s input.

This action is logged in the Activity History, where administrators can see that the invocation was blocked, along with an error message indicating that the threat-detection controls intervened.

In addition, an XDR informational alert will be triggered in the security portal to keep the security team aware of potential attacks (even though this specific attack was blocked).

Prompt injection via shared document leading to malicious email exfiltration attempt 

Consider that an organizational agent is connected to the company’s cloud-based SharePoint environment, which stores internal documents. The agent’s purpose is to retrieve documents, summarize their content, extract action items, and send these to relevant recipients. 

To perform these tasks, the agent uses: 

  • Tool A – to access SharePoint files within a site (using the signed-in user’s identity) 

A malicious insider edits a SharePoint document that they have permission to modify, inserting crafted instructions intended to manipulate the organizational agent’s behavior.

When the crafted file is processed, the agent is tricked into locating and reading the contents of a sensitive file, transactions.pdf, stored in a different SharePoint location that the attacker cannot directly access but that the connector (and thus the agent) is permitted to access. The agent then attempts to send the file’s contents via email to an attacker-controlled domain.

At the point of invoking the email-sending tool, Microsoft Threat Intelligence detects that the activity may be malicious and blocks the email, preventing data exfiltration. 

Capability reconnaissance attempt on agent 

A publicly accessible support chatbot is embedded on the company’s website without requiring user authentication. The chatbot is configured with a knowledge base that includes customer information and points of contact. 

An attacker interacts with the chatbot using a series of carefully crafted and sophisticated prompts to probe and enumerate its internal capabilities. This reconnaissance aims to discover available tools and potential actions the agent can perform, with the goal of exploiting them in later interactions. 

After the attacker identifies the knowledge sources accessible to the agent, they can extract all information from those sources, including potentially sensitive customer data and internal contact details, or steer the agent into performing unintended actions.

Microsoft Defender detects these probing attempts and acts to block any subsequent tool invocations that were triggered as a direct result, preventing the attacker from leveraging the discovered capabilities to access or exfiltrate sensitive data. 

Final words 

Securing Microsoft Copilot Studio agents during runtime is critical to maintaining trust, protecting sensitive data, and ensuring compliance in real-world deployments. As demonstrated through the above scenarios, even the most sophisticated generative orchestrations can be exploited if tool invocations are not carefully monitored and controlled. 

With Defender’s webhook-based runtime inspection combined with advanced threat intelligence, organizations gain a powerful safeguard that can detect and block malicious or unintended actions as they happen, without disrupting legitimate workflows or requiring intrusive changes to agent logic (see more details in the ‘Learn more’ section below).

This approach provides a flexible and scalable security layer that evolves alongside emerging attack techniques and enables confident adoption of AI-powered agents across diverse enterprise use cases. 

As you build and deploy your own Microsoft Copilot Studio agents, incorporating real-time webhook security checks will be an essential step in delivering safe, reliable, and responsible AI experiences. 

This research is provided by Microsoft Defender Security Research, with contributions from Dor Edry and Uri Oren.

Learn more

  • Review our documentation to learn more about our real-time protection capabilities and see how to enable them within your organization.  

The post From runtime risk to real‑time defense: Securing AI agents appeared first on Microsoft Security Blog.


How to Connect Azure SRE Agent to Azure MCP


Prerequisites

Before you begin, ensure you have:

  • An active Azure subscription
  • An Azure SRE Agent deployed and accessible

Step 1: Add an MCP Connector Using the Portal UI

To connect Azure MCP to your SRE Agent, you need to add an MCP connector through the Azure Portal. This connector tells the SRE Agent how to communicate with the Azure MCP server.

Navigate to the MCP Connectors section:

  1. Open the Azure Portal and navigate to your SRE Agent resource
  2. In the left navigation menu, select Connectors under the Settings section
  3. Click + Add MCP Connector to open the configuration panel

Configure the connector settings:

  1. Name: Enter a descriptive name for your connector (for example, "Azure MCP Server")
  2. Connection Type: Select stdio from the dropdown menu. This tells the agent to communicate with the MCP server through standard input/output
  3. Arguments: Enter the command arguments that will launch the Azure MCP server. Use the following format, with each argument separated by a comma:
npx, -y, @azure/mcp, server, start

Customizing Tool Exposure:

You can customize which tools Azure MCP exposes to your agent by adding optional arguments:

  • To expose only tools in the subscription namespace, add: --namespace, subscription
  • To expose all tools without the namespace wrapper, add: --mode, all
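The comma-separated format is just the portal's way of collecting an argv list: each entry becomes one argument of the launched stdio process. A small sketch of the mapping (the SRE Agent launches and manages the process itself; this is for illustration only):

```python
# Sketch: how the portal's comma-separated "Arguments" field maps to the
# command line a stdio MCP connector launches. The SRE Agent manages the
# process itself; this is shown only to make the mapping concrete.

portal_arguments = "npx, -y, @azure/mcp, server, start"

# Each comma-separated entry becomes one argv element.
argv = [part.strip() for part in portal_arguments.split(",")]
print(argv)  # ['npx', '-y', '@azure/mcp', 'server', 'start']

# Optional tool-exposure filters from the section above would be appended:
#   --namespace, subscription   -> expose only the subscription namespace
#   --mode, all                 -> expose all tools without the wrapper
```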

Step 2: Configure Managed Identity

Select a managed identity from the Managed Identity dropdown menu. Azure MCP will use this identity to make downstream API calls. The capabilities of Azure MCP are bounded by the permissions granted to this identity.

Add the following required environment variables:

| Variable | Value | Purpose |
| --- | --- | --- |
| AZURE_CLIENT_ID | <client-id-of-managed-identity> | Specifies which managed identity to use |
| AZURE_TOKEN_CREDENTIALS | managedidentitycredential | Tells the server to only use managed identity |

The AZURE_CLIENT_ID must match the client ID of the managed identity selected in the dropdown. Consult the Azure MCP documentation for additional environment variables that can customize behavior.

Important: You must assign the necessary Azure RBAC roles to this managed identity for it to perform the actions in its tool calls. For example, if you want the agent to list resources, grant the identity at least Reader access on the relevant subscriptions or resource groups.

Step 3: Create a Subagent Using Subagent Builder

Use the Subagent Builder to create a subagent that leverages your MCP connector:

  1. Give the subagent a meaningful name (for example, "Azure Resource Manager")
  2. Provide helpful instructions on when and how to use its MCP tools
  3. Click "Choose tools" and add the previously configured MCP connector to the subagent's available tools

Example instructions for your subagent:

You are an Azure resource management assistant. Use the Azure MCP tools to:
- List Azure subscriptions the user has access to
- Query resources across subscriptions
- Retrieve resource details and configurations

Always confirm the subscription context before performing operations.

Step 4: Test Your Configuration

Test your configuration by calling an MCP tool in the subagent's playground:

  1. Open the subagent playground
  2. Ask a question that triggers an MCP tool call (for example, "List my Azure subscriptions")
  3. View the trace of the interaction to verify:
    • The tool call was made correctly
    • The tool response contains the expected data

Security Considerations

Managed Identity Access Control

When used with the SRE Agent in this configuration, Azure MCP can only authenticate with a managed identity. This design has an important security implication: if users are granted access to the SRE Agent, they effectively inherit the permissions of the agent's managed identity.

This can accidentally provide over-privileged access to users if:

  • The SRE Agent's managed identity has broad permissions across Azure resources
  • Users are granted access to the SRE Agent who should not have access to those resources

Best Practices:

  1. Follow the principle of least privilege when assigning roles to the managed identity
  2. Scope permissions to specific resource groups rather than entire subscriptions when possible
  3. Regularly audit the managed identity's role assignments
  4. Consider creating separate SRE Agents with different managed identities for different user groups or use cases

Summary

Connecting Azure SRE Agent to Azure MCP enables powerful Azure-native capabilities for your AI agent. By following the steps above, you can configure your agent to interact with Azure resources securely using managed identity authentication. Remember to carefully consider the security implications of the managed identity's permissions and implement appropriate access controls.



Supercharging SharePoint Metadata with the Knowledge Agent in Microsoft 365 Copilot

1 Share

Organizations adopting Microsoft 365 Copilot are quickly realizing that great AI outcomes depend on great data foundations. But for many teams, keeping SharePoint content well‑structured, tagged, and optimized for search has always been a challenge.

Enter Knowledge Agent (Preview) - now included with Microsoft 365 Copilot (premium) - a powerful new capability that automates metadata enrichment inside SharePoint. In just a few minutes, it can transform an under‑tagged document library into a rich, AI‑ready resource that dramatically improves your Copilot experiences.

In a recent internal walkthrough, I tested the setup and was surprised by two things:

  1. How simple it was to enable, even as an administrator.
  2. How quickly it added meaningful metadata across an entire library.

Here’s what I learned - and why this matters for organizations.

 

Fast Setup With Zero Friction

To enable Knowledge Agent, I connected to SharePoint Online via PowerShell and configured the tenant settings. Even in a demo environment, the process was quick: scope your sites, validate the configuration, and Knowledge Agent becomes visible and active almost immediately.

No lengthy provisioning.
No complex dependencies.
Just turn it on and it lights up.

Automated Metadata That Powers Better Search & Better AI

Once enabled, Knowledge Agent can analyze your SharePoint libraries. It:

  • Reads documents and extracts meaningful context
  • Automatically generates new metadata columns
  • Populates those columns with structured information
  • Applies improvements across the entire library when requested

I tested this with a library full of medical procedure documents. Within moments, Knowledge Agent:

  • Identified key attributes across all files
  • Suggested and created metadata columns
  • Added structured classifications to each document

This enriched data is then available to improve your:

  • Copilot search index
  • SharePoint embeddings
  • Custom agents you build on top of SharePoint
  • Future Microsoft 365 Copilot experiences

The result?
More accurate responses, better grounding, and a significant boost in AI performance.

Why This Matters for AI Readiness

Customers often ask:
“Why can’t Copilot find the right documents?”
or
“Why does our AI agent return inconsistent results?”

The answer is usually metadata - or the lack of it.

Knowledge Agent solves this by automating the hard part: reading files, extracting meaning, and generating structure at scale. Even a single well-optimized SharePoint library can dramatically improve:

  • Governance controls
  • Document discovery
  • Copilot Chat answers
  • SharePoint‑based Agents
  • Enterprise search reliability

It’s one of the fastest ways to uplift AI accuracy without requiring major data cleanup projects.

A Simple Way to Boost AI Value Immediately

The Knowledge Agent is one of the most practical additions to the Microsoft 365 ecosystem this year. It gives administrators a powerful way to modernize file libraries with minimal effort while helping organizations get the most out of Copilot.

If your team relies on SharePoint, Copilot, or is exploring custom agent development, turning on Knowledge Agent should be at the top of your checklist. It’s fast, impactful, and unlocks meaningful value across your AI stack.
