Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Microsoft Copilot MCP: Unlocking the Power of the Model Context Protocol


One of the problems commonly associated with AI models, such as those used by Microsoft Copilot, is that they are limited by their training data. The Model Context Protocol (MCP) is designed to help Copilot go beyond this limit by acting as a connector to content that is not found within the training data. This content may include databases, APIs, file systems, or other data sources.

Microsoft Copilot MCP – How it works

The most important thing to understand about MCP is that it is designed to be an open protocol that Large Language Model (LLM) applications can use to connect to external data sources or services. Large Language Models are used in a wide variety of generative AI applications.

This means that if an organization wishes to link a generative AI application to its data, it no longer has to build custom connectors for its application. Instead, MCP can serve as a bridge between applications and external data sources, and any Large Language Model-based application can leverage it. To put it another way, MCP provides a standard way for Large Language Models and AI agents to access external resources. Typically, this process involves leveraging SDKs and some simple Python coding.

MCP with Microsoft Copilot Studio

Although MCP is an open protocol that can be used with a wide range of Large Language Models, it works especially well with Microsoft Copilot, and in particular with Microsoft Copilot Studio. For those who might not be familiar with this tool, Copilot Studio is a low code / pro code tool that organizations can use to develop their own custom Copilots.

Copilot Studio in action (Image Credit: Brien Posey/Petri.com)

MCP server

The MCP server is one of the key components used to make MCP work. An MCP server's job is to provide capabilities to a Large Language Model. These capabilities fall into three basic categories: tools, resources, and prompts.

Tools

Tools are essentially automated workflows that can be executed in real time. Any task that you can automate can potentially be turned into a tool. A tool might, for instance, use orchestration to search for flights, send an email message, create a calendar entry, perform a database query, or create a purchase order.

Resources

Resources are just that – they are the resources that the Large Language Model needs access to. These resources can consist of structured data, unstructured data (files and folders), or both. MCP servers provide passive access to this data, treating resources as read only. Resources can be used to retrieve documents, access knowledge bases, read calendars, and more.

Microsoft Copilot MCP – Connecting to an MCP server in Copilot Studio (Image Credit: Microsoft)

Prompts

A prompt is simply an instruction that is meant to provide guidance to an LLM. As an example, if you type a query into ChatGPT, what you are really doing is creating a prompt. The difference is that MCP makes use of pre-built prompts, which are sometimes called prompt templates. These templates contain LLM prompts that can be used repeatedly.
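
To make these three capability categories concrete, here is a minimal sketch of an MCP server built with the official Python SDK's FastMCP helper. The server name, the candidate-search tool, the policy resource, and the screening prompt are all hypothetical examples invented for illustration; they are not part of any Microsoft product.

# Minimal MCP server sketch using the official Python SDK (package name: "mcp").
# Every name below (server, tool, resource, prompt) is illustrative only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("hr-demo-server")

@mcp.tool()
def search_candidates(role: str, location: str) -> list[str]:
    """Tool: an automated workflow the LLM can invoke in real time."""
    # A real implementation might call an HR system or talent-search API here.
    return [f"Example candidate for {role} in {location}"]

@mcp.resource("hr://policies/hiring")
def hiring_policy() -> str:
    """Resource: read-only data the LLM can retrieve for context."""
    return "Hypothetical hiring policy text loaded from a file or database."

@mcp.prompt()
def screening_questions(role: str) -> str:
    """Prompt: a reusable template that guides the LLM."""
    return f"Draft five screening questions for a {role} position."

if __name__ == "__main__":
    mcp.run()  # Exposes the tool, resource, and prompt over an MCP transport.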

The runtime workflow

When an end user interacts with Copilot, the underlying AI agent uses the MCP server to fulfill the user’s request. Suppose, for instance, that the user is looking to hire new staff members. That user might enter a query such as, “help me to find a new Marketing Manager”. Upon receiving this prompt, the AI model may determine that it needs to use one of the available tools, in which case it sends a tool invocation request to the MCP server. The MCP server then runs an orchestrated process that queries LinkedIn as part of a talent search automation routine. Finally, the server returns schema-validated results that the AI agent uses to formulate a response for the end user.
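
Under the hood, this tool invocation is a JSON-RPC exchange defined by the MCP specification. The sketch below shows the approximate shape of the messages for the hiring example; the tool name and arguments are hypothetical, and real payloads carry additional fields, so treat this as an outline rather than the exact wire format.

# Approximate shape of an MCP "tools/call" exchange, written as Python dictionaries.
tool_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "search_candidates",  # hypothetical tool exposed by the MCP server
        "arguments": {"role": "Marketing Manager", "location": "remote"},
    },
}

tool_response = {
    "jsonrpc": "2.0",
    "id": 7,
    "result": {
        # Schema-validated content that the agent uses to formulate its reply.
        "content": [{"type": "text", "text": "3 matching candidates were found."}],
        "isError": False,
    },
}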

Where Microsoft Copilot Studio fits in

When you connect Microsoft Copilot Studio to an MCP server, Copilot Studio takes on the role of an MCP host. At that point, Copilot Studio is responsible for maintaining a connection to the server and for discovering the server’s capabilities. Copilot Studio then makes the MCP server’s functionality available to Copilots.

Interestingly, when you connect Microsoft Copilot Studio to an MCP server, Copilot Studio will automatically query the MCP server and download metadata from it. This metadata contains tool definitions, resource schemas, prompt templates, and version and capability descriptions.

The version information included within the metadata is important because, like other Microsoft products, Copilot Studio is designed to make sure that everything is kept up to date. By referencing the version information, Copilot Studio can ensure that Copilots always have the latest versions of tools, resources, and prompts and that older, obsolete versions are removed.

It is worth noting that this automated version control is specific to Microsoft environments. If you were to use MCP with a non-Microsoft product, then version control would need to be handled manually.

MCP simplifies Copilot development

One of the most important benefits associated with MCP is that business logic resides within the MCP server rather than inside of the Copilot. This can greatly simplify change management, especially for organizations that have large collections of Copilots.

If, for example, an organization were to apply a hotfix to an MCP server, its agents would pick up the change instantly. Likewise, if you create a new backend API, you can expose that API once through the MCP server and all Copilots will have access to it. If you create a new tool or modify an existing tool, Copilot will use the new version immediately.

The development process

Microsoft has done its best to make the development process as straightforward as possible. The prerequisites include an MCP client and a service endpoint. Oftentimes, developers use Visual Studio Code for the development process. One of the advantages of using Visual Studio Code is that it integrates with GitHub Copilot, which can use AI-generated outputs to assist with writing the code.

Connecting an agent to an MCP server

Microsoft provides two different options for connecting an agent to an MCP server. The first method (which is generally preferred) involves using the MCP Onboarding Wizard. The second option is to create a custom connector using PowerApps (part of the Power Platform).

Regardless of whether an organization chooses to use the wizard or a custom connector, the organization must implement an authentication mechanism that will allow the agent to access the MCP server and its contents. There are two ways of handling the authentication process.

The first option is to use API key authentication, in which case the server accepts an API key in place of user credentials.

The second option is to base the authentication process on OAuth 2.0. While you can configure the OAuth 2.0 settings manually, the client can also use a discovery endpoint to look up the provider’s settings and then register itself with the provider automatically.

Connection Options:

  • MCP Onboarding Wizard – Preferred, guided setup.
  • Custom Connector via PowerApps – Manual configuration.

Authentication Methods:

  • API Key – Simple key-based access.
  • OAuth 2.0 – Secure token-based access with discovery endpoint.
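
As a rough illustration of the OAuth 2.0 discovery step described above, the sketch below fetches a server's authorization server metadata (RFC 8414) and then registers a client dynamically (RFC 7591). The server URL, client name, and redirect URI are placeholders, and Copilot Studio performs this handshake for you; the code is only meant to show the underlying mechanism.

# Sketch of OAuth 2.0 discovery plus dynamic client registration (RFC 8414 / RFC 7591).
# The MCP server URL and client metadata below are placeholders.
import requests

base_url = "https://mcp.example.com"  # hypothetical MCP server

# 1. Discover the authorization server's endpoints from its well-known metadata.
metadata = requests.get(
    f"{base_url}/.well-known/oauth-authorization-server", timeout=10
).json()

# 2. Register this client dynamically with the provider.
registration = requests.post(
    metadata["registration_endpoint"],
    json={
        "client_name": "example-mcp-client",
        "redirect_uris": ["https://client.example.com/callback"],
        "grant_types": ["authorization_code"],
    },
    timeout=10,
).json()

print("Registered client_id:", registration["client_id"])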

FAQs

What is the Model Context Protocol (MCP)?

MCP is an open protocol designed for LLM-based applications like Microsoft Copilot. It helps Copilot access external data sources such as APIs, databases, and file systems without custom connectors.

How does Microsoft Copilot MCP work?

When Copilot needs external data, it uses the MCP server to invoke tools and workflows. The server returns validated results, enabling Copilot to provide accurate, context-rich responses.

Do I need Copilot Studio to use MCP?

MCP works with any LLM application, but Copilot Studio is optimized for it. Copilot Studio automatically discovers MCP server capabilities and makes them available to your custom copilots.

How hard is it to get started with Microsoft Copilot MCP?

Setup is straightforward: you need an MCP client and endpoint. Most developers use Visual Studio Code and can connect via the Onboarding Wizard or build custom connectors with PowerApps.

The post Microsoft Copilot MCP: Unlocking the Power of the Model Context Protocol appeared first on Petri IT Knowledgebase.


VW Brings Back Physical Buttons

sinij shares a report from Car and Driver: Volkswagen is making a drastic change to its interiors, or at least the interiors of its electric vehicles. The automaker recently unveiled a new cockpit generation with the refreshed ID. Polo -- the diminutive electric hatchback that the brand sells in Europe -- that now comes with physical buttons. [...] The steering wheel gets new clusters of buttons for cruise control and interacting with music playback, while switches for the temperature and fan speed now live in a row along the dashboard. The move back to buttons doesn't come out of nowhere. Volkswagen already started the shift with the new versions of the Golf and Tiguan models in the United States. Unfortunately, some climate controls, such as those for the rear defrost and the heated seats, are still accessed through the touchscreen. Thankfully, they look to retain their dedicated spot at the bottom of the display. Volkswagen hasn't announced which models will receive the new cockpit design. The redesigned interior also may be limited to the brand's electric vehicles, which would limit it to the upcoming refresh for the ID.4 SUV (and potentially the ID.Buzz), as the only VW EV models currently sold in America. "Unfortunately, the glued-on-dash tablet look is still there," adds sinij.

Read more of this story at Slashdot.


Meta hits pause on Ray-Ban Display expansion plans

Meta’s Ray-Ban Display smart glasses.
Plans to launch in France, Italy, Canada, and the UK have been put on hold.

International customers who had their sights set on Meta's Ray-Ban Display smart glasses will have to wait a little longer. Meta announced that it's pausing plans to launch the glasses in France, Italy, Canada, and the UK by early 2026, due to "unprecedented demand and limited inventory."

"Since launching last fall, we've seen an overwhelming amount of interest, and as a result, product waitlists now extend well into 2026," Meta said in a CES blog post. "We'll continue to focus on fulfilling orders in the US while we re-evaluate our approach to international availability."

Meta hasn't provided a new target date for its international rollou …

Read the full story at The Verge.


Radar Trends to Watch: January 2026


Happy New Year! December was a short month, in part because of O’Reilly’s holiday break. But everybody relaxes at the end of the year—either that, or they’re on a sprint to finish something important. We’ll see. OpenAI’s end-of-year sprint was obvious: getting GPT-5.2 out. And getting Disney to invest in them, and license their characters for AI-generated images. They’ve said that they will have guardrails to prevent Mickey from doing anything inappropriate. We’ll see how long that works.

To get a good start on 2026, read Andrej Karpathy’s 2025 LLM Year in Review for an insightful summary of where we’ve been. Then follow up with the Resonant Computing Manifesto, which represents a lot of serious thought about what software should be.

AI

  • Google has released yet another new model: Function Gemma, a version of Gemma 3 270M that has been specifically adapted for function calling. It is designed to be fine-tuned easily and is targeted for small, “edge” devices. FunctionGemma is available on Hugging Face.
  • Anthropic is opening its Agent Skills (aka Claude Skills) spec and has added an administrative interface to give IT admins control over which tools are used and how. OpenAI has already quietly adopted them. Are skills about to become a de facto standard?
  • Google has released Gemini 3 Flash, the final model in its Gemini 3 series. Flash combines reasoning ability with speed and economy.
  • Margaret Mitchell writes about the difference between generative and predictive AI. Predictive AI is more likely to solve problems people are facing—and do so without straining resources.
  • NVIDIA has released Nano, the first of its Nemotron 3 models. The larger models in the family, Super and Ultra, are yet to come. Nano is a 30B parameter mixture-of-experts model. All of the Nemotron 3 models are fully open source (most training data, training recipes, pre- and posttraining software, in addition to weights).
  • GPT-5.2 was released. GPT-5.2 targets “professional knowledge workers”: It was designed for tasks like working on spreadsheets, writing documents, and the like. There are three versions: Thinking (a long-horizon reasoning model), Pro, and Instant (tuned for fast results).
  • Disney is investing in OpenAI. One consequence of this deal is that Disney is licensing its characters to OpenAI so that they can be used in the Sora video generator.
  • We understand what cloud native means. What does AI native mean, and what does MCP (and agents) have to do with it?
  • Researchers at the University of Waterloo have discovered a method for pretraining LLMs that is both more accurate than current techniques and 50% more efficient.
  • Mistral has released Devstral 2, their LLM for coding, along with Vibe, a command line interface for Devstral. Devstral comes in two sizes (123B and 24B) and is arguably open source.
  • Anthropic has donated MCP to the Agentic AI Foundation (AAIF), a new open source foundation spun out by the Linux Foundation. OpenAI has contributed AGENTS.md to the AAIF; Block has contributed its agentic platform goose.
  • Google Research has proposed a new Titans architecture for language models, along with the MIRAS framework. Together, they’re intended to allow models to work more efficiently with memory. Is this the next step beyond transformers?
  • Zebra-Llama is a new family of small hybrid models that achieve high efficiency by combining existing pretrained models. Zebra-Llama combines state space models (SSMs) with multihead latent attention (MLA) to achieve near-transformer accuracy with only 7B to 11B pretraining tokens and an 8B parameter teacher.
  • Now there’s Hugging Face Skills! They’ve used it to give Claude the ability to fine-tune an open source LLM. Hugging Face skills interoperate with Codex, Claude Code, Gemini CLI, and Cursor.
  • A research project at OpenAI has developed a model that will tell you when it has failed to follow instructions. This is called (perhaps inappropriately) “confession.” It may be a way for a model to tell when it has made up an answer.
  • Mistral 3 is here. Mistral 3 includes a Large model, plus three smaller models: Ministral 14B, 8B, and 3B. They all have open weights. Performance is comparable to similarly sized models. All of the models are vision-capable.
  • Wikipedia has developed an excellent guide to detecting AI-generated writing.
  • Claude 4.5 has a soul. . .or at least a “soul document” that was used in training to define its personality. Is this similar to the script in a Golem’s mouth?
  • DeepSeek has released V3.2, which incorporates the company’s sparse attention mechanism (DSA), scalable reinforcement learning, and a task synthesis pipeline. Like its predecessors, it’s an open weights model. There’s also a “Speciale” version, only available via API, that’s been tuned for extended reasoning sessions.
  • Black Forest Labs has released FLUX.2, a vision model that’s almost as good as Google’s Nano Banana but is open weight.

Programming

  • It’s Christmas, so of course Matz maintained the tradition of releasing another major version of Ruby–this year, 4.0.
  • The Tor Project is switching to Rust. The rewrite is named Arti and is ready for use.
  • A cognitive architect is a software developer who doesn’t write functions but decomposes larger problems into pieces. Isn’t this one of the things regular architects—and programmers—already do? We’ve been hearing this message from many different sources: It’s all about higher-order thinking.
  • Perhaps this isn’t news, but Rust in the Linux kernel is no longer considered experimental. It’s here to stay. Not all news is surprising.
  • Is PARK the LAMP stack for AI? PARK is PyTorch, AI, Ray, and Kubernetes. Those tools are shaping up to be the foundation of open source AI development. (Ray is a framework for distributing machine learning workloads.)
  • Bryan Cantrill, one of the founders of Oxide Computer, has published a document about how AI is used at Oxide. It’s well worth reading.
  • Go, Rust, and Zig are three relatively new general-purpose languages. Here’s an excellent comparison of the three.
  • Stack Overflow has released a new conversational search tool, AI Assist. It searches Stack Overflow and Stack Exchange and provides chat-like answers.
  • DocumentDB is an open source (MIT license) document store that combines the capabilities of MongoDB and PostgreSQL. It should be particularly useful for building AI applications, supporting session history, conversational history, and semantic caching.
  • “User experience is your moat… Your moat isn’t your model; it’s whether your users feel at home.” From Christina Wodtke’s Eleganthack. Well worth reading.

Security

  • SantaStealer is a new malware-as-a-service operation. It appears to be a rebranding of an older malware service, BluelineStealer, and targets data held in the browser, services like Telegram, Discord, and Steam, and cryptocurrency wallets. Just in time for the holidays.
  • Another list from MITRE: The top 25 software weaknesses for 2025. These are the most dangerous items added to the CVE database in 2025, based on severity and frequency. The top items on the list are familiar: cross-site scripting, SQL injection, and cross-site request forgery. Are you vulnerable?
  • What is the normalization of deviance in AI? It’s the false sense of safety that comes from ignoring issues like prompt injection because nothing bad has happened yet while simultaneously building agents that perform actions with real-world consequences.
  • Trains were canceled after an AI-edited image of a bridge collapse was posted on social media.
  • Virtual kidnapping is a thing. Nobody is kidnapped, but doctored images from social media are used to “prove” that a person is in captivity.
  • There’s an easy way to jailbreak LLMs. Write poetry. Writing a prompt as poetry seems to evade the defenses of most language models.
  • GreyNoise IP Check is a free tool that checks whether your IP address has appeared in a botnet.
  • Attackers are using LLMs to generate new malware. There are several LLMs offering vibe coding services for assisted malware generation. Researchers from Palo Alto Networks report on their capabilities.

Web

  • Google is adding a “user alignment critic” to Chrome. The critic monitors all actions taken by Gemini to ensure that they’re not triggered by indirect prompt injection. The alignment critic also limits Gemini to sites that are relevant to solving the user’s request.
  • Google is doing smart glasses again. The company’s showing off prototypes of Android XR. It’s Gemini-backed, of course; there will be monocular and binocular versions; and it can work with prescription lenses.
  • Another browser? Lightpanda is a web browser designed for machines—crawlers, agents, and other automated browsing applications—that’s built for speed and efficiency.
  • Yet another browser? Nook is open source, privacy protecting, and fast. And it’s for humans.
  • A VT100 terminal emulator in the browser? That’s what you wanted, right? ghostty-web has xterm.js API compatibility, and is built (of course) with Wasm.
  • The Brave browser is testing an AI-assisted mode using its privacy-preserving AI assistant, Leo. Leo can be used to perform agentic multistep tasks. It’s disabled by default.

Hardware

  • Arduino enthusiasts should familiarize themselves with the differences between the licenses for Arduino’s and Adafruit’s products. Adafruit’s licensing is clearly open source; now that Arduino is owned by Qualcomm, its licensing is confusing to say the least.
  • We’ve done a lot to democratize programming in the last decade. Is it now time to democratize designing microchips? Siddharth Garg and others at NYU think so.

Operations




AI-powered vulnerability triaging with GitLab Duo Security Agent


Security vulnerabilities are discovered constantly in modern applications. Development teams often face hundreds or thousands of findings from security scanners, making it challenging to identify which vulnerabilities pose the greatest risk and should be prioritized. This is where effective vulnerability triaging becomes essential.

In this article, we'll explore how GitLab's integrated security scanning capabilities combined with the GitLab Duo Security Analyst Agent can transform vulnerability management from a time-consuming manual process into an intelligent, efficient workflow.

💡 Join GitLab Transcend on February 10 to learn how agentic AI transforms software delivery. Hear from customers and discover how to jumpstart your own modernization journey. Register now.

What is vulnerability triaging?

Vulnerability triaging is the process of analyzing, prioritizing, and deciding how to address security findings discovered in your applications. Not all vulnerabilities are created equal — some represent critical risks requiring immediate attention, while others may be false positives or pose minimal threat in your specific context.

Traditional triaging involves:

  • Reviewing scan results from multiple security tools
  • Assessing severity based on CVSS scores and exploitability
  • Understanding context such as whether vulnerable code is actually reachable
  • Prioritizing remediation based on business impact and risk
  • Tracking resolution through to deployment

This process becomes overwhelming when dealing with large codebases and frequent scans. GitLab addresses these challenges through integrated security scanning and AI-powered analysis.

How to add integrated security scanners in GitLab

GitLab provides built-in security scanners that integrate seamlessly into your CI/CD pipelines. These scanners run automatically during pipeline execution and populate GitLab's Vulnerability Report with findings from the default branch.

Available security scanners

GitLab offers several built-in security scanning capabilities; the examples below cover SAST, Dependency Scanning, and Container Scanning.

Example: Adding SAST and Dependency Scanning

To enable security scanning, add the scanners to your .gitlab-ci.yml file.

In this example, we are including the SAST and Dependency Scanning templates, which automatically run those scanners in the test stage. Each scanner can be customized using variables (which differ for each scanner). For example, the SAST_EXCLUDED_PATHS variable tells SAST to skip the directories/files provided. Security jobs can be further overridden using standard GitLab job syntax.

include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/Dependency-Scanning.gitlab-ci.yml

stages:
  - test

variables:
  SAST_EXCLUDED_PATHS: "spec/, test/, tests/, tmp/"

Example: Adding Container Scanning

GitLab provides a built-in container registry where you can store container images for each GitLab project. To scan those containers for vulnerabilities, you can enable container scanning.

This example shows how a container is built and pushed in the build-container job running in the build stage and how it is then scanned in the same pipeline in the test stage:

include:
  - template: Security/Container-Scanning.gitlab-ci.yml

stages:
  - build
  - test

build-container:
  stage: build
  variables:
    IMAGE: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG:$CI_COMMIT_SHA
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build -t $IMAGE .
    - docker push $IMAGE

container_scanning:
  variables:
    CS_IMAGE: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG:$CI_COMMIT_SHA

Once configured, these scanners execute automatically in your pipeline and report findings to the Vulnerability Report.

Note: Although not covered in this blog, in merge requests, scanners show the diff of vulnerabilities from a feature branch to the target branch. Additionally, granular security policies can be created to prevent vulnerable code from being merged (without approval) if vulnerabilities are detected, as well as force scanners to run, regardless of how the .gitlab-ci.yml is defined.
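
Although policies are outside the scope of this walkthrough, the sketch below shows roughly what a scan execution policy looks like when stored in a linked security policy project. The key names follow current GitLab documentation but can vary between versions, so treat it as an illustration rather than a copy-and-paste template.

# Rough sketch of a scan execution policy that forces scanners to run on every branch.
# Field names may differ by GitLab version; consult the security policies documentation.
scan_execution_policy:
  - name: Enforce SAST and Secret Detection on all branches
    description: Run these scanners even if .gitlab-ci.yml does not include them.
    enabled: true
    rules:
      - type: pipeline
        branches:
          - "*"
    actions:
      - scan: sast
      - scan: secret_detection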

Triaging using the Vulnerability Report and Pages

After scanners run, GitLab aggregates all findings in centralized views that make triaging more manageable.

Accessing the Vulnerability Report

Navigate to Security & Compliance > Vulnerability Report in your project or group. This page displays all discovered vulnerabilities with key information:

  • Severity levels (Critical, High, Medium, Low, Info)
  • Status (Detected, Confirmed, Dismissed, Resolved)
  • Scanner type that detected the vulnerability
  • Affected files and lines of code
  • Detection date and pipeline information

Vulnerability Report

Filtering and organizing vulnerabilities

The Vulnerability Report provides powerful filtering options:

  • Filter by severity, status, scanner, identifier, and reachability
  • Group by severity, status, scanner, OWASP Top 10
  • Search for specific CVEs or vulnerability names
  • Sort by detection date or severity
  • View trends over time with the security dashboard

Manual workflow triage

Traditional triaging in GitLab involves:

  1. Reviewing each vulnerability by clicking into the detail page
  2. Assessing the description and understanding the potential impact
  3. Examining the affected code through integrated links
  4. Checking for existing fixes or patches in dependencies
  5. Setting status (Confirm, Dismiss with reason, or create an issue)
  6. Assigning ownership for remediation

This is an example of vulnerability data provided to allow for triage including the code flow:

Vulnerability Page 1

Vulnerability Page 2

Vulnerability Code Flow

When on the vulnerability data page, you can select Edit vulnerability to change its status as well as provide a reason. Then you can create an issue and assign ownership for remediation.

Vulnerability Page - Status Change

While this workflow is comprehensive, it requires security expertise and can be time-consuming when dealing with hundreds of findings. This is where GitLab Duo Security Analyst Agent, part of GitLab Duo Agent Platform, becomes invaluable.

About Security Analyst Agent and how to set it up

GitLab Duo Security Analyst Agent is an AI-powered tool that automates vulnerability analysis and triaging. The agent understands your application context, evaluates risk intelligently, and provides actionable recommendations.

What Security Analyst Agent does

The agent analyzes vulnerabilities by:

  • Evaluating exploitability in your specific codebase context
  • Assessing reachability to determine if vulnerable code paths are actually used
  • Prioritizing based on risk rather than just CVSS scores
  • Explaining vulnerabilities in clear, actionable language
  • Recommending remediation steps specific to your application
  • Reducing false positives through contextual analysis

Prerequisites

To use Security Analyst Agent, you need:

  • GitLab Ultimate subscription with GitLab Duo Agent Platform enabled
  • Security scanners configured in your project
  • At least one vulnerability in your Vulnerability Report

Enabling Security Analyst Agent

Security Analyst Agent is a foundational agent. Unlike the general-purpose GitLab Duo agent, foundational agents understand the unique workflows, frameworks, and best practices of their specialized domains. Foundational agents can be accessed directly from your project without any additional configuration.

You can find Security Analyst Agent in the AI Catalog:

AI Catalog

To dive in and see the details of the agent, such as its system prompt and tools:

  1. Navigate to gitlab.com/explore/.
  2. Select AI Catalog from the side tab.
  3. Select Security Analyst Agent from the list.

Security Analyst Agent Details 1

Security Analyst Agent Details 2

The agent is integrated directly into your existing workflow without requiring additional configuration beyond the defined prerequisites.

Using Security Analyst Agent to find most critical vulnerabilities

Now let's explore how to leverage Security Analyst Agent to quickly identify and prioritize the vulnerabilities that matter most.

Starting an analysis

To start an analysis, navigate to your GitLab project (ensure it meets the prerequisites). Then you can open GitLab Duo Chat and select the Security Agent.

Security Analyst Agent selection

From the chat, select the model to use with the agent and make sure to enable Agentic mode.

Security Analyst Agent - Model Selection

A chat will open where you can engage with Security Analyst Agent by using the agent's conversational interface. This agent can perform:

  • Vulnerability triage: Analyze and prioritize security findings across different scan types.
  • Risk assessment: Evaluate the severity, exploitability, and business impact of vulnerabilities.
  • False positive identification: Distinguish genuine threats from benign findings.
  • Compliance management: Understand regulatory requirements and remediation timelines.
  • Security reporting: Generate summaries of security posture and remediation progress.
  • Remediation planning: Create actionable plans to address security vulnerabilities.
  • Security workflow automation: Streamline repetitive security assessment tasks.

Additionally, these are the tools which Security Analyst Agent has at its disposal:

Security Analyst Agent - tools

For example, I can ask "What are the most critical vulnerabilities and which vulnerabilities should I address first?" to make it easy to determine what is important. The agent will respond as follows:

Security Analyst Agent responses (screenshots 1-6)

Example queries for effective triaging

Here are powerful queries to use with the Security Analyst Agent:

Identify critical issues:

"Show me vulnerabilities that are actively exploitable in our production code"

Focus on reachable vulnerabilities:

"Which high-severity vulnerabilities are in code paths that are actually executed?"

Understand dependencies:

"What are the most critical dependency vulnerabilities and are patches available?"

Get remediation guidance:

"Explain how to fix the SQL injection vulnerability in user authentication"

You can also directly assign developers to vulnerabilities.

Understanding agent recommendations

When Security Analyst Agent analyzes vulnerabilities, it provides:

Risk assessment: The agent explains why a vulnerability is critical beyond just the CVSS score, considering your application's specific architecture and usage patterns.

Exploitability analysis: It determines whether vulnerable code is actually reachable and exploitable in your environment, helping filter out theoretical risks.

Remediation steps: The agent provides specific, actionable guidance on how to fix vulnerabilities, including code examples when appropriate.

Priority ranking: Instead of overwhelming you with hundreds of findings, the agent helps identify the top issues that should be addressed first.

Real-world workflow example

Here's how a typical triaging session might look:

  1. Start with the big picture: "Analyze the security posture of this project and highlight the top 5 most critical vulnerabilities."
  2. Dive into specifics: For each critical vulnerability identified, ask "Is this vulnerability actually exploitable in our application?"
  3. Plan remediation: "What's the recommended fix for this SQL injection issue, and are there any side effects to consider?"
  4. Track progress: After addressing critical issues, ask "What vulnerabilities should I prioritize next?"

Benefits of agent-assisted triaging

Using Security Analyst Agent transforms vulnerability management:

  • Time savings: Reduce hours of manual analysis to minutes of guided review
  • Better prioritization: Focus on vulnerabilities that actually pose risk to your specific application
  • Knowledge transfer: Learn security best practices through agent explanations
  • Consistent standards: Apply consistent triaging logic across all projects
  • Reduced alert fatigue: Filter noise and false positives effectively

Get started today

Vulnerability triaging doesn't have to be an overwhelming manual process. By combining GitLab's integrated security scanners with GitLab Duo Security Analyst Agent, development teams can quickly identify and prioritize the vulnerabilities that truly matter.

The agent's ability to understand context, assess real risk, and provide actionable guidance transforms security scanning from a compliance checkbox into a practical, efficient part of your development workflow. Instead of drowning in hundreds of vulnerability reports, you can focus your energy on addressing the issues that actually threaten your application's security.

Start by enabling security scanners in your GitLab pipelines, then leverage Security Analyst Agent to make intelligent, informed decisions about vulnerability remediation. Your future self — and your security team — will thank you.

Ready to get started? Check out the GitLab Duo Agent Platform documentation and security scanning documentation to begin transforming your vulnerability management workflow today.


Secure Plugins, Agents, and Graph Connectors in Copilot


As organizations begin extending Microsoft 365 Copilot through plugins, agents, and Graph connectors, the responsibility for securing AI interactions expands beyond basic tenant governance. These components enable Copilot to access external systems, perform actions on behalf of users, and integrate business data that exists outside Microsoft 365. With that flexibility comes elevated risk. Administrators must ensure these extensions operate within defined security boundaries, comply with and support auditing requirements, and never create unintended pathways for sensitive data to flow into the wrong systems or identities.

Microsoft Purview plays a central role in governing AI behavior, including Copilot’s interaction with external systems. Purview’s AI Hub adds a formal framework for identifying sensitive content, evaluating risk, enforcing safety checks, monitoring AI output, and validating whether Copilot-initiated actions comply with organizational policy. Securing plugins, agents, and connectors is not simply good practice; it is part of the broader AI safety architecture that Microsoft now expects organizations to adopt.

Understanding Copilot Extensibility Components

Before you can secure anything, you need complete clarity on what Copilot extensibility actually includes.

Graph Connectors

Graph connectors allow external data sources to be indexed into Microsoft Search and the Semantic Index. This makes the data discoverable to Copilot, meaning permissions, visibility, and indexing decisions directly impact AI behavior. Because connectors can introduce a large amount of previously siloed data into Copilot’s reach, they require strict scoping and careful review.

Agents

Agents are programmable constructs that can retrieve information, execute business logic, or call external APIs on behalf of a user. They extend Copilot from being a passive interpreter of content to an active participant in workflows. Agents must be treated as high-trust components because they can introduce new capabilities that Copilot would never have by default.

Plugins

Plugins extend agent behavior with additional actions or frameworks. They can provide two-way integration with external systems, enabling Copilot to query, create, or update data. Plugins sit at the intersection of identity, data access, and automation, which means improper configuration can lead to privilege escalation or unintended data movement.

Together, these components form the operational surface where Copilot interacts with information that goes beyond Microsoft 365. Each one must be governed tightly.

Risks Introduced by Extending Copilot

Microsoft’s AI security model makes one thing very clear: AI is not the risk; improper configuration is. Copilot adheres to all security controls, but plugins, connectors, and agents can expand that boundary if they are not adequately governed.

Key risks include:

  • Broader data visibility through indexing external systems that were never intended to be searched.
  • Privilege amplification if agents or plugins have more permissions than the users invoking them.
  • Uncontrolled data movement if external APIs receive sensitive content without classification or protection.
  • Inadequate auditing if plugin or agent activity is not captured in Purview’s recording pipeline.
  • Shadow AI pathways if connectors surface content no one realized was accessible.

Microsoft Purview’s AI safety features, including output monitoring, risk analytics, and data classification enforcement, are explicitly designed to mitigate these risks. But the foundational responsibility remains in how organizations configure extensibility itself.

Securing Graph Connectors as a Copilot Data Boundary

Graph connectors widen the Semantic Index by introducing external datasets. If these datasets are not controlled, Copilot may surface or summarize information without the proper governance frameworks applied.

Essential Controls for Graph Connectors

Enforce Least Privilege
Connector identities must have only the permissions necessary to access and index the external content. Over-permissive service accounts immediately become AI exposure risks.

Validate Access Control Mappings
Connector ACLs directly determine who can search and, therefore, who Copilot can assist. Permissions must match the exact organizational structure of the external system.

Control What Gets Indexed
Connector indexing scopes should be configured to exclude content that is:

  • Sensitive
  • Unclassified
  • Not governed under Purview policies
  • Not intended for organizational search visibility

User-Level Permission Trimming Still Applies
Even with external data, Copilot respects the Microsoft Graph permission model. If a user cannot see a connector’s indexed item, Copilot cannot use it.

Monitor Connector Health
Connector logs, ingestion status, failed crawls, permission errors, and item counts should be part of your operational security checks.

Graph connectors are robust, but they fundamentally alter what Copilot can discover. Securing them is mandatory.
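
To make the access-control point above concrete, the sketch below shows the rough shape of a request a custom connector might send when it pushes an item into the index through the Microsoft Graph external items API: the ACL travels with the item. The connection name, item ID, group ID, and properties are placeholders, and a real item's properties would need to match the schema registered for your connection.

# Sketch: pushing an external item with an explicit ACL through the Microsoft Graph
# connectors API. All identifiers below are placeholders.
import requests

token = "<access-token-with-ExternalItem.ReadWrite.All>"  # placeholder token
url = (
    "https://graph.microsoft.com/v1.0/external/connections/"
    "hrsystemconnector/items/ticket-1001"
)

item = {
    "acl": [
        # Only members of this (placeholder) Entra ID group can see the item
        # in Microsoft Search, and therefore through Copilot.
        {
            "type": "group",
            "value": "00000000-0000-0000-0000-000000000000",
            "accessType": "grant",
        }
    ],
    "properties": {"title": "Example HR record"},
    "content": {"value": "Indexed text of the external record.", "type": "text"},
}

response = requests.put(
    url, json=item, headers={"Authorization": f"Bearer {token}"}, timeout=30
)
response.raise_for_status()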

Securing Agents and Custom Actions

Agents introduce programmable logic into Copilot, allowing the AI to perform tasks that extend far beyond summarizing content or retrieving information. Because agents can call APIs, trigger workflows, or execute logic on behalf of a user, they must be governed with the same discipline applied to high-trust applications. Securing them begins with strict control over the actions they are allowed to perform. An agent should operate within a narrow and well-defined permission set, avoiding broad or unnecessary access to external systems. Over-permissioning is one of the fastest ways to expand Copilot’s reach unintentionally, so organizations must regularly review what each agent can do and confirm that its scope aligns with business requirements.

Administrative control is equally important. The ability to create, register, or update agents should only be granted to privileged identities protected by strong authentication, device compliance requirements, and Conditional Access controls. Because agents essentially introduce new capabilities into your AI ecosystem, their configuration must never fall into the hands of standard users or unmonitored service accounts. Purview’s auditing and monitoring tools become essential here, as every agent execution, external call, or data retrieval should be captured in logs. This provides traceability if an agent behaves unexpectedly or retrieves data it should not.

The organization should also validate output handling to ensure agent responses comply with corporate policy.

Microsoft Purview’s AI Safety controls help evaluate whether agent-generated content contains inaccurate details, sensitive data, or content that violates regulatory obligations.

By testing how an agent behaves in different scenarios, including edge cases, administrators gain confidence that it will operate safely once deployed. Securing agents ultimately means controlling their permissions, tightening their administrative boundaries, monitoring their actions, and validating their output through the broader governance framework Copilot relies on.

Plugin Governance and Data Flow Control

Plugins extend agent functionality and allow Copilot to interact with external frameworks or automation systems. Because plugins can initiate both inbound and outbound data flows, they require strong governance to prevent unintended information disclosure. Organizations must begin by establishing a controlled deployment model that allows only approved plugins to be used. This pre-approval process ensures each plugin undergoes a security and compliance review before it becomes part of the AI ecosystem. Once deployed, plugin usage should be restricted to specific roles or user groups rather than made universally available. Role-based assignment reduces exposure and keeps sensitive integrations out of reach of users who have no operational need for them.

Data security must be considered at every stage of plugin interaction.

Purview’s classification and labeling capabilities should be applied to data flowing into or out of plugin interactions, ensuring sensitive information is not inadvertently processed by external systems or returned in Copilot responses without proper protections. Equally important is output monitoring, which helps detect whether a plugin produces content that may contain regulated data, internal secrets, or unsafe operational instructions. This is especially relevant when plugins perform write operations or integrate with systems that store sensitive business information.

Continuous monitoring of plugin behavior is necessary to maintain governance integrity. Observing usage patterns, reviewing logs, and identifying anomalies help detect situations in which a plugin may be invoked unexpectedly or outside its intended workflows. Conditional Access also plays a role by blocking plugin use from high-risk sessions or unmanaged devices. By combining operational oversight, strict permissioning, strong data-handling controls, and session-level governance, organizations maintain predictable and controlled data flow across all plugin interactions.

Identity, Consent, and Administrative Controls

Any extension that interfaces with Copilot inevitably interacts with identity and permissions.

Identity Hardening Requirements:

  • Admin Consent for High-Risk Permissions
    Extensibility components should request only the permissions required for their function.
  • Strict App Consent Policies
    Users should not be able to self-consent to plugins or agent integrations.
  • Periodic Review of Connected Apps
    Administrators must regularly evaluate all registered connectors, plugins, and agents for permission drift or outdated configurations.
  • Conditional Access for API and Graph Usage
    Use CA policies to restrict app and service principal access to compliant environments.

This ensures plugins, agents, and connectors never exceed their intended authority.

Validation, Testing, and Continuous Monitoring

Securing Copilot extensibility is not a one-time configuration exercise. Plugins, agents, and connectors introduce new operational paths through which data can move, actions can be executed, and systems can be influenced.

Because these capabilities extend beyond native Microsoft 365 workloads, organizations must adopt a rigorous validation and continuous monitoring approach.

Proper testing ensures that these components behave as expected under real-world conditions, respect the boundaries defined by your governance model, and do not accidentally expose sensitive information.

Validation begins with ensuring that data boundaries are enforced correctly. Administrators should test each connector, plugin, and agent under controlled conditions to confirm that they only access the datasets they were explicitly designed to interact with. For connectors, this means verifying that indexed content from external systems appears in Microsoft Search only for users with the proper access rights. For agents, administrators must confirm that actions remain restricted to their intended scope and that agents do not retrieve or generate information beyond the permissions assigned. Plugin validation should go further by evaluating not only the data retrieved but also how the plugin handles and returns output to Copilot.

Once initial validation is complete, organizations should conduct scenario-based testing to understand how extensibility components behave under different user profiles and security contexts. This includes testing what happens when:

  • A user with minimal permissions interacts with a connector or agent
  • A privileged user attempts to invoke high-risk plugin actions
  • Conditional Access policies restrict specific actions or sessions
  • Sensitive content is input into or returned from AI-driven processes

These tests can reveal weaknesses or unintended pathways that are not visible during normal configuration review. They also validate that the organization’s Purview controls, such as DLP, sensitivity labels, and safe output evaluation, are functioning correctly across the entire extensibility surface.

Continuous monitoring is essential because extensibility components evolve. Connector indexing patterns can change as external systems grow. API endpoints used by plugins may update. Agents may require new logic as workflows evolve. Without ongoing oversight, these changes can introduce blind spots in your governance model. Administrators should use Purview’s AI activity insights, audit logging, and classification-based monitoring to observe how Copilot interacts with extensibility components over time. These tools help detect anomalies such as unexpected data access, rapid increases in connector ingestion volume, or plugin outputs containing sensitive information.

Monitoring should also involve periodic review of app permissions and service principal access. Since connectors and agents rely heavily on identity and privilege mapping, permission creep can occur as administrators adjust roles, onboarding processes evolve, or APIs require expanded capabilities. Scheduled security reviews help ensure permissions remain aligned with least-privilege best practices and that no extensibility component gains rights beyond what is operationally necessary.

Organizations should incorporate red-team-style exercises to test the resilience of their extensibility governance against misuse or misconfiguration. These exercises can include simulating malicious prompts, attempting to exploit plugin functionality, or deliberately introducing misaligned permissions to validate whether Purview controls and DLP rules block unsafe operations. This type of testing verifies whether AI behavior remains inside the defined safety and compliance boundary even when confronted with adversarial or unexpected scenarios.

By combining structured validation, persona-based testing, continuous telemetry monitoring, and periodic security reviews, organizations create a resilient governance loop around Copilot extensibility. This ensures that connectors, agents, and plugins remain aligned to policy, behave consistently under stress, and maintain compliance with corporate and regulatory requirements as the environment grows and evolves.

Thoughts

Securing Copilot extensibility is not just about protecting external systems. It is about preserving the entire AI ecosystem inside your organization. Plugins, agents, and connectors extend the reach of AI, but they also extend your responsibility to enforce least privilege, monitor data flows, validate behavior, and apply continuous governance.

Microsoft Purview now provides the central control plane for AI safety, including content classification, output monitoring, risk evaluation, and compliance controls. By aligning your extensibility governance with Purview and Microsoft 365 security, you ensure that Copilot operates inside a secure, entirely governed, and intentionally defined boundary.

When these controls are correctly implemented, Copilot becomes a safe, predictable, and transformative capability. When ignored, extensions become the fastest path to unintended data exposure.
