Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Malicious AI Assistant Extensions Harvest LLM Chat Histories


Microsoft Defender has investigated malicious Chromium-based browser extensions that impersonate legitimate AI assistant tools to collect LLM chat histories and browsing data. Reporting indicates these extensions have reached approximately 900,000 installs. Microsoft Defender telemetry also confirms activity across more than 20,000 enterprise tenants, where users frequently interact with AI tools using sensitive inputs.

The extensions collected full URLs and AI chat content from platforms such as ChatGPT and DeepSeek, exposing organizations to potential leakage of proprietary code, internal workflows, strategic discussions, and other confidential data.

At scale, this activity turns a seemingly trusted productivity extension into a persistent data collection mechanism embedded in everyday enterprise browser usage, highlighting the growing risk browser extensions pose in corporate environments.

Attack chain overview

Attack chain illustrating how a malicious AI‑themed Chromium extension progresses from marketplace distribution to persistent collection and exfiltration of LLM chat content and browsing telemetry.

Reconnaissance

The threat actor targeted the rapidly growing ecosystem of AI-assistant browser extensions and the user behaviors surrounding them. Many knowledge workers install sidebar tools to interact with models such as ChatGPT and DeepSeek, often granting broad page-level permissions for convenience. These extensions also operate across Chromium-based browsers such as Google Chrome and Microsoft Edge using a largely uniform architecture.

We also observed cases where agentic browsers automatically downloaded these extensions without requiring explicit user approval, reflecting how convincing the names and descriptions appeared. Together, these factors created a large potential audience that frequently handles sensitive information in the browser and a platform where look-alike extensions could blend in with minimal friction.

The actors also reviewed legitimate extensions, such as AITOPIA, to emulate familiar branding, permission prompts, and interaction patterns. This allowed the malicious extensions to align with user expectations while enabling large-scale telemetry collection from browser activity.

Weaponization

The threat actor developed a Chromium-based browser extension compatible with both Google Chrome and Microsoft Edge. The extension was designed to passively observe user activity, collecting visited URLs and segments of AI-assisted chat content generated during normal browser use.

Collected data was staged locally and prepared for periodic transmission, enabling continuous visibility into user browsing behavior and interactions with AI platforms.

To reduce suspicion, the extension presented its activity as benign analytics commonly associated with productivity tools. From a defender perspective, this stage introduced a browser-resident data collection capability focused on URLs and AI chat content, along with scheduled outbound communication to external infrastructure.

Delivery

The malicious extension was distributed through the Chrome Web Store, using AI-themed branding and descriptions to resemble legitimate productivity extensions. Because Microsoft Edge supports Chrome Web Store extensions, a single listing enabled distribution across both browsers without requiring additional infrastructure.

User familiarity with installing AI sidebar tools, combined with permissive enterprise extension policies, allowed the extension to reach a broad audience. This trusted distribution channel enabled the extension to reach both personal and corporate environments through routine browser extension installation.

Exploitation

Following installation, the extension leveraged the Chromium extension permission model to begin collecting data without further user interaction. The granted permissions provided visibility into a wide range of browsing activity, including internal sites and AI chat interfaces.

A misleading consent mechanism further enabled this behavior. Although users could initially disable data collection, subsequent updates automatically re-enabled telemetry, restoring data access without clear user awareness.

By relying on user trust, ambiguous consent language, and default extension behaviors, the threat actor maintained continuous access to browser-resident data streams.

Installation

Persistence was achieved through normal browser extension behavior rather than traditional malware techniques. Once installed, the extension automatically reloaded whenever the browser started, requiring no elevated privileges or additional user actions.

Local extension storage maintained session identifiers and queued telemetry, allowing the extension to resume collection after browser restarts or service worker reloads. This approach allowed the data collection functionality to continue across browser sessions while appearing similar to a typical installed browser extension.

Command and Control (C2)

At regular intervals, the extension transmitted collected data to threat actor–controlled infrastructure using HTTPS POST requests to domains including deepaichats[.]com and chatsaigpt[.]com. By relying on common web protocols and periodic upload activity, the outbound traffic appeared similar to routine browser communications.

After transmission, local buffers were cleared, reducing on-disk artifacts and limiting local forensic visibility. This lightweight command-and-control model allowed the extension to regularly transmit browsing telemetry and AI chat content from both Chrome and Microsoft Edge environments.

Actions on Objective

The threat actor’s objective appeared to be ongoing data collection and visibility into user activity. Through the installed extension, the threat actor collected browsing telemetry and AI-related content, including prompts and responses from platforms such as ChatGPT and DeepSeek. Telemetry was enabled by default after updates, even if previously declined, meaning users could unknowingly continue contributing data without explicit consent.

This data provided insight into internal applications, workflows, and potentially sensitive information that users routinely shared with AI tools. By maintaining periodic exfiltration tied to persistent session identifiers, the threat actor could maintain an evolving view of user activity, effectively turning the extension into a long-term data collection capability embedded in normal browser usage.

Technical Analysis

The extension runs a background script that logs nearly all visited URLs and excerpts of AI chat messages. The data is stored locally in Base64-encoded JSON and periodically uploaded to remote endpoints, including deepaichats[.]com.

Collected data includes full URLs (including internal sites), previous and next navigation context, chat snippets, model names, and a persistent UUID. Telemetry is enabled by default after updates, even if previously declined. The code includes minimal filtering, weak consent handling, and limited data protection controls.

Overall, the extension functions as a broad telemetry collection mechanism that introduces privacy and compliance risks in enterprise environments.

The following screenshots show extensions observed during the investigation:

Figure 1. Details page for the browser extension fnmhidmjnmklgjpcoonkmkhjpjechg, as displayed in the browser extension management interface.
Figure 2. Details page for the browser extension inhcgfpbfdjbjogdfjbclgolkmhnooop, as displayed in the browser extension management interface.

Mitigation and protection guidance

  1. Monitor network POST traffic to the extension’s known endpoints (*.chatsaigpt.com, *.deepaichats.com, *.chataigpt.pro, *.chatgptsidebar.pro) and assess impacted devices to understand the scope of data exfiltrated.
  2. Inventory, audit, and apply restrictions for browser extensions installed in your organization, using Browser extensions assessment in Microsoft Defender Vulnerability Management.
  3. Enable Microsoft Defender SmartScreen and Network Protection.
  4. Leverage Microsoft Purview data security to implement AI data security and compliance controls around sensitive data being used in browser-based AI chat applications.
  5. Create, monitor, and enforce organizational policies and procedures on AI use within your organization.
  6. Finally, educate users to avoid side-loaded or unverified productivity extensions, and encourage them to review their installed extensions in Chrome or Edge and remove any they don’t recognize.

Microsoft Defender XDR detections 

Microsoft Defender customers can refer to the list of applicable detections below. Microsoft Defender XDR coordinates detection, prevention, investigation, and response across endpoints, identities, SaaS apps, email & collaboration tools to provide integrated protection against attacks like the threat discussed in this blog.

Customers with provisioned access can also use Microsoft Security Copilot in Microsoft Defender to investigate and respond to incidents, hunt for threats, and protect their organization with relevant threat intelligence.

| Tactic | Observed activity | Microsoft Defender coverage |
| --- | --- | --- |
| Execution, Persistence | Malicious extensions are installed and loaded | Microsoft Defender for Endpoint: Attempt to add or modify suspicious browser extension; Suspicious browser extension load; Trojan:JS/ChatGPTStealer.GVA!MTB, Trojan:JS/Rossetaph |
| Exfiltration | User ChatGPT and DeepSeek conversation histories are exfiltrated | Microsoft Defender for Endpoint: Attack C2s are blocked by Network Protection |

Hunting queries   

Microsoft Defender XDR

Browser launched with malicious extension IDs

Purpose: High-confidence signal that a known-bad extension is present or side-loaded.

DeviceProcessEvents
| where FileName in~ ("chrome.exe","msedge.exe")
| where ProcessCommandLine has_any ("fnmihdojmnkclgjpcoonokmkhjpjechg", "inhcgfpbfdjbjogdfjbclgolkmhnooop") // "Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek" and "AI Sidebar with Deepseek, ChatGPT, Claude and more"
| project Timestamp, DeviceName, Account=InitiatingProcessAccountName, FileName, ProcessCommandLine, InitiatingProcessParentFileName
| order by Timestamp desc

Outbound Connections to the Attacker’s Infrastructure

Purpose: Direct evidence of browser traffic to the campaign’s domains.

DeviceNetworkEvents
| where RemoteUrl has_any ("chatsaigpt.com", "deepaichats.com", "chataigpt.pro", "chatgptsidebar.pro")
| project Timestamp, DeviceName, InitiatingProcessFileName, InitiatingProcessCommandLine, RemoteUrl, RemoteIP, RemotePort, Protocol
| order by Timestamp desc

Installations of Malicious IDs

Purpose: Enumerate all devices where either of the two malicious IDs is installed.

DeviceTvmBrowserExtensions
| where ExtensionId in ("fnmihdojmnkclgjpcoonokmkhjpjechg", "inhcgfpbfdjbjogdfjbclgolkmhnooop")
| summarize Devices=dcount(DeviceName) by BrowserName
| order by Devices desc

Detecting On-Disk Artifacts of Malicious Extensions

Purpose: Identify any systems where the malicious Chrome or Edge Extensions are present by detecting file activity inside their known extension directories.

DeviceFileEvents
| where FolderPath has_any (@"\AppData\Local\Google\Chrome\User Data\Default\Extensions\fnmihdojmnkclgjpcoonokmkhjpjechg", @"\AppData\Local\Google\Chrome\User Data\Default\Extensions\inhcgfpbfdjbjogdfjbclgolkmhnooop", @"\AppData\Local\Microsoft\Edge\User Data\Default\Extensions\fnmihdojmnkclgjpcoonokmkhjpjechg", @"\AppData\Local\Microsoft\Edge\User Data\Default\Extensions\inhcgfpbfdjbjogdfjbclgolkmhnooop")
| where ActionType in~ ("FileCreated","FileModified","FileRenamed")
| project Timestamp, DeviceName, InitiatingProcessFileName, ActionType, FolderPath, FileName, SHA256, AccountName
| order by Timestamp desc

References

This research is provided by Microsoft Defender Security Research with contributions from Geoff McDonald and Dana Baril.

Learn more 

Review our documentation to learn more about our real-time protection capabilities and see how to enable them within your organization.   

The post Malicious AI Assistant Extensions Harvest LLM Chat Histories appeared first on Microsoft Security Blog.


20 developers share their unfiltered thoughts on AI


When it comes to AI, few developers are indifferent – some rely on it heavily, while others remain skeptical:

I was probably the most skeptical person about AI, but when I started using it, I realized it’s just a tool.

Some even say, “It’s still not good enough for me. I’ll probably use it in a couple of years once it matures.”

But for many, AI is now essential: it speeds up tasks, supports research, and even offers crash courses in new technologies.

Love it or hate it, it’s here to stay.

AI opens a world of endless possibilities

Developers see AI’s potential in all sorts of ways. Some use it like a supercharged search engine to dig up code documentation. Others point to breakthroughs in medicine, like AI-assisted cancer screening or discovering new cures.

Before, you could only imagine trying something – now I can spend 20–30 minutes and see what’s possible.

DeepMind’s AlphaFold, which finally solved the decades-old protein folding problem, was a standout. “For me, that was the first time I thought, Oh man, this is really something.”

… and it isn’t here to take your job

Worried about AI taking over your job? Most developers aren’t. Sure, it can handle repetitive tasks, but it can’t take responsibility. “AI will never ask me why, only humans can challenge, understand, and question,” said a senior developer.

The general agreement: engineers, mathematicians, and doctors aren’t going anywhere. AI is a tool to help us, not a replacement for us. But fears persist outside the office too: many worry that society is becoming too reliant on AI, leading to a loss of critical thinking.

Curiosity is the best defense against fear.

Still, there’s hope. Curiosity, participants agreed, is the antidote to fear. Engaging with AI directly (experimenting, learning, and testing its limits) reduces anxiety and keeps humans in control.

Curious what your colleagues think about AI? Watch the video, share your thoughts, and maybe even agree that the best use case could be… the Will Smith eating spaghetti videos.

The post 20 developers share their unfiltered thoughts on AI appeared first on ShiftMag.


DeveloperWeek 2026: Making AI tools that are actually good

From interoperability to knowledge architecture to creating AI tools people can actually use, here’s a recap of what we learned from DeveloperWeek 2026.

ReSharper for Visual Studio Code, Cursor, and Compatible Editors Is Out


ReSharper has been a trusted productivity tool for C# developers in Visual Studio for over 20 years. Today, we’re taking the next step and officially releasing the ReSharper extension for Visual Studio Code and compatible editors.

After a year in Public Preview, ReSharper has been refined to bring its C# code analysis and productivity features to developers who prefer VS Code and other editors – including AI-first coding environments like Cursor and Google Antigravity.

Whether you’re coming from ReSharper in Microsoft Visual Studio, JetBrains Rider, or you’re a VS Code C# developer, the goal is the same – to help you write, navigate, and maintain C# code with confidence and ease.

Why ReSharper for VS Code and compatible editors

ReSharper brings JetBrains’ decades-long C# expertise into lightweight, flexible editor workflows to elevate your code quality.

What it’s designed for:

  • Professional-grade C# code quality
    Advanced inspections, quick-fixes, refactoring, and formatting for C#, Razor, Blazor, and XAML.
  • Refining AI-generated code
    ReSharper helps review and refine AI-assisted code to make sure it meets professional standards before it ships.
  • Wide editor compatibility
    ReSharper works seamlessly across all compatible editors, meeting your needs wherever you code.
  • Proven JetBrains expertise
    Built on over two decades of experience developing .NET tooling used by teams worldwide.
  • Free for non-commercial use
    Available at no cost for learning, hobby projects, and non-commercial development.

Availability

ReSharper is available from:

How to install ReSharper

You can install the extension via the Extensions view:

  1. Open Visual Studio Code or another compatible editor.
  2. Go to the Extensions view.
  3. Search for ReSharper.
  4. Click Install.

You can also install the extension via the Command Palette:

  1. Open Visual Studio Code or another compatible editor.
  2. Open the Command Palette (Ctrl+P / Cmd+P).
  3. Paste: ext install JetBrains.resharper-code
  4. After pasting the command, press Enter, and ReSharper will be installed automatically.

Key features at a glance

ReSharper focuses on the core workflows C# developers use daily.

  • Insightful code analysis
    Real-time inspections and quick-fixes help keep your code readable, maintainable, and consistent across projects.
  • Smart coding assistance
    Context-aware code completion, auto-imports, live templates, and inline documentation go way beyond the standard capabilities of a code editor.
  • Solution Explorer
    A central hub for managing files, folders, NuGet packages, source generators, and projects across a solution – just like the one in JetBrains Rider or ReSharper in Microsoft Visual Studio.
  • Reliable unit testing
    Run and manage tests for NUnit, xUnit.net, and MSTest directly in VS Code or a compatible editor, with easy navigation to failing tests.
  • Refactorings you can trust
    Rename works across your solution while safely handling conflicts and references.
  • Fast navigation, including to external and decompiled sources
    Navigate to symbols, usages, files, and types across your solution. When source code isn’t available, ReSharper can decompile assemblies and take you directly to the relevant declarations.

For more information on ReSharper’s functionality, please see our Documentation.

What’s next

The next major area of focus for ReSharper for VS Code is debugging support. Based on feedback collected during the Preview, we’re actively working on support for launching debugging sessions and attaching to processes in .NET and .NET Framework applications.

Beyond debugging, our roadmap includes continued quality improvements and expanding the set of available refactorings.

We’ll be listening closely to your feedback as we define the next priorities. If there’s something that would make ReSharper indispensable in your workflow, we’d love to hear from you.

Licensing

ReSharper for VS Code and compatible editors is available under ReSharper, dotUltimate, and All Products Pack licenses. You can review the pricing options here.

The extension will continue to be available for free for non-commercial use, including learning and self-education, open-source contributions without earning commercial benefits, any form of content creation, and hobby development.

Get started

  1. Install ReSharper.
  2. Open a workspace/folder in VS Code, Cursor, or another compatible editor.
  3. ReSharper will automatically detect any .sln/.slnx/.slnf (solution) files or a .csproj file in the folder:
  • If only one solution is found, it will open automatically.
  • If multiple solutions are found, click the Open Solution button in a pop-up menu to choose which one to open.

If you encounter any issues, have feedback to share, or additional features to request, you can do so by creating a ticket here.


A complete guide to GitLab Container Scanning


Container vulnerabilities don't wait for your next deployment. They can emerge at any point, including when you build an image or while containers run in production. GitLab addresses this reality with multiple container scanning approaches, each designed for different stages of your container lifecycle.

In this guide, we'll explore the different types of container scanning GitLab offers, how to enable each one, and common configurations to get you started.

Why container scanning matters

Security vulnerabilities in container images create risk throughout your application lifecycle. Base images, OS packages, and application dependencies can all harbor vulnerabilities that attackers actively exploit. Container scanning detects these risks early, before they reach production, and provides remediation paths when available.

Container scanning is a critical component of Software Composition Analysis (SCA), helping you understand and secure the external dependencies your containerized applications rely on.

The five types of GitLab Container Scanning

GitLab offers five distinct container scanning approaches, each serving a specific purpose in your security strategy.

1. Pipeline-based Container Scanning

  • What it does: Scans container images during your CI/CD pipeline execution, catching vulnerabilities before deployment
  • Best for: Shift-left security, blocking vulnerable images from reaching production
  • Tier availability: Free, Premium, and Ultimate (with enhanced features in Ultimate)
  • Documentation

GitLab uses the Trivy security scanner to analyze container images for known vulnerabilities. When your pipeline runs, the scanner examines your images and generates a detailed report.

How to enable pipeline-based Container Scanning

Option A: Preconfigured merge request

  • Navigate to Secure > Security configuration in your project.
  • Find the "Container Scanning" row.
  • Select Configure with a merge request.
  • This automatically creates a merge request with the necessary configuration.

Option B: Manual configuration

  • Add the following to your .gitlab-ci.yml:
include:
  - template: Jobs/Container-Scanning.gitlab-ci.yml

Common configurations

Scan a specific image:

To scan a specific image, overwrite the CS_IMAGE variable in the container_scanning job.

include:
  - template: Jobs/Container-Scanning.gitlab-ci.yml

container_scanning:
  variables:
    CS_IMAGE: myregistry.com/myapp:latest

Filter by severity threshold:

To surface only vulnerabilities that meet a certain severity threshold, overwrite the CS_SEVERITY_THRESHOLD variable in the container_scanning job. In the example below, only vulnerabilities with a severity of High or greater will be displayed.

include:
  - template: Jobs/Container-Scanning.gitlab-ci.yml

container_scanning:
  variables:
    CS_SEVERITY_THRESHOLD: "HIGH"

Viewing vulnerabilities in a merge request

Viewing Container Scanning vulnerabilities directly within merge requests makes security reviews seamless and efficient. Once Container Scanning is configured in your CI/CD pipeline, GitLab automatically displays detected vulnerabilities in the merge request's Security widget.

Container Scanning vulnerabilities displayed in MR

  • Navigate to any merge request and scroll to the "Security Scanning" section to see a summary of newly introduced and existing vulnerabilities found in your container images.
  • Click on a Vulnerability to access detailed information about the finding, including severity level, affected packages, and available remediation guidance.

GitLab Security View details in MR

This visibility enables developers and security teams to catch and address container vulnerabilities before they reach production, making security an integral part of your code review process rather than a separate gate.

Viewing vulnerabilities in Vulnerability Report

Beyond merge request reviews, GitLab provides a centralized Vulnerability Report that gives security teams comprehensive visibility across all Container Scanning findings in your project.

Vulnerability Report sorted by Container Scanning

  • Access this report by navigating to Security & Compliance > Vulnerability Report in your project sidebar.
  • Here you'll find an aggregated view of all container vulnerabilities detected across your branches, with powerful filtering options to sort by severity, status, scanner type, or specific container images.
  • You can click on a vulnerability to access its Vulnerability page.

Vulnerability page - 1st view

Vulnerability page - 2nd view

Vulnerability page - 3rd view

Vulnerability Details shows exactly which container images and layers are impacted, making it easier to trace the vulnerability back to its source. You can assign vulnerabilities to team members, change their status (detected, confirmed, resolved, dismissed), add comments for collaboration, and link related issues for tracking remediation work.

This workflow transforms vulnerability management from a spreadsheet exercise into an integrated part of your development process, ensuring that container security findings are tracked, prioritized, and resolved systematically.

View the Dependency List

GitLab's Dependency List provides a comprehensive software bill of materials (SBOM) that catalogs every component within your container images, giving you complete transparency into your software supply chain.

  • Navigate to Security & Compliance > Dependency List to access an inventory of all packages, libraries, and dependencies detected by Container Scanning across your project.
  • This view is invaluable for understanding what's actually running inside your containers, from base OS packages to application-level dependencies.

GitLab Dependency List

You can filter the list by package manager, license type, or vulnerability status to quickly identify which components pose security risks or compliance concerns. Each dependency entry shows associated vulnerabilities, allowing you to understand security issues in the context of your actual software components rather than as isolated findings.

2. Container Scanning for Registry

  • What it does: Automatically scans images pushed to your GitLab Container Registry with the latest tag
  • Best for: Continuous monitoring of registry images without manual pipeline triggers
  • Tier availability: Ultimate only
  • Documentation

When you push a container image tagged latest, GitLab's security policy bot automatically triggers a scan against the default branch. Unlike pipeline-based scanning, this approach works with Continuous Vulnerability Scanning to monitor for newly published advisories.

How to enable Container Scanning for Registry

  1. Navigate to Secure > Security configuration.
  2. Scroll to the Container Scanning For Registry section.
  3. Toggle the feature on.

Container Scanning for Registry

Prerequisites

  • Maintainer role or higher in the project
  • Project must not be empty (requires at least one commit on the default branch)
  • Container Registry notifications must be configured
  • Package Metadata Database must be configured (enabled by default on GitLab.com)

Vulnerabilities appear under the Container Registry vulnerabilities tab in your Vulnerability Report.

3. Multi-Container Scanning

  • What it does: Scans multiple container images in parallel within a single pipeline
  • Best for: Microservices architectures and projects with multiple container images
  • Tier availability: Free, Premium, and Ultimate (currently in Beta)
  • Documentation

Multi-Container Scanning uses dynamic child pipelines to run scans concurrently, significantly reducing overall pipeline execution time when you need to scan multiple images.

How to enable Multi-Container scanning

  1. Create a .gitlab-multi-image.yml file in your repository root:
scanTargets:
  - name: alpine
    tag: "3.19"
  - name: python
    tag: "3.9-slim"
  - name: nginx
    tag: "1.25"
  2. Include the template in your .gitlab-ci.yml:
include:
  - template: Jobs/Multi-Container-Scanning.latest.gitlab-ci.yml

Advanced configuration

Scan images from private registries:

auths:
  registry.gitlab.com:
    username: ${CI_REGISTRY_USER}
    password: ${CI_REGISTRY_PASSWORD}

scanTargets:
  - name: registry.gitlab.com/private/image
    tag: latest

Include license information:

includeLicenses: true

scanTargets:
  - name: postgres
    tag: "15-alpine"

4. Continuous Vulnerability Scanning

  • What it does: Automatically creates vulnerabilities when new security advisories are published, no pipeline required
  • Best for: Proactive security monitoring between deployments
  • Tier availability: Ultimate only
  • Documentation

Traditional scanning only catches vulnerabilities at scan time. But what happens when a new CVE is published tomorrow for a package you scanned yesterday? Continuous Vulnerability Scanning solves this by monitoring the GitLab Advisory Database and automatically creating vulnerability records when new advisories affect your components.

How it works

  1. Your Container Scanning or Dependency Scanning job generates a CycloneDX SBOM.
  2. GitLab registers your project's components from this SBOM.
  3. When new advisories are published, GitLab checks if your components are affected.
  4. Vulnerabilities are automatically created in your vulnerability report.

Key considerations

  • Scans run via background jobs (Sidekiq), not CI pipelines.
  • Only advisories published within the last 14 days are considered for new component detection.
  • Vulnerabilities use "GitLab SBoM Vulnerability Scanner" as the scanner name.
  • To mark vulnerabilities as resolved, you still need to run a pipeline-based scan.

5. Operational Container Scanning

  • What it does: Scans running containers in your Kubernetes cluster on a scheduled cadence
  • Best for: Post-deployment security monitoring and runtime vulnerability detection
  • Tier availability: Ultimate only
  • Documentation

Operational Container Scanning bridges the gap between build-time security and runtime security. Using the GitLab Agent for Kubernetes, it scans containers actually running in your clusters—catching vulnerabilities that emerge after deployment.

How to enable Operational Container Scanning

If you are using the GitLab Kubernetes Agent, you can add the following to your agent configuration file:

container_scanning:
  cadence: '0 0 * * *'  # Daily at midnight
  vulnerability_report:
    namespaces:
      include:
        - production
        - staging

You can also create a scan execution policy that enforces scanning on a schedule by the GitLab Kubernetes Agent.
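As a sketch, a scan execution policy for scheduled operational scanning might look like the following. This assumes the policy YAML format used by GitLab security policy projects; the agent name, namespaces, and cadence are placeholders you would adapt to your own cluster setup.

```yaml
# Hypothetical scan execution policy stored in a security policy project.
# "production-agent" and the namespace names are placeholders.
scan_execution_policy:
  - name: Scheduled operational container scanning
    description: Run operational container scanning daily via the Kubernetes agent
    enabled: true
    rules:
      - type: schedule
        cadence: '0 0 * * *'   # daily at midnight
        agents:
          production-agent:
            namespaces:
              - production
              - staging
    actions:
      - scan: container_scanning
```

Defining the schedule in a policy rather than in each agent configuration file keeps scanning enforcement centralized and auditable.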

Scan execution policy - Operational Container Scanning

Viewing results

  • Navigate to Operate > Kubernetes clusters.
  • Select the Agent tab, and choose your agent.
  • Then select the Security tab to view cluster vulnerabilities.
  • Results also appear under the Operational Vulnerabilities tab in the Vulnerability Report.

Enhancing posture with GitLab Security Policies

GitLab Security Policies enable you to enforce consistent security standards across your container workflows through automated, policy-driven controls. These policies shift security left by embedding requirements directly into your development pipeline, ensuring vulnerabilities are caught and addressed before code reaches production.

Scan execution and pipeline policies

Scan execution policies automate when and how Container Scanning runs across your projects. Define policies that trigger container scans on every merge request, schedule recurring scans of your main branch, and more. These policies ensure comprehensive coverage without relying on developers to manually configure scanning in each project's CI/CD pipeline.

You can specify which scanner versions to use and configure scanning parameters centrally, maintaining consistency across your organization while adapting to new container security threats.

Scan execution policy configuration

Pipeline execution policies provide flexible controls for injecting (or overriding) custom jobs into a pipeline based on your compliance needs.

Use these policies to automatically inject Container Scanning jobs into your pipeline, fail builds when container vulnerabilities exceed your risk tolerance, trigger additional security checks for specific branches or tags, or enforce compliance requirements for container images destined for production environments. Pipeline execution policies act as automated guardrails, ensuring your security standards are consistently applied across all container deployments without manual intervention.

Pipeline execution policy

Merge request approval policies

Merge request approval policies enforce security gates by requiring designated approvers to review and sign off on merge requests containing container vulnerabilities.

Configure policies that block merges when critical or high-severity vulnerabilities are detected, or require security team approval for any merge request introducing new container findings. These policies prevent vulnerable container images from advancing through your pipeline while maintaining development velocity for low-risk changes.

Merge request approval policy performing block in MR

Choosing the right approach

Scanning type | When to use | Key benefit
Pipeline-based | Every build | Shift-left security, blocks vulnerable builds
Registry scanning | Continuous monitoring | Catches new CVEs in stored images
Multi-container | Microservices | Parallel scanning, faster pipelines
Continuous vulnerability | Between deployments | Proactive advisory monitoring
Operational | Production monitoring | Runtime vulnerability detection

For comprehensive security, consider combining multiple approaches. Use pipeline-based scanning to catch issues during development, registry scanning for continuous monitoring of stored images, and operational scanning for production visibility.

Get started today

The fastest path to container security is enabling pipeline-based scanning:

  1. Navigate to your project's Secure > Security configuration.
  2. Click Configure with a merge request for Container Scanning.
  3. Merge the resulting merge request.
  4. Your next pipeline will include vulnerability scanning.

From there, layer in additional scanning types based on your security requirements and GitLab tier.
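Under the hood, the merge request generated in step 2 typically adds the Container Scanning CI template to your `.gitlab-ci.yml`. A minimal sketch follows — the template path and the `CS_IMAGE` override are taken from GitLab's documentation, but verify them for your version, and note that the registry image tag shown is an assumption about your build setup:

```yaml
# .gitlab-ci.yml — minimal Container Scanning setup (sketch)
include:
  - template: Jobs/Container-Scanning.gitlab-ci.yml

container_scanning:
  variables:
    # Image to scan (assumes your pipeline pushes an image tagged with the commit SHA)
    CS_IMAGE: "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
```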

Container security isn't a one-time activity; it's an ongoing process. With GitLab's comprehensive container scanning capabilities, you can detect vulnerabilities at every stage of your container lifecycle, from build to runtime.

For more information on how GitLab can help enhance your security posture, visit the GitLab Security and Governance Solutions Page.


Extend GitLab Duo Agent Platform: Connect any tool with MCP


Managing software development often means juggling multiple tools: tracking issues in Jira, writing code in your IDE, and collaborating through GitLab. Context switching between these platforms disrupts focus and slows down delivery.

With GitLab Duo Agent Platform's MCP support, you can now connect Jira or any tool that supports MCP directly to your AI-powered development environment. Query issues, update tickets, and sync your workflow — all through natural language, without ever leaving your IDE.

What you'll learn

In this tutorial, we'll walk you through:

  • Setting up the Jira/Atlassian OAuth application for secure authentication
  • Configuring GitLab Duo Agent Platform as an MCP client
  • Three practical use cases demonstrating real-world workflows

Prerequisites

Before getting started, ensure you have the following:

Requirement | Details
GitLab instance | GitLab 18.8+ with Duo Agent Platform enabled
Jira account | Jira Cloud instance with admin access to create OAuth applications
IDE | Visual Studio Code with GitLab Workflow extension installed
MCP support | MCP support enabled in GitLab

Understanding the architecture

GitLab Duo Agent Platform acts as an MCP client, connecting to the Atlassian MCP server to access your Jira project management data. Atlassian MCP server handles authentication, translates natural language requests into API calls, and returns structured data back to GitLab Duo Agent Platform — all while maintaining security and audit controls.

Part 1: Configure Jira OAuth application

To securely connect GitLab Duo Agent Platform to your Jira instance, you'll need to create an OAuth 2.0 application in the Atlassian Developer Console. This grants GitLab, through the Atlassian MCP server, authorized access to your Jira data.

Setup steps

If you prefer to configure manually, follow these steps:

  1. Navigate to the Atlassian Developer Console
  2. Create a new OAuth 2.0 app
    • Click Create > OAuth 2.0 integration
    • Enter a name (e.g., "gitlab-dap-mcp")
    • Accept the terms and click Create
  3. Configure permissions
    • Navigate to Permissions in the left sidebar.
    • Add Jira API and configure the following scopes:
      • read:jira-work — Read issues, projects, and boards
      • write:jira-work — Create and update issues
      • read:jira-user — Read user information
  4. Set up authorization
    • Go to Authorization in the left sidebar
    • Add a callback URL for your environment (https://gitlab.com/oauth/callback)
    • Save your changes
  5. Retrieve credentials
    • Navigate to Settings
    • Copy your Client ID and Client Secret
    • Store these securely — you'll need them for the MCP configuration

Interactive walkthrough: Jira OAuth setup

Click on the image below to get started.

Jira OAuth setup tour

Part 2: Configure GitLab Duo Agent Platform MCP client

With your OAuth credentials ready, you can now configure GitLab Duo Agent Platform to connect to the Atlassian MCP server.

Create your MCP configuration file

Create the MCP configuration file in your GitLab project at .gitlab/duo/mcp.json:

{
  "mcpServers": {
    "atlassian": {
      "type": "http",
      "url": "https://mcp.atlassian.com/v1/mcp",
      "auth": {
        "type": "oauth2",
        "clientId": "YOUR_CLIENT_ID",
        "clientSecret": "YOUR_CLIENT_SECRET",
        "authorizationUrl": "https://auth.atlassian.com/oauth/authorize",
        "tokenUrl": "https://auth.atlassian.com/oauth/token"
      },
      "approvedTools": true
    }
  }
}

Replace YOUR_CLIENT_ID and YOUR_CLIENT_SECRET with the credentials you generated in Part 1.
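Before committing the file, it can help to catch structural mistakes and leftover placeholders locally. The following helper is hypothetical — it is not part of GitLab or the MCP tooling — and simply checks an mcp.json document against the shape shown above:

```python
import json

# Hypothetical helper (not part of GitLab): sanity-check an mcp.json document
# before committing it to .gitlab/duo/mcp.json.
REQUIRED_AUTH_KEYS = {"type", "clientId", "clientSecret", "authorizationUrl", "tokenUrl"}


def validate_mcp_config(raw: str) -> list:
    """Return a list of problems found in an mcp.json document (empty if it looks OK)."""
    problems = []
    cfg = json.loads(raw)
    servers = cfg.get("mcpServers")
    if not isinstance(servers, dict) or not servers:
        return ["'mcpServers' must be a non-empty object"]
    for name, server in servers.items():
        # Every server entry needs a transport type and URL.
        for key in ("type", "url"):
            if key not in server:
                problems.append("%s: missing '%s'" % (name, key))
        # OAuth settings must be complete.
        auth = server.get("auth", {})
        missing = REQUIRED_AUTH_KEYS - set(auth)
        if missing:
            problems.append("%s: auth is missing %s" % (name, sorted(missing)))
        # Flag credentials that were never replaced.
        for field in ("clientId", "clientSecret"):
            if str(auth.get(field, "")).startswith("YOUR_"):
                problems.append("%s: '%s' still contains the placeholder value" % (name, field))
    return problems


if __name__ == "__main__":
    with open(".gitlab/duo/mcp.json") as f:
        for problem in validate_mcp_config(f.read()):
            print(problem)
```

Running it against the example above (with placeholders still in place) would report the two unreplaced credential fields.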

Enable MCP in GitLab

  1. Navigate to your Group Settings > GitLab Duo > Configuration
  2. Make sure “Allow external MCP tools” is checked

Verify the connection

Open your project in VS Code and ask in GitLab Duo Agent Platform chat:

What MCP tools do you have access to?

Then ask:

Test the MCP JIRA configuration in this project

At this point you'll be redirected from the IDE to the MCP Atlassian website to approve access:

Redirect to MCP Atlassian website



Approve access



Select your JIRA instance and approve



Success!



Verify with the MCP Dashboard

GitLab also provides a built-in MCP Dashboard directly in your IDE for verifying and monitoring your MCP servers.

In VS Code or VSCodium, open the Command Palette (Cmd+Shift+P on macOS, Ctrl+Shift+P on Windows/Linux) and search for "GitLab: Show MCP Dashboard". The dashboard opens in a new editor tab and gives you:

  • Connection status for each configured MCP server
  • Available tools exposed by the server (e.g., jira_get_issue, jira_create_issue)
  • Server logs so you can see exactly which tools are being called in real time

MCP servers dashboard and status



Server details and permissions



MCP Server logs



Interactive walkthrough: Testing MCP

Part 3: Use cases in action

Now that your integration is configured, let's explore three practical workflows that demonstrate the power of connecting Jira to GitLab Duo Agent Platform.

Planning assistant

Scenario: You're preparing for sprint planning and need to quickly assess the backlog, understand priorities, and identify blockers.

This demo shows you how to:

  • Query the backlog
  • Identify unassigned high-priority issues
  • Get AI-powered sprint recommendations

Example prompts

Try these prompts in GitLab Duo Agent Platform Chat:

List all the unassigned issues in JIRA for project GITLAB
Suggest the top two issues to prioritize and summarize them. Assign them to me.

Interactive walkthrough: Project planning

Issue triage and creation from code

Scenario: While reviewing code, you discover a bug and want to create a Jira issue with relevant context — without leaving your IDE.

This demo walks you through:

  • Identifying a bug while coding
  • Creating a detailed Jira issue via natural language
  • Auto-populating issue fields with code context
  • Linking the issue to your current branch

Example prompts

Search in JIRA for a bug related to: Null pointer exception in PaymentService.processRefund().
If it does not exist, create it with all the context needed from the code. Find possible blockers that this bug may cause.
Create a new branch called issue-gitlab-18, checkout, and link it to the issue we just created. Assign the JIRA issue to me and mark it as in-progress.

Interactive walkthrough: Bug review and task automation

Cross-system incident investigation

Scenario: A production incident occurs, and you need to correlate information from Jira (incident ticket), GitLab Project Management, your codebase, and merge requests to identify the root cause.

This demo demonstrates:

  • Fetching incident details from Jira
  • Correlating with recent merge requests in GitLab
  • Identifying potentially related code changes
  • Generating an incident timeline
  • Designing a remediation plan and creating it as a work item in GitLab

Example prompts

"We have a production incident INC-1 about checkout failures. Can you help me investigate with all available context?"
Create a timeline of events for incident INC-1 including related Jira issues and recent deployments
Propose a remediation plan

Interactive walkthrough: Cross-system troubleshooting and remediation

Troubleshooting

These are some common setup issues and quick fixes:

Issue | Solution
"MCP server not found" | Verify the mcp.json file is in the correct location and properly formatted
"Authentication failed" | Re-check your OAuth credentials and ensure scopes are correctly configured in Atlassian
"No Jira tools available" | Restart VS Code after updating mcp.json and ensure MCP is enabled in GitLab
"Connection timeout" | Check your network connectivity to mcp.atlassian.com


For detailed troubleshooting, see the GitLab MCP clients documentation.

Security considerations

When integrating Jira with GitLab Duo Agent Platform:

  • OAuth tokens — Store your client ID and secret securely, and keep them out of shared or public repositories
  • Principle of least privilege — Only grant the minimum required Jira scopes
  • Token rotation — Regularly rotate your OAuth credentials as part of security hygiene

Summary

Connecting GitLab Duo Agent Platform to different tools through MCP transforms how you interact with your development lifecycle. In this article, you have learned how to:

  • Query issues naturally — Ask questions about your backlog, sprints, and incidents in natural language.
  • Create and update issues across your DevSecOps environment — File bugs and update tickets without leaving your IDE.
  • Correlate across systems — Combine Jira data with GitLab project management, merge requests, and pipelines for complete visibility.
  • Reduce context switching — Keep your focus on code while staying connected to project management.

This integration exemplifies the power of MCP: standardized, secure access to your tools through AI, enabling developers to work more efficiently without sacrificing governance or security.

Read more
