Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

iOS 27 might add a lot more customization to the Camera app

The Camera Control button on an iPhone 16 Pro being used to change settings.

Apple's next iOS update could include something phone photographers have been waiting for: a lot more control over the Camera app. According to Bloomberg's Mark Gurman, the Camera app will be "fully customizable" in iOS 27 and users will be able to "pick their own set of controls - called widgets - that run along the top of the interface."

Gurman notes that the Camera app will keep the current set of default widgets, but starting with iOS 27, you will "be able to switch over to an 'advanced' array of options or choose their own items." From there, you'll reportedly be able to select from a new "Add Widgets" tray, with widgets split into thr …

Read the full story at The Verge.

Read the whole story
alvinashcraft
3 hours ago
reply
Pennsylvania, USA
Share this story
Delete

Amazon Employees Are 'Tokenmaxxing' Due To Pressure To Use AI Tools

An anonymous reader quotes a report from the Financial Times (via Ars Technica): Amazon employees are using an internal AI tool to automate non-essential tasks in a bid to show managers they are using the technology more frequently. The Seattle-based group has started to widely deploy its in-house "MeshClaw" product in recent weeks, allowing employees to create AI agents that can connect to workplace software and carry out tasks on a user's behalf, according to three people familiar with the matter. Some employees said colleagues were using the software to automate additional, unnecessary AI activity to increase their consumption of tokens -- units of data processed by models. They said the move reflected pressure to adopt the technology after Amazon introduced targets for more than 80 percent of developers to use AI each week, and earlier this year began tracking AI token consumption on internal leader boards. "There is just so much pressure to use these tools," one Amazon employee told the FT. "Some people are just using MeshClaw to maximize their token usage." Amazon has told employees that the AI token statistics would not be used in performance evaluations. But several staff members said they believed managers were monitoring the data. "Managers are looking at it," said another current employee. "When they track usage it creates perverse incentives and some people are very competitive about it."

Read more of this story at Slashdot.


WWDC 2026 Preview: Apple Readies Siri Overhaul, AI Updates, and More


Apple’s WWDC 2026 is expected to preview iOS 27, a smarter Siri, broader AI model options, and macOS 27 design refinements.

The post WWDC 2026 Preview: Apple Readies Siri Overhaul, AI Updates, and More appeared first on TechRepublic.


Accelerating detection engineering using AI-assisted synthetic attack logs generation


Logs and telemetry are the foundation of modern cybersecurity. They enable threat detection, incident response, forensic investigation, and compliance across endpoints, networks, and cloud environments. Yet, despite their importance, high‑quality security attack logs are notoriously difficult to collect, especially at scale. 

Real‑world security telemetry is often composed of repeated benign activity occurring across environments and with very rare malicious activity. Gathering, labeling, and maintaining datasets with real attack logs is costly and operationally challenging. It requires not only labeling malicious activities, but also fully reconstructing attack scenarios. These challenges significantly slow detection engineering and limit the quality of both the rule-based detection authoring and anomaly-detection approaches. 

In this post, we explore a different path: using AI to generate realistic, high‑fidelity synthetic security attack logs. By translating attacker behaviors—expressed as tactics, techniques, and procedures (TTPs)—directly into structured telemetry, we aim to accelerate detection development while preserving realism and security. 

Why is this work important for Microsoft Defender customers? 

For Microsoft Defender customers, this work is crucial because it directly addresses the challenge of obtaining high-quality, realistic security attack logs needed for effective threat detection and response. By leveraging AI-driven synthetic log generation, organizations can accelerate the development of detection rules and AI-based automation approaches, while ensuring privacy and reducing operational overhead. Synthetic logs enable customers to simulate a broader range of attack scenarios—including rare and emerging threats—without exposing sensitive data or relying on costly lab-based simulations. Ultimately, this approach enhances the agility and effectiveness of Microsoft Defender detection and response capabilities, helping customers stay ahead of evolving cyber threats. 

Why Synthetic Security Logs in Addition to Lab Simulations? 

Synthetic data has been widely adopted in various fields as a privacy-conscious substitute for real data, and it offers even greater advantages in cybersecurity. It enables the creation of safe, shareable datasets that avoid exposure of sensitive customer information, allows simulation of rare or emerging attacks that are challenging to observe in real environments, accelerates the process of detection engineering and testing, and supports reproducible experiments for benchmarking and evaluation. 

While synthetic logs are not a replacement for all lab-based validation, they can complement lab simulations by speeding up early-stage detection design, testing, and coverage expansion. Traditionally, generating realistic attack telemetry requires executing real attacks in controlled lab environments. While accurate, this approach is slow, labor‑intensive, and difficult to scale. It also limits agility for the security teams responsible for defending our systems and delays the rollout of new threat detections into production. This blog examines whether AI-assisted synthetic log generation can provide similar fidelity, without the operational overhead of lab‑based attack execution. 

Core Idea: From TTPs to Logs

Attackers can abuse TTPs through various actions that exploit different processes. At a high level, the proposed workflow consumes “TTP + Action” as input and produces structured security logs as output. 

Input: High‑level attacker TTPs from the MITRE ATT&CK framework [1], a widely used knowledge base of adversary tactics and techniques, and concrete attacker actions. See the example below. 

Tactic: Stealth 
Technique: T1202 – Indirect Command Execution 
Action: The attackers executed forfiles and obfuscated their actions using variable expansion of %PROGRAMFILES and hex characters (for example, 0x5d). They obfuscated the use of echo, open, read, find, and exec to extract file contents, then passed the output to a Python interpreter for execution. 

Output: Realistic log entries with correctly populated fields such as “Command Line”, “Process Name”, “Parent Process Name”, and other relevant telemetry fields. 

Goal: The goal is not to reproduce logs verbatim, but to generate realistic, semantically correct logs that would accurately trigger detections, mirroring real attacker behavior. 
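A minimal sketch of this TTP-to-logs contract, assuming illustrative field names and a hypothetical prompt builder (this is not the actual Microsoft pipeline):

```python
# Sketch of the "TTP + Action" -> synthetic log workflow described above.
# TTPInput, build_generation_prompt, and the output field names are
# illustrative assumptions for this post, not a real API.
from dataclasses import dataclass

@dataclass
class TTPInput:
    tactic: str       # e.g., a MITRE ATT&CK tactic label
    technique: str    # MITRE ATT&CK technique ID, e.g., "T1202"
    action: str       # free-text description of the concrete attacker action

def build_generation_prompt(ttp: TTPInput) -> str:
    """Turn a TTP + action pair into a log-generation prompt for a model."""
    return (
        "You are a cybersecurity researcher. Generate realistic "
        f"process-creation logs for MITRE ATT&CK {ttp.technique} "
        f"({ttp.tactic}).\n"
        f"Attacker action: {ttp.action}\n"
        "Return JSON with fields: CommandLine, ProcessName, ParentProcessName."
    )

# Example of the kind of structured output the post describes, using the
# telemetry fields named above.
example_log = {
    "CommandLine": 'forfiles /p "%PROGRAMFILES%" /m *.log /c "cmd /c echo @path"',
    "ProcessName": "forfiles.exe",
    "ParentProcessName": "cmd.exe",
}
```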

Approaches for Synthetic Attack Log Generation

We explore three increasingly sophisticated techniques for generating logs. 

  1. Prompt‑Engineered Generation: Our baseline approach uses a series of carefully designed expert‑crafted prompts. The workflow comprises a structured, multi‑stage dialogue: 
    • Prompting: The model is given a detailed attack scenario and context. 
    • Iterative Generation: Logs are generated across multiple turns to maintain coherence. 
    • Evaluation: An independent large language model (LLM)-as-a-Judge assesses realism and consistency. 

As depicted in the following image, the prompts explicitly instruct the model to reason like a cybersecurity researcher, leverage MITRE ATT&CK knowledge, and produce coherent attack narratives. 

Diagram that shows a three-stage AI agent pipeline: prompting for attack scenarios,
iterative generation of logs, and LLM-as-a-Judge evaluation.
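The three stages can be sketched as a loop in which each turn sees the logs generated so far and an independent judge decides when to stop. The `call_model` and `call_judge` callables below are stand-ins for real LLM calls, stubbed so the control flow is runnable:

```python
# Sketch of the prompt-engineered workflow: prompting, iterative generation,
# and LLM-as-a-Judge evaluation. Thresholds and turn counts are assumptions.
from typing import Callable

def generate_logs(scenario: str,
                  call_model: Callable[[str], list[dict]],
                  call_judge: Callable[[list[dict]], float],
                  turns: int = 3,
                  threshold: float = 0.8) -> list[dict]:
    logs: list[dict] = []
    for _ in range(turns):
        # Each turn includes prior logs so the attack narrative stays coherent.
        prompt = f"Scenario: {scenario}\nLogs so far: {logs}\nContinue the attack."
        logs.extend(call_model(prompt))
        if call_judge(logs) >= threshold:   # judge assesses realism/consistency
            break
    return logs

# Stub model and judge for demonstration only.
stub_model = lambda prompt: [{"ProcessName": "forfiles.exe"}]
stub_judge = lambda logs: 0.9 if len(logs) >= 2 else 0.5
result = generate_logs("T1202 indirect command execution", stub_model, stub_judge)
```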
  2. Agentic Workflow-based Generation: While the first approach works well in simpler cases, it struggles with complex, multi‑stage scenarios. To address these limitations, we introduced an agentic workflow using three specialized agents focused on different tasks: 
    • Generator Agent: Produces an initial set of logs based on the input. 
    • Evaluator Agent: Reviews logs and provides structured feedback. 
    • Improver Agent: Suggests targeted refinements based on feedback. 

As depicted in the image below, these agents collaborate in an iterative loop (generate, evaluate, improve), allowing the system to correct errors, fill gaps, and refine details over multiple turns. This collaborative process significantly improves log completeness and fidelity, especially for complex attack chains. 

Diagram that shows a cyclical agentic workflow where generator, evaluator, and improver
agents collaborate to produce synthetic telemetry logs.
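The generate/evaluate/improve loop can be sketched as follows; each "agent" is modeled here as a plain function, whereas in the real system these would be separate LLM-backed agents with their own prompts and tools (an assumption of this sketch):

```python
# Sketch of the iterative agentic loop: generate, evaluate, improve.
def agentic_loop(generator, evaluator, improver, scenario, max_iters=5):
    logs = generator(scenario)
    for _ in range(max_iters):
        feedback = evaluator(logs)          # structured feedback, e.g. gaps
        if not feedback:                    # no remaining issues: accept logs
            return logs
        logs = improver(logs, feedback)     # targeted refinement from feedback
    return logs

# Toy agents: the evaluator flags logs missing a ParentProcessName field,
# and the improver fills the gap it reported.
gen = lambda scenario: [{"ProcessName": "forfiles.exe"}]
ev = lambda logs: [i for i, l in enumerate(logs) if "ParentProcessName" not in l]
imp = lambda logs, fb: [
    {**l, "ParentProcessName": "cmd.exe"} if i in fb else l
    for i, l in enumerate(logs)
]
final = agentic_loop(gen, ev, imp, "T1202")
```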
  3. Multi-Turn Reinforcement Learning with Verifiable Rewards: While the synthetic logs generated by the agentic workflow are often semantically correct, preserving key properties like parent‑child process relationships and event ordering, they still differ noticeably from real event logs, especially in process paths, command‑line arguments, and service names. This limits the usefulness of these logs for testing detection efficacy; effective detection engineering requires reliably distinguishing benign activity from malicious behavior. 
    To address this challenge, we conducted experiments using Reinforcement Learning with Verifiable Rewards (RLVR). Instead of the rigid rewards used by the evaluator agent in the agentic workflow approach, we use partial rewards to learn the policies as follows: 
    • We use an LLM‑as‑a‑Judge to compare the synthesized data against ground‑truth logs. 
    • The judge awards partial rewards based on semantic alignment and imposes a penalty if the generated string is not an exact match of the ground-truth logs, producing a more context-aware and flexible reward signal to guide the learning process. 
    • The judge also produces reasoning, making evaluations transparent and auditable. 
Diagram that shows the LLM-as-a-Judge evaluation comparing generated logs to ground
truth, issuing rewards or penalties to drive policy updates.
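A minimal sketch of such a partial-reward signal: the judge's semantic-alignment score earns partial credit, and a penalty is subtracted when the generated log is not a verbatim match of ground truth. The penalty weight is an illustrative assumption:

```python
# Sketch of an RLVR-style partial reward combining a judge's semantic score
# with a verifiable exact-match check. Weights here are assumptions.
def rlvr_reward(generated: str, ground_truth: str,
                semantic_score: float, mismatch_penalty: float = 0.3) -> float:
    """semantic_score in [0, 1] would come from the LLM-as-a-Judge."""
    reward = semantic_score
    if generated != ground_truth:          # verifiable exact-match condition
        reward -= mismatch_penalty         # penalize non-verbatim output
    return max(0.0, reward)

# A semantically correct but non-verbatim log earns partial credit...
partial = rlvr_reward("forfiles.exe", r"D:\Windows\System32\forfiles.exe", 0.9)
# ...while an exact match keeps the full semantic score.
full = rlvr_reward("forfiles.exe", "forfiles.exe", 0.9)
```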

While this direction of research shows a lot of promise, it is heavily dependent on the amount of labeled training data. To address this limitation, we applied data augmentations, including: 

  • Paraphrasing attack narratives while preserving technical intent 
  • Perturbing parameters (e.g., replacing executable names with plausible alternatives, re-ordering flags, etc.) 

This allowed us to scale from hundreds to thousands of training examples. 
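The parameter-perturbation augmentation can be sketched as below; the table of alternative executable names is illustrative, and in practice the paraphrasing step would be done by an LLM rather than a lookup:

```python
# Sketch of scaling labeled examples via parameter perturbation: replace
# executable names with plausible alternatives. The alternatives and the
# example log are illustrative assumptions.
import random

EXEC_ALTERNATIVES = {
    "forfiles.exe": ["forfiles.exe", "ForFiles.exe"],
    "cmd.exe": ["cmd.exe", "Cmd.exe"],
}

def perturb_log(log: dict, rng: random.Random) -> dict:
    """Replace the process name with a plausible alternative, if one exists."""
    out = dict(log)
    name = out.get("ProcessName")
    if name in EXEC_ALTERNATIVES:
        out["ProcessName"] = rng.choice(EXEC_ALTERNATIVES[name])
    return out

def augment(example: dict, n: int, seed: int = 0) -> list[dict]:
    """Scale one labeled example into n perturbed training variants."""
    rng = random.Random(seed)
    return [perturb_log(example, rng) for _ in range(n)]

variants = augment({"ProcessName": "forfiles.exe"}, n=10)
```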

Evaluation Datasets

To ensure our approach generalizes across environments and attack types, we evaluated it on three complementary datasets: 

  1. Goal‑Driven (GD) Campaigns: These are tightly scoped datasets produced by repeatable attack simulations conducted by our threat researchers. GDs are built around a specific security objective (e.g., detecting credential dumping on Windows servers). They provide clean ground truth and well‑defined attacker actions. We used a total of 10 different GD executions to evaluate our approaches. 
  2. Security Datasets Project: An open‑source initiative [2] that provides malicious and benign datasets from multiple platforms, enabling broader evaluation and generalizability across different environments. 
  3. ATLASv2 Dataset: The ATLASv2 dataset [3] comprises Windows Security Auditing logs, Sysmon logs, Firefox logs, and Domain Name System (DNS) telemetry. These logs are generated across two Windows VMs by executing 10 multi‑stage attack scenarios and introducing realistic noise and cross‑host behaviors. We limited the evaluation of synthetic attack logs to malicious activity during the attack windows. 

Note: The external datasets from the Security Datasets Project and ATLASv2 are used strictly for research and validation of our log generation methods. These datasets are not used in the development, training, or deployment of any commercial products. 

Evaluation 

Methodology: We evaluated the prompt engineering and agentic workflow approach on the three datasets across multiple reasoning and non‑reasoning models, using recall as our primary metric. Recall measures the model’s ability to generate semantically relevant log instances (true positives) expected for a given attack scenario. Our LLM‑as‑a‑Judge performs flexible matching, focusing on: 

  • New process name 
  • Parent process name 
  • Command line semantics 

For example, a synthetic log containing “forfiles.exe” can successfully match a ground‑truth entry with the full path “D:\Windows\System32\forfiles.exe”. 
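A simplified string-level version of this flexible matching, together with the recall computation, might look as follows (the real judge is an LLM doing semantic matching; basename comparison is a deliberate simplification):

```python
# Sketch of flexible matching on the fields listed above, plus recall:
# the fraction of ground-truth entries matched by at least one synthetic log.
import ntpath  # Windows-style path handling, portable across platforms

def flexible_match(synthetic: dict, truth: dict) -> bool:
    """Match when path basenames agree for the key process fields."""
    keys = ("NewProcessName", "ParentProcessName")
    return all(
        ntpath.basename(synthetic.get(k, "")).lower() ==
        ntpath.basename(truth.get(k, "")).lower()
        for k in keys
    )

def recall(synthetic_logs: list[dict], ground_truth: list[dict]) -> float:
    hits = sum(
        any(flexible_match(s, t) for s in synthetic_logs) for t in ground_truth
    )
    return hits / len(ground_truth)

truth = [{"NewProcessName": r"D:\Windows\System32\forfiles.exe",
          "ParentProcessName": r"C:\Windows\System32\cmd.exe"}]
synth = [{"NewProcessName": "forfiles.exe", "ParentProcessName": "cmd.exe"}]
score = recall(synth, truth)   # the basename-level match succeeds
```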

Key Results: The experimental evaluation demonstrates that prompt-only approaches establish a baseline but show inconsistent performance. Agentic workflows deliver dramatic recall improvements across all datasets. Reasoning models, combined with agentic refinement, achieve the highest fidelity. 

Finally, our reinforcement learning experiments show that while the approach holds significant promise, a substantial amount of labeled data is required for the agent to learn policies that make the synthetic data indistinguishable from real logs. 

Table 1 and Table 2 report the performance of the prompt-based and agentic workflow-based approaches, respectively. For reasoning models (o1, o3, and o3-mini), we report recall values using Medium reasoning effort. Overall, agentic collaboration emerges as the most effective technique for high‑quality synthetic attack log generation. 

Table 1: Recall values for prompt-based log generation.
Table 2: Recall values for agentic workflow-based log generation.

Across the evaluation datasets we used, AI‑driven synthetic log generation shows strong potential to produce semantically meaningful logs from TTPs and attacker actions. It can capture multi‑event sequences, preserve parent‑child process relationships, and generate realistic command lines.

This capability can accelerate detection engineering by reducing dependence on costly lab setups and enabling rapid experimentation, without sacrificing realism or safety. Our early experiments with reinforcement learning with verifiable rewards also look promising and could improve verbatim alignment when sufficient training data is available. 

References

  • [1] MITRE ATT&CK framework: https://attack.mitre.org 
  • [2] Security Datasets Project: https://securitydatasets.com 
  • [3] ATLASv2: ATLAS Attack Engagements, Version 2. arXiv:2401.01341 

This research is provided by Microsoft Defender Security Research with contributions from Raghav Batta and members of Microsoft Threat Intelligence.

Learn more

For the latest security research from the Microsoft Threat Intelligence community, check out the Microsoft Threat Intelligence Blog.

To get notified about new publications and to join discussions on social media, follow us on LinkedIn, X (formerly Twitter), and Bluesky.

To hear stories and insights from the Microsoft Threat Intelligence community about the ever-evolving threat landscape, listen to the Microsoft Threat Intelligence podcast.

Review our documentation to learn more about our real-time protection capabilities and see how to enable them within your organization.   

The post Accelerating detection engineering using AI-assisted synthetic attack logs generation appeared first on Microsoft Security Blog.


Defense at AI speed: Microsoft’s new multi-model agentic security system tops leading industry benchmark


Today Microsoft announced a major step forward in AI-powered cyber defense: our new agentic security system helped researchers find 16 new vulnerabilities across the Windows networking and authentication stack—including four Critical remote code execution flaws in components such as the Windows kernel TCP/IP stack and the IKEv2 service. They used the new Microsoft Security multi-model agentic scanning harness (codename MDASH), which was built by Microsoft’s Autonomous Code Security team. Unlike single-model approaches, the harness orchestrates more than 100 specialized AI agents across an ensemble of frontier and distilled models to discover, debate, and prove exploitable bugs end-to-end.

The results speak for themselves: 21 of 21 planted vulnerabilities found with zero false positives on a private test driver; 96% recall against five years of confirmed Microsoft Security Response Center (MSRC) cases in clfs.sys and 100% in tcpip.sys; and an industry-leading 88.45% score on the public CyberGym benchmark of 1,507 real-world vulnerabilities—the top score on the leaderboard, roughly five points ahead of the next entry.

The strategic implication is clear: AI vulnerability discovery has crossed from research curiosity into production-grade defense at enterprise scale, and the durable advantage lies in the agentic system around the model rather than any single model itself. Codename MDASH is being used by Microsoft security engineering teams and tested by a small set of customers as part of a limited private preview.

This post explains how codename MDASH works, what we shipped today, what we learned along the way, and how you can sign up for the private preview.  

AI-powered vulnerability discovery at hyper-scale

The Microsoft Autonomous Code Security (ACS) team was assembled to take AI-powered vulnerability research from a research curiosity to production engineering at enterprise scale. Several members of this team came to Microsoft from Team Atlanta, the team that won the $20 million DARPA AI Cyber Challenge by building an autonomous cyber-reasoning system that found and patched real bugs in complex open-source projects. The lessons from that work, especially the level of engineering required to make the frontier language models perform professional-level security auditing, are what our new multi-model agentic scanning harness (codename MDASH) is built around.

Microsoft’s code base is challenging for security auditing for a few reasons: 

  • Massive proprietary surface. Windows, Hyper-V, Azure, and the device-driver and service ecosystems around them are private Microsoft codebases—not part of any commodity language model’s training corpus, and genuinely hard to reason about: kernel calling conventions, IRP and lock invariants, IPC trust boundaries, and component-internal idioms do not yield to pattern matching. On this surface, a model has to actually reason. 
  • DevSecOps at scale. Every finding has a real owner, a triage process, and a Patch Tuesday to land on. There is no quiet drawer for speculative findings; if a tool produces noise, the noise is everyone’s problem. 
  • High-value targets. Windows, Hyper-V, Xbox, and Azure serve billions of users. The payoff for finding a single hard bug is unusually high—and so is the cost of a false positive in a tier-one component. 

The findings in this post are the result of close collaboration between ACS and Microsoft Windows Attack Research and Protection (WARP). WARP owns the deep, hard end of Windows offensive research; ACS brings the AI-powered discovery and validation pipeline. Together, the teams have collaborated to build a mature harness.

Codename: MDASH—Microsoft Security’s new multi-model agentic scanning harness

Codename MDASH is, at its core, an agentic vulnerability discovery and remediation system. The model is one input. The system is the product.

A useful mental model is to think of it as a structured pipeline that takes a code base and emits validated, proven findings:

  • Prepare stage: Ingests the source target, builds language-aware indices, and then derives the attack surface and threat models by analyzing past commits. 
  • Scan stage: Runs specialized auditor agents over candidate code paths, emitting candidate findings with hypotheses and evidence. 
  • Validate stage: Runs a second cohort of agents—debaters—that argue for and against each finding’s reachability and exploitability. 
  • Dedup stage: Collapses semantically equivalent findings (for example, patch-based grouping). 
  • Prove stage: Constructs and executes triggering inputs where the bug class admits it. The prove stage validates the pre-condition dynamically and formulates bug-triggering inputs to prove the existence of a vulnerability (for example, with ASan in C/C++). 
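The five stages above can be sketched as a chain of functions over a findings list; every stage body below is a stubbed assumption standing in for the real agent cohorts:

```python
# Sketch of the prepare -> scan -> validate -> dedup -> prove pipeline.
# Stage internals are toy stubs; the real stages are LLM agent cohorts.
def prepare(source: str) -> list[str]:
    # Would build indices and draw the attack surface; here: candidate paths.
    return [p for p in source.split(",") if p]

def scan(paths: list[str]) -> list[dict]:
    # Auditor agents emit candidate findings with a hypothesis. The toy rule
    # below marks paths without "free" in the name as refutable.
    return [{"path": p, "hypothesis": "possible UAF", "refuted": "free" not in p}
            for p in paths]

def validate(findings: list[dict]) -> list[dict]:
    # Debater agents argue reachability; drop findings they refute.
    return [f for f in findings if not f["refuted"]]

def dedup(findings: list[dict]) -> list[dict]:
    seen, out = set(), []
    for f in findings:
        if f["path"] not in seen:       # collapse semantically equivalent finds
            seen.add(f["path"])
            out.append(f)
    return out

def prove(findings: list[dict]) -> list[dict]:
    # Would construct and execute triggering inputs; here: mark as proven.
    return [{**f, "proven": True} for f in findings]

findings = prove(dedup(validate(scan(prepare("recv_path,free_path,free_path")))))
```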

Three properties make this work in practice:  

  1. An ensemble of diverse models, effectively managed by codename MDASH. No single model is best at every stage. The multi-model agentic scanning harness runs a configurable panel of models: SOTA models as the heavy reasoner, distilled models as cost-effective debaters for high-volume passes, and a second, separate SOTA model as an independent counterpoint. Disagreement between models is itself a signal: when an auditor flags something as suspect and the debater can’t refute it, that finding’s posterior credibility goes up.
  2. Specialized agents. An auditor does not reason like a debater, which does not reason like a prover. Each pipeline stage has its own role, prompt regime, tools, and stop criteria. We don’t expect one prompt to do everything; we don’t expect one agent to recognize, validate, and exploit a bug in a single pass. Codename MDASH has more than 100 specialized agents, constructed through deep research into past common vulnerabilities and exposures (CVEs) and their patches. The agents work independently to discover bugs, and their auditing results are ensembled into a single report.
  3. End-to-end pipeline with extensible plugins. The pipeline is opinionated, but it is not closed. Plugins let domain experts inject context the foundation models can’t see on their own: kernel calling conventions, IRP rules, lock invariants, IPC trust boundaries, codec state machines. The CLFS proving plugin we describe below is one such example: a domain plugin that knows how to construct a triggering log file given a candidate finding. The Windows team, for example, extended the harness’s reasoning with a custom code-analysis database; a CodeQL database can also be leveraged. 

The payoff for this architecture is portability across model generations. The pipeline’s targeting, validation, dedup, and prove stages are model agnostic by construction, which allows the harness to get the best of what any model has to offer. When a new model lands, A/B testing it against the current panel is one configuration flip. When a model improves, the customer’s prior investment—scope files, plugins, configurations, calibrations—all carry over, allowing customers to ride the frontier of security value.  

Using codename MDASH for security research

To evaluate the bug-finding capabilities of the multi-model agentic scanning harness, you first need to ground the evaluation in code that has never been seen by a model. This eliminates the possibility that a model “learned the answers to the test.” We scanned StorageDrive, a sample device driver used in Microsoft interviews for offensive security researchers. The driver contains 21 deliberately injected vulnerabilities, including kernel use-after-frees (UAFs), integer handling issues, IOCTL validation gaps, and locking errors. Because StorageDrive is a private codebase that has never been published, we can safely assume it was not included in the training data of modern language models.

We ran the harness on StorageDrive using its default configuration. The results were striking: all 21 ground-truth vulnerabilities were correctly identified, with zero false positives in this run.

This simple test shows that the reasoning and vulnerability discovery capabilities of codename MDASH can approximate professional offensive researchers.

We then used the harness to conduct a security audit of one of the most security-critical parts of Windows: the TCP/IP network stack.

The 5.12.2026 Patch Tuesday cohort

Across the Windows network stack and adjacent services, today’s Patch Tuesday includes 16 CVEs our engineering teams found using codename MDASH.

Component | Description | CVE | Severity | Type
tcpip.sys | Remote unauth SSRR IPv4 packets causing UAF | CVE-2026-33827 | Critical | Remote Code Execution
tcpip.sys | NULL deref via crafted IPv6 extension headers | CVE-2026-40413 | Important | Denial of Service (DoS)
tcpip.sys | Kernel DoS via ESP SA refcount underflow | CVE-2026-40405 | Important | Denial of Service
ikeext.dll | Unauth IKEv2 SA_INIT double-free triggers LocalSystem RCE | CVE-2026-33824 | Critical | Remote Code Execution
tcpip.sys | Use-after-free in Ipv4pReassembleDatagram leading to disclosure | CVE-2026-40406 | Important | Information Disclosure
tcpip.sys | IPsec cross-SA fragment splicing via reassembly | CVE-2026-35422 | Important | Security Feature Bypass
tcpip.sys | Unauthenticated local Windows Filtering Platform (WFP) RPC disables name cache | CVE-2026-32209 | Important | Security Feature Bypass
ikeext.dll | Memory leak | CVE-2026-35424 | Important | Denial of Service
telnet.exe | Out-of-bounds (OOB) read in FProcessSB via malformed TO_AUTH | CVE-2026-35423 | Important | Information Disclosure
tcpip.sys | IPv6+TCP MDL-split packet triggers NULL deref | CVE-2026-40414 | Important | Denial of Service
tcpip.sys | ICMPv6 packet triggers NdisGetDataBuffer NULL deref | CVE-2026-40401 | Important | Denial of Service
tcpip.sys | Pre-auth remote UAF via SA double-decrement | CVE-2026-40415 | Important | Remote Code Execution
http.sys | Unauth remote QUIC control-stream OOB read | CVE-2026-33096 | Important | Denial of Service
tcpip.sys | Kernel stack buffer overflow via RPC blob | CVE-2026-40399 | Important | Elevation of Privilege
netlogon.dll | Unauthenticated CLDAP User= filter stack overflow | CVE-2026-41089 | Critical | Remote Code Execution
dnsapi.dll | Crafted UDP DNS response triggers heap OOB | CVE-2026-41096 | Critical | Remote Code Execution

Of these vulnerabilities, 10 are kernel-mode and 6 are user-mode. The majority are reachable from a network position with no credentials. Let’s take a closer look.

Two deep dives

The two findings below are characteristic of what the new Microsoft Security multi-model agentic scanning harness can do that a single-model harness cannot. The first is a kernel race-condition use-after-free that requires reasoning about object lifetime across non-trivial control flow and three independent concurrent free paths. The second is an aliasing double-free that spans six source files and is only visible against the contrast of a correctly handled site elsewhere in the same code base.

CVE-2026-33827—Remote unauthenticated UAF in tcpip.sys via SSRR

The vulnerability arises in the Windows IPv4 receive path due to improper lifetime management of a reference-counted Path object within Ipv4pReceiveRoutingHeader. After invoking a routing lookup, the function drops its sole owned reference to the Path through a dereference operation, but later reuses the same pointer when handling Strict Source and Record Route (SSRR) processing. Because the object’s reference count might reach zero at the earlier release point, the underlying memory can be returned to a per-processor lookaside allocator and subsequently reused, turning the later access into a classical use-after-free in kernel context.

This occurs on a network-triggerable path that processes attacker-controlled packet metadata, making it reachable at elevated IRQL within the networking stack. The core issue is escalated by the concurrency model of the path cache and associated cleanup routines. Once the caller relinquishes ownership, the Path object’s liveness depends entirely on external references held by shared data structures. Multiple independent subsystems—including the path-cache scavenger, explicit flush routines, and interface state-driven garbage collection—can concurrently remove the object and drop the final reference. These operations are not synchronized with the receive-side execution window in this function, and no lock is held to serialize access. As a result, on SMP systems the freed object can be reclaimed and overwritten before the subsequent dereference, converting a simple ordering bug into a race-driven use-after-free with real execution feasibility.

From an exploitation standpoint, the vulnerability is reachable by a remote, unauthenticated attacker through crafted IPv4 packets carrying the SSRR option that pass standard validation checks. The stale pointer dereference can trigger a chain of access through freed memory, potentially leading to controlled reads and a stronger corruption primitive if the reclaimed allocation is attacker-influenced. Although exploitation requires winning a narrow timing window and shaping allocator reuse, the combination of remote reachability, kernel execution context, and the potential for controlled memory manipulation elevates the issue to Critical severity.

Why single-model systems missed this bug

A single-model harness tends to miss this bug because the lifetime violation is not locally visible even within the same function. The release of the Path reference and its later reuse are separated by non-trivial control flow—an alternate branch, multiple validation checks, and several early-drop conditions—which break the straightforward “release-then-use” pattern most detectors rely on. Without tracking reference ownership across these intermediate states, the model sees two independent operations rather than a temporal dependency. As a result, the dereference does not look suspicious in isolation, even though the reference count semantics guarantee the pointer might already be invalid.

The decisive signal also lives outside the immediate context. The same logical operation appears elsewhere with the correct order; all needed data is derived from the object before dropping the reference. This makes this call-site an inconsistency rather than an obvious misuse.

Detecting that requires cross-file reasoning: identifying analogous patterns, aligning their intent, and noticing the deviation. On top of that, reachability depends on composing multiple conditions—an input that sets the SSRR flag, default configuration that allows the path, and concurrent subsystems that can reclaim the object during the exposed window. A single-shot analysis collapses these steps and loses the interaction between them, whereas a staged approach can connect the ownership violation, the concurrency model, and the externally controlled trigger into a coherent exploitation path.

Disclosure. CVE-2026-33827, patched in April Patch Tuesday. 

CVE-2026-33824: Unauthenticated IKEv2 SA_INIT + fragmentation → double-free → LocalSystem RCE

The vulnerability lived in the IKEEXT service, the Windows component responsible for IKE and AuthIP keying for IPsec, and was reachable by a remote, unauthenticated attacker over UDP/500 on any host configured as an IKEv2 responder (RRAS VPN, DirectAccess, Always-On VPN infrastructure, or any machine with an inbound connection security rule). By sending a crafted IKE_SA_INIT carrying Microsoft’s “IPsec Security Realm Id” vendor-ID payload, followed by a single IKEv2 fragment (RFC 7383 SKF) that reassembles immediately, an attacker could trigger a deterministic double-free of a 16-byte heap allocation inside the service.

Because IKEEXT runs as LocalSystem inside svchost.exe, this represents a pre-authentication remote code execution path into one of the highest-privilege contexts on the system. The root cause is a textbook ownership bug. When IKEEXT reinjects a reassembled fragment back through its receive pipeline, it duplicates the packet’s receive context with a flat memcpy. This is a shallow copy: it clones the struct’s bytes but not the heap allocations it points to. One of those allocations is the attacker-supplied security-realm identifier, and after the copy, both the queued context and the live Main Mode SA hold the same pointer, and both believe they own it.

On teardown, each one frees it, resulting in a double-free. The trigger sequence is two UDP packets: no race, no special timing. A double-free of a fixed-size heap chunk is a well-understood corruption primitive on modern Windows; we are not publishing further exploitation details. Reachability requires that the host have an IKEv2 responder policy that accepts the proposed transforms—the bug is reachable on RRAS VPN, DirectAccess, Always-On VPN, and IPsec connection security rules in their typical configurations, but a bare Start-Service IKEEXT with no responder policy is not vulnerable. The IKEEXT service is DEMAND_START by default; where a responder policy exists, BFE will start it on the first inbound IKE packet, so the attacker does not need IKEEXT to already be running.
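The ownership pattern at the heart of this bug can be sketched in a few lines of C. The structure and function names below are purely illustrative, not the real IKEEXT internals; the point is the difference between a flat memcpy that aliases a heap pointer and the two safe alternatives (deep copy or ownership transfer):

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical receive context: holds a heap-allocated,
   attacker-supplied identifier, as in the bug described above. */
typedef struct {
    unsigned char *realm_id;   /* heap allocation */
    size_t         realm_len;
} recv_ctx;

/* Buggy pattern: the flat memcpy clones the struct's bytes but not
   the allocation it points to. Both contexts now hold the same
   realm_id pointer, and both teardown paths will free it. */
static void clone_ctx_shallow(recv_ctx *dst, const recv_ctx *src) {
    memcpy(dst, src, sizeof *dst);        /* realm_id is now aliased */
}

/* Fix 1: deep-copy the pointed-to buffer so each context owns
   a distinct allocation. */
static int clone_ctx_deep(recv_ctx *dst, const recv_ctx *src) {
    memcpy(dst, src, sizeof *dst);
    dst->realm_id = malloc(src->realm_len);
    if (!dst->realm_id) return -1;
    memcpy(dst->realm_id, src->realm_id, src->realm_len);
    return 0;
}

/* Fix 2: transfer ownership and null the source so exactly one
   teardown path ever frees the allocation. */
static void clone_ctx_move(recv_ctx *dst, recv_ctx *src) {
    memcpy(dst, src, sizeof *dst);
    src->realm_id = NULL;                 /* src no longer owns it */
}

static void ctx_teardown(recv_ctx *c) {
    free(c->realm_id);                    /* free(NULL) is a no-op */
    c->realm_id = NULL;
}
```

With `clone_ctx_shallow`, calling `ctx_teardown` on both contexts is exactly the two-free sequence described above; either fix makes double teardown safe.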

Why single-model systems missed this bug

The bug is an aliasing lifecycle bug spanning six files: ike_A.c (the bad memcpy), ike_B.c (the alias origin and the first stack-local copy), ike_C.c (the wrong free), ike_D.c (both the right pattern and the second free), ike_E.c (where the buffer gets populated remotely), and ike_F.c (the IKEv2 dispatcher and the UAF read site that precedes the second free). No single-file analysis sees it. The strongest piece of evidence that the bug is real is the correct version of the same pattern, in the same code base, in ike_D.c—immediately after the memcpy of the selector. Catching this requires the auditor to recognize the missing step at one site by reference to the present step at another. Our specialized auditor agents are designed to surface exactly these comparisons; the debate stage forces them to stand up under cross-examination.

Disclosure. CVE-2026-33824, patched in April Patch Tuesday.   

How capable is codename MDASH?

The Patch Tuesday cohort and the StorageDrive findings are forward-looking signals. Two retrospective benchmarks tell us how the system performs against ground truth on real, well-reviewed code.

Recall on historical MSRC cases. We re-ran codename MDASH against pre-patch snapshots of two heavily reviewed Windows components and measured whether the historical MSRC-confirmed bugs would have been (re-)discovered: 

  • clfs.sys: 96% recall on 28 MSRC cases spanning five years. 
  • tcpip.sys: 100% recall on 7 MSRC cases spanning five years. 

These are the strongest internal numbers we publish, and they are meaningful for a specific reason: the MSRC case database is the ground truth for what real attackers exploited, what required a Patch Tuesday, and what defenders had to react to. A system that recovers 96% of a five-year MSRC backlog in a heavily reviewed kernel component is not finding theoretical weaknesses; it is finding the bugs that mattered. 

We are deliberate about what these numbers do and do not claim. They are retrospective recall benchmarks on internal code with a finite case count. They tell us that the system would have been useful had it existed at the time. They do not, by themselves, predict that the next 38 bugs in CLFS will be found at the same rate. The forward-looking signal is the Patch Tuesday cohort itself. 

The CLFS proving extension as a worked example. The 96% CLFS recall number is in part a story about the prove stage. Many CLFS findings look interesting until you try to construct a triggering log file; a candidate finding without a proof is, in practice, an entry on a triage backlog. The CLFS-specific proving plugin we wrote knows how to construct triggering logs given a candidate finding: it understands the on-disk container layout, the block-validation sequence, and the in-memory state machine well enough to drive a candidate path to its sink. This is precisely what plugin extensibility is for: the foundation models do not, and should not be expected to, internalize Microsoft-specific filesystem invariants. The plugin embeds them, the model uses them, and the outcome is bugs that survive being proven, not bugs that get filed and forgotten.
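To make the extensibility point concrete, here is a minimal sketch of what a target-specific proving plugin could look like. Every name and the toy "header" format below are hypothetical; MDASH's real plugin API and the actual CLFS on-disk layout are not shown here. The idea is only that the harness hands a candidate finding to target-specific code, which encodes enough format knowledge to emit a triggering input:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical candidate finding emitted by the scan stage. */
typedef struct {
    const char *sink_function;     /* where the candidate path ends */
    unsigned    container_index;   /* which on-disk structure to corrupt */
} candidate_t;

/* Hypothetical proof-of-concept input produced by the plugin. */
typedef struct {
    unsigned char *data;
    size_t         len;
} trigger_t;

/* A proving plugin binds a target name to a trigger builder. */
typedef struct {
    const char *target;            /* e.g. "clfs" */
    int (*build_trigger)(const candidate_t *c, trigger_t *out);
} prove_plugin_t;

/* Toy implementation: serialize a minimal fake "log header" that
   steers a parser toward the candidate's container. A real plugin
   would encode the container layout and block-validation sequence
   the post describes. */
static int toy_build_trigger(const candidate_t *c, trigger_t *out) {
    out->len = 8;
    out->data = malloc(out->len);
    if (!out->data) return -1;
    memcpy(out->data, "LOG!", 4);                   /* fake magic */
    memcpy(out->data + 4, &c->container_index, 4);  /* steer the parser */
    return 0;
}

static const prove_plugin_t toy_plugin = { "clfs", toy_build_trigger };
```

The design choice worth noting is that the format knowledge lives in the plugin, not the model: the foundation model reasons about the candidate, and the plugin turns that reasoning into bytes the target will actually parse.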

CyberGym. On the public CyberGym benchmark—a corpus of 1,507 real-world vulnerability reproduction tasks drawn from across 188 OSS-Fuzz projects—the Microsoft Security multi-model agentic scanning harness reaches an 88.45% success rate, the highest score on CyberGym’s published leaderboard at the time of writing and roughly five points above the next entry, 83.1%. This result was obtained using generally available models, which suggests that the surrounding agentic system contributes substantially to end-to-end performance, beyond raw model capability. For evaluation, we used CyberGym’s default configuration (level 1), which provides the vulnerable source code and a high-level vulnerability description. To interface with CyberGym’s evaluation protocol, we extended the harness’s prove stage to autonomously submit proof-of-concept (PoC) inputs and retrieve flags.

Our failure analysis of the remaining roughly 12% reveals two notable structural patterns. First, among findings that targeted the wrong code area, 82% came from tasks with vague descriptions that also lacked function or file identifiers, suggesting that description quality is a major factor in scan accuracy. Second, some otherwise sound reproductions failed on a harness-format mismatch: the agent constructed libFuzzer-style inputs where the benchmark task actually required honggfuzz-format inputs.

What this all means

We are at a moment in the industry where AI-powered vulnerability discovery stops being speculative and starts being an engineering problem. The findings in this Patch Tuesday and the retrospective recall on five years of CLFS MSRC cases are evidence that AI vulnerability findings can scale.

What we have learned building MDASH and using it across Microsoft is more portable: the harness does the work, and the model is one input.

This matters in three concrete ways.

First, discovery requires composition that no single prompt can achieve. The bugs in this post—the tcpip.sys race, the ikeext.dll alias chain—are not visible to a model handed a single function. They are visible to a system that can sequence cross-file pattern comparison, multi-step reachability analysis, debate between specialized agents, and end-to-end proof construction. Single-model harnesses undersell what models can do; over-trusted single agents overshoot what models can do reliably. The art is the harness around the model, and the harness is most of the engineering.

Second, validation is the difference between a finding and a fix. A scanner that flags candidate bugs is a scanner that produces a triage backlog. The Patch Tuesday cohort is what it is because the system that produced it does not stop at candidate—it debates, dedups, and proves. Validation is not a checkbox; it is its own pipeline of agents and plugins, and it is where most of the day-over-day engineering ends up.

Third, the system absorbs model improvements, which is what makes it durable. When a new model lands, the targeting, debating, dedup, and proof stages do not need to be rewritten; we change a configuration and re-run an A/B test. The customer’s investment—per-project context, scan plugins, proving agents—carries over. This is the architectural property that matters most over time, because the model lottery is going to keep playing out, and any system whose value is gated on a particular model is a system that has to be rebuilt every six months.

For defenders—at any scale, on any code they own—the implication is the same. The right question to ask of an AI vulnerability tool is not “which model does it use?” but “what does it do with the model, and what survives when the next model arrives?”

Conclusion

The Microsoft Security multi-model agentic scanning harness (codename MDASH) is helping our engineering teams meaningfully improve security outcomes using generally available AI models—today. It is also being tested by customers as part of our limited private preview. To join the private preview, please sign up here.

Many thanks to the teams across Microsoft working to improve the security of our customers, including the Autonomous Code Security team and the Microsoft Windows Attack Research & Protection (WARP) team, whose work led to the findings in this post. 

We look forward to sharing more updates with customers and the industry as we work to make the world a safer place for all. 

The post Defense at AI speed: Microsoft’s new multi-model agentic security system tops leading industry benchmark appeared first on Microsoft Security Blog.


Agent Hackathon Challenge Kickoff

From: Microsoft Developer
Duration: 5:19
Views: 24

In this segment from Agent Academy Live, we’re officially kicking off the Agent Academy Hackathon!

Join us as we share what the hackathon is all about, how you can participate, and what you can build using Copilot Studio and AI agents. Whether you're just getting started or already building advanced agentic solutions, this hackathon is your chance to learn, experiment, and showcase your ideas.

This is your opportunity to turn ideas into real, working agents—and connect with a global community of builders along the way.

Learn to build AI agents step-by-step: https://aka.ms/agent-academy
Join the Agent Academy Hackathon: http://aka.ms/agent-academy-hackathon
Explore what you can do with Copilot Cowork: https://aka.ms/cowork-collective
