
Secure agentic AI end-to-end


Next week, RSAC™ Conference celebrates its 35th anniversary as a forum that brings the security community together to address new challenges and embrace opportunities in our quest to make the world a safer place for all. As we look towards that milestone, agentic AI is reshaping industries rapidly as customers transform to become Frontier Firms—those anchored in intelligence and trust and using agents to elevate human ambition, holistically reimagining their business to achieve their highest aspirations. Our recent research shows that 80% of Fortune 500 companies are already using agents.1

At the same time, this innovation is happening against a sea change in AI-powered attacks where agents can become “double agents.” And chief information officers (CIOs), chief information security officers (CISOs), and security decision makers are grappling with the resulting security implications: How do they observe, govern, and secure agents? How do they secure their foundations in this new era? How can they use agentic AI to protect their organization and detect and respond to traditional and emerging threats?

The answer starts with trust, and security has always been the root of trust. In this agentic era, security must be woven into, and around, every layer of the AI estate. It must be ambient and autonomous, just like the AI it protects. This is our vision for security as the core primitive of the AI stack.

At RSAC 2026, we are delivering on that vision with new purpose-built capabilities designed to help organizations secure agents, secure their foundations, and defend using agents and experts. Fueled by more than 100 trillion daily signals, Microsoft Security helps protect 1.6 million customers, one billion identities, and 24 billion Copilot interactions.2 Read on to learn how we can help you secure agentic AI.

Secure agents

Earlier this month, we announced that Agent 365 will be generally available on May 1. Agent 365—the control plane for agents—gives IT, security, and business teams the visibility and tools they need to observe, secure, and govern agents at scale using the infrastructure you already have and trust. It includes new Microsoft Defender, Entra, and Purview capabilities to help you secure agent access, prevent data oversharing, and defend against emerging threats.

Agent 365 is included in Microsoft 365 E7: The Frontier Suite along with Microsoft 365 Copilot, Microsoft Entra Suite, and Microsoft 365 E5, which includes many of the advanced Microsoft Security capabilities below to deliver comprehensive protection for your organization.

Secure your foundations

Along with securing agents, we also need to think about securing AI comprehensively. To truly secure agentic AI, we must secure the foundations—the systems that agentic AI is built and runs on, and the people who are developing and using AI. At RSAC 2026, we are introducing new capabilities to help you gain visibility into risks across your enterprise, secure identities with continuous adaptive access, safeguard sensitive data across AI workflows, and defend against threats at the speed and scale of AI.

Gain visibility into risks across your enterprise

As AI adoption accelerates, so does the need for comprehensive and continuous visibility into AI risks across your environment—from agents to AI apps and services. We are addressing this challenge with new capabilities that give you insight into risks across your enterprise so you know where AI is showing up, how it is being used, and where your exposure to risk may be growing. New capabilities include:

  • Security Dashboard for AI provides CISOs and security teams with unified visibility into AI-related risk across the organization. Now generally available.
  • Entra Internet Access Shadow AI Detection uses the network layer to identify previously unknown AI applications and surface unmanaged AI usage that might otherwise go undetected. Generally available March 31.
  • Enhanced Intune app inventory provides rich visibility into your app estate installed on devices, including AI-enabled apps, to support targeted remediation of high-risk software. Generally available in May.

Secure identities with continuous, adaptive access

Identity is the foundation of modern security, the most targeted layer in any environment, and the first line of defense. With Microsoft Entra, you can secure access and deliver comprehensive identity security using new capabilities that help you harden your identity infrastructure, improve tenant governance, modernize authentication, and make intelligent access decisions.

  • Entra Backup and Recovery strengthens resilience with an automated backup of Entra directory objects to enable rapid recovery in case of accidental data deletion or unauthorized changes. Now available in preview.
  • Entra Tenant Governance helps organizations discover unmanaged (shadow) Entra tenants and establish consistent tenant policies and governance in multi-tenant environments. Now available in preview.
  • Entra passkey capabilities now include synced passkeys and passkey profiles, giving end users maximum flexibility to move between devices, while organizations that want maximum control still have the option of device-bound passkeys. Plus, Entra passkeys are now natively integrated into the Windows Hello experience, making phishing-resistant passkey authentication more seamless on Windows devices. Synced passkeys and passkey profiles are generally available; passkey integration into Windows Hello is in preview.
  • Entra external Multi-Factor Authentication (MFA) allows organizations to connect external MFA providers directly with Microsoft Entra so they can leverage pre-existing MFA investments or use highly specialized MFA methods. Now generally available.
  • Entra adaptive risk remediation helps users securely regain access without help-desk friction through automatic self-remediation across authentication methods, adapting to where they are in their modern authentication journey. Generally available in April.
  • Unified identity security provides end-to-end coverage across identity infrastructure, the identity control plane, and identity threat detection and response (ITDR)—built for rapid response and real-time decisions. The new identity security dashboard in Microsoft Defender highlights the most impactful insights across human and non-human identities to help accelerate response, and the new identity risk score unifies account-level risk signals to deliver a comprehensive view of user risk to inform real-time access decisions and SecOps investigations. Now available in preview.

Safeguard sensitive data across AI workflows

With AI embedded in everyday work, sensitive data increasingly moves through prompts, responses, and grounding flows—often faster than policies can keep up. Security teams need visibility into how AI interacts with data as well as the ability to stop data oversharing and data leakage. Microsoft brings data security directly into the AI control plane, giving organizations clear insight into risk, real-time enforcement at the point of use, and the confidence to enable AI responsibly across the enterprise. New Microsoft Purview capabilities include:

  • Expanded Purview data loss prevention for Microsoft 365 Copilot helps block sensitive information such as PII, credit card numbers, and custom data types in prompts from being processed or used for web grounding. Generally available March 31.
  • Purview embedded in Copilot Control System provides a unified view of AI‑related data risk directly in the Microsoft 365 Admin Center. Generally available in April.
  • Purview customizable data security reports enable tailored reporting and drilldowns into prioritized data security risks. Available in preview March 31.

Defend against threats across endpoints, cloud, and AI services

Security teams need proactive 24/7 threat protection that disrupts threats early and contains them automatically. Microsoft is extending predictive shielding to proactively limit impact and reduce exposure, expanding our container security capabilities, and introducing network-layer protection against malicious AI prompts.

  • Entra Internet Access prompt injection protection helps block malicious AI prompts across apps and agents by enforcing universal network-level policies. Generally available March 31.
  • Enhanced Defender for Cloud container security includes binary drift and antimalware prevention to close gaps attackers exploit in containerized environments. Now available in preview.
  • Defender for Cloud posture management adds broader coverage and supports Amazon Web Services and Google Cloud Platform, delivering security recommendations and compliance insights for newly discovered resources. Available in preview in April.
  • Defender predictive shielding dynamically adjusts identity and access policies during active attacks, reducing exposure and limiting impact. Now available in preview.

Defend with agents and experts

To defend in the agentic age, we need agentic defense. This means having an agentic defense platform and security agents embedded directly into the flow of work, augmented by deep human expertise and comprehensive security services when you need them.

Agents built into the flow of security work

Security teams move fastest with targeted help where and when work is happening. As alerts surface and investigations unfold across identities, data, endpoints, and cloud workloads, AI-powered assistance needs to operate alongside defenders. With Security Copilot now included in Microsoft 365 E5 and E7, we are empowering defenders with agents embedded directly into daily security and IT operations that help accelerate response and reduce manual effort so they can focus on what matters most.

New agents available now include:

  • Security Analyst Agent in Microsoft Defender helps accelerate threat investigations by providing contextual analysis and guided workflows. Available in preview March 26.
  • Security Alert Triage Agent in Microsoft Defender builds on the capabilities of the phishing triage agent and extends them to cloud and identity, autonomously analyzing, classifying, prioritizing, and resolving repetitive low-value alerts at scale. Available in preview in April.
  • Conditional Access Optimization Agent in Microsoft Entra enhancements add context-aware recommendations, deeper analysis, and phased rollout to strengthen identity security. Agent generally available, enhancements now available in preview.
  • Data Security Posture Agent in Microsoft Purview enhancements include a credential scanning capability that can be used to proactively detect credential exposure in your data. Now available in preview.
  • Data Security Triage Agent in Microsoft Purview enhancements include an advanced AI reasoning layer and improved interpretation of custom Sensitive Information Types (SITs), to improve agent outputs during alert triage. Agent generally available, enhancements available in preview March 31.
  • Over 15 new partner-built agents extend Security Copilot with additional capabilities, all available in the Security Store.

Scale with an agentic defense platform

To help defenders and agents work together in a more coordinated, intelligence-driven way, Microsoft is expanding Sentinel, the agentic defense platform, to unify context, automate end-to-end workflows, and standardize access, governance, and deployment across security solutions.

  • Sentinel data federation powered by Microsoft Fabric investigates external security data in place in Databricks, Microsoft Fabric, and Azure Data Lake Storage while preserving governance. Now available in preview.
  • Sentinel playbook generator with natural language orchestration helps accelerate investigations and automate complex workflows. Now available in preview.
  • Sentinel granular delegated administrator privileges and unified role-based access control enable secure, scalable management for partners and enterprise customers with cross-tenant collaboration. Now available in preview.
  • Security Store embedded in Purview and Entra makes it easier to discover and deploy agents directly within existing security experiences. Generally available March 31.
  • Sentinel custom graphs powered by Microsoft Fabric enable organization-specific views of the relationships across your environment. Now available in preview.
  • Sentinel model context protocol (MCP) entity analyzer helps automate faster with natural language and harnesses the flexibility of code to accelerate responses. Generally available in April.

Strengthen with experts

Even the most mature security organizations face moments that call for deeper partnership—a sophisticated attack, a complex investigation, a situation where seasoned expertise alongside your team makes all the difference. The Microsoft Defender Experts Suite brings together expert-led services—technical advisory, managed extended detection and response (MXDR), and end-to-end proactive and reactive incident response—to help you defend against advanced cyber threats, build long-term resilience, and modernize security operations with confidence.

Apply Zero Trust for AI

Zero Trust has always been built on three principles: verify explicitly, use least privilege, and assume breach. As AI becomes embedded across your entire environment—from the models you build on, to the data they consume, to the agents that act on your behalf—applying those principles has never been more critical. At RSAC 2026, we’re extending our Zero Trust architecture across the full AI lifecycle—from data ingestion and model training to deployment and agent behavior. And we’re making it actionable with an updated Zero Trust for AI reference architecture, workshop, assessment tool, and new patterns and practices articles to help you improve your security posture.

See you at RSAC

If you’re joining the global security community in San Francisco for RSAC 2026 Conference, we invite you to connect with us. Join us at our Microsoft Pre-Day event and stop by our booth at the RSAC Conference North Expo (N-5744) to explore our latest innovations across Microsoft Agent 365, Microsoft Defender, Microsoft Entra, Microsoft Purview, Microsoft Sentinel, and Microsoft Security Copilot, and see firsthand how we can help your organization secure agents, secure your foundations, and defend with agents and experts. The future of security is ambient, autonomous, and built for the era of AI. Let’s build it together.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Based on Microsoft first-party telemetry measuring agents built with Microsoft Copilot Studio or Microsoft Agent Builder that were in use during the last 28 days of November 2025.

2Microsoft Fiscal Year 2026 First Quarter Earnings Conference Call and Microsoft Fiscal Year 2026 Second Quarter Earnings Conference Call

The post Secure agentic AI end-to-end appeared first on Microsoft Security Blog.


AI Won’t Replace Security Tools – It’s Helping Them Prioritize Biggest Threats


For Mackenzie Jackson (Developer and Security Advocate, Aikido Security) modern security is a nonstop game of whack-a-mole, with alerts and vulnerabilities keeping teams busy putting out fires instead of preventing them.

But that chaos of cybersecurity is familiar territory for him: he investigates attacks and helps teams turn those findings into actionable steps.

But strip away the complexity, and his advice on security is surprisingly simple:

One of the biggest areas for smaller teams to focus on is simply stopping the bleeding.

You don’t need a flawless system, you need to regain control, and by implementing proactive measures companies neutralize threats before they ever touch production. It’s not a complete solution, but it’s a necessary foundation.

Cybersecurity rests on two pillars: people and access

From the outside, cybersecurity looks like a web of interconnected threats and, technically, it is. But when incidents are investigated, the story tends to collapse into something much more… human:

When you actually investigate a breach, what happened? Well, someone was probably phished, their credentials stolen, and that gave access to a system.

From there, attackers escalate, finding additional credentials, uncovering secrets, moving laterally through systems. Despite all the layers of technical complexity, most breaches still come down to two variables: people and access. This doesn’t make security easy, but it does make it clearer.

Brakes make race cars faster – and security works the same way

One of the oldest problems in cybersecurity is organizational: How do you convince leadership to invest in something that, ideally, prevents things from happening?

Fear is the usual tactic: you talk about reputational damage, financial loss, worst-case scenarios. It works, but only to a point, which is why Jackson suggests a different framing:

Brakes make race cars go faster.

It’s a counterintuitive analogy, but an effective one: without brakes, speed becomes dangerous. With them, drivers can push harder, take sharper turns, and move faster with confidence. Security, in this sense, is an enabler:

If we build security now, we can innovate faster… establish your brakes so that you can go faster with confidence.

The alternative, adding security later under pressure from compliance or customer demands, almost always slows teams down.

Security tools are here to stay, but AI gives them context

The arrival of AI introduced a pattern: urgency first, understanding later.

After tools like GPT entered the mainstream, companies rushed to integrate AI into their security products. But much of that early adoption, Jackson suggests, was surface-level. The real value of AI lies elsewhere:

AI is a terrible scanner… but it’s great at understanding context.

Traditional security tools are deterministic, which is why they answer yes-or-no questions: Is there a vulnerability? Does this code contain a known issue? AI, by contrast, is non-deterministic. It doesn’t always give the same answer twice, which makes it unreliable for detection but powerful for interpretation:

If you give it vulnerabilities and ask how severe this is, how exploitable it is, that’s where AI becomes incredibly useful.

In other words, AI doesn’t replace security tools. It complements them, helping teams prioritize what actually matters.

AI doesn’t make attackers smarter, it makes attacks easier

So if AI isn’t fundamentally changing how attacks work, what is it changing? Scale.

AI has given script kiddies superpowers.

This phrase captures the shift precisely: AI doesn’t necessarily make attackers more skilled, it makes attacks easier to execute, faster to launch, and accessible to a much larger pool of people. But the core mechanics of attacks remain the same:

It’s not moving the bar up… it’s changing the scale.

And that, perhaps, is the most important takeaway. Because if the nature of attacks hasn’t fundamentally changed, neither has the foundation of defense. Good security hygiene. Strong access control. Protecting the software development lifecycle, Jackson points out.

The tools may evolve. The threats may accelerate. But the principles still hold.

The post AI Won’t Replace Security Tools – It’s Helping Them Prioritize Biggest Threats appeared first on ShiftMag.


Beyond Code Review


Not that long ago, we were resigned to the idea that humans would need to inspect every line of AI-generated code. We’d do it personally, code reviews would always be part of a serious software practice, and the ability to read and review code would become an even more important part of a developer’s skillset. At the same time, I suspect we all knew that was untenable, that AI would quickly generate much more code than humans could reasonably review. Understanding someone else’s code is harder than understanding your own, and understanding machine-generated code is harder still. At some point—and that point comes fairly early on—all the time you saved by letting AI write your code is spent reviewing it. It’s a lesson we’ve learned before; it’s been decades since anyone except for a few specialists needed to inspect the assembly code generated by a compiler. And, as Kellan Elliott-McCrea has written, it’s not clear that code review has ever justified the cost. While sitting around a table inspecting lines of code might catch problems of style or poorly implemented algorithms, code review remains an expensive solution to relatively minor problems.

With that in mind, specification-driven development (SDD) shifts the emphasis from review to verification, from prompting to specification, and from testing to still more testing. The goal of software development isn’t code that passes human review; it’s systems whose behavior lives up to a well-defined specification that describes what the customer wants. Finding out what the customer needs and designing an architecture to meet those needs requires human intelligence. As Ankit Jain points out in Latent Space, we need to make the transition from asking whether the code is written correctly to asking whether we’re solving the right problem. Understanding the problem we need to solve is part of the specification process—and it’s something that, historically, our industry hasn’t done well.

Verifying that the system actually performs as intended is another critical part of the software development process. Does it solve the problem as described in the specification? Does it meet the requirements for what Neal Ford calls “architectural characteristics” or “-ilities”: scalability, auditability, performance, and many other characteristics that are embodied in software systems but that can rarely be inferred from looking at the code, and that AI systems can’t yet reason about? These characteristics should be captured in the specification. The focus of the software development process moves from writing code to determining what the code should do and verifying that it indeed does what it’s supposed to do. It moves from the middle of the process to the beginning and the end. AI can play a role along the way, but specification and verification are where human judgment is most important.

Want Radar delivered straight to your inbox? Join us on Substack. Sign up here.

Drew Breunig and others point out that this is inherently a circular process, not a linear one. A specification isn’t something you write at the start of the process and never touch again. It needs to be updated whenever the system’s desired behavior changes: whenever a bug fix results in a new test, whenever users clarify what they want, whenever the developers understand the system’s goals more deeply. I’m impressed with how agile this process is. It is not the agile of sprints and standups but the agile of incremental development. Specification leads to planning, which leads to implementation, which leads to verification. If verification fails, we update the spec and iterate. Drew has built Plumb, a command line tool that can be plugged into Git, to support an automated loop through specification and testing. What distinguishes Plumb is its ability to help software developers look at the decisions that resulted in the current version of the software: diffs, of course, but also conversations with AI, the specifications, the plans, and the tests. As Drew says, Plumb is intended as an inspiration or a starting point, and it’s clearly missing important features—but it’s already useful.

Can SDD replace code review? Probably; again, code review is an expensive way to do something that may not be all that useful in the long run. But maybe that’s the wrong question. If you don’t listen carefully, SDD sounds like a reinvention of the waterfall process: a linear drive from writing a detailed spec to burning thousands of CDs that are stored in a warehouse. We need to listen to SDD itself to ask the right questions: How do we know that a software system solves the right problem? What kinds of tests can verify that the system solves the right problem? When is automated testing inappropriate, and when do we need human engineers to judge a system’s fitness? And how can we express all of that knowledge in a specification that leads a language model to produce working software?

We don’t place as much value in specifications as we did in the last century; we tend to see spec writing as an obsolete ceremony at the start of a project. That’s unfortunate, because we’ve lost a lot of institutional knowledge about how to write good, detailed specifications. The key to making specifications relevant again is realizing that they’re the start of a circular process that continues through verification. The specification is the repository for the project’s real goals: what it’s supposed to do and why—and those goals necessarily change during the course of a project. A specification-driven development loop that runs through testing—not just unit testing but fitness testing, acceptance testing, and human judgment about the results—lays the groundwork for a new kind of process in which humans won’t be swamped by reviewing AI-generated code.




Dropdowns Inside Scrollable Containers: Why They Break And How To Fix Them Properly

1 Share

The scenario is almost always the same: a data table inside a scrollable container. Every row has an action menu, a small dropdown with options like Edit, Duplicate, and Delete. You build it, it seems to work perfectly in isolation, and then someone puts it inside that scrollable div and things fall apart. I’ve seen this exact bug in three different codebases; the container, the stack, and the framework were all different, but the bug was identical.

The dropdown gets clipped at the container’s edge. Or it shows up behind content that should logically be below it. Or it works fine until the user scrolls, and then it drifts. You reach for z-index: 9999. Sometimes it helps, but other times it does absolutely nothing. That inconsistency is the first clue that something deeper is happening.

The reason it keeps coming back is that three separate browser systems are involved, and most developers understand each one on its own but never think about what happens when all three collide: overflow, stacking contexts, and containing blocks.

Once you understand how all three interact, the failure modes stop feeling random. In fact, they become predictable.

The Three Things Actually Causing This

Let’s look at each of those items in detail.

The Overflow Problem

When you set overflow: hidden, overflow: scroll, or overflow: auto on an element, the browser will clip anything that extends beyond its bounds, including absolutely positioned descendants.

.scroll-container {
  overflow: auto;
  height: 300px;
  /* This will clip the dropdown, full stop */
}

.dropdown {
  position: absolute;
  /* Doesn't matter -- still clipped by .scroll-container */
}

That surprised me the first time I ran into it. I’d assumed position: absolute would let an element escape a container’s clipping. It doesn’t.

In practice, that means an absolutely positioned menu can be cut off by any ancestor that has a non-visible overflow value, even if that ancestor isn’t the menu’s containing block. Clipping and positioning are separate systems. They just happen to collide in ways that look completely random until you understand both.
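When debugging, a quick sanity check is whether the menu’s rect actually extends past the container’s bounds. Here’s a minimal sketch, assuming you already have both rects from getBoundingClientRect(); the helper name is mine, not from any library:

```javascript
// Returns true if menuRect extends past containerRect on any side --
// i.e., the part that a non-visible overflow value would clip.
// Both arguments are plain { top, right, bottom, left } objects,
// such as those returned by getBoundingClientRect().
function escapesContainer(menuRect, containerRect) {
  return (
    menuRect.top < containerRect.top ||
    menuRect.left < containerRect.left ||
    menuRect.bottom > containerRect.bottom ||
    menuRect.right > containerRect.right
  );
}
```

If this returns true while the dropdown is open, some ancestor with a non-visible overflow is your culprit; walk up the tree checking getComputedStyle(el).overflow to find it.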

The most reliable escape hatch is to render the menu outside the scroll container entirely, into document.body, so no ancestor can clip it. Here’s a React example using createPortal:

import { createPortal } from 'react-dom';
import { useState, useEffect } from 'react';

function Dropdown({ anchorRef, isOpen, children }) {
  const [position, setPosition] = useState({ top: 0, left: 0 });

  useEffect(() => {
    if (isOpen && anchorRef.current) {
      const rect = anchorRef.current.getBoundingClientRect();
      setPosition({
        top: rect.bottom + window.scrollY,
        left: rect.left + window.scrollX,
      });
    }
  }, [isOpen, anchorRef]);

  if (!isOpen) return null;

  return createPortal(
    <div
      id="dropdown-demo"
      role="menu"
      className="dropdown-menu"
      style={{ position: 'absolute', top: position.top, left: position.left }}
    >
      {children}
    </div>,
    document.body
  );
}
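One gap in the portal example above: the position is computed once, when the menu opens, so the menu drifts if anything scrolls while it’s open. A sketch of one fix is to isolate the coordinate math into a pure helper (the name is mine) and re-run it from scroll and resize listeners:

```javascript
// Pure helper: page coordinates for a menu anchored to the bottom-left
// corner of its trigger. rect is the trigger's viewport rect (as from
// getBoundingClientRect()); scrollX/scrollY are the page scroll offsets.
function menuPosition(rect, scrollX, scrollY) {
  return { top: rect.bottom + scrollY, left: rect.left + scrollX };
}
```

Re-running this from a capturing scroll listener — window.addEventListener('scroll', update, true) — catches scrolls inside nested containers as well as page scrolls, which a non-capturing listener on window would miss.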

And, of course, we can’t ignore accessibility. Elements rendered away from their trigger, whether portaled or fixed, must still be keyboard-reachable. If the focus order doesn’t naturally move into the dropdown, you’ll need to manage focus in code. It’s also worth checking that it doesn’t sit over other interactive content with no way to dismiss it. That one bites you in keyboard testing.

CSS Anchor Positioning: Where I Think This Is Heading

CSS Anchor Positioning is the direction I’m most interested in right now. I wasn’t sure how much of the spec was actually usable when I first looked at it. It lets you declare the relationship between a dropdown and its trigger directly in CSS, and the browser handles the coordinates.

.trigger {
  anchor-name: --my-trigger;
}

.dropdown-menu {
  position: absolute;
  position-anchor: --my-trigger;
  top: anchor(bottom);
  left: anchor(left);
  position-try-fallbacks: flip-block, flip-inline;
}

The position-try-fallbacks property is what makes this worth using over a manual calculation. The browser tries alternative placements before giving up, so a dropdown at the bottom of the viewport automatically flips upward instead of getting cut off.
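For browsers without anchor positioning, the same flip-block behavior can be sketched as a small pure function. This is a hypothetical helper of my own, not the browser algorithm; a real implementation would also clamp to the viewport and handle horizontal flipping:

```javascript
// Decide vertical placement: below the trigger if the menu fits,
// otherwise above if THAT fits; fall back to below when neither fits.
// All inputs are viewport-relative pixel values.
function verticalPlacement(triggerTop, triggerBottom, menuHeight, viewportHeight) {
  const fitsBelow = triggerBottom + menuHeight <= viewportHeight;
  const fitsAbove = triggerTop - menuHeight >= 0;
  if (fitsBelow || !fitsAbove) {
    return { placement: 'below', top: triggerBottom };
  }
  return { placement: 'above', top: triggerTop - menuHeight };
}
```

This is exactly the kind of bookkeeping position-try-fallbacks lets you delete.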

Browser support is solid in Chromium-based browsers and growing in Safari. Firefox needs a polyfill. The @oddbird/css-anchor-positioning package covers the core spec. I’ve hit layout edge cases with it that required fallbacks I didn’t anticipate, so treat it as a progressive enhancement or pair it with a JavaScript fallback for Firefox.

In short, promising but not universal yet. Test in your target browsers.

And as far as accessibility is concerned, declaring a visual relationship in CSS doesn’t tell the accessibility tree anything. aria-controls, aria-expanded, aria-haspopup — that part is still on you.

Sometimes The Fix Is Just Moving The Element

Before reaching for a portal or making coordinate calculations, I always ask one question first: Does this dropdown actually need to live inside the scroll container?

If it doesn’t, moving the markup to a higher-level wrapper eliminates the problem entirely, with no JavaScript and no coordinate calculations.

This isn’t always possible. If the button and dropdown are encapsulated in the same component, moving one without the other means rethinking the whole API. But when you can do it, there’s nothing to debug. The problem just doesn’t exist.

What Modern CSS Still Doesn’t Solve

CSS has come a long way here, but there are still places it lets you down.

The position: fixed and transform issue is still there: a transformed ancestor becomes the containing block for its fixed-position descendants. That behavior is in the spec intentionally, which means no CSS workaround exists. If you’re using an animation library that wraps your layout in a transformed element, you’re back to needing portals or anchor positioning.

CSS Anchor Positioning is promising, but new. As mentioned earlier, Firefox still needs a polyfill at the time I’m writing this, and the polyfill has its own edge cases. If you need consistent behavior across all browsers today, you’re still reaching for JavaScript for the tricky parts.

The addition that has actually changed my workflow is the HTML Popover API, now available in all modern browsers. Elements with the popover attribute render in the browser’s top layer, above everything, with no z-index wrangling or portal needed.

<button popovertarget="dropdown-demo">Open</button>
<div id="dropdown-demo" popover role="menu">Popover content</div>

Escape handling, dismiss-on-click-outside, and solid accessibility semantics come free for things like tooltips, disclosure widgets, and simple overlays. It’s the first tool I reach for now.

That said, it doesn’t solve positioning. It solves layering. You still need anchor positioning or JavaScript to align a popover to its trigger. The Popover API handles the layering. Anchor positioning handles the placement. Used together, they cover most of what you’d previously reach for a library to do.
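Combined, a sketch looks like this (the ids and anchor name are hypothetical):

```html
<button id="open-menu" popovertarget="menu-pop">Open menu</button>
<div id="menu-pop" popover>Menu content</div>

<style>
  /* Popover handles the top layer; anchor positioning handles placement. */
  #open-menu {
    anchor-name: --menu-trigger;
  }
  #menu-pop {
    position-anchor: --menu-trigger;
    inset: auto; /* override the UA's default popover insets */
    margin: 0;
    top: anchor(bottom);
    left: anchor(left);
    position-try-fallbacks: flip-block;
  }
</style>
```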

A Decision Guide For Your Situation

After going through all of this the hard way, here’s how I actually think about the choice now.

  • Use a portal.
    I’d use this when the trigger lives deep in nested scroll containers. I used this pattern for table action menus and paired it with focus restoration and accessibility checks. It’s the most reliable option, but budget time for the extra wiring.
  • Use fixed positioning.
    This is for when you’re in vanilla JavaScript or a lightweight framework and can verify no ancestor applies transforms or filters. It’s simple to set up and simple to debug, as long as that one constraint holds.
  • Use CSS Anchor Positioning.
    Reach for this when your browser support allows it. If Firefox support is required, pair it with the @oddbird polyfill. This is where the platform is ultimately heading and will eventually become your go-to approach.
  • Restructure the DOM.
    Use this when the architecture permits it, and you want zero runtime complexity. I believe it’s likely the most underrated option.
  • Combine patterns.
    Do this when you want anchor positioning as your primary approach, paired with a JavaScript fallback for unsupported browsers. Or a portal for DOM placement paired with getBoundingClientRect() for coordinate accuracy.

Conclusion

I used to treat this bug as a one-off issue — something to patch and move on from. But once I sat with it long enough to understand all three systems involved — overflow clipping, stacking contexts, and containing blocks — it stopped feeling random. I could look at a broken dropdown and immediately trace which ancestor was responsible. That shift in how I read the DOM was the real takeaway.

There’s no single right answer. What I reached for depended on what I could control in the codebase: portals when the ancestor tree was unpredictable; fixed positioning when it was clean and simple; moving the element when nothing was stopping me; and anchor positioning now, where I can.

Whatever you end up choosing, don’t treat accessibility as the last step. In my experience, that’s exactly when it gets skipped. The ARIA relationships, the focus management, the keyboard behavior — those aren’t polish. They’re part of what makes the thing actually work.

Check out the full source code in my GitHub repo.

Further Reading

These are the references I kept coming back to while working through this:



Read the whole story
alvinashcraft
2 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Your AI agent can now create, edit, and manage content on WordPress.com

Last October, we introduced MCP support on WordPress.com, giving AI agents like Claude, ChatGPT, OpenClaw, and Cursor a window into your site’s content, analytics, and settings. 

Thousands of you connected your favorite AI tools, asked questions about your sites, and saved hours of dashboard diving.

But you told us you wanted more. Reading your site data was useful, but you wanted your agent to be able to actually do things for you!

That’s why we added write capabilities, turning your AI agent into your most versatile WordPress collaborator.

From reading to writing

With write capabilities, your AI agent can now:

  • Draft and publish blog posts: Provide copy or describe what you want to publish, and your AI agent can create the post directly on your site.
  • Build and update pages: Create landing pages, About pages, and more, complete with your site’s design specs and block patterns.
  • Manage comments: Approve, reply to, or clean up comments without ever opening your dashboard.
  • Organize your content: Create, rename, and restructure categories and tags across your site.
  • Update media metadata: Fix alt text, captions, and titles for better accessibility and SEO.

And all of this happens through natural conversation. Just tell your AI agent what you want to do, and it handles the rest.
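Under the hood, MCP clients invoke operations like these as JSON-RPC tools/call requests. A rough sketch of what a post-creation call might look like on the wire — the tool name and argument shape here are illustrative, not the documented WordPress.com schema:

```javascript
// Build an MCP tools/call request for a hypothetical post-creation tool.
function buildCreatePostRequest(id, title, content) {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: {
      name: "create_post", // illustrative tool name
      arguments: {
        title,
        content,
        status: "draft", // new posts default to drafts
      },
    },
  };
}

const request = buildCreatePostRequest(1, "Hello", "My first agent-written post");
```

The agent discovers the real tool names and schemas via the server’s tools/list response, so you never write this payload by hand.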

19 new abilities, the same interface

These new capabilities add 19 new writing abilities across six content types: posts, pages, comments, categories, tags, and media. Besides enabling the new tools in your WordPress.com MCP dashboard, there’s nothing new to install to get started.

Here’s a taste of what you can do with your AI agent:

  • “I just finished writing this post. Publish it as a draft, categorize it as ‘Travel,’ add relevant tags, and write me a meta description under 160 characters.”
  • “I want to start publishing recipes on my blog. Set up a ‘Recipes’ category with subcategories for Breakfast, Lunch, Dinner, and Desserts.”
  • “Create an About page with sections for our team, mission, and contact info.”
  • “I want to add a testimonials section to my About page. Find a pattern in my theme that works for that and set it up as a draft — I’ll supply the actual quotes.”
  • “Approve all the pending comments on my latest post and reply to the one asking about pricing.”
  • “Add a ‘Tutorials’ category under ‘Resources’ and tag my latest three posts with ‘Beginner.'”
  • “Audit my website for Accessibility and create a report.”
  • “Find all images in my media library that are missing alt text and suggest some based on the filename or attachment context.”

Your AI agent discovers the available operations, figures out what’s needed, and walks you through the process — confirming every step before making changes.

Design-aware updates

One of the most powerful aspects of the write capabilities is the integration with your site’s theme. Before creating content, your AI agent can search your theme’s design and understand its colors, fonts, spacing, and block patterns.

This results in outputs that inherit your site’s design system and adapt automatically when you change themes.

Safety you can trust

We know that giving an AI agent the ability to modify your site is a big step. That’s why we’ve built this with multiple layers of protection:

Every change requires your approval. Before creating, updating, or deleting anything, your AI agent describes exactly what it plans to do and asks for your explicit confirmation. Nothing happens without approval from you.

New posts default to drafts. When your AI agent creates a post or page, it starts as a draft, giving you a chance to review before anything goes live. If you update a published post, your agent warns you that changes will be visible immediately.

Deletion is reversible (where possible). Deleting posts, pages, comments, or media moves them to the trash, where they’re recoverable for 30 days. For categories and tags — which WordPress doesn’t support trashing — your agent explicitly warns that deletion is permanent and requires additional confirmation. 

All changes are visible through your Activity Log. See all of your AI agent’s activity in your site’s dashboard (or just ask your AI agent for a list of changes it has made).

WordPress permissions are enforced. The write capabilities respect the same user role permissions as the rest of WordPress.com. An Editor can create and edit posts, but can’t change site settings. A Contributor can draft posts but can’t publish. Your existing access controls are automatically carried over.

You choose what’s enabled. Every operation, from creating posts to updating media, has its own toggle in your MCP settings. Enable only what you need on the sites you need it, and leave everything else off.

Get started

Write capabilities are available today on all WordPress.com paid plans. Here’s how to start:

  1. Enable MCP on your account at wordpress.com/me/mcp.
  2. Toggle on the write capabilities you want to use.
  3. Connect your AI client: Claude, Cursor, ChatGPT, or any MCP-enabled tool.
  4. Start creating. 

For the full list of available operations and technical details, check out our MCP Tools Reference and prompt examples to spark your creativity.

When we launched MCP on WordPress.com, we said that understanding your site shouldn’t mean piecing together insights from half a dozen places. Now, managing your site shouldn’t mean it either.

Your AI agent is ready. What will you create?


Look into the future of the web platform

Last week I spoke at the very lovely Web Day Out in Brighton. My talk was about browser support, based on the work I’ve done over the past almost five years on Baseline. I ran through the various things you need to consider when deciding whether to use features that don’t meet your Baseline target. For example, if you are using Baseline Widely available as a target, how do you decide whether it’s “safe” to use a feature that’s still Baseline Newly available? One of the factors to consider, especially when planning a new project, is what will be part of your Baseline come launch day?

Newly available is the point of interoperability: a feature is part of Baseline the minute the last of the core browsers ships a stable version that includes it. From that moment the clock starts ticking until 30 months have passed and the feature becomes Baseline Widely available. For most people that’s the point at which they can use the feature without worrying about the fallback experience.

The data behind Baseline is what tells you if something is Newly or Widely available. It also gives you a new power: you can now look into the future of the web platform. If you are planning a project today, what’s the launch date? For a brand new site or application, that might be six months to a year from now. Rather than tying yourself to what’s Widely available today, you can probably include anything that will be Widely available on that date. The same is true of picking a different Baseline target based on a Baseline year. If Baseline 2023 makes sense today, and the new site will launch in nine months to a year, perhaps target Baseline 2024 instead. Moving a year forward would give you features like Declarative Shadow DOM, @starting-style, and AVIF.
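The 30-month window makes the projection mechanical. A small sketch of the arithmetic:

```javascript
// A feature becomes Widely available 30 months after it became
// Newly available (when the last core browser shipped it).
function widelyAvailableDate(newlyAvailable) {
  const d = new Date(newlyAvailable.getTime());
  d.setMonth(d.getMonth() + 30);
  return d;
}

// e.g. a feature that became Newly available in mid-January 2023
// crosses into Widely available in July 2025.
const widely = widelyAvailableDate(new Date(2023, 0, 15));
```

Run that against your launch date and you know which Newly available features will have crossed the line by the time users see the site.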

Whenever I talk about new features in CSS, people immediately ask when it will be available everywhere, or when it will be safe to use. The data gives you a way to understand that, and also to talk about the decisions with stakeholders. I think this is one of the most exciting things to come from this project, as we’ve never had this kind of future-facing view before.
