An open licensing standard that aims to make AI companies pay for the content they vacuum up across the web is now an official specification. Really Simple Licensing 1.0 - or RSL for short - gives publishers the ability to dictate licensing and compensation rules to the web crawlers that visit their sites.
The RSL Collective announced the standard in September with backing from Yahoo, Ziff Davis, and O'Reilly Media. It's an expansion of the robots.txt file, which outlines the parts of a website a web crawler can access. Though RSL alone can't block AI scrapers that don't pay for a license, the web infrastructure providers that support the s …
Just lasso the object you want to edit, and click Erase to remove it or Isolate to separate it from the background.
Figma is launching three new AI-powered creative tools to help users edit their images without jumping to another platform. The new tools are available in Figma Design and Figma Draw, and can be used to quickly remove objects from an image, isolate objects so they can be repositioned, and extend images beyond their previous dimensions.
The Erase object and Isolate object tools are designed to work alongside Figma's existing lasso tool, which allows users to draw around specific sections of the image they want to edit. Any objects or people within these selections can then be instantly erased from the image while filling in the background be …
As email threats grow more sophisticated and layered security architectures become more common, organizations need clear, data-driven insights to evaluate how their security solutions perform together. Benchmarking plays a critical role in helping security leaders understand not just individual product efficacy, but how integrated solutions contribute to overall protection.
Microsoft’s commitment to transparency continues with the release of our second email security benchmarking report, informed by valuable customer and partner feedback. Continuing our prior benchmarking analysis, this testing relies on real-world email threats observed across the Microsoft ecosystem, rather than synthetic data or artificial testing environments. The study compares environments protected exclusively by Microsoft Defender with those using a Secure Email Gateway (SEG) positioned in front of Defender, as well as environments where Integrated Cloud Email Security (ICES) solutions add a secondary layer of detection after Defender. In addition, the benchmarking analysis for ICES vendors now includes malicious catch by Defender’s zero-hour auto purge, a post-delivery capability that removes additional malicious emails after filtering is completed by any ICES solution in place, as shown in Figure 1. Throughout this process, we maintain the highest standards of security and privacy to help ensure all data is aggregated and anonymized, consistent with practices used in the Microsoft Digital Defense Report 2025.
In this second report, we updated our testing methodology based on discussions with partners and a deeper understanding of their architectures, to provide a more accurate and transparent view of layered email protection. First, we addressed integration patterns such as journaling and connector-based reinjection, which previously could cause the same cyberthreat to appear as detected by both Microsoft Defender and an ICES vendor even when Defender ultimately blocked it. These scenarios risked inflating or misattributing performance metrics, so our revised approach corrects this. Second, we now include Microsoft Defender zero-hour auto purge post-delivery detections alongside ICES vendor actions. This addition highlights cyberthreats that ICES vendors missed but were later remediated by Microsoft Defender, to help ensure customers see the full picture of real-world protection. Together, these changes make the benchmarking results more representative of how layered defenses operate in practice.
ICES vendor benchmarking
Microsoft’s quarterly analysis shows that layering ICES solutions with Microsoft Defender continues to provide a benefit in reducing marketing and bulk email, with an average improvement of 9.4% across specific vendors. This helps minimize inbox clutter and improves user productivity in environments where promotional noise is a concern. For filtering of spam and malicious messages, the incremental gains remain modest, averaging 1.65% and 0.5% respectively.
When looking only at the subset of malicious messages that reached the inbox, Microsoft Defender’s zero-hour auto purge on average removed 45% of malicious mail post-delivery, while ICES vendors on average contributed 55% of the post-delivery filtering of malicious mail. Per-vendor details can be found in Figure 3. This highlights why post-delivery remediation is essential, even in a layered approach, for real-world protection.
Figure 3. Post-delivery malicious catch by Microsoft Defender.
SEG vendor benchmarking
For the SEG vendor benchmarking metrics, a cyberthreat was considered “missed” if it was not detected pre-delivery, or if it was not removed shortly after delivery (post-delivery).
Defender missed fewer threats in this study compared to other solutions, consistent with trends observed in our prior report.
In the face of increasingly complex email threats, clarity and transparency remain essential for informed decision-making. Our goal is to provide customers with actionable insights based on real-world data, so security leaders can confidently evaluate how layered solutions perform together.
We’ve listened to feedback from customers and partners and refined our methodology to better reflect real-world deployment patterns. These updates help ensure that vendors are more accurately represented than before, and that benchmarking results are fair, comprehensive, and useful for planning.
We will continue publishing quarterly benchmarking updates and evolving our approach in collaboration with our customers and partners, so benchmarking remains a trusted resource for optimizing email security strategies. Access the benchmarking site for more information.
To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.
Anyone who uses AI systems knows the frustration: a prompt is given, the response misses the mark, and the cycle repeats. This trial-and-error loop can feel unpredictable and discouraging. To address this, we are excited to introduce Promptions (prompt + options), a UI framework that helps developers build AI interfaces with more precise user control.
Its simple design makes it easy to integrate into any setting that relies on added context, including customer support, education, and medicine. Promptions is available under the MIT license on Microsoft Foundry Labs and GitHub.
Background
Promptions builds on our research, “Dynamic Prompt Middleware: Contextual Prompt Refinement Controls for Comprehension Tasks.” This project examined how knowledge workers use generative AI when their goal is to understand rather than create. While much public discussion centers on AI producing text or images, understanding involves asking AI to explain, clarify, or teach—a task that can quickly become complex. Consider a spreadsheet formula: one user may want a simple syntax breakdown, another a debugging guide, and another an explanation suitable for teaching colleagues. The same formula can require entirely different explanations depending on the user’s role, expertise, and goals.
A great deal of complexity sits beneath these seemingly simple requests. Users often find that the way they phrase a question doesn’t match the level of detail the AI needs. Clarifying what they really want can require long, carefully worded prompts that are tiring to produce. And because the connection between natural language and system behavior isn’t always transparent, it can be difficult to predict how the AI will interpret a given request. In the end, users spend more time managing the interaction itself than understanding the material they hoped to learn.
Identifying how users want to guide AI outputs
To explore why these challenges persist and how people can better steer AI toward customized results, we conducted two studies with knowledge workers across technical and nontechnical roles. Their experiences highlighted important gaps that guided Promptions’ design.
Our first study involved 38 professionals across engineering, research, marketing, and program management. Participants reviewed design mock-ups that provided static prompt-refinement options—such as length, tone, or “start with”—for shaping AI responses.
Although these static options were helpful, they couldn’t adapt to the specific formula, code snippet, or text the participant was trying to understand. Participants also wanted direct ways to customize the tone, detail, or format of the response without having to type instructions.
Why dynamic refinement matters
The second study tested prototypes in a controlled experiment. We compared the static design from the first study, called the “Static Prompt Refinement Control” (Static PRC), against a “Dynamic Prompt Refinement Control” (Dynamic PRC) with features that responded to participants’ feedback. Sixteen technical professionals familiar with generative AI completed six tasks, spanning code explanation, understanding a complex topic, and learning a new skill. Each participant tested both systems, with task assignments balanced to ensure fair comparison.
Comparing Dynamic PRC to Static PRC revealed key insights into how dynamic prompt-refinement options change users’ sense of control and exploration and how those options help them reflect on their understanding.
Static prompt refinement
Static PRC offered a set of pre‑selected controls (Figure 1) identified in the initial study. We expected these options to be useful across many types of explanation-seeking prompts.
Figure 1. The Static PRC interface.
Dynamic prompt refinement
We built the Dynamic PRC system to automatically produce prompt options and refinements based on the user’s input, presenting them in real time so that users could adjust these controls and guide the AI’s responses more precisely (Figure 2).
Figure 2. Interaction flow in the Dynamic PRC system. (1) The user asks the system to explain a long Excel formula. (2) Dynamic PRC generates refinement options: Explanation Detail Level, Focus Areas, and Learning Objectives. (3) The user modifies these options. (4) The AI returns an explanation based on the selected options. (5) In the session chat panel, the user adds a request to control the structure or format of the response. (6) Dynamic PRC generates new option sets based on this input. (7) The AI produces an updated explanation reflecting the newly applied options.
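To make the option format concrete, here is a minimal TypeScript sketch of what a generated option set might look like. The type and field names are hypothetical, loosely modeled on the Figure 2 example rather than taken from the actual Promptions schema.

```typescript
// Hypothetical shape for a dynamically generated refinement option.
// Names and values mirror the Figure 2 example; this is not the actual Promptions schema.
type ControlKind = "radio" | "checkbox" | "text";

interface RefinementOption {
  label: string;               // e.g. "Explanation Detail Level"
  kind: ControlKind;           // how the control is rendered in the UI
  choices?: string[];          // candidate values for radio/checkbox controls
  value?: string | string[];   // the user's current selection
}

// Example option set generated for "Explain this long Excel formula"
const excelFormulaOptions: RefinementOption[] = [
  {
    label: "Explanation Detail Level",
    kind: "radio",
    choices: ["Overview", "Step by step", "Deep dive"],
    value: "Step by step",
  },
  {
    label: "Focus Areas",
    kind: "checkbox",
    choices: ["Syntax", "Nested logic", "Common errors"],
    value: ["Nested logic"],
  },
  {
    label: "Learning Objectives",
    kind: "text",
    value: "Be able to adapt the formula to a new worksheet",
  },
];
```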
Participants consistently reported that dynamic controls made it easier to express the nuances of their tasks without repeatedly rephrasing their prompts. This reduced the effort of prompt engineering and allowed users to focus more on understanding content than on managing the mechanics of phrasing.
Figure 3. Comparison of user preferences for Static PRC versus Dynamic PRC across key evaluation criteria.
Contextual options prompted users to try refinements they might not have considered on their own. This behavior suggests that Dynamic PRC can broaden how users engage with AI explanations, helping them uncover new ways to approach tasks beyond their initial intent. Beyond exploration, the dynamic controls prompted participants to think more deliberately about their goals. Options like “Learning Objective” and “Response Format” helped them clarify what they needed, whether guidance on applying a concept or step-by-step troubleshooting help.
Figure 4. Participant ratings comparing the effectiveness of Static PRC and Dynamic PRC
While participants valued Dynamic PRC’s adaptability, they also found it more difficult to interpret. Some struggled to anticipate how a selected option would influence the response, noting that the controls seemed opaque because the effect became clear only after the output appeared.
However, the overall positive response to Dynamic PRC showed us that Promptions could be broadly useful, leading us to share it with the developer community.
Technical design
Promptions works as a lightweight middleware layer that sits between the user and the underlying language model (Figure 5). It has two main components:
Option Module. This module reviews the user’s prompt and conversation history, then generates a set of refinement options. These are presented as interactive UI elements (radio buttons, checkboxes, text fields) that directly shape how the AI interprets the prompt.
Chat Module. This module produces the AI’s response based on the refined prompt. When a user changes an option, the response immediately updates, making the interaction feel more like an evolving conversation than a cycle of repeated prompts.
Figure 5. Promptions middleware workflow. (1) The Option Module reads the user’s prompt and conversation history and (2) generates prompt options. (3) These options are rendered inline by a dedicated component. (4) The Chat Module incorporates these refined options alongside the original prompt and history to produce a response. (5) When the user adjusts the controls, the refinements update and the Chat Module regenerates the response accordingly.
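As a rough illustration of this flow, the sketch below implements the two modules under some stated assumptions: `llm.complete` stands in for whatever chat-completion client the host application uses, and all function and type names are invented for illustration, not taken from the Promptions codebase.

```typescript
// Minimal sketch of the two-module middleware, under the assumptions stated above.
// `llm.complete` stands in for any chat-completion client that takes messages and returns text.
interface Message { role: "system" | "user" | "assistant"; content: string; }
// Compact re-declaration of the hypothetical RefinementOption shape sketched earlier.
interface RefinementOption { label: string; kind: string; choices?: string[]; value?: string | string[]; }

declare const llm: { complete(messages: Message[]): Promise<string> };

// Option Module: inspect the prompt and history, and ask the model to propose refinement options.
async function generateOptions(prompt: string, history: Message[]): Promise<RefinementOption[]> {
  const raw = await llm.complete([
    { role: "system", content: "Propose refinement options (label, kind, choices) as a JSON array for the user's request." },
    ...history,
    { role: "user", content: prompt },
  ]);
  return JSON.parse(raw) as RefinementOption[]; // sketch assumes the model returns valid JSON
}

// Chat Module: fold the selected options into the prompt and produce the response.
async function generateResponse(prompt: string, history: Message[], options: RefinementOption[]): Promise<string> {
  const refinements = options
    .filter(o => o.value !== undefined)
    .map(o => `${o.label}: ${Array.isArray(o.value) ? o.value.join(", ") : o.value}`)
    .join("\n");
  return llm.complete([
    ...history,
    { role: "user", content: `${prompt}\n\nApply these refinements:\n${refinements}` },
  ]);
}
```

Keeping option generation and response generation as separate calls is what allows the options to refresh in real time as the conversation evolves.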
Adding Promptions to an application
Promptions integrates easily into any conversational chat interface. Developers only need to add a component to display the options and connect it to the AI system. There’s no need to store data between sessions, which keeps implementation simple. The Microsoft Foundry Labs repository includes two sample applications, a generic chatbot and an image generator, that demonstrate this design in practice.
Promptions is well-suited for interfaces where users need to provide context but don’t want to write it all out. Instead of typing lengthy explanations, they can adjust the controls that guide the AI’s response to match their preferences.
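Continuing the sketch above, the integration itself can be as small as a single change handler that re-runs the Chat Module whenever a control changes; the handler name and render callback below are hypothetical.

```typescript
// Hypothetical glue code: regenerate the response whenever the user adjusts a control.
// Nothing is persisted between sessions, matching the design described above.
async function onOptionChange(
  prompt: string,
  history: Message[],
  options: RefinementOption[],
  render: (text: string) => void,
): Promise<void> {
  const reply = await generateResponse(prompt, history, options);
  render(reply); // update the chat view in place
}
```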
Questions for further exploration
Promptions raises important questions for future research. Key usability challenges include clarifying how dynamic options affect AI output and managing the complexity of multiple controls. Other questions involve balancing immediate adjustments with persistent settings and enabling users to share options collaboratively.
On the technical side, questions focus on generating more effective options, validating and customizing dynamic interfaces, gathering relevant context automatically, and supporting the ability to save and share option sets across sessions.
These questions, along with broader considerations of collaboration, ethics, security, and scalability, are guiding our ongoing work on Promptions and related systems.
In this recorded Live! 360 keynote, Mads Kristensen and Nik Karpinsky walk us through The Road to Visual Studio 2026 — a faster, smarter IDE built for professional C#, C++, and .NET developers. You’ll see how the team re-architected Visual Studio for performance, reduced everyday “paper cuts,” and integrated GitHub Copilot and AI agents in ways that feel natural to your existing workflows—not disruptive.
From faster startup times to AI-assisted debugging and modernizing older .NET apps, this session highlights how Visual Studio 2026 helps you deliver higher-quality code with less friction.
⌚Chapters:
00:59 — Visual Studio 2026 vision
06:16 — Removing the "paper-cuts"
07:30 — Smooth upgrade path
08:24 — Backwards-compatible extensions
09:20 — Modern Fluent UI with color themes
10:45 — Unified JSON-backed settings
12:00 — Quality of life changes
13:00 — Performance — faster and more responsive on the same hardware
19:33 — AI-assisted development with Copilot & agent tools
23:22 — Profiler agent diagnostics and performance optimizations
28:30 — Test & code review agents
30:30 — Modernization workflows & upgrading older .NET apps
34:44 — Decoupling the IDE and build tools
35:06 — Visual Studio Insiders, the newest features first
39:50 — Monthly updates & features
40:39 — Microsoft ecosystem updates