Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Microsoft says Copilot on Windows 11 is getting “Bye” command, MS 365 Copilot coming to Chrome as an extension

1 Share

Microsoft confirmed that it’s testing a “Bye” phrase for Copilot on Windows 11, particularly Microsoft 365 Copilot, which is pre-installed on all PCs whether or not you have a subscription. Microsoft 365 Copilot is also coming to Google Chrome as an extension, as if Copilot inside Edge were not enough.

Until now, you could only use “Hey Copilot” to open Copilot on Windows 11. But what if you want to end the session? You have to manually move the cursor and click the ‘X’ icon. That changes with the ‘Bye Copilot’ phrase.

Hey Copilot voice setting

In an update to the Microsoft 365 roadmap, the Windows giant confirmed that it’s testing the ‘Bye’ phrase for Copilot.

In our tests, Windows Latest observed that ‘Bye Copilot’ already works on our production PCs, so it’s likely the feature has already rolled out to some users.

In the roadmap, Microsoft also says “Bye Copilot” is coming to the Microsoft 365 Copilot app. It’s only a matter of time until these phrases are universally applied across the operating system. If you’re in a voice mode interaction with Copilot, you will be able to say “Bye Copilot” to exit the conversation.

“Now users can close a voice session on Windows by simply saying bye or goodbye when they want to close the voice session,” Microsoft noted in a roadmap update.

Microsoft says the feature could create a ‘hands-free’ experience for AI on Windows and, eventually, AI agents.

“This, paired with ‘Hey Copilot’ wake word, provides them with a complete hands-free experience for voice in Microsoft 365 Copilot on Windows devices,” Microsoft confirmed.

This is part of a bigger plan, which is to turn Windows into an agentic OS, whether you like it or not. The feature will begin rolling out to everyone in December 2025 for all PCs that have Copilot and the Microsoft 365 Copilot app installed. A subscription is not required to say ‘goodbye’ to Copilot.

What might the future of Windows look like?

When we connect the dots, we actually see where Windows is heading.

Copilot operator
Copilot Actions using Agent Workspace on Windows 11

At some point, you’ll be able to say “Hey Copilot” to open Copilot and start a conversation in which you can also trigger Copilot Actions. Copilot Actions then performs tasks on your PC by accessing your personal files and folders. For example, it’ll be able to browse the Downloads folder in File Explorer, read a selected PDF, and then create a presentation.

Once the ‘action’ is completed, you can say “Goodbye” to end the session.

Microsoft 365 Copilot is coming to Google Chrome

Copilot in Google Chrome

If you want to use Copilot in Google’s browser, you need to rely on the web version, but that might not be the best experience.

Microsoft is now building a new extension for Chrome called ‘Microsoft 365 Copilot’, and it brings Copilot Chat as well as Copilot Search directly into the browser.

We don’t know if it’ll be integrated in the browser’s address bar, as Microsoft’s extensions have previously tried to change the default search engine in Chrome. But you’ll be able to access Copilot chat from the extension menu. If you’re part of an organization, Microsoft says you can also access enterprise content.

Initially, Copilot’s extension for Chrome will be limited to asking questions, summarising webpages and offering access to AI-powered search. All of that suggests the extension will have read access to the website you are viewing, but we don’t know if it’ll also require access to browsing history.
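For context, a Chrome extension’s page access is declared in its manifest. A hypothetical Manifest V3 sketch of the kind of permissions involved (everything here is an assumption, not Microsoft’s actual manifest) might look like:

```json
{
  "manifest_version": 3,
  "name": "Microsoft 365 Copilot (illustrative)",
  "version": "0.1",
  "action": { "default_title": "Copilot" },
  "permissions": ["activeTab", "storage"],
  "host_permissions": ["https://*/*"]
}
```

`activeTab` alone would grant read access only to the page you invoke the extension on, while broad `host_permissions` (or the separate `history` permission) would be needed for anything wider, which is the distinction the paragraph above is getting at.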

The post Microsoft says Copilot on Windows 11 is getting “Bye” command, MS 365 Copilot coming to Chrome as an extension appeared first on Windows Latest

Read the whole story
alvinashcraft
8 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Amazon Tells Its Engineers: Use Our AI Coding Tool 'Kiro'

"Amazon suggested its engineers eschew AI code generation tools from third-party companies in favor of its own," reports Reuters, "a move to bolster its proprietary Kiro service, which it released in July, according to an internal memo viewed by Reuters."

In the memo, posted to Amazon's internal news site, the company said, "While we continue to support existing tools in use today, we do not plan to support additional third party AI development tools. As part of our builder community, you all play a critical role shaping these products and we use your feedback to aggressively improve them."

The guidance would seem to preclude Amazon employees from using other popular software coding tools like OpenAI's Codex, Anthropic's Claude Code, and those from startup Cursor. That is despite Amazon having invested about $8 billion into Anthropic and reaching a seven-year, $38 billion deal with OpenAI to sell it cloud-computing services.

"To make these experiences truly exceptional, we need your help," according to the memo, which was signed by Peter DeSantis, senior vice president of AWS utility computing, and Dave Treadwell, senior vice president of eCommerce Foundation. "We're making Kiro our recommended AI-native development tool for Amazon...."

In October, Amazon revised its internal guidance for OpenAI's Codex to "Do Not Use" following a roughly six-month assessment, according to a memo reviewed by Reuters. And Claude Code was briefly designated as "Do Not Use" before that was reversed following a reporter inquiry at the time. The article adds that Amazon "has been fighting a reputation that it is trailing competitors in development of AI tools as rivals like OpenAI and Google speed ahead..."

Read more of this story at Slashdot.


Python In The Age Of AI


Steam Next Fest games using AI? And Blender 5 is here!


Hello and Welcome, I’m your Code Monkey!

December starts tomorrow! Are you ready for it? This year has flown right by! I've simultaneously done so much and yet there was so much more left that I wanted to do. But let's not get ahead of ourselves, there's still a full month left in the year, so use that time to build something awesome!

I'm currently in the process of doing something I've wanted to do for ages: remastering all my tutorials for Unity 6! This is turning out to be quite a lot more work than I thought, but I'm really looking forward to upgrading the hundreds of tutorials I've done over the years and having them available for you to easily download and learn from!

Oh and if you need anything to make your projects a reality then there are still some Black Friday deals going on, the Asset Store has added some new deals, and I'm running my own Black Friday sale for just a few more days.

  • Game Dev: Next Fest AI Usage; s&box Open Source

  • Tech: Blender 5.0

  • Fun: Guess AI Art


Game Dev

Data about AI usage in Next Fest

AI is everywhere and people are finding more and more ways to use it, sometimes good and sometimes bad. A while ago, Steam introduced an AI disclosure that states on the game page whether the game uses AI and, if so, how.

Steam Next Fest has just wrapped up, and someone made an extremely detailed report on the games in the fest: how many use AI, and exactly how they use it.

There were about 3000 games included in the October Next Fest, and 507 (17%) of those used AI in some way.

This graph showcases how the AI was used, with the biggest use case being In Game Art (56%), followed by Marketing (26%), Voice Acting (12%), and Music (11%). However, most of the games that used AI art also made sure to note in the disclosure how the art was curated and refined by humans.

Interestingly, 53% of developers used AI in just one category, while 47% used it in multiple ways, meaning developers are finding AI usable across many different use cases, not just one single thing.

Code is one area that is likely widely underreported, since it's not really visible in the final product; according to this study, only 8% say they used AI for coding.

But of course what really matters is the final result. With AI being such a polarizing topic, many people (myself included) have wondered about the effect of this disclosure: do players look at it and actively avoid games with AI disclosures, or do they not care about it at all? I tried doing some research myself a while ago, and the answer seems to be that players don't care. Just like with assets, all they want are fun games to play.

A great recent example is ARC Raiders, the hot new game that people love. It's out on Steam, its AI disclosure reads "During the development process, we may use procedural- and AI-based tools to assist with content creation. In all such cases, the final product reflects the creativity and expression of our own development team.", and it has Very Positive reviews.

Also on this topic, there's a recent hot take from Tim Sweeney (Epic CEO) stating that the AI disclosure on Steam doesn't make sense, since in the future AI will be involved in the production of every single game.

I am always interested in seeing actual data to make better decisions so I really enjoyed this study with a ton of detail. I'd be curious to see a similar study in the February Next Fest to see if the trend is going up or down.


Affiliate

Black Friday CONTINUES! FREE Environment

Unity BLACK FRIDAY Sale is still ongoing! You can see everything on sale right here!

Top assets 50% OFF, and Flash Deals changing every day up to 95% OFF!

I made a video talking about my best recommendations from the sale. Lots of awesome stuff that will help you a lot!

The Publisher of the Week this time is Black Horizon Studios, a publisher of environments and tools.

Get the FREE Ultimate Nature Pack, a gorgeous environment featuring lots of trees and grass, as well as a snow scene and a nice oasis.

Get it HERE and use coupon BLACKHORIZON2025 at checkout to get it for FREE!

There’s a MASSIVE HumbleBundle with thousands of Realistic and Stylized environments at 99% OFF!

Contains both Unity and Unreal assets.

Get it HERE!


Game Dev

s&box Game Engine goes open source!

Have you played Garry's Mod? What about Rust? Those are excellent games that have been massively successful. The company behind them, Facepunch, has been using all that money to build an engine called s&box (read "sandbox") which they have just open sourced!

It really seems we are in a golden age of game engines. There are so many great engines, all of them free and some of them open source. More competition is always great for all game devs.

However, the open-source part is the s&box code itself; the engine is actually built on top of Source 2, Valve's proprietary engine, which is not open source. They describe it as a long-term project meant to take the best parts of Source 1, Unity, and Unreal and put them all in one game engine.

You can get s&box from their website, which then lets you download it from Steam; from there it basically works like a platform where you can play games that were built using it.

Looking at the list of games that have already been created, it seems clear that the engine is very capable; there's a lot of variety in game types.

The engine runs on C# 14 with .NET 10; sharing your games is effortless within the engine, multiplayer is baked in, and it includes hot reload, cloud assets, shader graph, visual scripting, and tons more that you'd expect from any modern engine.

There is also a Play Fund where you can earn money when people play your games; they have already paid out $250,000 to developers, so this could be a good reason to explore the engine and try to make some money while the platform is small.

I loved playing Garry's Mod as a kid. I haven't played it since, but looking at videos for this engine really brought back memories. I remember making all kinds of vehicles by just adding rockets to a cart and watching it all fly away. Fun times!



Tech

Blender 5.0 is out!

Blender is one of the heavyweights of the games industry, at least for indie developers. It's impressive that completely free, open-source software is this good.

The massive new 5.0 version has just been fully released! It includes lots of new modifiers for placing objects and modifying meshes, greatly improved UV selection, various sculpting improvements, animation and rigging updates, and more across geometry nodes, rendering, and beyond.

It is a massive new version that brings lots of improvements, again whilst keeping the software free and open source. If you use it a lot consider sending a donation their way.

I have tried using Blender myself, I went through a 10 hour course and managed to actually learn the basics which I was pretty happy with! It is definitely an extremely powerful tool but one that requires quite a bit of training, just like any other skill like programming.



Fun

Can you guess AI art?

AI art is everywhere nowadays and it is constantly improving. The days of guessing AI art by looking at fingers are long gone, now it can accurately draw almost anything.

Some people hate AI art and have a tendency to accuse artists of using AI, although in many cases that assumption is incorrect and leads to false accusations that can ruin someone's reputation.

Do you think you can identify which art is AI-made and which isn't? Here is a website to do just that.

It shows you 50 works of art, and it's up to you to tag each one as AI or not. Go ahead, give it a try.

I got 56% correct, so basically a coin flip. It really is nearly impossible to tell with confidence nowadays, the tools are just too good.




Get Rewards by Sending the Game Dev Report to a friend!

(please don’t try to cheat the system with temp emails, it won’t work, just makes it annoying for me to validate)

Thanks for reading!

Code Monkey


Go from prompt to production using a set of AI tools, or just one (Google Antigravity)


We’ve passed the first phase of AI dev tooling. When I first saw AI-assisted code completion and generation, I was wow-ed. Still am. Amazing stuff. Then agentic coding environments went a step further. We could generate entire apps with products like Replit or v0! Following that, we all got new types of agentic IDEs, CLIs, background coding agents, and more. With all these options, there isn’t just one way to work with AI in software engineering.

I’m noticing that I’m using AI tools to command (perform actions on my environment or codebase), to code (write or review code), and to conduct (coordinate agents who work on my behalf). Whether these are done via separate tools or the same one, this seems like a paradigm that will persist for a while.

Let’s see this in action. I’ll first do this with a set of popular tools—Google AI Studio, Gemini CLI, Gemini Code Assist, and Jules—and then do the same exercise with the new Google Antigravity agent-first development platform.

Architecture diagram generated with nano-banana

I’ve accepted that I’ll never be a professional baseball player. It’s just not in the cards. But can I use AI to help me pretend that I did? Let’s build an application that uses AI to take an uploaded picture and generate images of that person in various real-life baseball situations.

Build with a set of AI tools

Gemini 3 Pro is excellent at frontend code and Google AI Studio is a fantastic way to get started building my app. I went to the “Build” section where I could provide a natural language prompt to start vibe-coding my baseball app. Here’s an example of “commanding” with AI tools.

Google AI Studio

After a few seconds of thinking, I saw a stash of files created for my application. Then a preview popped up that I could actually interact with.

Vibe coded app in Google AI Studio

Jeez, only one prompt and I have an awesome AI app. How cool is that? The Nano Banana model is just remarkable.

Now I wanted to do more with this app and bring it into my IDE to make some updates before deploying it. In the top right of the screen, there’s a GitHub icon. After I clicked that, I was asked to authenticate with my GitHub account. Next, I had to provide details about which repo to create for this new codebase.

Create GitHub repo from Google AI Studio

Then Google AI Studio showed me all the changes it made in the local repo. I get one last chance to review things before staging and committing the changes.

Push changes to GitHub

A moment later, I had a fully populated GitHub repo. This gave me the intermediate storage I needed to pick up and continue with my IDE and agentic CLI.

Vibe coded app code in my GitHub repo

I jumped into Visual Studio Code with the installed Gemini Code Assist plugin. I’ve also got the Gemini CLI integration set up, so everything is all in one place.

Visual Studio Code with Gemini Code Assist and the Gemini CLI

Here, I can command and code my way to a finished app. I could ask (command) for a summary of the application itself and how it’s put together. But even more useful, I issued a command asking for how this app was authenticating with the Gemini API.

Gemini Code Assist helping me understand the codebase

Very helpful! Notice that it found a config file that shows a mapping from GEMINI_API_KEY (which is the environment variable I need to set) to the API_KEY referred to in code. Good to know.

Here’s where I could continue to code my way through the app with AI assistance if there were specific changes I felt like making ahead of deploying it. I wrote a mix of code (and used the Gemini CLI) to add a Node server to serve this static content and access the environment variable from the runtime.

Let’s do some conducting. I didn’t feel like writing up a whole README and wanted some help from AI. Here’s where Jules comes in, via its extension for the Gemini CLI. Notice that I have Gemini CLI extensions for Jules and Cloud Run already installed.

Two MCP servers added to the Gemini CLI

I can go ahead and ask Jules to create a better README, and then continue on my work. Agents working on my behalf!

Using the Gemini CLI to trigger a background task in Jules

After doing some other work, I came back and checked the status of the Jules job (/jules status) and saw that the task was done. The Jules extension asked me if I wanted a new branch, or to apply the changes locally. I chose the former option and reviewed the PR before merging.

Reviewing a branch with a README updated by Jules

Finally, I was ready to deploy this to Google Cloud Run. Here, I also used a command approach and instructed the Gemini CLI to deploy this app with the help of the extension for Cloud Run.

Using a natural language request from me, the Gemini CLI crafted the correct gcloud CLI command to deploy my app.
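For reference, such a request typically resolves to a command along these lines (service name, region, and the environment variable flag here are illustrative, not the exact command from this session):

```
# Deploy from source; Cloud Run builds the container automatically.
gcloud run deploy baseball-cards-app \
  --source . \
  --region us-east1 \
  --allow-unauthenticated \
  --set-env-vars "GEMINI_API_KEY=${GEMINI_API_KEY}"
```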

Doing a deployment to Cloud Run from the Gemini CLI

That finished in a few seconds, and I had my vibe-coded app, with some additional changes, deployed and running in Google Cloud.

App running on Google Cloud

So we commanded Google AI Studio to build the fundamentals of the app, used Gemini Code Assist and the Gemini CLI to code and command towards deployment, and Jules to conduct background agents on our behalf. Not particularly difficult, and the handoffs via a Git repo worked well.

This process works great if you have distinct roles with handoffs (designer –> developer –> deployment team) or want to use distinct products at each stage.

Build with Google Antigravity

Google Antigravity isn’t a code editor. It’s not an IDE. It’s something more. Yes, you can edit code and do classic IDE things. What’s different is that it’s agent-first, and supports a rich set of surfaces in a single experience. I can kick off a series of agents to do work, trigger Computer Use in a dedicated browser, and extend behavior through MCP servers. Basically, I can do everything I did above, but within a single experience.

Starting point with Google Antigravity

I fed it the same prompt I gave to Google AI Studio. Immediately, Google Antigravity got to work building an implementation plan.

Giving a prompt to Antigravity to build out an application

I love that I can review this implementation plan and add comments to sections I want to update. This feels like a very natural way to iterate on the specification. Right away, I asked for a Node server to host this app, so I’m building it that way from the start.

Implementation Plan, with comments

The AI agent recognizes my comments and refreshes its plans.

Antigravity using the Implementation Plan to begin its work

At this point, the agent is rolling. It built out the entire project structure, created all the code files, and plowed through its task list. Yes, it creates and maintains a task list so we can track what’s going on.

Task List maintained by Antigravity

The “Agent Manager” interface is wild. From here I can see my inbox of agent tasks, and monitor what my agents are currently doing. This one is running shell commands.

Agent Manager view for triggering and managing agent work

The little “drawer” at the bottom of the main chat window also keeps tabs on what’s going on across all the various agents. Here I could see what docs need my attention, which processes are running (e.g. web servers), and which artifacts are part of the current conversation.

View of processes, documents, and conversation artifacts

The whole app-building process finished in just a few minutes. It looked good! And because Google Antigravity has built-in support for Computer Use with a Chrome browser, it launched a browser instance and showed me how the app worked. I can also prompt Computer Use interactions any time via chat.

Computer Use driving the finished application

Antigravity saved the steps it followed into an artifact called Walkthrough. Including a screenshot!

Generated walkthrough including screenshots

How about fixing the README? In the previous example, I threw that to a background task in Jules. I could still do that here, but Antigravity is also adept at doing asynchronous work. I went into the Agent Manager and asked for a clean README with screenshots and diagrams. Then I closed Agent Manager and did some other things. Never breaking flow!

Triggering a background agent to update the README

Later, I noticed that the work was completed. The Agent Manager showed me what it did, and gave me a preview of the finished README. Nice job.

Finished README with diagrams and screenshots

I wanted to see the whole process through, so how about using Google Antigravity to deploy this final app to Google Cloud Run?

This product also supports extension via MCP. During this product preview, it comes with a couple dozen MCP servers in the “MCP Store.” These include ones for Google products, Figma, GitHub, Stripe, Notion, Supabase, and more.

MCP servers available out of the box

We don’t yet include one for Cloud Run, but I can add that myself. The “manage MCP servers” screen is empty to start, but it shows the format you need to add to the configuration file. I added the configuration for the local Cloud Run MCP server.

Configuration for the Cloud Run MCP server
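An entry in that configuration file generally follows the standard MCP client shape; something like the following (the command and package name are illustrative; consult the Cloud Run MCP server's own docs for the exact values):

```json
{
  "mcpServers": {
    "cloud-run": {
      "command": "npx",
      "args": ["-y", "cloud-run-mcp"]
    }
  }
}
```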

After saving that configuration, I refreshed the “manage MCP servers” screen and saw all the tools at my disposal.

Tools available from the Cloud Run MCP server

Sweet! I went back to the chat window and asked Google Antigravity to deploy this app to Cloud Run.

Antigravity deploying the app to Google Cloud Run

The first time, the deployment failed, but Google Antigravity picked up the error and updated the app to start on the proper port and tweak how it handled wildcard paths. It then redeployed, and it worked.

Chat transcript of attempt to deploy to Google Cloud Run

Fantastic. Sure enough, browsing the URL showed my app running and working flawlessly. Without a doubt, this would have been hours or days of work for me. Especially on the frontend stuff since I’m terrible at it. Instead, the whole process took less than an hour.

Finished application running in Google Cloud Run

I’m very impressed! For at least the next few years, software engineering will likely include a mix of commands, coding, and conducting. As I showed you here, you can do that with distinct tools that enable distinct stages and offer one or more of those paradigms. Products like Google Antigravity offer a fresh perspective, and make it possible to design, build, optimize, and deploy all from one product. And I can now seamlessly issue commands, write code, and conduct agents without ever breaking flow. Pretty awesome.




Defender for AI services: Threat Protection and AI red team workshop


Generative AI is reshaping how enterprises operate, introducing new efficiencies, and new risks. Imagine launching a helpful chatbot, only to learn a cleverly crafted prompt can bypass safety controls and exfiltrate sensitive data. This is today’s reality: every system prompt, plugin/tool, dataset, fine-tune, or orchestration step can change the attack surface. This is due to the non-deterministic way LLMs craft responses: the slightest change in a prompt’s verbiage or tone can alter the output in subtle, unpredictable ways, especially when your data is involved.

This post shows how to operationalize AI red teaming with Microsoft Defender for AI services so security teams gain evidence‑backed visibility into adversarial behavior and turn that visibility into daily defense. By aligning with Microsoft’s Responsible AI principles of transparency, accountability, and continuous improvement, we demonstrate a pragmatic, repeatable loop that makes AI safer week after week. Crucially, security needs to have a seat at the table across the AI app lifecycle from model selection and pilot to production and ongoing updates.

Who Should Read This (and What You’ll See)

  • SOC analysts & incident responders - See how AI signals materialize as high‑fidelity alerts (prompt evidence, URL intel, identity context) in Defender for Cloud and Defender XDR for fast triage and correlation.
  • AI/ML engineers - Validate model safety with controlled simulations (PyRIT‑informed strategies) and understand which filters/guardrails move the needle.
  • Security architects - Integrate Microsoft Defender for AI services into your cloud security program; codify improvements as policy, IaC, and identity hygiene.
  • Red teamers/researchers - Run structured, repeatable adversarial tests that produce measurable outcomes the org can act on.

Why now? Data leakage, prompt injection, jailbreaks, and endpoint abuse are among the fastest‑growing threats to AI systems. With AI red teaming and Microsoft Defender for AI services, you catch intent before impact and translate insight into durable controls.

What’s Different About the AI Attack Surface

New risks sit alongside the traditional ones:

  • Prompts & responses — Susceptible to prompt injection and jailbreak attempts (rule change, role‑play, encoding/obfuscation).
  • User & application context — Missing context slows investigations and blurs accountability.
  • Model endpoints & identities — Static keys and weak identity practices increase credential theft and scripted probing risk.
  • Attached data (RAG/fine‑tuning) — Indirect prompt injection via documents or data sources.
  • Orchestration layers/agents — Tool invocation abuse, unintended actions, or “over‑permissive” chains.
  • Content & safety filters — Configuration drift or silent loosening erodes protection.

A key theme across these risks is context propagation: the way user identity, application parameters, and environmental signals travel with each prompt and response. When context is preserved and surfaced in security alerts, SOC teams can quickly correlate incidents, trace attack paths, and remediate threats with precision. Effective context propagation transforms raw signals into actionable intelligence, making investigations faster and more accurate.

Microsoft Defender for AI services adds a real‑time protection layer across this surface by combining Prompt Shields, activity monitoring, and Microsoft threat intelligence to produce high‑fidelity alerts you can operationalize.

The Improvement Loop (Responsible AI in Practice)

Responsible AI comes to life when teams Observe → Correlate → Remediate → Retest → Codify:

  1. Observe controlled jailbreak/phishing/automation patterns and collect prompt evidence.
  2. Correlate with identity, network, and prior incidents in Defender XDR.
  3. Remediate with the smallest effective control (filters, identities, rate limits, data scoping).
  4. Retest the same scenario to verify risk reduction.
  5. Codify as baseline (policy, IaC template, guardrail profile, rotation notes).

Repeat this rhythm on a schedule and you’ll build durable posture faster than a one‑time “big‑bang” control set.

Prerequisites:

To take advantage of this workshop you’ll need:

  1. Sandbox subscription (ideally inside a Sandbox Management Group with lighter policies); a free-trial Azure subscription also works.
  2. Microsoft Defender for AI services plan enabled (see Participant Guide )
  3. Contributor access (you can deploy + view alerts)
  4. Region capacity confirmed (Azure AI Foundry in East US 2)

Workshop flow and testing:

Prep: Enable the Microsoft Defender for AI services plan with prompt evidence, deploy the Azure template (one hub + single endpoint), and open the AIRT-Eval.ipynb notebook; you now have a controlled space to generate signals (see Participant Guide).

Controlled Signals: Run a jailbreak attempt, a phishing URL simulation, and a suspicious user agent simulation to produce three distinct alert types.

Triage & Correlate: For each alert, review anatomy (evidence, severity, IDs) and capture prompt/URL evidence.

Harden & Retest: Apply improvements or security controls, then validate fixes.

After you harden controls and retest, the next step is validating that your defenses trigger the right alerts on demand. There is a list of Microsoft Defender for AI services alerts here. To evaluate alerts, open DfAI-Eval.ipynb, a streamlined notebook that safely simulates adversarial activity (current alerts: jailbreak, phishing URL, suspicious user agent) to exercise Microsoft Defender for AI services detections. Think of it as the EICAR test for AI workloads: consistent, repeatable, and safe.

Next, we will review and break down each of the alerts you’ll generate in the workshop and how to read them effectively.

Anatomy of a Jailbreak from the AI Red Team Agent:

A jailbreak is a user prompt designed to sidestep system or safety instructions—rule‑change (“ignore previous rules”), fake embedded conversation, role‑play as an unrestricted persona, or encoding tricks. Microsoft Defender for AI services (via Prompt Shields + threat intelligence) flags it before unsafe output (“left‑of‑boom”) and publishes a correlated high‑fidelity alert into Defender XDR for cross‑signal investigation.

Anatomy of a Phishing URL involved in an attack:

Phishing prompt URL alerts fire when a prompt or draft response embeds domains linked to impersonation, homoglyph tricks, newly registered infrastructure, encoded redirects, or reputation‑flagged hosting. Microsoft Defender for AI services enriches the URL (normalization, age, reputation, brand similarity) and—if prompt evidence is enabled—includes the exact snippet, then streams the alert into Defender XDR where end‑user/application context fields (e.g. `EndUserId`, `SourceIP`) let analysts correlate repeated lure attempts and pivot to related credential or jailbreak activity.

Anatomy of a Suspicious User Agent involved in an attack:

Suspicious user agent alerts highlight enumeration or automation patterns (generic library signatures, headless runners, scanner strings, cadence anomalies) tied to AI endpoint usage and identity context. Microsoft Defender for AI services scores the anomaly and forwards it to Defender XDR enriched with optional `UserSecurityContext` (IP, user ID, application name) so analysts can correlate rapid probing with concurrent jailbreak or phishing alerts and enforce mitigations like managed identity, rate limits, or user agent filtering.
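To make the context-propagation point concrete, here is roughly what passing user and application context with a chat request can look like. The field names follow the Azure OpenAI `user_security_context` pattern, but treat the exact names and values as illustrative and verify them against the current docs:

```json
{
  "messages": [
    { "role": "user", "content": "Summarize the Q3 report." }
  ],
  "user_security_context": {
    "application_name": "hr-assistant",
    "end_user_id": "jane@contoso.com",
    "source_ip": "203.0.113.7"
  }
}
```

When this context arrives with the prompt, the resulting alert carries the user and application fields described in the anatomy sections above, which is what lets analysts correlate rapid probing with concurrent jailbreak or phishing activity.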

Conclusion

The goal of this red teaming and AI threat workshop is to catch intent before impact: prompt manipulation before unsafe output, phishing infrastructure before credential loss, and scripted probing before exfiltration. Microsoft Defender for AI services feeding Defender XDR enables a compact improvement loop that converts red team findings into operational guardrails.

Within weeks, this cadence transforms AI from an experimental liability into a governed, monitored asset aligned with your cloud security program. Incrementally closing gaps in context propagation, identity hygiene, and Prompt Shields and filter tuning builds durable posture. Small, focused cycles win: ship one improvement, measure its impact, promote it to baseline, and repeat.
