Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Microsoft counters the MacBook Neo with freebies for students


Apple's $599 MacBook Neo ($499 for students) has sent shockwaves through the PC ecosystem, and now Microsoft is responding with deals targeting students in the US. A new "Microsoft College Offer" is launching today, which will see the software giant bundle 12 months of free Microsoft 365 Premium and Xbox Game Pass Ultimate with select Windows 11 PCs that have also been discounted.

Acer, Asus, Dell, HP, and Lenovo are all participating in this Microsoft College Offer, and Microsoft is even discounting some Surface devices days after hiking the prices of its Surface Pro and Surface Laptop models. Best Buy is selling a 15.3-inch Lenovo IdeaPad …

Read the full story at The Verge.

Read the whole story
alvinashcraft
2 minutes ago
Pennsylvania, USA

Incident response for AI: Same fire, different fuel


When a traditional security incident hits, responders replay what happened. They trace a known code path, find the defect, and patch it. The same input produces the same bad output, and a fix proves it will not happen again. That mental model has carried incident response for decades.

AI breaks it. A model may produce harmful output today, but the same prompt tomorrow may produce something different. The root cause is not a line of code; it is a probability distribution shaped by training data, context windows, and user inputs that no one predicted. Meanwhile, the system is generating content at machine speed. A gap in a safety classifier does not leak one record. It produces thousands of harmful outputs before a human reviewer sees the first one.
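The contrast with deterministic code can be shown with a toy sketch. This is a stand-in sampler, not any real model's API; the completions and behavior are illustrative only:

```python
import random

def toy_model(prompt: str, temperature: float = 1.0) -> str:
    """A stand-in for an LLM: samples one of several canned completions.

    With temperature > 0 the choice is random, so repeated calls with the
    same prompt can return different outputs -- the property that breaks
    replay-based incident investigation.
    """
    completions = ["safe answer", "hedged answer", "harmful answer"]
    if temperature == 0:
        return completions[0]              # greedy: fully reproducible
    return random.choice(completions)      # sampled: not reproducible

# Deterministic path: same input, same output, every time.
assert toy_model("prompt", temperature=0) == toy_model("prompt", temperature=0)

# Sampled generation: replaying the exact same prompt need not reproduce
# the incident output.
outputs = {toy_model("prompt") for _ in range(50)}
print(outputs)  # more than one distinct output is likely
```

Replaying the incident prompt and seeing a clean response proves nothing about whether the harmful path is closed, which is why verification has to change too.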

Fortunately, most of the fundamentals that make incident response (IR) effective still hold true. The instincts that seasoned responders have developed over time still apply: prioritizing containment, communicating transparently, and learning from each incident.

AI introduces new categories of harm, accelerates response timelines, and calls for skills and telemetry that many teams are still developing. This post explores which practices remain effective and which require fresh preparation.

The fundamentals still hold

The core insight of crisis management applies to AI without modification: the technical failure is the mechanism, but trust is the actual system under threat. When an AI system produces harmful output, leaks training data, or behaves in ways users did not expect, the damage extends beyond the technical artifact. Trust has technical, legal, ethical, and social dimensions. Your response must address all of them, which is why incident response for AI is inherently cross-functional.

Several established principles transfer directly.

Explicit ownership at every level. Someone must be in command. The incident commander synthesizes input from domain experts; they do not need to be the deepest technical expert in the room. What matters is that ownership is clear and decision-making authority is understood.

Containment before investigation. Stop ongoing harm first. Investigation runs in parallel, not after containment is complete. For AI systems, this might mean disabling a feature, applying a content filter, or throttling access while you determine scope.
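The containment-first pattern can be sketched minimally; the feature name and fallback message below are hypothetical:

```python
# Sketch of a containment-first kill switch for an AI feature.
# Feature names and the fallback response are hypothetical.
BLOCKED_FEATURES: set[str] = set()

def contain(feature: str) -> None:
    """Stop ongoing harm immediately; investigation proceeds in parallel."""
    BLOCKED_FEATURES.add(feature)

def handle_request(feature: str, prompt: str) -> str:
    if feature in BLOCKED_FEATURES:
        return "This feature is temporarily unavailable."
    # ... the normal model call would go here ...
    return f"model response for {prompt!r}"

contain("summarizer")  # the incident commander flips the switch first
assert handle_request("summarizer", "hi").startswith("This feature")
```

The point of the sketch is ordering: the switch flips before anyone knows the root cause, and scoping happens while it is off.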

Escalation should be psychologically safe. The cost of escalating unnecessarily is minor. The cost of delayed escalation can be severe. Build a culture where raising a flag early is expected, not penalized.

Communication tone matters as much as content. Stakeholders tolerate problems. They cannot tolerate uncertainty about whether anyone is in control. Demonstrate active problem-solving. Be explicit about what you know, what you suspect, and what you are doing about each.

These principles are tested, and they are effective in guiding action. The challenge with AI is not that these principles no longer apply; it is that AI introduces conditions where applying them requires new information, new tools, and new judgment.

Where AI changes the equation

Non-determinism and speed are the headline shifts, but they are not the only ones.

New harm types complicate classification and triage. Traditional IR taxonomies center on confidentiality, integrity, and availability. AI incidents can involve harms that do not fit those categories cleanly: generating dangerous instructions, producing content that targets specific groups, or enabling misuse through natural language interfaces. By making advanced capabilities easy to use, these interfaces enable untrained users to perform complex actions, increasing the risk of misuse or unintended harm. This is why we need an expanded taxonomy. If your incident classification system lacks categories for these harms, your triage process will default to “other” and lose signal.
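An expanded taxonomy can start as simply as adding categories to the triage enum. The categories and keyword rules below are illustrative examples, not an official standard:

```python
from enum import Enum

class AIHarmCategory(Enum):
    """Illustrative triage categories beyond the classic CIA triad."""
    CONFIDENTIALITY = "confidentiality"
    INTEGRITY = "integrity"
    AVAILABILITY = "availability"
    DANGEROUS_INSTRUCTIONS = "dangerous_instructions"
    TARGETED_CONTENT = "targeted_content"
    CAPABILITY_MISUSE = "capability_misuse"

def triage(report_text: str) -> AIHarmCategory:
    """Naive keyword triage; a real system would use classifiers.

    The win is that AI-specific harms land in a named category
    instead of a signal-less "other" bucket.
    """
    lowered = report_text.lower()
    if "instructions for" in lowered:
        return AIHarmCategory.DANGEROUS_INSTRUCTIONS
    if "targets" in lowered or "harass" in lowered:
        return AIHarmCategory.TARGETED_CONTENT
    return AIHarmCategory.CAPABILITY_MISUSE
```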

Severity resists simple quantification. A model producing inaccurate medical information is a different severity than the same model producing inaccurate trivia answers. Good severity frameworks guide judgment; they cannot replace it. For AI incidents, the context around who is affected and how they are affected carries more weight than traditional security metrics alone can capture.

Root cause is often multi-dimensional. In traditional incidents, you find the bug and fix it. In AI incidents, problematic behavior can emerge from the interaction of training data, fine-tuning choices, user context, and retrieval inputs. Investigation may narrow the contributing factors without isolating one defect. Your process must accommodate that ambiguity rather than stalling until certainty arrives.

Before the crisis is the time to work through these implications. The questions that matter: How and when will you know? Who is on point and what is expected of them? What is the response plan? Who needs to be informed, and when? Every one of these questions that you answer before the incident is time you buy during it.

Closing the gaps in telemetry, tooling, and response

If AI changes the nature of incidents, it also changes what you need in order to detect and respond to them.

Observability is the first gap. Traditional security telemetry monitors network traffic, authentication events, file system changes, and process execution. AI incidents generate different signals: anomalous output patterns, spikes in user reports, shifts in content classifier confidence scores, unexpected model behavior after an update. Many organizations have not yet instrumented AI systems for these signals and, without clear signal, defenders may first learn about incidents from social media or customer complaints. Neither provides the early warning that effective response requires.
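One way to instrument the classifier-confidence signal is a rolling baseline with an outlier check. The window size and z-score threshold below are illustrative, not recommendations:

```python
from collections import deque
from statistics import mean, stdev

class ConfidenceDriftMonitor:
    """Flag anomalous drops in a safety classifier's confidence scores.

    A sustained shift against the recent baseline is one of the
    AI-specific signals described above. Window size and threshold
    here are illustrative.
    """

    def __init__(self, window: int = 100, z_threshold: float = 3.0):
        self.scores = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, score: float) -> bool:
        """Record a score; return True if it is anomalously low."""
        anomalous = False
        if len(self.scores) >= 30:  # wait for a baseline before alerting
            mu, sigma = mean(self.scores), stdev(self.scores)
            anomalous = sigma > 0 and (mu - score) / sigma > self.z_threshold
        self.scores.append(score)
        return anomalous
```

In production this would feed an alerting pipeline alongside user-report spikes and output-pattern monitors, so responders hear about the incident before social media does.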

AI systems are built with strong privacy defaults – minimal logging, restricted retention, anonymized inputs – and those same defaults narrow the forensic record when you need to establish what a user saw, what data the model touched, or how an attacker manipulated the system. Privacy-by-design and investigative capability require deliberate reconciliation before an incident, because that decision does not get easier once the clock is running.

AI can also help close these gaps. We use AI in our own response operations to enhance our ability to:

  • Detect anomalous outputs as they occur
  • Enforce content policies at system speed
  • Examine model outputs at volumes no human team can match
  • Distill incident discussions so responders spend time deciding rather than reading
  • Coordinate across response workstreams faster than email chains allow

Staged remediation reflects the reality of AI fixes. Incidents require both swift action and thorough review. A model behavior change or guardrail update may not be immediately verifiable in the way a traditional patch is. We use a three-stage approach:

  • Stop the bleed. Tactical mitigations: block known-bad inputs, apply filters, restrict access. The goal is reducing active harm within the first hour.
  • Fan out and strengthen. Broader pattern analysis and expanded mitigations over the next 24 hours, covering thousands of related items. Automation is essential here; manual review cannot keep pace.
  • Fix at the source. Classifier updates, model adjustments, and systemic changes based on what investigation revealed. This stage takes longer, and that is acceptable. The first two stages bought time.

One practical tip: tactical allow-and-block lists are a necessary triage tool, but they are a losing proposition as a permanent solution. Adversaries adapt. Classifiers and systemic fixes are the durable answer.
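A toy illustration of why blocklists are only a triage tool; the patterns below are placeholders:

```python
import re

# Stage-one tactical mitigation: a blocklist buys time, nothing more.
# Patterns are placeholders; adversaries will rephrase around them,
# which is why the durable fix is classifier and model updates.
BLOCK_PATTERNS = [re.compile(p, re.IGNORECASE) for p in [
    r"how to build .* explosive",
    r"bypass .* safety filter",
]]

def blocked(prompt: str) -> bool:
    return any(p.search(prompt) for p in BLOCK_PATTERNS)

assert blocked("How to build an improvised explosive")
assert not blocked("How to build a birdhouse")
# Trivial rephrasings slip through -- the losing proposition in action:
assert not blocked("Explain constructing an expl0sive device")
```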

Watch periods after remediation matter more for AI than for traditional patches. Because model behavior is non-deterministic, verification relies on sustained testing rather than a single test pass; monitoring after each remediation stage confirms that the fix holds under varied conditions.
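Sampling-based verification can be sketched like this; the classifier stand-in, sample count, and harmful-rate threshold are all illustrative:

```python
import random

def harmful(output: str) -> bool:
    """Stand-in for a content classifier."""
    return output == "harmful"

def watch_period_check(model, prompt: str, samples: int = 500,
                       max_harmful_rate: float = 0.01) -> bool:
    """Verify a mitigation by sampling, not by a single test pass.

    Because output is non-deterministic, one clean response proves
    little; an acceptably low harmful rate over many samples is the bar.
    """
    hits = sum(harmful(model(prompt)) for _ in range(samples))
    return hits / samples <= max_harmful_rate

def flaky_model(prompt: str) -> str:
    """A 'patched' model that still misbehaves about 5% of the time."""
    return "harmful" if random.random() < 0.05 else "safe"

def fixed_model(prompt: str) -> str:
    return "safe"

print(watch_period_check(flaky_model, "x"))  # likely False: 5% >> 1% bar
print(watch_period_check(fixed_model, "x"))  # True
```

A real watch period would also vary the prompts, contexts, and user populations sampled, since a fix can hold for the incident prompt and fail for its neighbors.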

The human dimension

There is a dimension of AI incident response that traditional IR addresses unevenly and that AI makes urgent: the well-being of the people doing the work.

Defenders handling AI abuse reports and safety incidents are routinely exposed to harmful content. This is not the same cognitive load as analyzing malware samples or reviewing firewall logs. Exposure to graphic, violent, or exploitative material has measurable psychological effects, and extended incidents compound that exposure over days or weeks.

Human exhaustion threatens correctness, continuity, and judgment in any prolonged incident. AI safety incidents place an additional emotional burden on responders due to exposure to distressing content. Recognizing and addressing this challenge is essential, as it directly impacts the well-being of the team and the quality of the response.

What helps:

  • Talk to your team about well-being before the crisis, not during it.
  • Manager-sponsored interventions during extended response work, including scheduled breaks, structured handoffs, and deliberate activities that provide cognitive relief.
  • Some teams use structured cognitive breaks, including visual-spatial activities, to reduce the impact of prolonged exposure to harmful content.
  • Coaching and peer mentoring programs normalize the impact rather than framing it as individual weakness.
  • Leverage proven practices from safety content moderation teams; their operational workflows for content review and escalation map directly to AI security moderation, making this a natural collaboration opportunity.

If your incident response plan does not account for the humans executing it, the plan is incomplete.

Looking ahead

Incident response for AI is not a solved problem. The threat surface is evolving as models gain new capabilities, as agentic architectures introduce autonomous action, and as adversaries learn to exploit natural language at scale. The teams that will handle this well are the ones building adaptive capacity now. Extend playbooks. Instrument AI systems for the right signals. Rehearse novel scenarios. Invest in the people who will be on the front line when something breaks. Good response processes limit damage. Great ones make you stronger for the next incident.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Incident response for AI: Same fire, different fuel appeared first on Microsoft Security Blog.


New enhancements for merchant initiated transactions with the Google Pay API

Google has introduced enhancements to the Google Pay API to provide developers with greater flexibility and control over merchant-initiated transactions (MIT). The update includes new objects within the PaymentDataRequest to specifically handle recurring subscriptions, deferred payments like hotel bookings, and automatic account reloads. By allowing merchants to clearly define future payment terms, these changes improve transparency for users and help reduce transaction declines through better token management. Developers can now implement these features to create more seamless and secure long-term payment experiences.

Azure MCP tools now ship built into Visual Studio 2022 — no extension required


Azure MCP tools are now built into Visual Studio 2022 as part of the Azure development workload — no separate extension to find, install, or update. You can enable over 230 tools across 45 Azure services directly in GitHub Copilot Chat and manage Azure resources, deployments, and diagnostics without leaving your IDE. If you already have the Azure development workload installed, you’re one click away from getting started.

What changed

Previously, using Azure MCP tools in Visual Studio 2022 required you to install the “GitHub Copilot for Azure (VS 2022)” extension from the Visual Studio Marketplace, walk through the VSIX installer dialog, and restart Visual Studio. If something went wrong, you had to uninstall and reinstall the extension entirely. That friction added up.

Starting now, Azure MCP tools ship as part of the Azure development workload in Visual Studio 2022. There’s no separate extension to manage. When you install or already have the Azure development workload, the Azure MCP Server is available directly in GitHub Copilot Chat. You enable it once, and it stays enabled across sessions.

This change means fewer installation steps, no version mismatches between the extension and the IDE, and a single update path through the Visual Studio Installer. The Azure MCP Server version gets updated with regular Visual Studio releases, so you always receive the latest tools as part of your normal update cycle.

Note: VS-specific tools available in Visual Studio 2026 are not included in Visual Studio 2022.

What you get

The Azure MCP Server surfaces over 230 tools across 45 Azure services through GitHub Copilot Chat. These tools interact with various Azure services to support developers across the entire development lifecycle. Key scenarios include:

  • Learn — Ask questions about Azure services, best practices, and architecture patterns.
  • Design & develop — Get recommendations for Azure services and configure your application code.
  • Deploy — Provision resources and deploy your application directly from the IDE.
  • Troubleshoot — Query logs, check resource health, and diagnose issues in production.

The tools appear in all tools mode within GitHub Copilot Chat. You pick which tools to enable, and Copilot calls them automatically when your prompts relate to Azure.

See it in action

Here are a few examples that show how you can use the Azure MCP tools directly from GitHub Copilot Chat in Visual Studio 2022. Each prompt triggers one or more Azure MCP tool calls behind the scenes.

Explore your Azure resources

List my storage accounts in my current subscription.

Copilot calls the Azure MCP tools to query your subscriptions and storage accounts, then returns a list of your storage accounts with their names, locations, and SKUs — right in the chat window. No portal tab needed.

Deploy your app

Deploy my ASP.NET Core app to Azure.

Copilot identifies your project, walks you through creating an App Service resource, and initiates the deployment via azd. You can track progress directly in the chat output.

Diagnose issues

Help diagnose my App Service resource.

Copilot uses AppLens and resource health tools to analyze your App Service, check for availability issues, and surface actionable recommendations — all without leaving the IDE.

Query your logs

Query my Log Analytics workspace for exceptions.

Copilot generates and runs a KQL query against your Log Analytics workspace, returning recent exceptions with timestamps, messages, and stack traces. You can refine the query in follow-up prompts to narrow down the root cause.

These are just a few examples. With over 230 tools across 45 Azure services, you can learn about Azure features, provision resources, deploy applications, and troubleshoot issues — all from a single chat window in Visual Studio 2022.

How to enable Azure MCP tools

The Azure MCP tools ship with the Azure development workload in Visual Studio 2022 version 17.14.30 or higher, but are disabled by default. Follow these steps to enable them:

  1. Update Visual Studio 2022 — Open the Visual Studio Installer and make sure you’re running version 17.14.30 or higher. If not, select Update.
  2. Install the Azure development workload — In the Visual Studio Installer, select Modify for your Visual Studio 2022 installation and check the Azure development workload. Select Modify again to apply.
  3. Launch Visual Studio 2022 — Open or create a project, then open GitHub Copilot Chat.
  4. Sign in — Make sure you’re signed in to both your GitHub account (for Copilot) and your Azure account (for resource access).
  5. Enable the Azure MCP Server — In the Copilot Chat window, select the Select tools button (the two wrenches icon). Find Azure MCP Server in the list and toggle it on.

[Image: enabling the Azure MCP Server in the Select tools dialog in Visual Studio 2022]

Once enabled, the Azure MCP tools are available in every Copilot Chat session. You don’t need to re-enable them after restarting Visual Studio 2022.

Things to know

Keep these details in mind:

  • Azure MCP tools are disabled by default — you need to enable them manually in the Select tools dialog.
  • Tools specific to Visual Studio 2026 are not available in Visual Studio 2022.
  • Tool availability depends on your Azure subscription permissions — if you can’t perform an action in the Azure portal, you can’t perform it through MCP tools either.
  • This feature requires an active GitHub Copilot subscription and an Azure account.
  • The Azure MCP Server version is updated with regular Visual Studio releases.

Learn more

Share your feedback through Help > Send Feedback in Visual Studio 2022 or file issues on the Azure MCP Server GitHub repository.

The post Azure MCP tools now ship built into Visual Studio 2022 — no extension required appeared first on Visual Studio Blog.


Why is there a long delay between a thread exiting and the WaitForSingleObject returning?

1 Share

A customer reported that they were using the WaitForSingleObject function to wait for a thread to exit, but they found that even though the thread had exited, the WaitForSingleObject call did not return for over a minute. What could explain this delay in reporting the end of a thread? Can we do something to speed it up?

My psychic powers tell me that the thread didn’t actually exit.

What the customer is observing is probably that their thread procedure has returned, signaling the end of the thread. But a lot of stuff happens after the thread procedure exits. The system needs to send DLL_THREAD_DETACH notifications to all of the DLLs (unless the DLL has opted out via DisableThreadLibraryCalls), and doing so requires the loader lock.

I would use the debugger to look for the thread you thought had exited and see what it’s doing. It might be blocked waiting for the loader lock because some other thread is hogging it. Or it could be running a DLL’s detach code, and that detach code has gotten stuck on a long-running operation.

I suspect it’s the latter: One of the DLLs is waiting for something in its detach code, and that something takes about a minute.

We didn’t hear back from the customer, which could mean that this was indeed the problem. Or it could mean that this didn’t help, but they decided that we weren’t being helpful and didn’t pursue the matter further. Unfortunately, in a lot of these customer debugging engagements, we never hear back whether our theory worked. (Another possibility is that the customer wrote back with a “thank you”, but the customer liaison didn’t forward it to the engineering team because they didn’t want to bother them any further.)

The post Why is there a long delay between a thread exiting and the WaitForSingleObject returning? appeared first on The Old New Thing.


Modernizing Applications with GAP

From: Fritz's Tech Tips and Chatter

🎙️ New to streaming or looking to level up? Check out StreamYard and get $10 discount! 😍 https://streamyard.com/pal/d/6565778542821376
