Veeam Data Cloud (VDC) provides cloud-native, SaaS-based data resilience for mission-critical environments including Microsoft 365, Entra ID, Azure, and Salesforce. As enterprise data protection needs evolve, so must the intelligence behind it. With Veeam Intelligence, VDC is leveraging generative AI to help users get the most value from VDC services through an advanced AI assistant, with a vision to deliver real-time insights from backup telemetry in the future.
To build an assistant that delivers accurate, reliable, and safe responses, the VDC team partnered with Azure AI Foundry, Microsoft's AI application development platform, which provides built-in responsible AI evaluation and testing at scale.
From Backup to Breakthrough: Enabling Smart, Safe AI Assistance
Veeam Intelligence’s mission is clear: provide accurate, easy-to-understand answers to user questions while maintaining the highest safety standards. But with every new feature release or model update, there’s a risk of introducing harmful regressions or vulnerabilities, especially as AI systems scale and evolve.
To mitigate this, the VDC team adopted Azure AI Foundry early in development. Foundry equips Veeam's product developers with automated testing workflows that ensure the assistant’s responses remain clear and accurate, without generating harmful or unwanted content. This is especially critical when working with enterprise-grade AI that interacts with sensitive backup data and identity services.
Building In Trust with Azure AI Foundry
The first step in integrating Foundry was establishing an automated suite of evaluations triggered with every GitHub commit as part of their CI/CD workflows. As new code is introduced, Foundry’s quality evaluators simulate a range of real-world user queries to verify that the assistant’s responses are both helpful and concise. Simultaneously, risk and safety evaluators are run to ensure that no harmful, toxic, or otherwise inappropriate content is returned.
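While Veeam hasn't published its pipeline code, here is a minimal sketch of what a commit-triggered evaluation step can look like with the Azure AI evaluation SDK for Python (the azure-ai-evaluation package). The data file name and all endpoint/project placeholders are illustrative, and evaluator names and arguments may vary by SDK version:

```python
import os
from azure.identity import DefaultAzureCredential
from azure.ai.evaluation import evaluate, RelevanceEvaluator, ContentSafetyEvaluator

# Illustrative placeholders -- not Veeam's actual resources or data.
model_config = {
    "azure_endpoint": os.environ["AZURE_OPENAI_ENDPOINT"],
    "api_key": os.environ["AZURE_OPENAI_API_KEY"],
    "azure_deployment": "gpt-4o",
}
azure_ai_project = {
    "subscription_id": os.environ["AZURE_SUBSCRIPTION_ID"],
    "resource_group_name": os.environ["AZURE_RESOURCE_GROUP"],
    "project_name": os.environ["FOUNDRY_PROJECT_NAME"],
}

# assistant_test_cases.jsonl (hypothetical): one JSON object per line with
# "query" and "response" columns captured from simulated user sessions.
result = evaluate(
    data="assistant_test_cases.jsonl",
    evaluators={
        # Quality evaluator: is the response relevant to the user's question?
        "relevance": RelevanceEvaluator(model_config=model_config),
        # Risk and safety evaluator: does the response contain harmful content?
        "content_safety": ContentSafetyEvaluator(
            credential=DefaultAzureCredential(),
            azure_ai_project=azure_ai_project,
        ),
    },
)

# A CI job can then fail the build if any aggregate metric drops below a threshold.
print(result["metrics"])
```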
This constant testing and validation loop helps Veeam ship confidently, knowing that AI-generated interactions uphold enterprise standards of safety and performance.
Why Foundry? Evaluation That Scales
VDC’s AI engineering team conducted a comprehensive review of available AI evaluation solutions, and Foundry stood out. Its extensive suite of evaluators delivered consistent results across a broad range of edge cases, while still being easy to configure and integrate into existing pipelines.
Additionally, advanced features like AI Red Teaming gave the team deeper assurance that Veeam Intelligence could defend against adversarial prompts and jailbreaking attempts, which is an essential capability for any enterprise-grade assistant.
What’s Next: Toward Continuous Monitoring and Real-Time Quality
Looking ahead, VDC is exploring Foundry’s Continuous Monitoring capabilities to maintain the assistant’s quality in live environments. One area of particular interest is real-time alerting for any degradation in safety or accuracy metrics, especially in response to novel threats such as jailbreaking attacks.
Another challenge the team is addressing involves monitoring AI response quality post-deployment, when ground-truth answers are no longer accessible. Utilizing Foundry’s comprehensive suite of quality evaluators, the VDC team aims to maintain rigorous oversight on performance to ensure users consistently receive reliable answers.
Veeam also participated in the session AI and Agent Observability in Azure AI Foundry and Azure Monitor at Microsoft Build 2025, where they demonstrated their use of the Azure AI Foundry Evaluations SDK in action. This collaboration highlights how, with Veeam Intelligence, VDC is setting a new standard for data resilience — powered by responsible, enterprise-ready AI. And with Azure AI Foundry as a partner, the team has the tools to test, validate, and monitor their AI systems with confidence, staying one step ahead of risk.
Learn more about Azure AI Foundry and how it can help your team scale AI responsibly.
This article demonstrates how you can create a non-yielding scheduler scenario at will so you can examine the various diagnostic tools for non-yielding schedulers - the error log, the memory dump, and the nonyield_copiedstack_ring_buffer_recorded event in the system_health Extended Events (XEvents) session.
Do not do this on a production server, as you will freeze your production workload. This exercise is designed for learning and for building a better understanding of how SQL Server works and how to troubleshoot it.
If you already have WinDbg or WinDbgX installed and symbols configured, skip to step 6 [Attach WinDbg to a SQL Server process]
Locate your SQLServr.exe by expanding the properties of each process to identify the one you want
Select the process and click OK
Run a debugger command that causes the public symbols for many of the loaded modules to be downloaded to your local symbols folder. For example, the kc command will display the call stacks of all threads in SQL Server. It may take a while to complete because WinDbg is downloading symbol files to your defined symbol path. Wait for the debugger to return to the prompt
~*kc100
Now let the SQL Server process continue running by typing g followed by Enter
Connect to SQL Server using SSMS or Sqlcmd and execute a very long-running query. Here is an example that works:
SELECT COUNT_BIG (*)
FROM sys.messages a, sys.messages b, sys.messages c
After the query has run for more than 2-3 seconds, go back to WinDbg and break into the debug session. Press either CTRL+Break or Alt+Del. This will place you back at a WinDbg prompt awaiting commands
Let's re-run the command to display the call stacks for all threads. This time the command should complete faster than it did earlier, because most symbol files should already be downloaded.
~*kc100
Scroll to the top of the output and using CTRL+F, search for CSQLSource::Execute. You should find (at least) one thread that has this in the call stack.
Once you locate that thread, look at its WinDbg thread ID. For example, it could be 93 or 112; in this walkthrough it's 69.
Now, let's freeze this thread on purpose inside WinDbg to simulate a stalled, non-yielding scenario. This thread is actively executing code and is therefore running on a SQL Server scheduler. If the thread is frozen artificially, the SQL Server Scheduler Monitor will consider it a non-yielding thread because it is not changing state and is not releasing control of the scheduler it was running on.
To freeze your thread, use the ~Tf command, replacing T with your identified thread ID ("f" is for freeze). In this example:
~69f
Then press Enter
Now press g and Enter
Wait for a minute or two (a quick coffee break)
Now open the C:\Program Files\Microsoft SQL Server\MSSQLXXXXXX\MSSQL\Log\Errorlog and look for an entry like this one:
2025-06-06 15:03:55.77 Server *
*******************************************************************************
2025-06-06 15:03:55.77 Server *
2025-06-06 15:03:55.77 Server * BEGIN STACK DUMP:
2025-06-06 15:03:55.77 Server * 06/06/25 15:03:55 spid 5144
2025-06-06 15:03:55.77 Server *
2025-06-06 15:03:55.77 Server * Non-yielding Scheduler
2025-06-06 15:03:55.77 Server *
2025-06-06 15:03:55.77 Server *
*******************************************************************************
2025-06-06 15:03:55.77 Server CImageHelper::DoMiniDump entered. Dump attempts: 1. Exception record exists...
2025-06-06 15:03:55.77 Server Stack Signature for the dump is 0x00000000000003D6
You have successfully simulated a Non-yielding scheduler
To unfreeze your thread, use the ~Tu command, replacing T with your identified thread ID ("u" is for unfreeze), then hit Enter. In this example:
~69u
To detach WinDbg from SQL Server, type the quit-and-detach (qd) command. After that you can close WinDbg
qd
If you examine the latest system_health .xel file in your \Log folder, you can also find the nonyield_copiedstack_ring_buffer_recorded event matching the non-yielding scheduler occurrence and extract the call stack from there. You can then use the microsoft/SQLCallStackResolver tool to resolve that call stack to symbol names.
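If you prefer to query those events rather than open the file, here is a minimal T-SQL sketch. It assumes the system_health files are in the default \Log folder; pass a full path to fn_xe_file_target_read_file if yours live elsewhere:

```sql
-- Read the system_health session's .xel files and return any
-- non-yielding scheduler ring buffer records.
;WITH xe AS
(
    SELECT CAST(event_data AS XML) AS event_xml
    FROM sys.fn_xe_file_target_read_file('system_health*.xel', NULL, NULL, NULL)
    WHERE object_name = 'nonyield_copiedstack_ring_buffer_recorded'
)
SELECT
    event_xml.value('(event/@timestamp)[1]', 'datetime2') AS event_time,
    event_xml AS full_event   -- contains the copied call stack to resolve
FROM xe;
```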
I got into software to ship ideas, not to chase down hard-coded strings after a late-breaking feature request. Unfortunately, many of our day-to-day tasks as developers involve working on boilerplate code, refactoring, and the “pre-work” to get to the fun stuff: shipping new features.
So I turned to Copilot’s agentic workflows to help speed along some of that grunt work. In my latest Rubber Duck Thursdays live stream, I put that theory to the test on a project where I wanted to localize an application. Here’s the setup:
Tech stack: a Next.js web app and a matching SwiftUI iOS app living in two separate GitHub repos.
Environment: spun up rapidly in Codespaces (on-demand dev environment) and Xcode 16 for the mobile portion.
Task: an issue built from a couple of paragraphs to “Add English, French, and Spanish localization.”
Copilot tools: coding agent (to turn that issue into a PR), a custom planning chat mode (to try out the new preview capabilities in VS Code), and the new remote GitHub MCP server (so we can avoid managing those dependencies in our dev environment).
By the end of my stream, that idea became a GitHub issue, which turned into a fully tested, review-ready PR while I fielded chat questions, and learned about the preview custom chat mode features in VS Code.
Let’s dive in.
Why I use agentic workflows
Even seasoned developers and teams still burn hours on jobs like:
Turning vague requests into well-scoped issues
Hunting down every file in a cross-cutting refactor
Writing the same unit-test scaffolding again and again
Copilot’s issue creation, coding agent, custom chat modes in VS Code, and the new remote MCP server fold those chores into one tight loop (issue to PR) while you stay firmly in the driver’s seat. You still review, tweak, and decide when to merge, but you skip the drudgery.
Key capabilities covered in this livestream
| Capability | What it does | Why it matters | How to enable and use it |
| --- | --- | --- | --- |
| Coding agent | Turns any GitHub Issue you assign to Copilot into a PR, and works on that task asynchronously. | Allows you to offload the boilerplate work while you focus on reviews and edge case logic. | A Copilot subscription with coding agent enabled. (Did you know it’s now available for all paid tiers of GitHub Copilot, including Copilot Business and Copilot Pro?) |
| Remote GitHub MCP server | Allows AI tools to access live GitHub context and tools, like issues, pull requests, and code files. With the remote GitHub MCP server, you don’t need to install it locally, and can even authenticate with OAuth 2.0. | Provides a smooth experience of accessing the GitHub MCP server, reducing the management overhead of a local server. | Either the GitHub Remote MCP server (update your MCP configuration) or a local GitHub MCP server. |
| Agent mode | Copilot agent mode is a real-time collaborator that sits in your editor, works with you, and edits files based on your needs. Unlike the coding agent, agent mode works synchronously with you. | Think of agent mode as the senior dev pair programming with you. It has access to several tools (like reading/writing code, running commands in the terminal, executing tools on MCP servers), and works alongside you. | VS Code 1.101+ with the latest Copilot extension. |
Walk-through: localizing a Next.js app
Here’s the exact flow I demoed on the most recent Rubber Duck Thursdays stream.
1. Capture the request as a GitHub Issue
Go to the immersive view of Copilot Chat. At the bottom of the page, in the “Ask Copilot” box, describe what you want. For example, below is the prompt that I used.
Create a GitHub Issue that brings i18n capability to the application. We must support English, French and Spanish.
The user must be able to change their language in their profile page. When they change the language, it must apply immediately across the site.
Please include an overview/problem statement in the issue, a set of acceptance criteria, and pointers on which files need updating/creating.
Copilot drafts that into an issue, which includes a title, acceptance criteria, and a loose action plan. From there, you can assign that issue to Copilot, and let it cook in the background.
2. Let the coding agent turn the issue into a PR
Once assigned, the coding agent reviews the task at hand, explores the current state of the codebase, and forms a plan to complete the task.
If you have any custom instructions configured, the coding agent will also use those as context. For example, we specify that npm run lint and npm run test should pass before committing (a sketch of that file follows below).
Once complete, it opens a draft PR for your review.
While that runs, you can keep coding, use it as an opportunity to learn (like we learned about custom chat modes) or grab a coffee.
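A quick aside on those custom instructions: they live in a .github/copilot-instructions.md file at the root of the repository, and both agent mode and the coding agent pick them up as context. Here’s a hypothetical sketch of the kind of guidance we mean, not our actual file:

```markdown
<!-- .github/copilot-instructions.md (hypothetical example) -->
## Before committing
- Run `npm run lint` and `npm run test`; both must pass.

## Localization
- User-facing strings belong in the locale resource files; never hard-code them in components.
```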
3. Review the PR like you normally would
Whether it’s a colleague, collaborator, or Copilot writing the code, you still need a reviewer. So it’s important to make sure you look the code over carefully, just like you would any other pull request.
Start by reviewing the body of the pull request, which Copilot will have helpfully kept up to date.
Then, review the code changes in the files changed tab, understanding what has changed and why. I also like to take a look at the coding agent session to understand the approach Copilot took to solving the problem.
Once you are comfortable, you may want to try the code out manually in a GitHub Codespace. Or, you may want to run any existing CI checks through your GitHub Actions workflows. But again, make sure you have carefully reviewed the code before executing it.
All being well, you will have green check marks being returned from your CI.
However, there’s always a possibility that you encounter failures, or spot some changes in your manual testing. For example, I spotted some hard-coded strings that the agent hadn’t addressed. Once again, we approach this just like we would any other pull request. We can post our feedback in a comment. For example, here’s the comment I used:
That’s a great start. However, there are a lot of pages which are hardcoded in English still. For example, the flight search/bookings page, the check reservation page. Can you implement the localization on those pages, please?
Copilot will react to the comment once again, and get to work in another session.
Creating a custom chat mode in VS Code
Select Create new custom chat mode file. You’ll be asked to save it either in the workspace (to allow collaborating with others) or in the local user data folder (for your own use). We opted for the workspace option.
Enter the name. This is the name that will appear in the chat mode selection box, so pay attention to any capitalization.
You should see a new file has been created with the extension .chatmode.md. This is where you configure the instructions and the available tools for your new custom chat mode.
Below is the example that we used in the livestream, slightly modified from the VS Code team’s docs example. We’ve added the create_issue tool to the list of allowed tools, adjusted our expectations of what’s included in the issue and added an instruction about creating the issue with the `create_issue` tool once revisions are complete and approved by the user.
---
description: Generate an implementation plan for new features or refactoring existing code.
tools: ['codebase', 'fetch', 'findTestFiles', 'githubRepo', 'search', 'usages', 'github', 'create_issue']
---
# Planning mode instructions
You are in planning mode. Your task is to generate an implementation plan for a new feature or for refactoring existing code.
Don't make any code edits, just generate a plan.
The plan consists of a Markdown document that describes the implementation plan, including the following sections:
* Overview: A brief description of the feature or refactoring task.
* Requirements: A list of requirements for the feature or refactoring task.
* Implementation Steps: A detailed list of steps to implement the feature or refactoring task.
* Testing: A list of tests that need to be implemented to verify the feature or refactoring task.
Once the plan is complete, ask the user if they would like to create a GitHub issue for this implementation plan. If they respond affirmatively, proceed to create the issue using the `create_issue` tool.
When the file is available in your teammate’s local repositories (so they’ve pulled the changes locally), VS Code surfaces the mode in the chat dropdown, allowing you to configure chat modes that are consistent and convenient across your team.
Remote MCP: removing the local setup
You may be used to running MCP servers locally through npm packages or as Docker containers. Remote MCP servers, however, reduce the management overhead of running these tools locally. There may be other benefits too. For example, the remote GitHub MCP server allows you to authenticate using OAuth 2.0 instead of personal access tokens.
To use the GitHub Remote MCP Server in VS Code, you’ll need to update the MCP configuration. You can find the instructions on how to do that in the GitHub MCP Server repository.
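As a rough illustration (treat the repository’s docs as the source of truth, since the format may change), a workspace-level .vscode/mcp.json pointing at the remote server looks something like this at the time of writing:

```json
{
  "servers": {
    "github": {
      "type": "http",
      "url": "https://api.githubcopilot.com/mcp/"
    }
  }
}
```

When the server first starts, VS Code can then walk you through the OAuth sign-in rather than asking for a personal access token.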
💡 Note: Did you know that the GitHub MCP Server is open source? Take a look through the codebase or raise an issue. Who knows? Maybe you’ll even end up becoming a contributor!
Going mobile: Copilot agent mode in Xcode
While we didn’t show it in depth, I quickly walked through one of my previous agent mode sessions in Xcode. It showed how I gave a similar prompt to Copilot, asking it to add internationalization to the app, and we could see the result in the main navigation bar of the app running in the simulator.
We need to implement internationalization in the app. Please make the following changes:
1. The user can select from supported languages (English, Spanish, French) from a dropdown in their profile.
2. The main tab view should support internationalization. No other parts of the app should be changed for now.
3. When the user changes the language, it should update the rendered text instantly.
Dos and don’ts
| ✅ Do | ❌ Don’t |
| --- | --- |
| Keep issues tightly scoped | Ask the agent to “re-architect the app” |
| Provide acceptance criteria | Assume the agent knows your intent |
| Carefully review the changes made | Execute code or merge a PR without a review |

And remember to iterate with Copilot. How often do you get something right on the first shot?
Agentic workflows within GitHub Copilot aren’t magic; they’re tools. When a single click can help reduce technical debt (or knock out any other repetitive task you dread), why not let Copilot handle the boilerplate while you tackle the more challenging, fun, and creative problems?
Recently, the production outage team at a Fortune 500 retailer identified a race condition in the checkout process that had been causing recent outages. Instead of slogging through thousands of lines by hand, engineers called on GitHub Copilot, which flagged the offending await ordering within seconds and suggested a fix that restored full service before customers had a chance to notice.
Similar stories emerge every week at large SaaS companies, gaming companies, and fintechs: the “navigator” beside the driver is increasingly an AI that never sleeps and has full context of the codebase. According to Stack Overflow’s 2024 survey, 76% of developers now use or plan to use an AI code assistant.
From Human Pairs to Human-AI Teams
Traditional pair programming puts a second engineer beside you to review each keystroke, share domain knowledge, and catch minor errors early. Effective as the technique is, it also brings calendar conflicts, personality clashes, and the attentional burden of maintaining long-term focus. Generative assistants such as Copilot, Amazon Q Developer, Google’s Gemini Code Assist, and JetBrains AI Assistant eliminate those operational frictions by delivering real-time, context-sensitive suggestions within the IDE.
Evidence of impact is no longer anecdotal. A joint MIT/GitHub study found that developers completed routine API implementation tasks 55% faster with Copilot. On a larger scale, Virtasant’s analysis of more than one million commercial commits found that assistants write up to 46% of new code and reduce debugging time by 80%. Even Amazon, whose internal statistics show the median developer spends less than one hour of an eight-hour workday actually typing code, positions Amazon Q as a solution for the other seven toil-filled hours.
The ripple effects extend beyond raw speed. JetBrains’ 2024 survey of AI Assistant users found that 91% recapture at least an hour a week, and juniors regain three to five hours that would otherwise have been spent wading through documentation. Knowledge transfer that once required formal sessions now happens inline, as the assistant explains framework idioms or company-specific utilities while autocompleting them.
Measurable Outcomes: Speed, Quality, Learning
Release cadence and mean time-to-restore. The Fortune 500 retailer’s experience is typical: companies that monitor incident metrics over time consistently see double-digit reductions in MTTR when AI suggestions are part of the “first look” at a failure. Controlled studies mirror the field results: GitHub/MIT study participants who used Copilot not only completed tasks more rapidly but also wrote more tests on average, suggesting the assistant removed tedium rather than inducing shortcuts.
Code review quality. AI-driven review services such as CodeRabbit claim a 50% decrease in review lag and post-merge bug rate. Early adopters confirm that a first-pass machine review lets senior engineers dedicate their time to architecture rather than whitespace.
Developer satisfaction and retention. Stack Overflow’s survey found that more than three-quarters of respondents using assistants reported increased job satisfaction, thanks to reduced context switching and faster resolution of repetitive tasks. That morale boost translates into reduced churn, itself a sneaky cost centre for engineering managers.
Onboarding acceleration. Fintechs migrating hundreds of legacy REST endpoints to GraphQL have seen project schedules shrink from months to weeks as assistants scaffold resolvers and suggest schema changes in real time. Critically, junior engineers were able to get up to speed on unfamiliar stacks within days rather than weeks, because the assistant offered real-time, project-specific conventions.
Automated license scanning. The as-yet-unresolved Doe v. GitHub case and the growing docket of generative AI copyright suits are reminders to organisations that provenance matters. Tools such as Black Duck or Snyk run within CI to block merges when an assistant writes code of ambiguous origin.
Deliberate skills practice. Engineering leaders worry that juniors who rely too heavily on autocomplete will never master language fundamentals. The remedy is straightforward: weekly “AI-off” refactoring sessions ensure that every developer can write basic data structures independently before comparing their code with the model’s.
McKinsey research underlines why such governance is worth the effort: groups that baseline metrics, define policies, and align incentives before rolling out AI are twice as likely to sustain long-term productivity gains.
The Road Ahead: Personalized AI Teammates
Tuning on private repositories already allows Copilot for Business and Amazon Q to speak in a project’s internal vocabulary. The second horizon is individualisation: personal assistants that learn how Dana logs or how Luis names tests, producing patches with the appearance and feel of hand-written ones. Meanwhile, real-time collaborative editors are experimenting with “trio programming,” where two humans collaborate on a session with an AI member giving visible feedback, effectively recreating the conversational rhythm of old pair programming without the scheduling overhead.
As tools become increasingly sophisticated, the premium skill set shifts from syntax memorization to prompt craft, judicious judgment, and architectural taste. Software design remains a human craft; the mechanical rendering of intent into boilerplate is increasingly a machine’s work.
Conclusion and Next Steps
Generative AI has already shifted the economics of collaborative coding. Several independent research reports, by GitHub, Virtasant, JetBrains, and McKinsey, all conclude with the same headline: when paired with disciplined guardrails, assistants boost speed, code quality, and developer happiness simultaneously.
Teams looking to adopt don’t need to bet big. Pick a service with good test coverage, enable an assistant for a single sprint, take baseline measurements, and track lead time, review time, and escaped defects. Commit good prompt “recipes” to the repository, hook license and security scans into CI, and run regular AI-free drills to keep skills sharp.
The navigator beside tomorrow’s driver may be a language model rather than a colleague across the desk, but the goals of pair programming remain the same: faster feedback, shared understanding, and better code. Employed intentionally, AI brings those outcomes closer than ever.
What if you could spot the weakest link in your software supply chain before it breaks?
With GitHub’s dependency graph, you can. By providing a clear, complete view of the external packages your code depends on, both directly and indirectly, it allows you to understand, secure, and manage your project’s true footprint.
If you’re like me and sometimes lose track of what’s actually powering your applications (we’ve all been there!), GitHub’s dependency graph is about to become your new best friend.
What is the dependency graph?
Here’s the thing: Every modern software project is basically an iceberg. That small manifest file with your direct dependencies seems quite harmless at first glance. But underneath? There’s this massive, hidden world of transitive dependencies that most of us never think about. The GitHub dependency graph maps this entire underwater world. Think of it like a family tree, but for your code. Each package is a family member, and each dependency relationship shows who’s related to whom (and trust me, some of these family trees get really complicated).
Each package is a node. Each dependency relationship is an edge. The result? A full visual and structured representation of your software’s external codebase.
In some cases, 95–97% of your code is actually someone else’s.
Let that sink in for a moment. We’re basically curators of other people’s work, and the dependency graph finally helps us make sense of that reality.
Why it matters
When vulnerabilities are discovered in open source packages, the consequences ripple downstream. If you don’t know a vulnerable dependency is part of your project, it’s hard to take action.
The dependency graph isn’t just a cool visualization (though it is pretty neat to look at). It’s the foundation that makes Dependabot alerts possible. When a security issue is found in any of your dependencies (even a transitive one), GitHub notifies you. You get the full picture of what’s in your supply chain, how it got there, and what you can actually do about it.
See it in action: From 21 to 1,000 dependencies
Eric showed us a project that looked innocent enough:
21 direct dependencies (the ones actually listed in package.json)
1,000 total dependencies (including everything that got pulled in along the way)
With the dependency graph, you can finally:
Understand which dependencies are direct vs. transitive
Trace how a package like Log4j ended up in your codebase. (Spoiler: it probably came along for the ride with something else.)
Know what’s yours to fix and what depends on an upstream maintainer
Tighten your supply chain with Dependabot
Dependabot runs on top of the dependency graph—so enabling the graph is what makes Dependabot’s vulnerability alerts and automatic fix suggestions possible.
Pro tip: Filter for direct dependencies first. These are the ones you can actually control, so focus your energy there instead of pulling your hair out over transitive dependencies that are someone else’s responsibility.
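One distinction worth keeping in mind: vulnerability alerts come straight from the dependency graph, while Dependabot version-update PRs are configured in a checked-in dependabot.yml. A minimal sketch for an npm project (the ecosystem and cadence here are assumptions; adjust for your stack):

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"   # assumed ecosystem for this example
    directory: "/"             # where package.json lives
    schedule:
      interval: "weekly"
```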
How to enable the dependency graph
You can enable the dependency graph in your repository settings under Security > Dependency Graph. If you turn on Dependabot, the graph will be enabled automatically.
Using GitHub Actions? Community-maintained actions can generate a Software Bill of Materials (SBOM) and submit it to GitHub’s Dependency submission API, even if your language ecosystem doesn’t support auto-discovery.
✅ The best part? Dependency graph is free for all public repositories. Private repos need GitHub Advanced Security to use Dependabot alerts, but the graph itself is free.
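If you want to pull that inventory programmatically (for audits or your own tooling), the REST API can export the graph as an SPDX SBOM. A minimal Python sketch, assuming the requests library and a GITHUB_TOKEN environment variable with read access; OWNER and REPO are placeholders:

```python
import os
import requests

# Export the dependency graph as an SPDX SBOM for a repository.
owner, repo = "OWNER", "REPO"
resp = requests.get(
    f"https://api.github.com/repos/{owner}/{repo}/dependency-graph/sbom",
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    },
    timeout=30,
)
resp.raise_for_status()

sbom = resp.json()["sbom"]
print(f"{len(sbom['packages'])} packages recorded in the SBOM")
```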
TL;DR
You can’t secure what you can’t see. GitHub’s dependency graph gives you visibility into the 90%+ of your codebase that comes from open source libraries and helps you take action when it counts.
Enable it today (seriously, do it now)
Use it with Dependabot for automated alerts and fixes
Finally discover what’s actually in your software supply chain
Your future self (and your security team) will thank you.