
Named global query filters in Entity Framework Core 10

Entity Framework Core 10 enhances the global query filters feature, allowing you to define multiple filters per entity and selectively disable them by name. This powerful enhancement gives you fine-grained control over query filtering, perfect for implementing soft deletes.
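
For a sense of the API's shape, here is a minimal sketch based on the announced EF Core 10 surface; the Blog entity, its properties, and the filter names are illustrative:

    // In OnModelCreating: two named filters on the same entity.
    modelBuilder.Entity<Blog>()
        .HasQueryFilter("SoftDeleteFilter", b => !b.IsDeleted)
        .HasQueryFilter("TenantFilter", b => b.TenantId == tenantId);

    // In a query: disable only the soft-delete filter by name,
    // leaving the tenant filter in effect.
    var allBlogs = await context.Blogs
        .IgnoreQueryFilters(["SoftDeleteFilter"])
        .ToListAsync();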

Debug Dumps in Visual Studio


This post is part of C# Advent organized by @mgroves.

Did you know that Visual Studio can debug memory dumps directly these days? It’s actually been capable of that for many years. This post is a memory dump (heh) of my recommended settings when using Visual Studio to debug memory dumps.

Memory Dumps?

If you’re not familiar with the concept, you’re going to love this! A memory dump is a file that contains a copy of the memory of a specific process (hence the name). Windows memory dump files generally have the .dmp extension.

This is useful when you have one of those super annoying bugs that only reproduce in production. And yes, this is generally a last resort! Normally you catch bugs by reproducing the issue locally and then you can just run the code in the debugger, which is an incredibly useful tool for… well… debugging. Sometimes reproducing the issue is challenging (or impossible), and a pretty decent fallback technique (I’ve found) is to just read the code. But if you can’t find/reproduce the bug in a reasonable amount of time, taking a memory dump is a great technique to get additional insight.

Capturing Memory Dumps

The best tool for this is ProcDump: you can capture dumps on demand, or capture based on a monitoring trigger (process crash, high CPU/memory usage, etc.).
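
If you haven't used ProcDump before, invocations look roughly like this; the process name is a placeholder:

    :: Capture a full memory dump (-ma) on demand
    procdump -ma MyService.exe

    :: Capture a full dump when an unhandled exception occurs
    procdump -ma -e MyService.exe

    :: Capture a full dump if CPU stays above 90% for 15 seconds
    procdump -ma -c 90 -s 15 MyService.exe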

However, ProcDump requires access to the machine, so it’s really only an option for on-prem or virtual machine deployments. If you’re running in an Azure Web App, you can capture memory dumps there as well.

Loading the Dump

Long gone are the days when you had to install a different debugger (with… let’s say “interesting” UI design choices), learn about SOS.dll, set up a symbol server cache linked to the upstream Microsoft symbol server, learn who John Robbins is, and start typing arcane commands while desperately searching Tess Ferrandez’s totally awesome blog, hoping that she has already explained what you need to do!

Ah, good times!

Yeah, these days you can literally just drag and drop the .dmp file onto Visual Studio 🤯. But before you do that, I do have some tips to get the best experience.

All screenshots are from VS2026, which is current at the time of writing this post.

Just My Code

First, you should uncheck Just My Code. I just find it more useful to see the full picture when debugging a dump file.

(Screenshot: Just My Code)

Symbols

The next step is to enable loading of symbols (i.e., .pdb files). Visual Studio uses symbols to decipher the compiled code, especially if it was compiled in Release. Symbols are often necessary to make any sense of the stack at all.

I have found the best option is to tell Visual Studio to load all symbols when loading a dump file. This is not a setting I normally keep on, because loading all those symbols takes a long time. But if I’m debugging a dump file, I always turn it on.

(Screenshot: Load All Symbols)

Next, you need to tell Visual Studio where to download those symbol files from. The two primary sources are Microsoft and NuGet (symbols.nuget.org). Microsoft provides symbols for their OS (at least); the NuGet symbol server is the common place for .NET open source library symbols. These are both so common that you don’t have to type them in anymore; they’re just checkboxes in Visual Studio now.

(Screenshot: Symbol Servers)

Finally, I recommend setting up a local symbol cache. This is a local folder that will hold all those .pdb files. I recommend using a folder on your dev drive for this; mine is at D:\Cache\symbols, a sibling of D:\Cache\nuget, which acts as my NuGet package cache. I like keeping them both under D:\Cache because I know I can just delete anything under there if I need more disk space.

(Screenshot: Symbol Server Cache)
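
For reference, the same cache-plus-servers arrangement is conventionally expressed outside Visual Studio via the _NT_SYMBOL_PATH environment variable, which Visual Studio also honors; a sketch assuming the cache folder above:

    set _NT_SYMBOL_PATH=srv*D:\Cache\symbols*https://msdl.microsoft.com/download/symbols*https://symbols.nuget.org/download/symbols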

Sources

Now we have Visual Studio ready to aggressively load and debug all the symbols it can find. So let’s help it find the actual source code!

I always enable source server support. Source servers are a way for Visual Studio to load the exact version of the original source files actually used to compile the program. These days, source servers are primarily used for unmanaged code, although some legacy .NET libraries may use them.

(Screenshot: Enable Source Server Support)

While you’re there, I also recommend enabling Git Credential Manager for Source Link. Source Link is the modern replacement for source servers, at least as far as .NET is concerned. Enabling GCM means you’ll be able to pull the original source files from your private source code repository. Actually, I’m not sure why this isn’t enabled by default; I can’t think of a reason I’d want it off.

(Screenshot: Enable Git Credential Manager for Source Link)

Source Link itself is enabled by default, so you’re good there.

At this point, if you have access to the source code, Visual Studio should now load it automagically.
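
And if you publish your own libraries, emitting Source Link information is a small project-file opt-in. A minimal sketch for a GitHub-hosted project (recent .NET SDKs bundle much of this by default, so parts may be redundant):

    <PropertyGroup>
      <PublishRepositoryUrl>true</PublishRepositoryUrl>
      <EmbedUntrackedSources>true</EmbedUntrackedSources>
      <DebugType>portable</DebugType>
    </PropertyGroup>
    <ItemGroup>
      <PackageReference Include="Microsoft.SourceLink.GitHub" Version="8.0.0" PrivateAssets="All" />
    </ItemGroup>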

Load the Memory Dump

Visual Studio is now ready to load a memory dump file; you can just drag-and-drop it right into VS. Then go get a cup of coffee; loading all those symbols the first time is no joke, and it will take a while before it’s ready for you! As your cache fills up, dump files will load faster; the first load is generally the slowest.

Visual Studio does ask you what kind of debugger you want to launch; I always choose Mixed. Again, if I don’t know what the problem is, I want to be able to see everything.

Once Visual Studio loads the dump file and the debugger, it will drop you into what looks like a debugging session. Of course, the application is not actually running, so you can’t unpause or step the debugger or anything like that. However, you can poke around. As a general rule, I find the Parallel Stacks, Threads, Call Stack, and Modules debugger windows to be good starting points for figuring out what’s going on.

Happy debugging!


A Weird Thing With NOEXPAND

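For context, NOEXPAND is the table hint that tells SQL Server to read an indexed view directly instead of expanding it to its base tables, which is what makes the optimizer actually use the view’s index in some editions and scenarios. A minimal sketch; the view and column names are illustrative:

    SELECT v.OwnerId, v.TotalScore
    FROM dbo.VotesByOwner AS v WITH (NOEXPAND);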


Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.

The post A Weird Thing With NOEXPAND appeared first on Darling Data.


How to Build Effective AI Agents with Model Context Protocol


Imagine a world where every tool and service you use daily—whether it’s Slack for chatting with your team, GitHub for managing your projects, or even Kubernetes for running your applications—can seamlessly interact with AI. That’s not just a pipe dream; it’s quickly becoming a reality thanks to the advancements in AI integration through technologies like the Model Context Protocol (MCP).

What’s exciting about MCP is that it’s not specific to any one model. It acts as a universal translator of sorts, allowing various large language models (LLMs) to interact with a range of applications and databases, which is a game-changer for developers and IT professionals alike.

This video is from Red Hat.

Let’s step back a moment to appreciate the journey here. Initially, AI models like the first ChatGPT could only respond based on their training data. If you asked it a question outside its training scope, it would struggle to provide an accurate answer. But as models evolved, they began to incorporate external data through what’s referred to as “RAG” (Retrieval-Augmented Generation), pulling relevant information from databases to make more informed responses.

Now, let’s jump into the more interactive aspect of modern AI tools, emphasized by tool calling capabilities in newer models. This allows an AI to not just fetch information but to perform actions, like checking the weather through an API call and reporting back in a conversational format.

This progression towards what we term “agentic AI” is where things truly begin to get fascinating. Instead of every team crafting bespoke integrations to connect their AI to needed APIs, MCP offers a standardized method for these interactions. It’s like setting up a comprehensive communication network where AI can reliably and securely interact with multiple services.

Imagine you’re a developer working with containerized applications. With MCP, you wouldn’t need to manually check logs or command-line interfaces. For example, thanks to the MCP integration with Kubernetes, an AI could retrieve and organize container status, logs, or any operational insights and report back directly to you or even send notifications to a Slack channel.

In practice, it works seamlessly through clients like Goose, an open-source interface that lets you specify which MCP servers to interact with. It not only simplifies the process but enhances productivity by letting you focus on creative problem-solving rather than routine checks.
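
The exact file and schema vary by client, but most MCP clients are pointed at servers with a small config entry; a hypothetical sketch in the common mcpServers JSON convention, with the server name and package made up for illustration:

    {
      "mcpServers": {
        "kubernetes": {
          "command": "npx",
          "args": ["-y", "mcp-server-kubernetes"]
        }
      }
    }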

Let’s explore a practical scenario: You’re monitoring applications deployed on a platform like OpenShift. Instead of diving into the terminal, you could interact through Goose using natural language prompts. The AI, understanding your query, fetches the information through the MCP server tied to your Kubernetes setup and provides you with detailed, understandable feedback as if you’re chatting with a knowledgeable colleague.

This isn’t just about convenience or saving a few clicks; it’s about making the most of our tools in the most efficient way possible. By integrating AI through MCP, your systems become more than just a collection of independent services—they transform into a cohesive, intelligent framework capable of anticipating needs and collaborating across platforms.

And when you’re ready to share data or updates, the AI can also interact across different applications, posting updates in Slack, managing tasks in GitHub, or even interfacing with design tools like Figma. Each action it takes is informed by the nuanced understanding and ability to engage with numerous APIs seamlessly—a truly interconnected digital workspace.

As we look into the future of AI and system interactions, MCP stands out as a critical stepping stone towards an ecosystem where AI doesn’t just assist but actively participates in our digital workflows. It encourages us to think less about the individual tools and more about the big picture of our technological infrastructure.

For anyone keen on integrating robust AI functions into their systems, exploring MCP offers a glimpse into the next level of digital interaction where our tools not only understand but also act wisely on our behalf.

Stay curious,

Frank


Pantone’s 2026 Color of the Year Might Just Be Its Most Controversial Yet

This is the color you’re going to see everywhere next year.

Delivering securely on data and AI strategy


Most organizations feel the imperative to keep pace with continuing advances in AI capabilities, as highlighted in a recent MIT Technology Review Insights report. That clearly has security implications, particularly as organizations navigate a surge in the volume, velocity, and variety of security data. This explosion of data, coupled with fragmented toolchains, is making it increasingly difficult for security and data teams to maintain a proactive and unified security posture. 

Data and AI teams must move rapidly to deliver the desired business results, but they must do so without compromising security and governance. As they deploy more intelligent and powerful AI capabilities, proactive threat detection and response against the expanded attack surface, insider threats, and supply chain vulnerabilities must remain paramount. “I’m passionate about cybersecurity not slowing us down,” says Melody Hildebrandt, chief technology officer at Fox Corporation, “but I also own cybersecurity strategy. So I’m also passionate about us not introducing security vulnerabilities.” 

That’s getting more challenging, says Nithin Ramachandran, who is global vice president for data and AI at industrial and consumer products manufacturer 3M. “Our experience with generative AI has shown that we need to be looking at security differently than before,” he says. “With every tool we deploy, we look not just at its functionality but also its security posture. The latter is now what we lead with.” 

Our survey of 800 technology executives (including 100 chief information security officers), conducted in June 2025, shows that many organizations struggle to strike this balance. 

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
