Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Introducing the Progress Agentic RAG .NET SDK


The .NET SDK for Progress Agentic RAG brings Retrieval-Augmented Generation (RAG) capabilities to .NET development, with knowledge base management, AI-powered search and resource operations.

What Is Progress Agentic RAG?

Progress Agentic RAG is a RAG-as-a-Service that makes it dramatically easier to build AI systems grounded in real, trusted content. Rather than wiring together vector databases, embedding pipelines and retrieval logic yourself, Progress Agentic RAG provides an end-to-end platform for indexing, understanding and retrieving multimodal data.

It enables intelligent, agent-driven workflows that combine structured knowledge, contextual search and LLM orchestration into a unified experience.

With the introduction of the .NET SDK for Progress Agentic RAG, .NET developers can integrate this capability directly into their applications with just a few lines of code, leveraging modern .NET architecture patterns such as dependency injection, async workflows and strongly typed APIs.

From Retrieval to Agentic Intelligence

Progress Agentic RAG offers a fully capable AI Search Dashboard solution that allows you to get started in a matter of minutes. Simply connect or upload your data to a Knowledge Box, wait for NucliaDB’s blazing-fast indexing engine to process it, and you’re ready to explore grounded, contextual AI responses.

That immediate productivity is powerful. But modern enterprise development requires more than a dashboard.

Enterprise environments demand typed APIs, strong tooling and predictable behavior. Many RAG solutions are Python-first, leaving .NET teams stitching together REST calls manually and building custom abstractions just to regain the ergonomics they expect from their platform.

The new .NET SDK changes that.

It provides:

  • Strongly typed APIs
  • Async-first patterns
  • Native .NET integration
  • Simplified knowledge base interaction

With these capabilities, you can move beyond simple retrieval and begin composing intelligent, agent-driven experiences directly inside your application architecture. As powerful as it is convenient, the .NET SDK makes a great choice for new AI-enabled .NET applications.

Getting Started

Once a Knowledge Box has been established, you can begin interacting with Progress Agentic RAG through the .NET SDK.

The SDK is distributed as a NuGet package and covers the complete NucliaDB REST API. That includes strongly typed models, structured output helpers, dependency injection extensions and more than 200 APIs that expose the full surface area of the platform. See the SDK documentation page for a comprehensive list of available service providers.

Install the NuGet Package

dotnet add package Progress.Nuclia

With the package installed, you can register the INucliaDb interface using modern dependency injection patterns. The SDK supports everything from basic configuration to advanced multi-tenant scenarios using keyed services.

Register the Client

using Progress.Nuclia.Extensions;

// Create configuration
var config = new NucliaDbConfig(
    ZoneId: "aws-us-east-2-1",
    KnowledgeBoxId: "your-knowledge-box-id",
    ApiKey: "your-api-key"
);

// Register with logging
builder.Services.AddNucliaDb(config).UseLogging();

This approach aligns naturally with ASP.NET Core’s architecture. You configure once, inject where needed, and keep your AI integration cleanly separated from business logic.
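Once registered, the client can be consumed anywhere the container reaches. Here is a minimal sketch of that separation — it assumes the `INucliaDb` interface the SDK registers, and the service class itself is hypothetical:

```csharp
using Progress.Nuclia;

// Hypothetical application service: the INucliaDb client arrives via
// constructor injection, keeping the AI integration out of business logic.
public class SupportInsightsService
{
    private readonly INucliaDb _client;

    public SupportInsightsService(INucliaDb client) => _client = client;

    // Business code calls this method rather than touching the SDK directly.
    public async Task<string?> SummarizeAsync(string question)
    {
        var response = await _client.Search.AskAsync(new AskRequest(question));
        return response.Data.Answer;
    }
}
```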

Ask Questions with Agentic RAG

With configuration complete, you can begin querying your Knowledge Box using AskAsync or AskStreamingAsync.

// client is an injected INucliaDb instance
AskRequest askRequest = new("What issues are driving the most customer escalations this quarter?");
var response = await client.Search.AskAsync(askRequest);

// Display the grounded answer
Console.WriteLine(response.Data.Answer);

In just a few lines of code, you’re executing a grounded, agent-driven query against indexed enterprise data.
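For chat-style UIs, `AskStreamingAsync` lets you render the answer as it arrives rather than waiting for the full response. A hedged sketch — the chunk type and its property name here are assumptions for illustration, so check the SDK's actual streaming return type:

```csharp
// Stream the answer token by token instead of awaiting the whole payload.
var request = new AskRequest("Summarize this quarter's escalation themes.");

await foreach (var chunk in client.Search.AskStreamingAsync(request))
{
    Console.Write(chunk.Text);   // append each fragment as it arrives
}
Console.WriteLine();
```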

The Ask functionality is only the beginning. With more than 200 APIs available in the SDK, you can ingest and manage resources, create conversational interactions and perform search, all using strongly typed, async-first C# patterns.

With structured configuration and native .NET integration in place, you can move from experimentation to production-ready AI systems with confidence.

Explore the Examples: Blazor and .NET MAUI

The fastest way to understand what Progress Agentic RAG can do in a real application is to see it running inside the frameworks you already use.

We’ve published hands-on examples built with Blazor and .NET MAUI.

These samples go beyond simple API calls. They show how to:

  • Register the SDK using dependency injection
  • Execute agentic queries with structured output
  • Stream responses into interactive UI components
  • Keep AI concerns cleanly separated from presentation logic

If you’re building internal tools, customer-facing dashboards or cross-platform AI assistants, these examples provide a production-oriented starting point.

Clone the samples, wire up your Knowledge Box and see how quickly you can integrate grounded, agent-driven intelligence into your existing .NET architecture.

Additional Media

Learn more about Progress Agentic RAG and the .NET SDK from video tutorials and podcasts:

Read the whole story
alvinashcraft
just a second ago
reply
Pennsylvania, USA
Share this story
Delete

Another crash caused by uninstaller code injection into Explorer


Some time ago, I noted that any sufficiently advanced uninstaller is indistinguishable from malware.¹

During one of our regular debugging chats, a colleague of mine mentioned that he was looking at a mysterious spike in Explorer crashes. He showed me one of the dumps, and as soon as I saw the register dump, I said, “Oh, I bet it’s a buggy uninstaller.”

The tell-tale sign: It’s a crash in 32-bit Explorer on a 64-bit system.

The 32-bit version of Explorer exists for backward compatibility with 32-bit programs. This is not the copy of Explorer that is handling your taskbar or desktop or File Explorer windows. So if the 32-bit Explorer is running on a 64-bit system, it’s because some other program is using it to do some dirty work.

But out of curiosity, I went to look at why this particular version of the buggy uninstaller was crashing.

This particular uninstaller’s injected code had a loop where it tried to do some file operations, and if they failed, it paused for a little bit and then tried again. However, the author of the code failed to specify the correct calling convention on the functions, so instead of calling them with the __stdcall calling convention, it called them with the __cdecl calling convention. In the __stdcall calling convention, the callee pops the parameters from the stack, but in the __cdecl calling convention, the caller pops them.

This calling convention mismatch means that each time the code calls a Windows function, the code pushes parameters onto the stack, the Windows function pops them, and then the calling code pops them again. Therefore, each time through the loop, the code eats away at its own stack.

Apparently, this loop iterated a lot of times, because it had eaten up its entire stack, and the stack pointer had incremented all the way into the injected code. Each time through the loop, a little bit more of the injected code was encroached upon by the stack, until the stack pointer found itself inside the code being executed.

The code then crashed on an invalid instruction because the code no longer existed. It had been overwritten by stack data.

This left an ugly corpse behind, and so many of them that the Windows team thought that it was caused by a bug in Windows itself.

¹ The title is a reference to Clarke’s Third Law: Any sufficiently advanced technology is indistinguishable from magic.

The post Another crash caused by uninstaller code injection into Explorer appeared first on The Old New Thing.


When to Use State Pattern in C#: Decision Guide with Examples


Discover when to use state pattern in C# with real decision criteria, use case examples, and guidance on when simpler alternatives work better.


Expose your stored procedures as AI agent tools with DAB 2.0


Data API builder 2.0 (currently in public preview) is a major release focused on MCP and AI integration. Among its headline features is the ability to expose stored procedures as custom MCP tools, making them discoverable and callable by AI agents. No glue code, no middleware, no extra plumbing.

In this post I'll walk through how the feature works, and show a practical example: wiring up a full-text search stored procedure as its own dedicated tool that any MCP client can discover and call by name.

The idea: a dedicated search tool

By default, DAB's SQL MCP Server exposes tables and views through generic DML tools — things like list_books, get_book, and so on. These are useful for straightforward CRUD, but they're not designed for complex operations like full-text search.

With custom-tool: true, you can go further. Set that flag on a stored-procedure entity and DAB dynamically registers the procedure as a named, purpose-built tool in tools/list. The AI agent discovers it by name, reads its description, and calls it directly — no SQL knowledge required.

Naming note: tool names are derived from the entity name, converted to snake_case. An entity called SearchProducts appears in the tool list as search_products. Use the snake_case name when calling the tool.

Step by step: adding a search command

Here's how to add a custom SearchProducts tool backed by a stored procedure that does full-text search across product names and descriptions.

1. Install the 2.0 preview CLI

dotnet tool install -g Microsoft.DataApiBuilder --prerelease

2. Enable MCP in your configuration

dotnet dab configure --runtime.mcp.enabled true

3. Add the stored procedure as a custom tool

dotnet dab add SearchProducts \
  --source dbo.search_products \
  --source.type "stored-procedure" \
  --permissions "anonymous:execute" \
  --mcp.custom-tool true \
  --description "Full-text search across product names and descriptions"

This produces the following in your dab-config.json:
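A sketch of what the generated entity can look like, reconstructed from the CLI flags above — exact key names in the 2.0 preview may differ, so treat your own dab-config.json as the source of truth:

```json
{
  "entities": {
    "SearchProducts": {
      "source": {
        "object": "dbo.search_products",
        "type": "stored-procedure"
      },
      "permissions": [
        { "role": "anonymous", "actions": ["execute"] }
      ],
      "mcp": { "custom-tool": true },
      "description": "Full-text search across product names and descriptions"
    }
  }
}
```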

4. Start DAB and verify the tool is registered

dotnet dab start

When an MCP client calls tools/list, it sees your tool alongside the built-in DML tools:

tools/list response

{
  "tools": [
    {
      "name": "search_products",
      "description": "Full-text search across product names and descriptions",
      "inputSchema": {
        "type": "object",
        "properties": {}
      }
    }
  ]
}

Why descriptions matter

The --description flag might look optional, but it's arguably the most important part of the setup. Without a description, an agent sees only the technical name. With a good description, it understands what the tool does, when to use it, and what kind of input it expects.

The inputSchema currently returns empty properties. Agents rely on the tool description and describe_entities to determine the correct parameters, so write descriptions that are specific and actionable.

A few things worth knowing

Only works on stored procedures

The custom-tool flag is only valid on entities with source.type: stored-procedure. Setting it on a table or view entity causes a configuration error at startup.

RBAC is respected

Custom tools honour the same role-based access control as every other DAB entity. If you restrict the entity to authenticated only, the tool won't appear in tools/list for anonymous agents, and any direct call returns a permission error.

You can register multiple tools

There's no limit. Run dab add for each procedure you want to expose. They all show up in the MCP tool list alongside each other.

You can disable without deleting

Set --mcp.custom-tool false via dab update to hide the tool from agents without removing the entity from your config. Re-enable it anytime by flipping the flag back to true.

More information

What's new for version 2.0 - Preview - Data API builder | Microsoft Learn

Data Manipulation Language Tools (DML) - SQL MCP Server | Microsoft Learn


Performance Studio 1.2 Release: Query Store Improvements



Summary

In this video, I delve into the latest updates and enhancements to my standalone query plan analysis tool within Performance Studio. I’ve been working on this tool to provide a more personalized and helpful experience, especially after feeling let down by SolarWinds’ inaction on Plan Explorer. The latest release focuses on improving the query store functionality, adding filtering and graphing capabilities to make it easier to analyze and understand query performance over time. I walk through the new features, including how to filter and view query history, and demonstrate the execution graphs to help you visualize query performance trends. This update is part of version 1.2, and I encourage you to download it from code.erikdarling.com to explore these new features for yourself.

Chapters

Full Transcript

Erik monitoring tool mogul here. Well, I guess in this video I’m a query plan analysis mogul. We’re not talking about monitoring tool stuff here. We’re talking about my stand-alone query plan analysis tool that, analysis stool, analysis tool, pause, that I’ve been working on because, well, I’m sick of SolarWinds not doing anything with Plan Explorer, and I wanted something that I could put a little bit of myself into. Not in a weird way, in a helpful way. Anyway, I’ve got a few things that have changed since last time. There were a few bug fixes. You know, not a whole lot. But this release was, for me, mostly about making the query store stuff a little bit better. Because, you know, I punted a little bit, just to get something in there. And there was some stuff that I didn’t do that I wanted to do that I just got around to doing. So, let’s talk about what I did. Anyway, it’s all fun, right? So, let’s open up Performance Studio. And let’s click on the query store button. And we must test our connection here. And let’s connect into, let’s say, Stack Overflow 2013. All right. So, the stuff that I added so far is a little bit of filtering magic and a little bit of graphing magic. So, just, you know, normally you hit Fetch here, and you get all this stuff back. Now, if you hit Clear, it doesn’t clear out the results pane. It clears out the search stuff. So, don’t hit Clear and think, this isn’t working. This is a bug. I have to bother Erik. That’s not what works here. So, let’s look at some of the filtering stuff. So, I’m going to come over to Management Studio real quick. And let’s look at some plans that I have here.

So, I’m just going to grab the 10 most recent just by whatever. And we have, let’s see, plan ID 8246. So, if you want to look at, you know, if you want to go searching for stuff, you just hit plan ID there, plug that in and hit Fetch, and you will get plan ID 8246. Isn’t it? Isn’t it our lucky day? You could also do that lookup by query ID or whatever. You could also look at things. You can also search by module name. I think the only one that we might find in here is dbo.dropindexes. This is what I get for typing on my own. I was looking by plan ID. There we go. There we are. All right. There’s our module, dropindexes. Anyway, there we have that. So, if you want to search through query store data now, just sort of like you could do with sp_QuickieStore. I don’t really have like the full spate of things in there like comma separated lists and all the other stuff.

This one, I just wanted to get something simple into so you could see that. But then also, if you right click, you hit View History. Well, that’s not a lot of fireworks, is it? Let’s do this a little bit better. Let’s do this thing some justice. Let’s hit Clear. And let’s go to Executions. And now let’s hit Fetch. And let’s see, maybe we can find one that has a little bit of life to it. And hit View History. And this is what we get back. All right. So, sort of like, oh, I got to fix that. Look at that. Nah, that’s silly.

Didn’t show up. Didn’t show up when I opened it. It only showed up after I clicked on it. Hell yeah. This front-end stuff is hard. Man, I thought back-end work was difficult. Front-end stuff, very sensitive. Very sensitive. Anyway, I’ll fix that later. But what you have here is sort of a graph over time of how your query performed. You know, kind of just to sort of try to bring things on par with how, you know, like the query store things work.

You can do average duration. You can do average CPU. I guess those are about the same there. You can do total CPU. Oh, look, it changed a little bit. And, you know, all the rest of it. Executions. Wow, it did nothing for a long time. And then it executed a whole lot. Well, I guess those are all tiny little single executions. And then there was a big spike in executions. So, that was fun.

Anyway, just some small improvements that I’ve made to my Performance Studio app here. This is, again, something that you can open up query plans with, run queries, experiment with performance things, get a whole bunch of good information back about what’s going on in the query plans.

Today’s video is just going over the query store additions that I made in version 1.2. So, you have that now at your disposal to have fun with and look forward to. This is already released. So, if you go to code.erikdarling.com and you click on Performance Studio, you should see the 1.2 release with a bunch of zip files. This thing is available for Windows, Mac, cross-platform.

So, I would encourage you to read the readme file because there’s a lot of good things to read in the readme file about what this thing does. Anyway, thank you for watching. I hope you enjoyed yourselves. I hope you learned something. I hope you’ll try my plan analysis tool here.

And I will see you in, well, actually, I don’t know if this is Thursday or Friday’s video. So, either I’ll see you tomorrow or I’ll see you Monday for office hours. All right. Have a good one.

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. Blog readers get 25% off the Everything Bundle — over 100 hours of performance tuning content. Need hands-on help? I offer consulting engagements from targeted investigations to ongoing retainers. Want a quick sanity check before committing to a full engagement? Schedule a call — no commitment required.

The post Performance Studio 1.2 Release: Query Store Improvements appeared first on Darling Data.


T-SQL That Doesn’t Suck: I’m Running a Pre-Con at PASS Summit East



On Thursday, May 7th, I’ll be in Chicago at PASS Data Community Summit East, running a full-day pre-con called T-SQL That Doesn’t Suck: Solving Performance and Concurrency Problems.

The pitch is simple: you already know how to write T-SQL that runs. It compiles, it returns rows, nobody’s filed an incident yet. The problem is “runs” and “runs well at scale” are different conversations, and production tends to be the one asking the hard questions.

All attendees get free access to Learn T-SQL With Erik.

What we’re covering


The day splits roughly in half.

First half is the performance problems that don’t show up until you actually have data and traffic behind them:

– Implicit conversions that quietly kill your seeks
– Non-sargable predicates hiding behind innocent-looking WHERE clauses
– Parameter sniffing traps — when it helps, when it hurts, what to do about it
– Joins that look fine in the plan right up until they aren’t
– Temp tables vs. table variables, and when each one actually wins
– CTEs that help vs. CTEs that just make the query feel organized
– Window functions that don’t spill to tempdb

Second half is concurrency — the stuff that turns a Tuesday afternoon into a war room:

– Blocking chains, and how to actually read them
– Isolation level surprises
– DML that holds locks like it’s paying rent
– Patterns that let readers and writers coexist without fist-fighting

We’ll also put AI-generated T-SQL on the table. Not to pile on — it’s showing up in pull requests whether you like it or not — but to talk honestly about where it falls apart and where it actually saves you time.

Details


**When:** Thursday, May 7, 2026, 9:00 AM – 5:00 PM
**Where:** Hyatt Regency McCormick Place, Chicago — Jackson Park B
**Level:** 300 (if you’re past “what is a clustered index,” you’re in the right room)
**Register:** Here

The hotel discount at the Hyatt cuts off **April 22**, so if you need a room at the conference venue, book this week.

Chicago in May. T-SQL all day. Come write queries you’d be proud to put your name on.

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. Blog readers get 25% off the Everything Bundle — over 100 hours of performance tuning content. Need hands-on help? I offer consulting engagements from targeted investigations to ongoing retainers. Want a quick sanity check before committing to a full engagement? Schedule a call — no commitment required.

The post T-SQL That Doesn’t Suck: I’m Running a Pre-Con at PASS Summit East appeared first on Darling Data.
