.NET 10.0.7 Out-of-Band Security Update

We are releasing .NET 10.0.7 as an out-of-band (OOB) update to address a security issue introduced in Microsoft.AspNetCore.DataProtection.

Security update details

This release includes a fix for CVE-2026-40372.

After the Patch Tuesday 10.0.6 release, some customers reported that decryption was failing in their applications. This behavior was reported in aspnetcore issue #66335.

While investigating those reports, we determined that the regression also exposed a vulnerability. In versions 10.0.0 through 10.0.6 of the Microsoft.AspNetCore.DataProtection NuGet package, the managed authenticated encryptor could compute its HMAC validation tag over the wrong bytes of the payload and then discard the computed hash, which could result in elevation of privilege.

Update required

If your application uses ASP.NET Core Data Protection, update the Microsoft.AspNetCore.DataProtection package to 10.0.7 as soon as possible to address the decryption regression and security vulnerability.

Download .NET 10.0.7

.NET 10.0

  • Release notes: 10.0 release notes
  • Installers and binaries: 10.0.7
  • Container images: images
  • Linux packages: 10.0
  • Known issues: 10.0

Installation guidance

  1. Download and install the .NET 10.0.7 SDK or Runtime.
  2. Verify installation by running dotnet --info and confirming you are on 10.0.7.
  3. Rebuild and redeploy your applications using updated images or packages.

Share your feedback

If you experience any issues after installing this update, please let us know in the .NET release feedback issues.

The post .NET 10.0.7 Out-of-Band Security Update appeared first on .NET Blog.


Aspire 13.2: Bun Support and Container Enhancements


If you’ve been following the JavaScript story in Aspire, 13.2 is the kind of release you want to see. Focused, deliberate, and packed with improvements that make a real difference.

Bun support lands for Vite apps. Yarn gets more reliable. Container publishing becomes more predictable. And a handful of debugging and build improvements add up to a noticeably smoother experience.

In other words: better tooling, more building.

Note

To keep this post aligned with the JavaScript thread, the code samples use apphost.ts. If you’d rather author your AppHost in AppHost.cs, the linked aspire.dev docs include C# examples for the same patterns.

⚑ Bun support for Vite apps

Let’s start with the headline. Bun is now a first-class option for Vite apps in Aspire.

import { createBuilder } from './.modules/aspire.js';

// The builder pattern is used to add resources and model their relationships
const builder = await createBuilder();

// Add a Vite app that points to our frontend directory.
// Chaining `withBun` tells Aspire to use Bun as the
// frontend resource's package manager.
await builder
  .addViteApp("frontend", "./frontend")
  .withBun();

await builder.build().run();

That is refreshingly clean. Call withBun(), and Aspire knows how to treat your addViteApp() resource like a Bun-backed app instead of assuming npm. No ceremony. No configuration scavenger hunt. Just a single fluent call that works.

Why does this matter? Because Bun is fast, and JavaScript developers tend to notice when tooling is slow.

  • Startup time matters.
  • Install time matters.
  • Feedback loops matter.

If your team already likes Bun, Aspire no longer makes you feel like you’re coloring outside the lines.

And if you don’t use Bun? Fine. npm, Yarn, and pnpm still work. This isn’t Aspire picking favorites. It’s Aspire behaving like the JavaScript ecosystem exists in the real world.

withYarn() got more reliable too

Bun is the shiny headline, but Yarn users get something arguably even more important: fewer headaches. Aspire 13.2 improves reliability for withYarn() with addViteApp(), which is the sort of fix that sounds small right up until it saves you twenty minutes of local dev confusion.

Reliability is a feature in its own right, and this improvement delivers exactly that.

And if your team is still on Yarn, the TypeScript AppHost version stays just as simple:

await builder.addViteApp("frontend", "./frontend")
  .withYarn();

🐳 Container changes that actually help

There are also some genuinely useful container improvements in 13.2 that give you more control and predictability exactly where you need them.

Pull policy is now explicit

If you’re working with Docker Compose publishing, PullPolicy is now a proper setting, including a Never option for “use the local image and do not get creative” workflows.

That’s useful when you’re iterating locally, testing freshly built images, or working in an environment where pulling from a registry is the wrong move. It’s a small change, but it makes container behavior more honest and easier to reason about.

Imagine you’re working in your TypeScript AppHost, and you’ve already added docker compose support:

aspire add docker

You can now add an import for the ImagePullPolicy enum and chain a call to withImagePullPolicy.

import { createBuilder, ImagePullPolicy } from './.modules/aspire.js';

const builder = await createBuilder();

// Adds a Docker Compose publishing environment.
await builder.addDockerComposeEnvironment("compose");

// Add a container based on the myorg/worker:latest image.
// Specify that the image should never be pulled from the registry.
const worker = await builder.addContainer("worker", "myorg/worker:latest")
  .withImagePullPolicy(ImagePullPolicy.Never);

await builder.build().run();

If that reads like “please use the image I already built and leave the registry out of it,” that’s because it is. The Docker Compose deployment docs spell out the same mapping directly.

PostgreSQL 18+ just works

PostgreSQL 18 changed its internal data directory layout. Naturally, that means volume mapping got weird.

Aspire 13.2 now handles the data volume path correctly for PostgreSQL 18 and later. If you’re running newer Postgres versions, persistence keeps working seamlessly with no extra configuration required.

import { createBuilder } from './.modules/aspire.js';

const builder = await createBuilder();

// Add a postgres server resource with a data volume.
const postgres = await builder.addPostgres("postgres")
  .withDataVolume({ isReadOnly: false });

// From the postgres server resource add a database named 'appdb'
const appDb = await postgres.addDatabase("appdb");

// Add a node app for our api and reference the database.
await builder
  .addNodeApp("api", "./api", "src/index.ts")
  .withReference(appDb)
  .waitFor(postgres);

await builder.build().run();

So if you’re attaching a durable volume to local Postgres, 13.2 handles it correctly out of the box. The PostgreSQL hosting docs show the same pattern in both TypeScript and C#.

That’s exactly the kind of solid platform support that keeps development moving.

πŸ› Better debugging and fewer build paper cuts

13.2 also ships a set of quality-of-life improvements to day-to-day development, particularly around debugging and builds.

Better debugger displays

Core Aspire types now use DebuggerDisplayAttribute, so stepping through an AppHost is less of a scavenger hunt. You get clearer values for things like DistributedApplication, resources, and endpoint expressions instead of drilling into objects just to figure out what you’re looking at.

Better WaitFor failures and event behavior

When resources fail to start, the messages around WaitFor are more helpful. That’s good.

BeforeResourceStartedEvent also now fires only when a resource is actually starting, which is even better. Less noise. Better signal. More predictable behavior.

launchSettings.json is less fragile now

Aspire now accepts comments and trailing commas in launchSettings.json.

This is a welcome improvement. Real developers leave trailing commas and add comments to explain why a setting exists, and now Aspire handles that gracefully, just as you’d expect.
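
For illustration, here is a hypothetical launchSettings.json fragment (the profile name and port are made up) of the kind Aspire now parses without complaint:

```jsonc
{
  "profiles": {
    "https": {
      "commandName": "Project",
      // Pinned so local URLs stay stable across machines
      "applicationUrl": "https://localhost:17001",
    },
  },
}
```

Both the comment and the trailing commas would previously have tripped up parsing.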

GenerateAssemblyInfo no longer gets in the way

If your AppHost disables GenerateAssemblyInfo, 13.2 now behaves correctly without requiring any workarounds. Another solid improvement to build reliability.
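
As a minimal sketch, this is the standard MSBuild property in question; with 13.2 an AppHost project carrying it just works:

```xml
<!-- AppHost .csproj fragment: disabling assembly-info generation
     no longer requires workarounds in Aspire 13.2 -->
<PropertyGroup>
  <GenerateAssemblyInfo>false</GenerateAssemblyInfo>
</PropertyGroup>
```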

🌐 The bigger JavaScript picture

What I like about this release is that it keeps pushing the same idea forward: JavaScript support in Aspire isn’t a novelty feature.

This is what platform maturity looks like. Thoughtful, confident progress that makes polyglot development feel natural and productive.

If you’re using C# as the orchestration layer and JavaScript for the parts that are best written in JavaScript, Aspire keeps getting better at supporting you exactly where you are. That’s the goal.

πŸŽ‰ The bottom line

Aspire 13.2 doesn’t reinvent the JavaScript story. It sharpens it.

Bun is now a first-class option for Vite apps. Yarn is more reliable. Containers are more predictable. Debugging is nicer. Build reliability is stronger. Each improvement is purposeful, and together they make the experience feel cohesive and polished.

That’s what great releases look like: consistent forward momentum, where the tooling does the right thing and development flows without interruption.

The post Aspire 13.2: Bun Support and Container Enhancements appeared first on Aspire Blog.


The Polyglot tax – Part 4


The Agent-Ready Database: Security, Backup, and MCP

Part 4 of 4 – The Multi-Model Database Series


This is the final post in a four-part series on multi-model databases in SQL Server 2025 and Azure SQL – exploring how the optimizer, storage engine, and security layer treat each data model as a first-class citizen under one roof.

In Part 1: The Polyglot Tax, we described the trajectory: you spin up a database, point an agent at it, and start building fast. The complexity comes later – JSON, graph, vectors, analytics – and each new requirement tempts you to spin up another database. In Part 2: When JSON Met Graph, we proved that native JSON types and graph MATCH syntax compile into a single execution plan alongside relational joins. In Part 3: Vectors, Analytics, and the End of ETL, we added DiskANN vector search and columnstore analytics – five data models, one stored procedure, one transaction boundary.

But here is where many “multi-model” stories stop. They show you the features and skip the hard parts – the parts that determine whether your architecture survives contact with production. Security across all those data models. Backup and recovery that is actually consistent. An API layer that agents can call without you hand-coding middleware for every endpoint.

Today we close the series with those hard parts. They are not as flashy as DiskANN or columnstore, but they are the reason you can actually trust a single-engine architecture at scale.

1 Unified Security: One Policy Across All Models

In Part 1 we counted five separate auth systems in a polyglot stack. Let us see what “one security model” actually looks like in practice.

1.1 Row-Level Security Across Data Models

Consider a multi-tenant SaaS application. Every table has a TenantID column. We want to guarantee that Tenant A never sees Tenant B’s data – regardless of which data model the query uses.

First, define the filter function:

CREATE FUNCTION dbo.fn_TenantFilter(@TenantID INT)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS fn_result
WHERE @TenantID = CAST(SESSION_CONTEXT(N'TenantID') AS INT);

This function checks whether the row’s TenantID matches the tenant ID stored in the session context. SESSION_CONTEXT is set at connection time by the application (or Data API Builder) – the user never controls it directly.

Now apply it across all tables, regardless of data model:

CREATE SECURITY POLICY TenantIsolation
ADD FILTER PREDICATE dbo.fn_TenantFilter(TenantID)
    ON dbo.Customers,               -- Relational
ADD FILTER PREDICATE dbo.fn_TenantFilter(TenantID)
    ON dbo.Events,                  -- JSON data
ADD FILTER PREDICATE dbo.fn_TenantFilter(TenantID)
    ON dbo.Relationships,           -- Graph edges
ADD FILTER PREDICATE dbo.fn_TenantFilter(TenantID)
    ON dbo.Embeddings               -- Vector data
WITH (STATE = ON);

From this point forward, every query against these tables – relational joins, JSON path queries, graph traversals, vector similarity searches – is automatically filtered by tenant. The calling code does not need to include WHERE TenantID = @id in every query. The engine injects the filter predicate into the execution plan before any data leaves the storage engine.

Here is what this means for security audits. In a polyglot system, the auditor asks: “Prove that Tenant A cannot access Tenant B’s data.” You need to demonstrate this for each database independently – the relational database’s RLS, the document store’s field-level access control, the graph database’s node-level restrictions, and hope that your vector store supports metadata filtering. Each system has different configuration, different failure modes, different edge cases.

With a unified security model, the answer is one policy, one proof, one explanation. The filter predicate is compiled into the execution plan. It is not optional. It cannot be bypassed by switching to a different query syntax or data model.

1.2 Beyond RLS: Layered Permissions

Row-Level Security handles which rows a caller can see. But a unified security model goes further.

Stored procedures as the permission boundary. When stored procedures are the interface – as we recommend in Section 6 – you grant EXECUTE on those procedures and nothing else. The caller has no direct SELECT, INSERT, UPDATE, or DELETE permissions on the underlying tables. If an agent’s credentials are compromised, the attacker can only call the procedures you exposed, with the parameters those procedures accept. No ad-hoc queries. No schema discovery.

-- The agent can only call these procedures - nothing else
GRANT EXECUTE ON dbo.FraudCheck TO agent_fraud_checker;
GRANT EXECUTE ON dbo.GetCustomerContext TO agent_fraud_checker;
-- No SELECT, INSERT, UPDATE, DELETE on any table

Dynamic Data Masking protects sensitive columns – email addresses, phone numbers, account balances – from roles that should not see full values. Unlike RLS, which filters rows, masking returns the row but obscures column values. An analytics agent that needs aggregate patterns but not individual PII sees masked data by default.
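
As a sketch, with column and role names assumed for illustration, masking is declared per column and lifted per role:

```sql
-- Hypothetical example: mask the Email column on dbo.Customers.
ALTER TABLE dbo.Customers
    ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');

-- Roles without UNMASK see obfuscated values (e.g. aXXX@XXXX.com);
-- trusted roles can be granted the full view.
GRANT UNMASK TO support_role;
```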

Always Encrypted keeps sensitive columns encrypted end-to-end. The database engine never sees the plaintext. Even a DBA with full server access cannot read the data. For compliance-heavy workloads, this is the strongest guarantee the engine offers.

These layers compose. A single query can hit RLS (row filtering), Dynamic Data Masking (column obfuscation), and stored procedure permissions (surface restriction) simultaneously – all enforced by the engine, all configured in one place.

1.3 Testing It

-- Set context to Tenant 100
EXEC sp_set_session_context @key = N'TenantID', @value = 100;

SELECT * FROM Customers;      -- Returns only Tenant 100 customers
SELECT * FROM Events;          -- Returns only Tenant 100 events
SELECT * FROM Relationships;   -- Returns only Tenant 100 graph edges
SELECT * FROM Embeddings;      -- Returns only Tenant 100 vectors

-- Switch context to Tenant 200
EXEC sp_set_session_context @key = N'TenantID', @value = 200;

SELECT * FROM Customers;      -- Returns only Tenant 200 customers
-- Same tables, same queries, different results. No code changes.

The security policy is invisible to the application. Your queries do not change. Your stored procedures do not change. The engine handles it.

2 Unified Backup and Recovery

-- One backup captures ALL data models to a consistent point
BACKUP DATABASE MultiModelApp
TO URL = 'https://storage.blob.core.windows.net/backups/MultiModelApp.bak'
WITH COMPRESSION, ENCRYPTION (
    ALGORITHM = AES_256,
    SERVER CERTIFICATE = BackupCert
);

-- One restore recovers ALL data models to a consistent point
RESTORE DATABASE MultiModelApp
FROM URL = 'https://storage.blob.core.windows.net/backups/MultiModelApp.bak'
WITH STOPAT = '2026-02-01 10:30:00';

The STOPAT clause is point-in-time recovery. The engine replays the transaction log to the specified moment, incorporating all operations – relational inserts, JSON updates, graph edge creations, vector inserts – into a single consistent state.

In a polyglot system, point-in-time recovery across five databases requires coordinating five separate restore operations and hoping that the timestamps line up. If your graph database restored to 10:30:00 but your relational database restored to 10:29:58, you have two seconds of inconsistency. For financial data, audit trails, and compliance-sensitive systems, this is not acceptable.

With one database, one transaction log, and one backup, the recovery is atomic.

3 Ledger Tables: Cryptographic Audit Trails

For regulated industries – finance, healthcare, government – “who changed what, when” needs to be tamper-evident. SQL Server’s ledger tables provide this:

-- Updatable ledger table: tracks all changes with cryptographic hashes
CREATE TABLE FinancialTransactions (
    TransactionID INT PRIMARY KEY,
    AccountID INT NOT NULL,
    Amount MONEY NOT NULL,
    TransactionType NVARCHAR(20),
    Description NVARCHAR(500),
    TransactionDate DATETIME2 DEFAULT SYSUTCDATETIME()
)
WITH (SYSTEM_VERSIONING = ON, LEDGER = ON);

Every insert, update, and delete is recorded in a history table with a cryptographic hash chain. The chain ensures that if anyone tampers with historical records – even a DBA with full access – the verification check fails.

-- Verify nobody has tampered with the ledger
EXECUTE sp_verify_database_ledger_from_digest_storage;

For append-only audit logs where updates and deletes should be physically impossible:

CREATE TABLE AuditLog (
    LogID INT IDENTITY PRIMARY KEY,
    EventType NVARCHAR(50) NOT NULL,
    EventData JSON,
    UserName NVARCHAR(100) DEFAULT SUSER_SNAME(),
    EventTime DATETIME2 DEFAULT SYSUTCDATETIME()
)
WITH (LEDGER = ON (APPEND_ONLY = ON));

An APPEND_ONLY ledger table rejects UPDATE and DELETE statements at the engine level. Combined with the JSON column for event data, this gives you a schema-flexible, tamper-evident audit trail – governed by the same RLS policies as everything else.

4 SQL MCP Server: From Database to Agent in 60 Seconds

We have established that the database engine can handle multiple data models with unified governance. The remaining question: how do applications and AI agents access it?

SQL MCP Server is a feature of Data API Builder (DAB) – Microsoft’s open-source engine that generates REST, GraphQL, and MCP endpoints directly from your database schema. No backend code. No ORM. No controller classes. It is the prescriptive approach for exposing enterprise databases to agents – and it solves problems that other database MCP implementations do not attempt.

4.1 Installation and Configuration

# Install DAB CLI (.NET tool)
dotnet tool install -g Microsoft.DataApiBuilder

# Initialize a configuration file
dab init --database-type mssql \
         --connection-string "Server=localhost;Database=MultiModelApp;Trusted_Connection=true"

This creates a dab-config.json file. Let us add entities:

# Expose a table as REST + GraphQL + MCP
dab add Customer \
  --source dbo.Customers \
  --permissions "anonymous:read" \
  --rest true \
  --graphql true

# Expose a stored procedure (our multi-model query from Part 2)
dab add GetCustomerContext \
  --source dbo.GetCustomerContext \
  --source.type "stored-procedure" \
  --source.params "customerID:,query:" \
  --permissions "authenticated:execute" \
  --rest.methods "get,post" \
  --graphql.operation "query"

# Start the API server
dab start

You now have:

  • REST at http://localhost:5000/api/Customer
  • GraphQL at http://localhost:5000/graphql
  • MCP at http://localhost:5000/mcp for AI agents

No code generation. No deployment pipeline for a separate API layer. If your table schema changes, DAB picks up the changes on restart.

4.2 The Configuration File

Here is what DAB creates under the hood:

{
  "data-source": {
    "database-type": "mssql",
    "connection-string": "@env('SQL_CONNECTION_STRING')"
  },
  "entities": {
    "Customer": {
      "source": {
        "type": "table",
        "object": "dbo.Customers"
      },
      "rest": { "enabled": true },
      "graphql": {
        "enabled": true,
        "type": { "singular": "Customer", "plural": "Customers" }
      },
      "permissions": [
        { "role": "anonymous", "actions": ["read"] }
      ]
    },
    "GetCustomerContext": {
      "source": {
        "type": "stored-procedure",
        "object": "dbo.GetCustomerContext",
        "parameters": {
          "customerID": "",
          "query": ""
        }
      },
      "rest": { "methods": ["GET", "POST"] },
      "graphql": { "operation": "query" },
      "permissions": [
        { "role": "authenticated", "actions": ["execute"] }
      ]
    }
  }
}

First, the connection string uses @env('SQL_CONNECTION_STRING') – it reads from an environment variable, not a plain-text config file. Configuration values can also reference Azure Key Vault secrets. Credentials never sit in a file that might be committed to source control.

Second, the permissions model distinguishes between anonymous and authenticated roles. The Customer entity allows anonymous reads (public catalog data). The GetCustomerContext procedure requires authentication. DAB supports Azure AD / Entra ID integration, so “authenticated” means the caller presented a valid JWT – which DAB validates before forwarding the request to SQL Server. The database-side RLS policy then filters the results by tenant. Two layers of defense, zero custom middleware.

4.3 How SQL MCP Server Exposes Your Data to Agents

SQL MCP Server takes a fundamentally different approach from the database MCP servers you will find on GitHub. Three design decisions matter.

Fixed, small tool set. Regardless of how many tables or procedures your database has, SQL MCP Server exposes exactly seven DML tools: describe_entities, create_record, read_records, update_record, delete_record, execute_entity, and aggregate_records. The agent’s context window is its thinking space. When dozens of tools compete for that space, reasoning quality drops. A fixed tool set keeps the context focused – the agent thinks first and accesses data second.

"runtime": {
  "mcp": {
    "enabled": true,
    "path": "/mcp",
    "dml-tools": {
      "describe-entities": true,
      "create-record": true,
      "read-records": true,
      "update-record": true,
      "delete-record": true,
      "execute-entity": true,
      "aggregate-records": true
    }
  }
}

You can disable individual tools per deployment. A read-only analytics agent gets read_records and aggregate_records. A fraud-checker agent gets execute_entity to call dbo.FraudCheck. The blast-radius scoping from Section 6 extends to the MCP layer.

Entity abstraction – no raw schema exposure. The agent never sees dbo.Customers or dbo.TransactionHistory. It sees the entity names you defined in the configuration: Customer, GetCustomerContext. You can alias column names, hide sensitive fields per role, and add descriptions that guide agent behavior. The internal schema stays internal.

Deterministic query building (NL2DAB, not NL2SQL). SQL MCP Server intentionally does not support NL2SQL. Models are not deterministic, and complex queries are the likeliest to produce subtle errors – exactly the queries users hope AI can generate. Instead, when an agent calls read_records or aggregate_records, DAB’s built-in query builder produces well-formed T-SQL deterministically from the structured tool parameters. Same input, same query, every time. The risk, overhead, and unpredictability of NL2SQL disappear entirely.
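
To make the contrast concrete, here is a hypothetical sketch (not DAB's actual implementation) of what deterministic query building means: structured tool parameters in, the same parameterized query shape out, every time.

```typescript
// Sketch of the NL2DAB idea: a pure function maps structured read_records
// parameters to parameterized SQL text. No model in the loop, so the
// mapping is deterministic and repeatable.
function buildReadQuery(
  entity: string,
  fields: string[],
  filters: Record<string, unknown>,
): { sql: string; params: Record<string, unknown> } {
  // Sort for a canonical, stable output regardless of input ordering.
  const cols = [...fields].sort().join(", ");
  const keys = Object.keys(filters).sort();
  let sql = `SELECT ${cols} FROM ${entity}`;
  if (keys.length > 0) {
    // Values travel as parameters, never spliced into the SQL text.
    sql += " WHERE " + keys.map((k) => `${k} = @${k}`).join(" AND ");
  }
  return { sql, params: filters };
}

const { sql } = buildReadQuery("Customer", ["Name", "Email"], { TenantID: 100 });
// sql === "SELECT Email, Name FROM Customer WHERE TenantID = @TenantID"
```

The point is the contract, not the code: same structured input, same query, so the failure modes of prompt-to-SQL translation never enter the picture.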

4.4 Semantic Descriptions: Teaching Agents What Your Data Means

Without descriptions, an agent sees technical names like ProductID or dbo.Orders. With descriptions, the agent understands that ProductID is “Unique identifier for each product in the catalog” and the Orders entity contains “Customer purchase orders with line items and shipping details.” You can add descriptions at every level – entities, fields, and stored procedure parameters – using the DAB CLI.

This matters for three reasons: agents find the right entities faster (tool discovery), build better queries with proper context (query accuracy), and return only relevant fields (field selection). In our fraud detection system, describing dbo.FraudCheck as “Real-time fraud assessment combining transaction history, device fingerprint, social graph, vector similarity, and statistical baselines” tells the agent exactly when to use it.

4.5 Custom Tools: Stored Procedures as MCP Tools

For enterprises that want a bespoke MCP surface rather than a generic CRUD interface, SQL MCP Server supports promoting stored procedures as custom tools. You turn off the built-in DML tools and expose only your human-reviewed procedures. Our dbo.FraudCheck and dbo.GetCustomerContext become the only tools the agent can call – each with typed parameters, descriptions, and role-based permissions. This is the tightest possible contract between an agent and your data.

4.6 Caching and Monitoring

SQL MCP Server automatically caches results from read_records. Both L1 (in-memory) and L2 (Redis / Azure Managed Redis) caching are supported, configurable per entity. For agent workloads that repeatedly query the same reference data – product catalogs, customer profiles, baseline statistics – caching reduces database load and prevents request stampedes.

For observability, SQL MCP Server emits logs and telemetry to Azure Log Analytics, Application Insights, and OpenTelemetry endpoints. Combined with the observability patterns in Section 6, you get end-to-end visibility: which agent called which MCP tool, which T-SQL was generated, how long it took, and what the execution plan looked like.

5 The Multi-Model Stored Procedure

In Part 2 and Part 3 we built queries that combined individual data models. Here is a complete stored procedure that an AI agent calls via DAB to get full customer context in a single round-trip:

CREATE PROCEDURE dbo.GetCustomerContext
    @customerID INT,
    @query NVARCHAR(MAX) = NULL
AS
BEGIN
    SELECT 
        -- Relational: Core customer data
        c.Name,
        c.Email,
        c.AccountStatus,
        c.LifetimeValue,

        -- JSON: Recent interactions (schema-flexible)
        (
            SELECT TOP 5 
                JSON_VALUE(i.Data, '$.channel') AS Channel,
                JSON_VALUE(i.Data, '$.sentiment') AS Sentiment,
                i.Timestamp
            FROM Interactions i
            WHERE i.CustomerID = @customerID
            ORDER BY i.Timestamp DESC
            FOR JSON PATH
        ) AS RecentInteractions,

        -- Graph: Connected high-value customers
        (
            SELECT p2.Name, p2.LifetimeValue
            FROM Customers c2, Knows k, Customers p2
            WHERE MATCH(c2-(k)->p2)
            AND c2.CustomerID = @customerID
            AND p2.LifetimeValue > 10000
            FOR JSON PATH
        ) AS HighValueConnections,

        -- Vector: Similar past support tickets
        (
            SELECT TOP 3
                t.Resolution,
                t.SatisfactionScore
            FROM SupportTickets t
            WHERE @query IS NOT NULL
            ORDER BY VECTOR_DISTANCE('cosine', t.Embedding, 
                (SELECT embedding FROM dbo.GetEmbedding(@query)))
            FOR JSON PATH
        ) AS SimilarTickets

    FROM Customers c
    WHERE c.CustomerID = @customerID;
END;

Here is what happens when an agent calls this via SQL MCP Server.

  1. The agent discovers GetCustomerContext through the describe_entities tool – the description tells it this procedure returns full customer context
  2. The agent calls execute_entity with {"entity": "GetCustomerContext", "parameters": {"customerID": 123, "query": "shipping delay"}}
  3. SQL MCP Server validates the JWT, extracts the tenant ID, and sets SESSION_CONTEXT
  4. DAB generates the EXEC dbo.GetCustomerContext call deterministically from the typed parameters – no NL2SQL, no prompt-to-query translation
  5. The engine executes one query plan that touches relational tables, JSON documents, graph edges, and vector indexes
  6. RLS filters all results by tenant – the agent only sees data its user is authorized to see
  7. The procedure returns a single JSON response with all four data models represented
  8. The agent receives complete context in one round-trip: ~10–50ms

Compare this to the polyglot workflow from Part 1: 5 API calls, 5 auth contexts, 5 failure modes, 200–500ms. The multi-model version is not just faster – it reduces the agent’s cognitive burden. Fewer tool calls means less opportunity for the model to hallucinate, lose context, or retry in unexpected ways.

6 When the Caller is an Agent

The stored procedures above work. But traditional database design assumes a deterministic, human-reviewed caller. Agents violate that assumption at every layer. They reason their way to queries, retry on failure, and hold connections while they think. The patterns needed to handle this – statement timeouts, append-only logs, idempotency keys, blast-radius scoping, query tagging – are not new technology. They are existing patterns elevated from “nice to have” to “load-bearing infrastructure.”

Stored procedures as the API surface

The strongest defense against unpredictable agent queries is to never let agents write arbitrary SQL. The dbo.FraudCheck procedure from Part 3 is the agent’s API surface. The agent calls execute_entity with typed parameters, and SQL MCP Server generates the EXEC dbo.FraudCheck @PersonID = 3, @IncomingAmount = 9200 deterministically – not a five-table join the model reasoned into existence. This is the NL2DAB approach in practice: the agent expresses intent through structured tool calls, and DAB’s query builder produces the SQL. No prompt-to-SQL translation. No hallucinated table names. The “non-deterministic caller” problem stops mattering when the callable surface is a set of human-reviewed procedures exposed through a deterministic query builder.

Resource isolation

Agent workloads are unpredictable – a fraud-check burst can spike CPU while customer-facing checkout queries need consistent latency. Resource Governor creates workload groups with CPU, memory, and I/O ceilings:

-- Separate agent traffic from human-facing application queries
CREATE RESOURCE POOL AgentPool WITH (CAP_CPU_PERCENT = 30);
CREATE WORKLOAD GROUP AgentWorkload
    USING AgentPool;

-- Classify agent sessions into the workload group
CREATE FUNCTION dbo.AgentClassifier() RETURNS SYSNAME
WITH SCHEMABINDING AS
BEGIN
    RETURN CASE 
        WHEN SUSER_NAME() LIKE 'agent_%' THEN 'AgentWorkload' 
        ELSE 'default' 
    END;
END;

-- Register the classifier and activate the new configuration
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.AgentClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;

An agent running fraud checks that consumes excessive CPU hits the pool ceiling. Customer-facing checkout queries continue in their own pool, unaffected. This is enforced by the engine, not by the caller.

Idempotency and audit

Agents retry by design. Every orchestration framework uses at-least-once delivery. An agent that gets a timeout on a fraud decision might re-issue the same call. Temporal tables provide automatic history tracking for every row change:

-- Fraud decisions table with temporal history and idempotency
CREATE TABLE FraudDecisions (
    DecisionID INT IDENTITY PRIMARY KEY,
    TxnID INT NOT NULL,
    PersonID INT NOT NULL,
    Decision NVARCHAR(20) NOT NULL,   -- 'APPROVED', 'DENIED', 'REVIEW'
    DecidedBy NVARCHAR(100) NOT NULL, -- 'agent:fraud-checker-v2' or 'user:analyst-jane'
    IdempotencyKey NVARCHAR(64) NULL, -- a UNIQUE constraint admits only one NULL;
                                      -- use a filtered unique index if keys are optional
    ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo),
    CONSTRAINT UQ_FraudDecision_Idempotency UNIQUE (IdempotencyKey)
) WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.FraudDecisionsHistory));

Every change is recorded in a history table automatically. “Show me everything the anti-fraud agent changed in the last hour” is a built-in temporal query. The UNIQUE constraint on IdempotencyKey handles retries: the second write with the same key gets a constraint violation, and the agent treats it as “already processed.”
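Two sketches of how this plays out, assuming the table above and a hypothetical key value. The retry path catches the duplicate-key error (2627 for a unique constraint), and the audit question is a one-clause temporal query:

```sql
-- Retry-safe write: error 2627 means "already processed" – fetch the prior decision
BEGIN TRY
    INSERT INTO FraudDecisions (TxnID, PersonID, Decision, DecidedBy, IdempotencyKey)
    VALUES (9200, 3, N'REVIEW', N'agent:fraud-checker-v2', N'txn-9200-attempt-1');
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() = 2627
        SELECT Decision FROM FraudDecisions
        WHERE IdempotencyKey = N'txn-9200-attempt-1';
    ELSE
        THROW;
END CATCH;

-- "Everything the anti-fraud agent changed in the last hour"
SELECT *
FROM FraudDecisions
    FOR SYSTEM_TIME BETWEEN DATEADD(HOUR, -1, SYSUTCDATETIME()) AND SYSUTCDATETIME()
WHERE DecidedBy = N'agent:fraud-checker-v2';
```

The agent needs no retry bookkeeping of its own; the constraint and the history table carry it.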

Blast radius scoping

The question is not “what does this agent need?” but “what is the worst case if this agent’s reasoning goes wrong?” In the fraud system, this translates directly:

CREATE ROLE agent_fraud_checker;
CREATE ROLE agent_analytics;
GO

-- Fraud checker agent: can read everything it needs, can only write decisions
GRANT EXECUTE ON dbo.FraudCheck TO agent_fraud_checker;
GRANT INSERT ON FraudDecisions TO agent_fraud_checker;
-- Cannot modify Person.RiskScore, cannot delete transactions, cannot see PII

-- Analytics agent: read-only against the columnstore
GRANT SELECT ON TransactionHistory TO agent_analytics;
GRANT SELECT ON Person TO agent_analytics;
-- Cannot write anything

Row-Level Security (from Section 1) extends this further. A policy filters which persons or accounts the agent can see. The agent does not know the filter exists. It calls dbo.FraudCheck and gets back only the data it is authorized to see.
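As a sketch of that policy, assuming a hypothetical Region column on Person and a region tag set via session context (in the style of the observability section below):

```sql
-- Predicate: rows are visible only when the session's region tag matches
CREATE FUNCTION dbo.fn_AgentRegionPredicate(@Region NVARCHAR(10))
RETURNS TABLE
WITH SCHEMABINDING
AS RETURN
    SELECT 1 AS allowed
    WHERE @Region = CAST(SESSION_CONTEXT(N'agent_region') AS NVARCHAR(10));

-- Bind it to Person; the agent never sees the filter, only fewer rows
CREATE SECURITY POLICY AgentRegionFilter
    ADD FILTER PREDICATE dbo.fn_AgentRegionPredicate(Region) ON dbo.Person
    WITH (STATE = ON);
```

From the agent's perspective, out-of-scope rows simply do not exist.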

Query observability

When an agent issues a slow fraud check, you need to know which agent, which task, and which reasoning step produced it. Two layers work together.

At the application layer, SQL MCP Server emits structured telemetry – including agent identity and request metadata – to Azure Log Analytics, Application Insights, and OpenTelemetry endpoints. Session context tags set at connection time provide correlation IDs that flow through DAB’s logging pipeline:

-- Agent sets context at connection time
EXEC sp_set_session_context @key = N'agent_id', @value = N'fraud-checker-v2';
EXEC sp_set_session_context @key = N'task_id', @value = N'txn-review-9200';

At the database layer, Query Store captures every stored procedure execution with runtime stats and execution plans. When agent workloads funnel through a fixed set of stored procedures (as described above), identifying problematic queries is straightforward – Query Store tells you which executions were slow and why the optimizer chose that plan.
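A minimal Query Store query for that, assuming the procedure-only surface described above (avg_duration is reported in microseconds):

```sql
-- Slowest FraudCheck executions captured by Query Store
SELECT TOP 10
    qt.query_sql_text,
    rs.avg_duration / 1000.0 AS avg_ms,
    rs.count_executions
FROM sys.query_store_query_text qt
JOIN sys.query_store_query q   ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan p    ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats rs ON p.plan_id = rs.plan_id
WHERE qt.query_sql_text LIKE N'%FraudCheck%'
ORDER BY rs.avg_duration DESC;
```

Join the results against the agent_id and task_id tags from the application layer's logs and you have the full chain: which agent, which task, which plan, how slow.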

The pattern

None of these are add-ons. Temporal tables replace hand-built audit logs. Resource Governor replaces hand-built connection pool isolation. Row-Level Security replaces hand-built authorization filters. Query Store replaces hand-built query monitoring. The database was designed with these safeguards for human callers. For agentic callers, you activate them rather than invent them.

7 Reference Architecture: E-Commerce Platform

Here is what the full architecture looks like in production.

Azure SQL Database (Multi-Model Core)
β”œβ”€β”€ Relational: Customers, Orders, Inventory, Payments
β”œβ”€β”€ JSON: Product specs, event logs, user preferences, feature flags
β”œβ”€β”€ Graph: Customer networks, fraud rings, product relationships
β”œβ”€β”€ Vector (DiskANN): Product search, support ticket similarity, recommendations
└── Columnstore: Sales analytics, inventory forecasting, customer segmentation

Data API Builder
β”œβ”€β”€ REST: Mobile/web apps, partner integrations
β”œβ”€β”€ GraphQL: Frontend teams, flexible queries
└── MCP: AI agents (Claude, GPT, custom)

Governance (applied uniformly)
β”œβ”€β”€ Row-Level Security: Tenant isolation across all models
β”œβ”€β”€ Dynamic Data Masking: PII protection in API responses
β”œβ”€β”€ Always Encrypted: Client-side encryption for sensitive columns
β”œβ”€β”€ Ledger Tables: Tamper-evident audit trail
β”œβ”€β”€ TDE: Encryption at rest
└── Unified Audit: Single log for all data access

Recommendation Engine

Here is a product recommendation procedure that uses all five data models:

CREATE PROCEDURE dbo.GetProductRecommendations
    @customerID INT,
    @limit INT = 10
AS
BEGIN
    WITH CustomerProfile AS (
        -- Graph: What do customers in this person's network buy?
        SELECT 
            oi.ProductID,
            COUNT(*) * 0.3 AS GraphScore
        FROM Customers c1, Similar s, Customers c2
        JOIN Orders o ON c2.CustomerID = o.CustomerID
        JOIN OrderItems oi ON o.OrderID = oi.OrderID
        WHERE MATCH(c1-(s)->c2)
        AND c1.CustomerID = @customerID
        GROUP BY oi.ProductID
    ),
    VectorSimilarity AS (
        -- Vector: Products semantically similar to recent purchases
        -- (invert cosine distance so that closer products score higher)
        SELECT 
            p.ProductID,
            (1 - MIN(VECTOR_DISTANCE('cosine', p.Embedding, recent.Embedding))) * 0.4 AS VectorScore
        FROM Products p
        CROSS JOIN (
            SELECT TOP 3 pr.Embedding 
            FROM OrderItems oi 
            JOIN Orders o ON oi.OrderID = o.OrderID
            JOIN Products pr ON oi.ProductID = pr.ProductID
            WHERE o.CustomerID = @customerID
            ORDER BY o.OrderDate DESC
        ) recent
        GROUP BY p.ProductID
    ),
    TrendingProducts AS (
        -- Columnstore/Analytics: What is trending this week?
        SELECT 
            ProductID,
            PERCENT_RANK() OVER (ORDER BY SUM(Quantity)) * 0.2 AS TrendScore
        FROM SalesHistory
        WHERE SaleDate > DATEADD(DAY, -7, GETUTCDATE())
        GROUP BY ProductID
    )
    SELECT TOP (@limit)
        p.ProductID,
        p.Name,
        JSON_VALUE(p.Specs, '$.shortDescription') AS Description,  -- JSON
        p.Price,
        COALESCE(cp.GraphScore, 0) +
        COALESCE(vs.VectorScore, 0) +
        COALESCE(tp.TrendScore, 0) AS TotalScore
    FROM Products p
    LEFT JOIN CustomerProfile cp ON p.ProductID = cp.ProductID
    LEFT JOIN VectorSimilarity vs ON p.ProductID = vs.ProductID
    LEFT JOIN TrendingProducts tp ON p.ProductID = tp.ProductID
    WHERE p.InStock = 1
    ORDER BY TotalScore DESC;
END;

One procedure. Five data models. One execution plan. One transaction. One security boundary. One backup captures the entire state.
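Calling it is a single statement – the same call SQL MCP Server's execute_entity tool would generate for an agent (the customer ID here is illustrative):

```sql
EXEC dbo.GetProductRecommendations @customerID = 42, @limit = 5;
```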

8 Getting Started

Everything in this series runs on free-tier infrastructure.

Option 1: SQL Server 2025 (On-Premises, Free)

SQL Server 2025 Express includes all multi-model features: native JSON type, graph tables, vector search, columnstore indexes. Up to 10 GB per database. No license cost.

# Install via winget
winget install Microsoft.SQLServer.2025.Express

Option 2: Azure SQL Database (Cloud, Free Tier)

Azure SQL Database Free Offer – 100,000 vCore-seconds per month. Full cloud-managed experience with automatic backups and high availability.

Data API Builder (Free, Open Source)

dotnet tool install -g Microsoft.DataApiBuilder
dab init --database-type mssql --connection-string "YOUR_CONNECTION_STRING"
dab add Customer --source dbo.Customers --permissions "anonymous:read"
dab start

Sixty seconds from install to a running API with REST, GraphQL, and MCP endpoints.

Resources:

  • SQL Server Downloads – microsoft.com/sql-server
  • Azure SQL Free Tier – https://aka.ms/sqlfreeoffer
  • Data API Builder – github.com/Azure/data-api-builder
  • DAB Documentation – learn.microsoft.com
  • SQL MCP Server – aka.ms/sql/mcp
  • SQL Server Samples – github.com/microsoft/sql-server-samples

Series Summary

Across four posts, we traced a single fraud detection scenario from architectural problem to production-ready solution:

  • Part 1: The Polyglot Tax – Five databases, five auth systems, five backup schedules, five things that break at 2 AM. We measured the compounding costs of polyglot persistence: network latency, saga patterns for consistency, exploding security surface area, operational overhead, and developer cognitive load. The tax is not any single cost – it is all of them multiplied together.
  • Part 2: When JSON Met Graph – Native JSON type with pre-parsed binary storage and indexed paths replaced NVARCHAR(MAX) string parsing. Graph MATCH syntax compiled to the same join infrastructure the optimizer has refined for three decades. One query combined relational joins, JSON path extraction, and graph traversal in a single execution plan. Two databases eliminated.
  • Part 3: Vectors, Analytics, and the End of ETL – DiskANN delivered billion-scale vector search on commodity storage. The pre-filter pattern – where relational predicates narrow 10 million rows to 3,000 before any distance computation – is an optimization that only works when vectors and relational metadata share the same engine. Columnstore achieved 60x analytical speedups through compression, column elimination, batch mode, and segment elimination. Two more databases eliminated.
  • Part 4 (this post) – Row-level security applied uniformly across all five data models. Point-in-time backup and recovery as a single atomic operation. SQL MCP Server generating REST, GraphQL, and MCP endpoints without custom code – with entity abstraction, deterministic query building, and a fixed tool set that keeps the agent’s context window focused. Agent-hardening patterns – stored procedures as API surfaces, resource isolation, idempotency, blast-radius scoping – built from primitives the engine already has.

So What?

In Part 1, you were building fast. An agent, a database, a working app by lunch. Then the complexity arrived – JSON schemas that evolve weekly, graph traversals for fraud detection, semantic search for product discovery, real-time dashboards over millions of rows – and the conventional answer was to spin up a new database for each one. Five databases. Five auth systems. Five backup schedules. Five things that can break while you sleep.

Across the next three posts, we dismantled that assumption:

  • Part 2: JSON and graph live inside the same engine, compiled into the same execution plans, optimized by the same query planner. Two databases eliminated.
  • Part 3: DiskANN provides web-scale vector search with hybrid filtering that external vector databases cannot replicate. Columnstore delivers 60x analytical speedups through physics, not tuning. Two more databases eliminated.
  • Part 4 (this post): One row-level security policy across all five data models. One backup to a consistent point in time. SQL MCP Server generating REST, GraphQL, and MCP endpoints with entity abstraction, deterministic query building, and a fixed seven-tool surface that keeps agents focused. Agent-hardening patterns that use the engine’s existing primitives – not new inventions.

Five databases consolidated to one. Five auth systems to one. Five backup schedules to one.

For the agentic world, this is not an incremental improvement – it is a structural one. Your agent makes one call, gets one response, through one auth context, in under 50ms. No orchestrating between services. No hoping that five databases all return consistent data. No writing fallback logic because one of them is slow.

The case is not that SQL Server is the only database you will ever need. There are workloads – petabyte-scale OLAP, real-time streaming with sub-millisecond latency, specialized time-series at extreme ingest rates – where purpose-built systems justify their operational overhead.

The case is this: the database should never be the reason you slow down. Your agent is fast. Your iteration cycle is fast. Your architecture should keep up. For the majority of application workloads, a multi-model engine removes the tax entirely – one database, one optimizer, one security model, one backup, one API.

The math from Part 1 still holds: 1 Γ— 1 Γ— 1 = 1, not 5 Γ— 5 Γ— 5 = 125.

Start with the free tier. Run the scripts. Point an agent at it. Build fast – and keep building fast.

The Full Series


Further Reading: Azure SQL Database Documentation · JSON in SQL Server · Graph Processing in SQL Server · Vector Search in SQL Server · Columnstore Indexes · Data API Builder · SQL MCP Server · Row-Level Security · Ledger Tables

The post The Polyglot tax – Part 4 appeared first on Azure SQL Dev Corner.

Read the whole story
alvinashcraft
19 seconds ago
reply
Pennsylvania, USA
Share this story
Delete

Meet Replit Security Agent

1 Share
Replit gives professionals a secure place to build with AI. Replit Agent already protects your apps as you build by automatically scanning for vulnerabilities and auditing dependencies before your projects are ever published.

Before coding agents, a full pre-launch security review meant additional weeks of back-and-forth: coordinating with security engineers, reviewing reports, and manually fixing issues. Today, we are introducing Replit Security Agent, which lets you complete a comprehensive security review of your app in under an hour.

Security Agent acts on a customizable threat-modeling plan to review your entire codebase. It also uses a unique hybrid approach, leveraging Semgrep and HoundDog.ai as tools to improve the accuracy of its findings.

Daghan Atlas, Head of Product at Semgrep, shares: “The most effective security is the kind that works seamlessly. Replit's Security Agent is a great example of what's possible when you pair the contextual reasoning of LLMs with the determinism and program analysis capabilities of Semgrep. We're excited to see this combination in the hands of Replit's massive builder community.”


Building agent-first governance and security

1 Share

As AI agents increasingly work alongside humans across organizations, companies could be inadvertently opening a new attack surface. Insecure agents can be manipulated to access sensitive systems and proprietary data, increasing enterprise risk.

In some modern enterprises, non-human identities (NHI) are outpacing human identities, and that trend will explode with agentic AI. Solid governance and a fortified security foundation are therefore critical. 

According to the Deloitte AI Institute 2026 State of AI report, nearly 74% of companies plan to deploy agentic AI within two years. Yet only one in five (21%) reports having a mature model for governance of autonomous agents. Executives are most concerned with data privacy and security (73%); legal, intellectual property, and regulatory compliance (50%); followed closely by governance capabilities and oversight (46%).

Enterprises may not even realize they are treating agents within their environment as first-class citizens with the keys to the kingdom, creating looming blind spots and potential points of exposure. What is needed is a robust control plane that governs, observes, and secures how AI agents, as well as their tools and models, operate across the enterprise.

β€œA control plane is the shared, centralized layer governing who can run which agents, with which permissions, under which policies, and using which models and tools,” according to Andrew Rafla, principal, Deloitte Cyber Practice.

β€œWithout a true control plane, you don’t really have the ability to scale agents autonomouslyβ€”you just have unmanaged execution, and that comes with a lot of risk,” he says. β€œIf you can’t answer what an agent did, on whose behalf, using what data, under what policyβ€”and whether you can reproduce or stop itβ€”you don’t have a functional control plane.”

Governance must make those answers obvious, not aspirational, he says. Governance is what turns AI pilots into production use cases. It’s the bridge that lets companies move from impressive experiments to safe, repeatable, enterprise-wide automation.

Without governance, agent deployments don’t fail safely. They fail unpredictably and at scale.

Download the article.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.


Keeping the Cursor app stable

1 Share