
A Little About Scalar UDFs and Read Committed Snapshot Isolation In SQL Server


Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.

The post A Little About Scalar UDFs and Read Committed Snapshot Isolation In SQL Server appeared first on Darling Data.


How to Query JSON Data Quickly in SQL Server, Part 1: Pre-2025


Before SQL Server 2025, if you want to store JSON data in Microsoft SQL Server or Azure SQL DB, and you want fast queries, the easiest way is to:

  • Store the data in an NVARCHAR(MAX) column (because the native JSON datatype didn’t arrive until SQL Server 2025)
  • Add a computed column for the specific JSON keys we’ll want to query quickly
  • Index those keys
  • Query it using the JSON_VALUE function

To demo what I mean, I’m going to take the Users table from the Stack Overflow database, and I’m going to pretend that we’re only storing the Id column, and the rest of the columns are JSON, stored in a UserAttributes column. Let’s create a new Users_JSON table to simulate it:

DROP TABLE IF EXISTS dbo.Users_JSON;
GO
CREATE TABLE dbo.Users_JSON
(
    Id int NOT NULL PRIMARY KEY CLUSTERED,
    UserAttributes nvarchar(max)
);
GO

/* Takes 2-3 minutes with the 2018-06 training database on 4 cores, 32GB RAM: */
INSERT INTO dbo.Users_JSON (Id, UserAttributes)
SELECT 
    Id,
    JSON_OBJECT(
        'DisplayName': DisplayName,
        'Reputation': Reputation,
        'Age': Age,
        'CreationDate': CreationDate,
        'LastAccessDate': LastAccessDate,
        'Location': Location,
        'UpVotes': UpVotes,
        'DownVotes': DownVotes,
        'Views': Views,
        'WebsiteUrl': WebsiteUrl,
        'AboutMe': AboutMe,
        'EmailHash': EmailHash,
        'AccountId': AccountId
    )
FROM dbo.Users;
GO

Now we’ve got a table with just Ids and JSON data:

Users_JSON

If we want to find the users with DisplayName = ‘Brent Ozar’, and we query with the JSON_VALUE function, SQL Server has to scan through that entire table, cracking open every JSON value to find it. That’s a lot of reading, and a lot of CPU work:

SET STATISTICS TIME, IO ON;

SELECT *
    FROM dbo.Users_JSON
    WHERE JSON_VALUE(UserAttributes, '$.DisplayName') = N'Brent Ozar';

The query is slow and CPU-intensive, going parallel across multiple cores, maxing our server out for about 4 seconds:

Oof, that CPU time

This stuff works in the sense that it compiles and runs, but it doesn’t scale once you hit real-world data sizes (and there are fewer than 10 million rows in that table). We’re gonna need to run more than one query every 4 seconds.

Add a computed column and index it.

We do have to decide which columns we wanna index, but take a deep breath and relax – I’m not saying you need to do any application code work at all. We’re just going to make some of the columns really fast. Take DisplayName:

ALTER TABLE dbo.Users_JSON
    ADD vDisplayName AS JSON_VALUE(UserAttributes, '$.DisplayName');

CREATE INDEX vDisplayName ON dbo.Users_JSON (vDisplayName);

Now, run the exact same JSON_VALUE query without changing our app or our code:

SELECT *
    FROM dbo.Users_JSON
    WHERE JSON_VALUE(UserAttributes, '$.DisplayName') = N'Brent Ozar';

The query runs instantly, reading hardly any data and doing no CPU work:

Fast index seek

SQL Server automatically recognizes what we’re trying to do in the query, realizes it’s got an indexed computed column ready to go, and uses that to deliver our query results. It’s like magic, and it’s worked this way since 2016 with JSON_VALUE. Works with other datatypes too, like numbers and dates.
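One wrinkle worth noting for those other datatypes (my addition, not from the demo above): JSON_VALUE returns nvarchar(4000), so for a numeric key you’d convert inside the computed column if you want the index typed as a number. A hypothetical sketch for the Reputation key:

```sql
/* Hypothetical sketch: JSON_VALUE returns nvarchar(4000), so convert
   numeric keys if you want the index (and its statistics) typed as int. */
ALTER TABLE dbo.Users_JSON
    ADD vReputation AS TRY_CONVERT(int, JSON_VALUE(UserAttributes, '$.Reputation'));

CREATE INDEX vReputation ON dbo.Users_JSON (vReputation);

/* For automatic matching, the query's expression has to line up with the
   computed column's definition: */
SELECT *
    FROM dbo.Users_JSON
    WHERE TRY_CONVERT(int, JSON_VALUE(UserAttributes, '$.Reputation')) > 100000;
```

Because the expression-matching is exact, check the execution plan before relying on this for converted columns.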

It even works with LIKE queries:

SELECT *
    FROM dbo.Users_JSON
    WHERE JSON_VALUE(UserAttributes, '$.DisplayName') LIKE N'Brent%';

Producing a nice index seek, and even pretty good row estimations. Here, SQL Server estimates 3699 rows will be produced (and 1523 are):

Sargable LIKE

If we do a non-sargable filter, like a leading % sign in our LIKE:

SELECT *
    FROM dbo.Users_JSON
    WHERE JSON_VALUE(UserAttributes, '$.DisplayName') LIKE N'%Jorriss%';

SQL Server still uses the index, just scanning it instead of seeking it, which is a good thing:

Non-sargable LIKE

Scan SOUNDS bad, but it’s still less CPU work and logical reads than our un-indexed JSON scan. Back at the beginning of the post, we were looking at half a million reads and 14 seconds of CPU work to scan all the JSON, but with the index, even non-sargable stuff isn’t terrible:

Non-sargable IO metrics

Once you’ve got the computed column and index in place, you could change your queries to be normal T-SQL, like this:

SELECT *
    FROM dbo.Users_JSON
    WHERE vDisplayName = N'Brent Ozar';

They’ll be super-fast (just like the JSON_VALUE queries) – but I don’t recommend doing this. If you change your queries to point to the computed column, then the computed column always has to be in place in the database. Leaving your queries as JSON_VALUE queries means we keep the flexibility of changing the database, plus we get fast index seeks. There’s no benefit to changing our queries to point to the new computed column – only heartache when our schema needs eventually change.

This indexed-computed-column technique has worked really well.

It’s a nice compromise that works well when the developers want the flexibility of changing which attributes they store, adding more attributes over time – but some attributes need really fast searches. I’ve been teaching it for years in this module of my Mastering Index Tuning class, where I also talk about the drawbacks and other solutions. (You can join in on these classes for a special price during my Black Friday sale this month!)

Developers are happy because they can change their UserAttributes schema whenever they want without talking to the DBA. They can even decide to remove columns that used to be part of our core fast-query design – as long as they tell the DBA, and the DBA drops the index and computed column. The developers don’t have to change their queries – they just keep using JSON_VALUE. The results are fast when we agree that it’s one of the core columns, and still doable when they’re outside the core set of columns, just slower.

One drawback is that the NVARCHAR(MAX) datatype isn’t really JSON: SQL Server doesn’t validate the data for you. Another is that if you want multiple filters, you have to write them out individually. Another is that the more complex your JSON becomes, the more you have to pay attention to stuff like lax mode and strict mode. I’mma be honest, dear reader: like the lion, I don’t concern myself with such things, and I just point the developers to the documentation. I say look, if you want JSON queries to be millisecond-fast in SQL Server and Azure SQL DB, you gotta tell me the specific columns you’re going to query, in advance, and I’m gonna tell you how to write the query.
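Each extra filter means another computed column and another index. A hypothetical sketch layering a second key, Location, on top of the DisplayName column from earlier (the Location value is just an example):

```sql
/* Hypothetical: one computed column + index per JSON key you filter on. */
ALTER TABLE dbo.Users_JSON
    ADD vLocation AS JSON_VALUE(UserAttributes, '$.Location');

CREATE INDEX vLocation ON dbo.Users_JSON (vLocation);

/* Two filters = two JSON_VALUE expressions, each able to match its own index: */
SELECT *
    FROM dbo.Users_JSON
    WHERE JSON_VALUE(UserAttributes, '$.DisplayName') = N'Brent Ozar'
      AND JSON_VALUE(UserAttributes, '$.Location') = N'San Diego, CA, USA';
```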

But the biggest drawback – by far – is that we have to define the core set of columns we wanna query quickly.

Developers want real flexibility: the ability to change their JSON schema at any time, and query any values out of it, quickly, without friction from the database side. That’s what Microsoft tried to deliver in SQL Server 2025, and I’ll cover that in the next post in the series. Careful what you ask for, though….


What’s New in React 19.2 – Activity API, useEffectEvent, cacheSignal & More


React 19.2 (released October 1, 2025) introduces a wave of refinements designed to improve rendering, performance, and developer ergonomics. In this post, we walk through the new <Activity /> component, the useEffectEvent hook, cacheSignal, and SSR/streaming enhancements, with clear code examples and upgrade tips.

1. <Activity /> Component

What It Is

The new <Activity /> component allows parts of your UI to remain mounted and preserve state while being hidden or deprioritized — instead of being unmounted completely.

Why It Matters

  • Previously, hiding components via conditionals (like isVisible && <Sidebar />) destroyed state and effects.
  • With <Activity mode="hidden">, React defers updates and cleans up effects, but state is preserved.
  • Enables smoother transitions and faster UI toggles.

Example

import { useState, Activity } from 'react';

function Dashboard() {
  const [showSidebar, setShowSidebar] = useState(true);

  return (
    <>
      <button onClick={() => setShowSidebar(!showSidebar)}>
        Toggle Sidebar
      </button>

      <Activity mode={showSidebar ? 'visible' : 'hidden'}>
        <Sidebar />
      </Activity>

      <MainContent />
    </>
  );
}

Here, when showSidebar is false, <Sidebar /> is hidden but its scroll/input state remains intact.

2. useEffectEvent Hook

What It Is

The useEffectEvent hook introduces a stable way to create event handlers that always see the latest props and state, without forcing effects to re-run.

Why It Matters

Before, event handlers in effects could capture stale closures — forcing developers to add extra dependencies or re-attach effects unnecessarily.

useEffectEvent simplifies that pattern.
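The hazard itself is just how closures work. Here is a plain-JavaScript sketch of it, with no React involved (all names are illustrative):

```javascript
// A function created once keeps the value it captured at creation time,
// while a function that reads the variable directly sees later updates --
// the same gap useEffectEvent closes for handlers used inside effects.
function simulate() {
  let count = 0;
  const staleSnapshot = ((captured) => () => captured)(count); // "set up once" with count = 0
  const readLatest = () => count;                              // always reads the current value
  count = 5;                                                   // state changes later
  return { stale: staleSnapshot(), fresh: readLatest() };
}
```

Calling simulate() returns stale: 0 but fresh: 5 — the captured copy never updates, which is exactly what happens to an event handler closed over by an effect that only ran once.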

Example

import { useState, useEffect, useEffectEvent } from 'react';

function Counter() {
  const [count, setCount] = useState(0);

  const handleClick = useEffectEvent(() => {
    console.log('Current count is', count);
  });

  useEffect(() => {
    window.addEventListener('click', handleClick);
    return () => window.removeEventListener('click', handleClick);
  }, []); // effect runs once

  return (
    <div>
      <p>Count: {count}</p>
      <button onClick={() => setCount(prev => prev + 1)}>Increment</button>
    </div>
  );
}

✅ The effect attaches once, but the handler always has access to the latest count.

3. cacheSignal & Resource Lifetime Management

What It Is

cacheSignal complements React’s cache() API by providing a signal that fires when a cached resource’s lifetime ends.

Useful for cleaning up background operations tied to cached data.

Why It Matters

In Suspense + server components apps, caching resources is essential — but cleanup can be tricky.

cacheSignal gives a built-in AbortSignal to track disposal.

Example

import { cache, cacheSignal } from 'react';

const fetchUser = cache(async (id) => {
  // cacheSignal() is only meaningful inside a cache()-wrapped function:
  // it returns an AbortSignal tied to the lifetime of this cache entry,
  // so the in-flight fetch is aborted if React discards the entry.
  const res = await fetch(`/api/users/${id}`, { signal: cacheSignal() });
  if (!res.ok) throw new Error('Failed to fetch');
  return res.json();
});

async function UserProfile({ userId }) {
  const user = await fetchUser(userId);
  return <div>{user.name}</div>;
}

Here, React triggers the abort signal when the cache entry expires or is invalidated — allowing safe cleanup.

4. Server-Side Rendering & Streaming Upgrades

What’s New

React 19.2 improves SSR streaming, partial pre-rendering, and hydration consistency.

  • Partial Pre-Rendering: pre-render sections of UI early, then “resume” dynamic parts later.
  • Batched Suspense reveals: multiple Suspense boundaries now stream together for smoother UX.
  • Web Streams for Node: New APIs like renderToReadableStream, resume, and prerender.

Example: Partial Pre-Rendering

import { prerender } from 'react-dom/static';
import { resume } from 'react-dom/server';

async function handleRequest(req, res) {
  const { prelude, postponed } = await prerender(<App />, { signal: req.signal });
  res.write(prelude);

  const resumeStream = await resume(<App />, postponed);
  resumeStream.pipe(res);
}

This hybrid approach improves Time-to-First-Byte and interactive speed for large apps.

5. Other Enhancements

  • Performance Tracks in Chrome DevTools show new React Scheduler and component tracks.
  • Updated useId prefix (_r_ instead of :r:) improves compatibility with web standards.
  • eslint-plugin-react-hooks@6 now supports useEffectEvent and other React 19.2 APIs.
  • Internal bug fixes and streamlining for consistency across browsers and SSR frameworks.

Upgrade Tips

  1. Install the latest React:
   npm install react@19.2 react-dom@19.2
  2. Adopt features incrementally — e.g., replace legacy event patterns with useEffectEvent.
  3. Profile before/after using Chrome DevTools’ React Performance tracks.
  4. Update eslint-plugin-react-hooks to v6 for hook rule support.
  5. Test SSR boundaries (especially if you use Next.js or Remix) for smooth streaming hydration.

Conclusion

React 19.2 focuses on refining how components mount, update, and interact with asynchronous work — without adding friction.

From <Activity /> to useEffectEvent, from cacheSignal to streaming SSR improvements, this release makes React apps smoother, more predictable, and easier to maintain.

Upgrade, explore, and profile your app — you’ll likely find subtle but impactful performance wins.


Build a Spring AI MCP Server With MongoDB


In this tutorial, we're going to build a Model Context Protocol (MCP) server that connects AI models to MongoDB, using it to power a simple todo list application. If you've been working with AI applications, you've probably run into the challenge of getting LLMs to interact with your actual systems and data. That's exactly what the Model Context Protocol solves! It's quickly becoming the standard way to connect AI models to the real world. Whether you're building agents that need to query databases, access APIs, or interact with your company's internal tools, understanding MCP is becoming essential. By the end of this tutorial, you'll have a working server that exposes MongoDB operations as tools any MCP-compatible AI can use, and you'll understand the patterns for building your own servers for whatever data sources you need to connect.

MongoDB is a natural fit for AI applications—it's flexible, scalable, and has built-in vector search capabilities that make it easy to work with embeddings and semantic search. If you're curious about the AI side of MongoDB and want to dive deeper into vector search, there's a Vector Search Fundamentals course that'll walk you through those concepts (and you can earn a skill badge while you're at it).

MongoDB Vector Fundamental badge

If you want to take a look at the code used in this tutorial, check out the GitHub repository.

What is the Model Context Protocol?

The Model Context Protocol (MCP) is an open standard that enables AI applications to securely connect to various data sources and tools. Think of it as a universal adapter that lets your AI models interact with databases, APIs, file systems, and other services in a standardized way. Instead of building custom integrations for every data source, MCP provides a common language that both AI systems and data providers can speak.

At its core, MCP defines how servers expose their capabilities (like database queries or file operations) as tools, resources, and prompts that AI models can discover and use. This makes it incredibly powerful for building context-aware AI applications that need to pull from multiple sources.

The difference between an MCP server and an MCP client

An MCP server is the component that exposes capabilities to AI systems. It's what we'll be building in this tutorial. The server defines tools (functions the AI can call), resources (data the AI can access), and prompts (templated conversations). When we build our Spring AI MCP server, we're creating something that says, "Hey, I can interact with MongoDB for you—here are the operations I support."

An MCP client, on the other hand, is the component that consumes these capabilities. This is typically your AI application or agent that discovers available tools from MCP servers and decides when to use them. The client sends requests to servers and handles the responses. In the broader ecosystem, applications like Claude Desktop or other AI interfaces act as MCP clients that can connect to your servers.

What is Spring AI?

Spring AI is an application framework for AI engineering. Its goal is to apply to the AI domain Spring ecosystem design principles such as portability and modular design and promote using POJOs as the building blocks of an application to the AI domain.

Prerequisites

  • Java 17+
  • Maven
  • A MongoDB Atlas account with a cluster set up
    • An M0 free forever tier is perfect
  • Node (version 22.7.5 or higher is recommended)
    • We need this as we will be using the MCP Inspector to test our application

Build a Spring app and our dependencies

To start building our application, we will need to bring in a few dependencies. The easiest way to get started is using Spring Initializr. We are going to need the Model Context Protocol Server and the Spring Web dependencies. We'll be adding the Spring Data MongoDB dependency later on, but we'll keep it to just the first two for now to make getting started easier.

Spring Initializr configuration

For Project, I have selected Maven; for language, Java; and Spring Boot version 3.5.7 (or the latest stable release). Now, we can generate our project and open the downloaded app in our IDE.

Configure our MCP server

First things first, we need to update our Spring AI version to the latest milestone 1.1.0-M3. It is possible to do all this with the latest stable release, but the newer interface provides a much cleaner setup.

Open the pom.xml and update the Spring AI version:

<spring-ai.version>1.1.0-M3</spring-ai.version>

Now, we can define our MCP details in the application.properties file:

spring.application.name=springai-mcp
spring.ai.mcp.server.name=mongodb-mcp
spring.ai.mcp.server.version=1.0.0

spring.ai.mcp.server.protocol=streamable
spring.ai.mcp.server.stdio=false
spring.ai.mcp.server.type=sync

We give our server a name and version—this helps clients identify what they're connecting to. The name mongodb-mcp tells clients this server provides MongoDB capabilities, and the version lets us track updates over time.

We want an HTTP streamable service, no stdio, and synchronous responses. This means our server will communicate over HTTP rather than standard input/output, and it'll handle requests synchronously rather than asynchronously. This keeps things simple for our tutorial, though async operations are definitely an option for production use cases.
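As an aside, a hypothetical variant we don't use in this tutorial: if a client needed to launch the server as a subprocess over stdio instead, the Spring AI properties flip the stdio flag and silence console output, since stdout becomes the transport itself:

```properties
spring.ai.mcp.server.stdio=true
# keep stdout free of banner and log noise -- stdio *is* the protocol channel
spring.main.banner-mode=off
logging.pattern.console=
```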

Build your first MCP tool

An MCP tool is essentially a function that an AI model can call. When you annotate a method with @McpTool, you're telling the MCP framework, "This is something an AI can use." The AI will be able to see the tool's name, description, and parameters, then decide whether to call it based on what the user is asking for.

Create a class MongoDbTools.java and mark it as a @Component.

We will use the @McpTool annotation. The name we give it will be how the AI identifies and calls this tool—think of it as the function name the AI will invoke. The description is important as it tells the AI what this tool does and when to use it. A good description helps the AI make smart decisions about when to call your tool. Think of this like a prompt template. When we give our application a natural language prompt, these descriptions allow the LLM to discern which tool is best for the job, and orchestrate the tools to achieve what we are asking it to do.

package com.timkelly.springaimcp;

import org.springaicommunity.mcp.annotation.McpTool;
import org.springframework.stereotype.Component;

@Component
public class MongoDbTools {

    // Tools
    @McpTool(name="get-todo-items", description = "I will return the items of a todo list")
    public String getTodoItems() {
        var todo = """
                1. Make Spring AI MCP tutorial

                2. Have a coffee

                3. Walk the dog
                """;

        return todo;
    }
}

This is where we would define @McpResources or @McpPrompts. The @McpResource annotation provides access to resources via URI templates. The @McpPrompt annotation generates prompt messages for AI interactions. These go beyond what we cover in this tutorial, but you can read about them in the Spring AI MCP documentation.

How we will test our MCP server

First, we need to run our application.

mvn spring-boot:run

To test our application, we will use the MCP Inspector. This is an interactive developer tool for testing and debugging MCP servers.

If you want to read more about it, check out the Debugging guide, which covers the Inspector as part of the overall debugging toolkit. For this tutorial, all we are going to do is use it to make sure our application is up and running, exposing the tools we create, and testing those tools.

The Inspector runs with the following:

npx @modelcontextprotocol/inspector

If this is your first time running it, you will likely need to install it:

Need to install the following packages:
@modelcontextprotocol/inspector@0.17.2
Ok to proceed? (y) y

Once this spins up, it will open in the browser.

MCP Inspector default screen

Now, for the transport type in the top left, we need to change this to Streamable HTTP. This will change the interface slightly, giving us the option to define the URL our app is running on. Mine is localhost:8080, so I will enter http://localhost:8080/mcp. The /mcp is important to add—this is the default endpoint where Spring AI MCP exposes the protocol.

MCP Inspector configuration

At this point, we can just hit connect. We will receive some server logging in the history section. It will confirm the name of the server, update when tools change, all that good stuff.

Inspector connected to MCP

From the top bar we can select tools, and here we can list all the tools we have available. If we select a tool, we can then choose to run it in the browser.

Tools section

If everything is working correctly, we should be able to confirm our output.

Demo tool output

Success! Now that we have this confirmed, we can look to add some functionality to our todo list by connecting to MongoDB as our database.

Connect to MongoDB

Add the Spring Data MongoDB dependency to your pom.xml:

<dependency>  
    <groupId>org.springframework.boot</groupId>  
    <artifactId>spring-boot-starter-data-mongodb</artifactId>  
</dependency>

Add connection details to the application.properties file:

#...

# MongoDB Configuration
# 1. Copy your connection string from MongoDB Atlas
# 2. Replace ?appName=Cluster with ?appName=SpringAiMcp, or add it if no appName is present
#
# Before: mongodb+srv://user:pass@cluster.abc12.mongodb.net/?appName=Cluster
# After:  mongodb+srv://user:pass@cluster.abc12.mongodb.net/?appName=SpringAiMcp

spring.data.mongodb.uri=<YOUR-CONNECTION-STRING>
spring.data.mongodb.database=todo

For our connection string, it is best practice to set the app name so we can see clearly what queries or operations are tied to which apps. For this, set the appName to SpringAiMcp by adding it to the end of your connection string. The database name todo is where we'll store all our task data.

Quick connection check (optional)

Run:

mvn spring-boot:run

Look for a line similar to:

Monitor thread successfully connected to server with description ServerDescription{address=...}

If you see authentication or timeout errors, check:

  • The connection string credentials.
  • Your IP access list in MongoDB Atlas.
  • That the appName is added correctly at the end of the URI.

Build our MongoDB tool

Now that we have MongoDB connected, it's time to replace our hardcoded todo list with real database operations. We're going to build out a complete set of tools that let an AI manage tasks in MongoDB—adding new tasks, marking them as complete, and retrieving tasks with various filters.

The beauty of the MCP approach here is that we're not writing AI-specific code. We're just building normal Spring Data repositories and services, then exposing them as tools with a few annotations. The AI doesn't need to know anything about MongoDB connection strings, query syntax, or data modeling—it just needs to know, "Here's a tool called todo-add-task that takes a task name." Our MCP server handles all the translation between the AI's requests and the actual database operations.

We'll structure this in layers, following Spring best practices. First, we'll define our data model and repository for database access. Then, we'll create a service layer to handle our business logic. Finally, we'll expose these operations as MCP tools that AI models can discover and use. This keeps our code clean, testable, and easy to extend with new functionality later.

Let's start by setting up our data model.

Define our Task model and repository

First, let's create our Task model in Task.java. This represents a single todo item in our MongoDB collection:

package com.timkelly.springaimcp;  

import org.bson.types.ObjectId;  
import org.springframework.data.annotation.Id;  
import org.springframework.data.mongodb.core.mapping.Document;  

@Document(collection = "tasks")  
public class Task {  
    @Id  
    private ObjectId id;  
    private String name;  
    private boolean completed;  

    public Task(ObjectId id, String name) {  
        this.id = id;  
        this.name = name;
    }  

    public ObjectId getId() {  
        return id;  
    }  

    public String getName() {  
        return name;  
    }  

    public void setName(String name) {  
        this.name = name;  
    }  

    public boolean isCompleted() {  
        return completed;  
    }  

    public void setCompleted(boolean completed) {  
        this.completed = completed;  
    }  
}

Now, create our repository interface in TodoRepository.java. Spring Data MongoDB will handle the implementation for us—this is one of the best features of Spring Data. We just define an interface that extends MongoRepository, and Spring will automatically generate all the common CRUD operations (create, read, update, delete) at runtime. No need to write boilerplate code for basic database operations like findAll(), save(), or deleteById()—they're all provided out of the box.

package com.timkelly.springaimcp;  

import org.bson.types.ObjectId;  
import org.springframework.data.mongodb.repository.MongoRepository;  
import org.springframework.stereotype.Repository;  

import java.util.List;  

@Repository  
public interface TodoRepository extends MongoRepository<Task, ObjectId> {  
}

The generic types <Task, ObjectId> tell Spring what entity we're working with and what type the ID field is. Spring Data MongoDB is smart enough to map our Java objects to BSON documents in MongoDB and back again, handling all the serialization for us.

Creating our add task tool

Let's create a service layer to handle our business logic. Create TodoService.java:

package com.timkelly.springaimcp;  

import org.bson.types.ObjectId;  
import org.springframework.stereotype.Service;  

import java.util.List;  

@Service  
public class TodoService {  

    private final TodoRepository todoRepository;  

    public TodoService(TodoRepository todoRepository) {  
        this.todoRepository = todoRepository;  
    }  

    public void addTask(String name) {  
        todoRepository.save(new Task(new ObjectId(), name));  
    }  
}

Now, update our MongoDbTools.java class to wire in the service and add our first real tool:

package com.timkelly.springaimcp;  

import org.springaicommunity.mcp.annotation.McpTool;  
import org.springaicommunity.mcp.annotation.McpToolParam;  
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;  
import org.springframework.stereotype.Component;  

import java.util.List;  

@Component  
@ConditionalOnBean(TodoService.class)  
public class MongoDbTools {  

    private final TodoService todoService;  

    public MongoDbTools(TodoService todoService) {  
        this.todoService = todoService;  
    }  

    @McpTool(  
            name = "todo-add-task",  
            description = "Add a new to-do task to MongoDB"  
    )  
    public String addTask(  
            @McpToolParam(  
                    description = "The name or description of the new task",  
                    required = true  
            ) String name  
    ) {  
        todoService.addTask(name);  
        return "Task added successfully: " + name;  
    }  
}

The @McpToolParam annotation allows us to provide metadata about each parameter. The description helps the AI understand what to pass in, and marking it as required ensures the AI knows this parameter can't be optional. The better we describe our parameters, the better the AI can use our tools correctly.

Update our tasks

Let's add a query method to our TodoRepository.java so we can find tasks by name:

package com.timkelly.springaimcp;  

import org.bson.types.ObjectId;  
import org.springframework.data.mongodb.repository.MongoRepository;
import org.springframework.data.mongodb.repository.Update; 
import org.springframework.stereotype.Repository;

@Repository  
public interface TodoRepository extends MongoRepository<Task, ObjectId> {  
    @Update("{ '$set' : { 'completed' : ?1 } }")
    void updateCompletedByName(String name, boolean completed);
}

This is another powerful Spring Data feature—query derivation from method names. Spring Data MongoDB parses the method name updateCompletedByName: the "ByName" suffix tells it to filter on the name field, and the @Update annotation supplies the modification to apply, with the positional placeholder bound to the completed parameter. At runtime, this becomes an update like db.tasks.updateMany({ name: "some name" }, { $set: { completed: true } }) without us writing any query code. As long as we follow Spring Data's naming conventions, we can create complex queries just by naming our methods correctly—things like findByCompletedTrueAndNameContaining would work exactly as you'd expect.

Now, add the setCompletedByName method to TodoService.java:

   public void setCompletedByName(String name, boolean completed) {
        todoRepository.updateCompletedByName(name, completed);
    }

And add the corresponding tool to MongoDbTools.java:

   @McpTool(
            name = "todo-complete-task",
            description = "Mark a to-do task as complete by name"
    )
    public String completeTask(
            @McpToolParam(
                    description = "The name of the task to mark as complete or incomplete",
                    required = true
            ) String name,
            @McpToolParam(
                    description = "The status of the task, either complete(true) or incomplete(false)"
            ) boolean status
    ) {
        todoService.setCompletedByName(name, status);
        return "Marked task as " + status + ": " + name;
    }

Creating our get tasks tools

Let's add more query methods to our TodoRepository.java to support filtering by completion status:

package com.timkelly.springaimcp;  

import org.bson.types.ObjectId;  
import org.springframework.data.mongodb.repository.MongoRepository;
import org.springframework.data.mongodb.repository.Update;  
import org.springframework.stereotype.Repository;  

import java.util.List;  

@Repository  
public interface TodoRepository extends MongoRepository<Task, ObjectId> {  
    @Update("{ '$set' : { 'completed' : ?1 } }")
    void updateCompletedByName(String name, boolean completed);
    List<Task> findByCompletedTrue();
    List<Task> findByCompletedFalse();
}

Add the getTasks method to TodoService.java:

public List<Task> getTasks(String filter) {  
    if (filter == null || filter.isBlank() || filter.equalsIgnoreCase("all")) {  
        return todoRepository.findAll();  
    }  

    return switch (filter.toLowerCase()) {  
        case "incomplete" -> todoRepository.findByCompletedFalse();  
        case "complete" -> todoRepository.findByCompletedTrue();  
        default -> todoRepository.findAll();  
    };
}

And finally, add the tool to MongoDbTools.java:

@McpTool(  
        name = "todo-get-tasks",  
        description = "Retrieve to-do list items, optionally filtered by completion status"  
)  
public List<Task> getTasks(  
        @McpToolParam(  
                description = "Filter tasks by completion status: 'complete', 'incomplete', or 'all' (default: 'all')",  
                required = false  
        ) String filter  
) {  
    return todoService.getTasks(filter);
}  

Testing our application

Make sure to set your MongoDB connection string before running the application:

mvn spring-boot:run
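Spring Boot reads the connection string from the spring.data.mongodb.uri property, which relaxed binding lets you supply as an environment variable. A minimal sketch; the user, password, cluster host, and database name below are placeholders you'd replace with your own:

```shell
# Placeholder connection string: substitute your own <user>, <password>, and <cluster>.
export SPRING_DATA_MONGODB_URI="mongodb+srv://<user>:<password>@<cluster>.mongodb.net/todo"
mvn spring-boot:run
```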

Now, if we go back to our MCP Inspector, we can see our new tools listed in the tools section.

Updated MCP tools displayed in Inspector

You can test each one individually—try adding a task, marking it complete, and retrieving tasks with different filters.

Add tasks tool displayed in inspector

The Inspector will show you the requests being sent and the responses coming back, making it easy to verify everything is working as expected.

Inspector get tasks displaying results

And there you have it! You've built a fully functional MCP server that exposes MongoDB operations as tools that AI models can use. This same pattern can be extended to build servers for all kinds of data sources and operations—the possibilities are pretty much endless.

Conclusion

Congrats! You've just built a fully functional MCP server that bridges the gap between AI models and MongoDB. What started as a simple, hardcoded todo list evolved into a real application with persistent storage, CRUD operations, and a clean architecture that any AI client can interact with.

The power of what you've built goes way beyond just a todo list. You now understand the fundamental pattern for exposing any data source or service to AI models through MCP. The @McpTool, @McpToolParam, and the other MCP annotations give you a consistent way to make anything accessible to AI.

What's really cool is how little AI-specific code we actually wrote. Most of what we built was just normal Spring Boot application code—models, repositories, services. The MCP layer was just a thin wrapper on top that made everything discoverable and callable by AI models. This means you can take existing Spring applications and MCP-enable them without major rewrites.

From here, you could extend this in tons of directions. Add authentication so different users have their own task lists. Create more complex queries with date filters or priority sorting. Build out @McpResources to expose collection schemas or statistics. Add @McpPrompts to help guide AI models on how to best use your tools. The Spring AI MCP framework gives you all the building blocks you need.

The Model Context Protocol is still relatively new, but it's quickly becoming the standard way to connect AI models to real-world data and systems. Getting comfortable with building MCP servers now puts you ahead of the curve as this ecosystem continues to grow. Whether you're building internal tools for your team, creating services for AI agents, or just experimenting with what's possible when AI can access real data, MCP is the bridge that makes it all work.

If you found this tutorial useful, check out my other tutorial, Secure Local RAG With Role-Based Access: Spring AI, Ollama, & MongoDB.


The Other 80%: What Productivity Really Means


We’ve been bombarded with claims about how much generative AI improves software developer productivity: It turns regular programmers into 10x programmers, and 10x programmers into 100x. And even more recently, we’ve been (somewhat less, but still) bombarded with the other side of the story: METR reports that, despite software developers’ belief that their productivity has increased, total end-to-end throughput has declined with AI assistance. We also saw hints of that in last year’s DORA report, which showed that release cadence actually slowed slightly when AI came into the picture. This year’s report reverses that trend.

I want to get a couple of assumptions out of the way first:

  • I don’t believe in 10x programmers. I’ve known people who thought they were 10x programmers, but their primary skill was convincing other team members that the rest of the team was responsible for their bugs. 2x, 3x? That’s real. We aren’t all the same, and our skills vary. But 10x? No.
  • There are a lot of methodological problems with the METR report—they’ve been widely discussed. I don’t believe that means we can ignore their result; end-to-end throughput on a software product is very difficult to measure.

As I (and many others) have written, actually writing code is only about 20% of a software developer’s job. So if you optimize that away completely (perfect, secure code, first time), you only cut about 20% off the total time, a 1.25x overall speedup at best. (Yeah, I know, it’s unclear whether or not “debugging” is included in that 20%. Omitting it is nonsense, but if you assume that debugging adds another 10%–20% and recognize that AI-generated code brings plenty of its own bugs, you’re back in the same place.) That’s a consequence of Amdahl’s law, if you want a fancy name, but it’s really just simple arithmetic.
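The arithmetic is easy to check. Amdahl's law gives the overall speedup as 1 / ((1 − p) + p/s), where p is the fraction of the work you accelerate and s is the factor you accelerate it by; a minimal sketch:

```java
public class AmdahlDemo {
    // Amdahl's law: overall speedup when a fraction p of the work
    // is accelerated by a factor s.
    static double speedup(double p, double s) {
        return 1.0 / ((1.0 - p) + p / s);
    }

    public static void main(String[] args) {
        // Coding is ~20% of the job; even an infinitely fast code
        // generator caps the end-to-end gain at 1 / 0.8 = 1.25x.
        System.out.println(speedup(0.20, Double.POSITIVE_INFINITY)); // 1.25
        // A more realistic 2x on that 20% slice yields only ~1.11x overall.
        System.out.println(speedup(0.20, 2.0));
    }
}
```

Even an infinite speedup on the coding slice tops out at 1.25x end to end, which is why the other 80% is where the leverage lives.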

Amdahl’s law becomes a lot more interesting if you look at the other side of performance. I worked at a high-performance computing startup in the late 1980s that did exactly this: It tried to optimize the 80% of a program that wasn’t easily vectorizable. And while Multiflow Computer failed in 1990, our very-long-instruction-word (VLIW) architecture was the basis for many of the high-performance chips that came afterward: chips that could execute many instructions per cycle, with reordered execution flows and branch prediction (speculative execution) for commonly used paths.

I want to apply the same kind of thinking to software development in the age of AI. Code generation seems like low-hanging fruit, though the voices of AI skeptics are rising. But what about the other 80%? What can AI do to optimize the rest of the job? That’s where the opportunity really lies.

Angie Jones’s talk at AI Codecon: Coding for the Agentic World takes exactly this approach. Angie notes that code generation isn’t changing how quickly we ship because it only touches one part of the software development lifecycle (SDLC), not the whole. That “other 80%” involves writing documentation, handling pull requests (PRs), and the continuous integration (CI) pipeline. In addition, she realizes that code generation is a one-person job (maybe two, if you’re pairing); coding is essentially solo work. Getting AI to assist the rest of the SDLC requires involving the rest of the team. In this context, she states the 1/9/90 rule: 1% are leaders who will experiment aggressively with AI and build new tools; 9% are early adopters; and 90% are “wait and see.” If AI is going to speed up releases, the 90% will need to adopt it; if it’s only the 1%, a PR here and there will be managed faster, but there won’t be substantial changes.

Angie takes the next step: She spends the rest of the talk going into some of the tools she and her team have built to take AI out of the IDE and into the rest of the process. I won’t spoil her talk, but she discusses three stages of readiness for the AI: 

  • AI-curious: The agent is discoverable, can answer questions, but can’t modify anything.
  • AI-ready: The AI is starting to make contributions, but they’re only suggestions. 
  • AI-embedded: The AI is fully plugged into the system, another member of the team.

This progression lets team members check AI out and gradually build confidence—as the AI developers themselves build confidence in what they can allow the AI to do.

Do Angie’s ideas take us all the way? Is this what we need to see significant increases in shipping velocity? It’s a very good start, but there’s another issue that’s even bigger. A company isn’t just a set of software development teams. It includes sales, marketing, finance, manufacturing, the rest of IT, and a lot more. There’s an old saying that you can’t move faster than the company. Speed up one function, like software development, without speeding up the rest and you haven’t accomplished much. A product that marketing isn’t ready to sell or that the sales group doesn’t yet understand doesn’t help.

That’s the next question we have to answer. We haven’t yet sped up real end-to-end software development, but we can. Can we speed up the rest of the company? MIT’s NANDA report claimed that 95% of enterprise AI pilots failed. It theorized that this was in part because most projects targeted customer service, while back-office work was more amenable to AI in its current form. That’s true, but there’s still the issue of “the rest.” Does it make sense to use AI to generate business plans, manage supply chains, and the like if all it will do is reveal the next bottleneck?

Of course it does. This may be the best way of finding out where the bottlenecks are: in practice, when they become bottlenecks. There’s a reason Donald Knuth said that premature optimization is the root of all evil—and that doesn’t apply only to software development. If we really want to see improvements in productivity through AI, we have to look company-wide.




2025.8 release introduces Stack Overflow Internal: The next generation of enterprise knowledge intelligence

Today, we’re excited to introduce Stack Overflow Internal—the next evolution of our enterprise platform and the future of Stack Overflow for Teams.