
OpenClaw Security Risks and Concerns

OpenClaw offers powerful autonomous AI capabilities, but it also introduces serious security, privacy, and governance risks. Learn the key OpenClaw security concerns and best practices enterprises must follow before production use.

The Passive AI Learning Stack That Changed the Way I Learn

Learn more with less effort.



Is the craft dead?


The Japanese are really good at woodworking. And I love watching The New Yankee Workshop. My dad makes Native American bows and arrows completely from scratch in his workshop, with trees that he finds.

This is all different from the stuff you get at IKEA. But I’ve been coding for money for 35 years, and systems are still complicated, computers still do dumb stuff, and humans still do dumb stuff. This is just like the move from assembler to C, like the introduction of syntax highlighting, the introduction of IntelliSense, and the copy-paste-directly-into-production shift when Stack Overflow happened.

There is value in good taste, there is value in craftsmanship, and there is value in human judgment. The furniture might be differently designed, but we’re still interior designers, and putting together a cohesive system is non-trivial.

Don’t let them gaslight you with one-shot Minecraft clones and one-shot C compilers. Software is still hard; it’s just that you’re no longer I/O bound by the speed of your fingertips.

I think that there will be lots of work for us cleaning up after the slop, but if you know what you’re doing, AI-augmented development is going to get you some amazing results. I am enjoying learning a ton during this momentous era shift - but the craft still exists.



© 2025 Scott Hanselman. All rights reserved.
    

The danger of glamourizing one shots


People should not be judging AI-augmented coding by “1 shots.” 

If someone told you that their model did a “one shot of Minecraft” and they’re impressed by that, you need to consider how much semantic heavy lifting the word “Minecraft” is doing in that prompt. 

Ask them to one shot Minecraft without using the word Minecraft. 

It’s not trivial to one shot something unique, because programming is the art of making the ambiguous incredibly specific through sculpting. AI sculpting is less about vibes and more about finding the specificity you want and keeping the system stable through changes. Good SDLC practices still matter, historical context still matters, and knowing how things work still matters - shout out to Grady Booch.

It’s a cool party trick to one shot Mario Brothers or Space Invaders, but then you’ll end up with the most mid version of both. Literally mid. You’ll get the statistical fat-part-of-the-bell-curve version of these mythical games. You’re telling the model to close its eyes and draw the face of these games from memory.

As high-level programming gives way to the prose compiler, making your goals and specs well understood by the ambiguity loop and showing good judgment is going to matter more than ever. Consider all of your words and make sure that certain words aren’t carrying all the semantic load, hidden or otherwise.



© 2025 Scott Hanselman. All rights reserved.
    

SQL SERVER – Execution Plans and Indexing Strategies – Quick Guide


Performance tuning is one of the most critical skills a database professional can develop. A single poorly optimized query can bring an entire application to its knees, while a well-tuned database can handle exponentially more load with the same hardware. This comprehensive guide walks you through practical techniques for identifying and resolving performance bottlenecks in SQL Server, complete with scripts you can run in your own environment. In this quick guide, let us learn about execution plans and indexing strategies.

Understanding the Performance Tuning Landscape

Before diving into specific techniques, it’s important to understand what we’re actually measuring when we talk about query performance. SQL Server tracks several key metrics that help us identify problematic queries.

Logical reads represent the number of pages SQL Server must read from the buffer cache to satisfy a query. This is often the most important metric because it directly correlates with the amount of work the database engine must perform. A query that performs millions of logical reads will almost always be slower and more resource-intensive than one that performs thousands.

CPU time measures how long the processor spends actively working on your query. Queries with high CPU consumption often involve complex calculations, excessive sorting, or inefficient join operations.

Elapsed time captures the total wall-clock duration of query execution. When elapsed time significantly exceeds CPU time, it typically indicates the query is waiting on something, whether that’s disk I/O, locks, or network latency.
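As a rough illustration of that distinction, the following query (a sketch using sys.dm_exec_query_stats, not one of the numbered scripts below) compares total CPU time and elapsed time for cached statements; those whose elapsed time far exceeds their CPU time are likely spending most of their duration waiting rather than working.

-- Sketch: compare CPU time vs. elapsed time for cached statements (DMV times are in microseconds)
SELECT TOP 20
    qt.text AS QueryText,
    qs.execution_count AS ExecutionCount,
    qs.total_worker_time / 1000 AS TotalCPUMs,
    qs.total_elapsed_time / 1000 AS TotalElapsedMs,
    CAST(qs.total_elapsed_time AS DECIMAL(18,2)) / NULLIF(qs.total_worker_time, 0) AS ElapsedToCpuRatio
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
ORDER BY ElapsedToCpuRatio DESC;
GO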


Setting Up a Test Environment

To demonstrate these concepts effectively, we need a table with enough data to show meaningful performance differences. The following script creates a SalesOrders table and populates it with one million rows of randomized data. This gives us a realistic dataset to work with while remaining manageable for testing purposes.

-- =============================================
-- SCRIPT 1: Create Sample Table and Insert Data
-- =============================================
CREATE TABLE SalesOrders
(
    OrderID INT IDENTITY(1,1) PRIMARY KEY,
    CustomerID INT,
    OrderDate DATETIME,
    OrderStatus VARCHAR(20),
    TotalAmount DECIMAL(18,2),
    ShippingAddress VARCHAR(500),
    Notes VARCHAR(MAX)
);
GO

SET NOCOUNT ON;
DECLARE @Counter INT = 1;
DECLARE @StartDate DATETIME = '2020-01-01';

WHILE @Counter <= 1000000
BEGIN
    INSERT INTO SalesOrders (CustomerID, OrderDate, OrderStatus, TotalAmount, ShippingAddress, Notes)
    VALUES (
        ABS(CHECKSUM(NEWID())) % 10000 + 1,
        DATEADD(DAY, ABS(CHECKSUM(NEWID())) % 1825, @StartDate),
        CASE ABS(CHECKSUM(NEWID())) % 5
            WHEN 0 THEN 'Pending'
            WHEN 1 THEN 'Processing'
            WHEN 2 THEN 'Shipped'
            WHEN 3 THEN 'Delivered'
            ELSE 'Cancelled'
        END,
        CAST(ABS(CHECKSUM(NEWID())) % 10000 + RAND() AS DECIMAL(18,2)),
        'Address line ' + CAST(@Counter AS VARCHAR(10)),
        'Order notes for order number ' + CAST(@Counter AS VARCHAR(10))
    );
    SET @Counter = @Counter + 1;
END;
GO

The script uses CHECKSUM with NEWID to generate pseudo-random values, distributing orders across approximately five years of dates and ten thousand distinct customers. This distribution pattern mimics real-world data where queries against specific date ranges or customers are common.
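If you want to sanity-check that distribution before moving on, a quick aggregate (not part of the original script set) confirms the row count, customer spread, and date range:

-- Quick sanity check on the generated data
SELECT
    COUNT(*) AS TotalOrders,
    COUNT(DISTINCT CustomerID) AS DistinctCustomers,
    MIN(OrderDate) AS EarliestOrder,
    MAX(OrderDate) AS LatestOrder
FROM SalesOrders;
GO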

Observing Baseline Performance

With our test data in place, let’s establish a baseline by running a typical query without any supporting indexes. We’ll enable statistics to capture the performance metrics that matter.

-- =============================================
-- SCRIPT 2: Query WITHOUT Index (Observe Performance)
-- =============================================
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
GO

SELECT OrderID, CustomerID, OrderDate, TotalAmount
FROM SalesOrders
WHERE OrderDate BETWEEN '2023-01-01' AND '2023-01-31';
GO

When you run this query, pay close attention to the Messages tab in SQL Server Management Studio. You’ll see output similar to this:

Table 'SalesOrders'. Scan count 1, logical reads 7429, physical reads 0...
SQL Server Execution Times: CPU time = 156 ms, elapsed time = 892 ms.

Those 7,429 logical reads tell us that SQL Server had to examine a substantial portion of the table to find our January 2023 orders. The query works, but it’s doing far more work than necessary.

Reading Execution Plans

Execution plans are your window into how SQL Server actually processes a query. They reveal the specific operations performed, the order of those operations, and the estimated versus actual costs involved. Understanding execution plans is fundamental to effective performance tuning.

-- =============================================
-- SCRIPT 3: View Execution Plan (Text Format)
-- =============================================
SET SHOWPLAN_TEXT ON;
GO

SELECT OrderID, CustomerID, OrderDate, TotalAmount
FROM SalesOrders
WHERE OrderDate BETWEEN '2023-01-01' AND '2023-01-31';
GO

SET SHOWPLAN_TEXT OFF;
GO

The text output will show a Clustered Index Scan operation. This means SQL Server is reading through the entire clustered index (essentially the whole table) to find matching rows. For a million-row table, this is expensive. The scan must examine every single row to determine whether its OrderDate falls within our specified range.

Microsoft publishes the official XML Showplan Schema at schemas.microsoft.com/sqlserver/2004/07/showplan, which defines the structure of execution plan XML output. This schema has evolved with each SQL Server version, from SQL Server 2005 through SQL Server 2022, adding new elements to represent new query processing capabilities. When you export an execution plan as XML or use SET SHOWPLAN_XML ON, the output conforms to this schema, making it possible to programmatically parse and analyze execution plans.
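If you want the XML form directly, the same test query can be wrapped in SET SHOWPLAN_XML ON; this is a small variation on Script 3, shown here for illustration:

SET SHOWPLAN_XML ON;
GO

SELECT OrderID, CustomerID, OrderDate, TotalAmount
FROM SalesOrders
WHERE OrderDate BETWEEN '2023-01-01' AND '2023-01-31';
GO

SET SHOWPLAN_XML OFF;
GO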

The Power of Proper Indexing

Now let’s create an index specifically designed to support our query pattern. This nonclustered index uses OrderDate as the key column, allowing SQL Server to quickly locate rows within a date range. The INCLUDE clause adds CustomerID and TotalAmount to the leaf level of the index, eliminating the need for additional lookups to the base table.

-- =============================================
-- SCRIPT 4: Create Index and Re-run Query
-- =============================================
CREATE NONCLUSTERED INDEX IX_SalesOrders_OrderDate
ON SalesOrders (OrderDate)
INCLUDE (CustomerID, TotalAmount);
GO

SELECT OrderID, CustomerID, OrderDate, TotalAmount
FROM SalesOrders
WHERE OrderDate BETWEEN '2023-01-01' AND '2023-01-31';
GO

Run this and observe the dramatic difference in the statistics output. The logical reads drop from thousands to perhaps a few dozen. The CPU and elapsed times decrease proportionally. This is the transformative power of proper indexing.

Verifying the Execution Plan Change

Let’s confirm that SQL Server is now using our new index by examining the execution plan again.

-- =============================================
-- SCRIPT 5: View Execution Plan (Text Format)
-- =============================================
SET SHOWPLAN_TEXT ON;
GO

SELECT OrderID, CustomerID, OrderDate, TotalAmount
FROM SalesOrders
WHERE OrderDate BETWEEN '2023-01-01' AND '2023-01-31';
GO

SET SHOWPLAN_TEXT OFF;
GO

Instead of a Clustered Index Scan, you should now see an Index Seek operation against IX_SalesOrders_OrderDate. A seek operation means SQL Server can navigate directly to the relevant portion of the index using the B-tree structure, rather than scanning every row. This is the difference between flipping to the right chapter in a book using the table of contents versus reading every page until you find what you’re looking for.

The execution plan output follows the structure defined in the Showplan XML Schema. Key elements include the RelOp element representing each relational operator in the plan, the IndexScan element (which covers both seeks and scans), and various attribute nodes capturing estimated and actual row counts, costs, and memory grants. Understanding this schema helps when you need to programmatically analyze plans across many queries.
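As a taste of that kind of programmatic analysis, the following sketch shreds cached plans that reference the SalesOrders table and lists each operator with its estimates. Treat it as a starting point rather than a polished diagnostic script.

-- Sketch: list plan operators for cached plans that reference SalesOrders
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT
    r.n.value('@PhysicalOp', 'VARCHAR(60)') AS PhysicalOp,
    r.n.value('@LogicalOp', 'VARCHAR(60)') AS LogicalOp,
    r.n.value('@EstimateRows', 'FLOAT') AS EstimatedRows,
    r.n.value('@EstimatedTotalSubtreeCost', 'FLOAT') AS EstSubtreeCost
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
CROSS APPLY qp.query_plan.nodes('//RelOp') AS r(n)
WHERE qp.query_plan.exist('//Object[@Table="[SalesOrders]"]') = 1;
GO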

Discovering Missing Indexes

SQL Server continuously tracks query patterns and identifies potentially beneficial indexes that don’t exist. This information accumulates in the missing index DMVs and provides valuable guidance for index creation. However, these recommendations should be evaluated critically rather than implemented blindly.

-- =============================================
-- SCRIPT 6: Find Missing Indexes with Highest Impact
-- =============================================
SELECT TOP 20
    CONVERT(DECIMAL(18,2), migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans)) AS ImprovementMeasure,
    OBJECT_NAME(mid.object_id, mid.database_id) AS TableName,
    mid.equality_columns AS EqualityColumns,
    mid.inequality_columns AS InequalityColumns,
    mid.included_columns AS IncludedColumns,
    migs.user_seeks AS UserSeeks,
    migs.user_scans AS UserScans,
    migs.avg_total_user_cost AS AvgTotalUserCost,
    migs.avg_user_impact AS AvgUserImpact,
    'CREATE NONCLUSTERED INDEX IX_' + OBJECT_NAME(mid.object_id, mid.database_id) + '_' 
        + REPLACE(REPLACE(REPLACE(ISNULL(mid.equality_columns, ''), ', ', '_'), '[', ''), ']', '')
        + CASE WHEN mid.equality_columns IS NOT NULL AND mid.inequality_columns IS NOT NULL THEN '_' ELSE '' END
        + REPLACE(REPLACE(REPLACE(ISNULL(mid.inequality_columns, ''), ', ', '_'), '[', ''), ']', '')
        + ' ON ' + mid.statement
        + ' (' + ISNULL(mid.equality_columns, '')
        + CASE WHEN mid.equality_columns IS NOT NULL AND mid.inequality_columns IS NOT NULL THEN ', ' ELSE '' END
        + ISNULL(mid.inequality_columns, '') + ')'
        + ISNULL(' INCLUDE (' + mid.included_columns + ')', '') AS CreateIndexStatement
FROM sys.dm_db_missing_index_groups mig
INNER JOIN sys.dm_db_missing_index_group_stats migs ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details mid ON mig.index_handle = mid.index_handle
WHERE mid.database_id = DB_ID()
ORDER BY ImprovementMeasure DESC;
GO

The ImprovementMeasure calculation combines average query cost, potential improvement percentage, and frequency of use into a single metric that helps prioritize which indexes to create first. The script also generates ready-to-use CREATE INDEX statements, though you should always review column order and consider consolidating similar index recommendations before implementation.

Identifying Resource-Intensive Queries

Finding the queries that consume the most resources is essential for targeted optimization efforts. The following script identifies queries with the highest logical read counts, which typically correlate with the most significant performance impact.

-- =============================================
-- SCRIPT 7: Find Top 20 Queries by Logical Reads
-- =============================================
SELECT TOP 20
    SUBSTRING(qt.TEXT, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(qt.TEXT)
            ELSE qs.statement_end_offset
        END - qs.statement_start_offset)/2)+1) AS QueryText,
    qs.execution_count AS ExecutionCount,
    qs.total_logical_reads AS TotalLogicalReads,
    qs.total_logical_reads/qs.execution_count AS AvgLogicalReads,
    qs.total_worker_time AS TotalCPUTime,
    qs.total_worker_time/qs.execution_count AS AvgCPUTime,
    qs.total_elapsed_time AS TotalElapsedTime,
    qs.total_elapsed_time/qs.execution_count AS AvgElapsedTime,
    qs.last_execution_time AS LastExecutionTime,
    qp.query_plan AS QueryPlan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE qt.dbid = DB_ID()
ORDER BY qs.total_logical_reads DESC;
GO

This query extracts the exact statement text using offset calculations, which is important when a batch contains multiple statements. The query plan column returns XML that you can click in Management Studio to open the graphical plan viewer, making it easy to analyze the execution strategy for each problematic query.

Analyzing CPU-Bound Queries

While I/O is often the primary bottleneck, CPU-intensive queries can also degrade overall system performance, particularly on systems with limited processor resources. Sorting, hashing, and complex calculations all contribute to CPU consumption.

-- =============================================
-- SCRIPT 8: Find Top 20 Queries by CPU Time
-- =============================================
SELECT TOP 20
    SUBSTRING(qt.TEXT, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(qt.TEXT)
            ELSE qs.statement_end_offset
        END - qs.statement_start_offset)/2)+1) AS QueryText,
    qs.execution_count AS ExecutionCount,
    qs.total_logical_reads AS TotalLogicalReads,
    qs.total_logical_reads/qs.execution_count AS AvgLogicalReads,
    qs.total_worker_time AS TotalCPUTime,
    qs.total_worker_time/qs.execution_count AS AvgCPUTime,
    qs.total_elapsed_time AS TotalElapsedTime,
    qs.total_elapsed_time/qs.execution_count AS AvgElapsedTime,
    qs.last_execution_time AS LastExecutionTime,
    qp.query_plan AS QueryPlan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE qt.dbid = DB_ID()
ORDER BY qs.total_worker_time DESC;
GO

When analyzing CPU-bound queries, look for Sort and Hash Match operators in the execution plan, as these operations are particularly processor-intensive. Sometimes adding an index that provides pre-sorted data can eliminate expensive sort operations entirely.
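For example, assuming the IX_SalesOrders_OrderDate index from Script 4 is still in place, an ORDER BY on the index key can usually be satisfied in index order, so the plan should not need an explicit Sort operator:

-- The ORDER BY matches the index key, so no Sort operator should be required
SELECT OrderID, CustomerID, OrderDate, TotalAmount
FROM SalesOrders
WHERE OrderDate >= '2023-01-01' AND OrderDate < '2023-02-01'
ORDER BY OrderDate;
GO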

Evaluating Index Effectiveness

Creating indexes is only half the battle. You also need to monitor whether your existing indexes are actually being used and providing value. Unused indexes consume storage space and slow down write operations without providing any benefit.

-- =============================================
-- SCRIPT 9: Analyze Index Usage Patterns
-- =============================================
SELECT 
    OBJECT_NAME(s.object_id) AS TableName,
    i.name AS IndexName,
    i.type_desc AS IndexType,
    s.user_seeks AS UserSeeks,
    s.user_scans AS UserScans,
    s.user_lookups AS UserLookups,
    s.user_updates AS UserUpdates,
    s.last_user_seek AS LastUserSeek,
    s.last_user_scan AS LastUserScan,
    CASE 
        WHEN s.user_seeks + s.user_scans + s.user_lookups = 0 THEN 'Unused'
        WHEN s.user_scans > s.user_seeks THEN 'Mostly Scanned - Review'
        ELSE 'Healthy'
    END AS IndexHealth
FROM sys.dm_db_index_usage_stats s
INNER JOIN sys.indexes i ON s.object_id = i.object_id AND s.index_id = i.index_id
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
    AND s.database_id = DB_ID()
ORDER BY s.user_seeks + s.user_scans + s.user_lookups DESC;
GO

The IndexHealth column provides a quick assessment based on usage patterns. Indexes marked as “Unused” are candidates for removal, though you should verify this across a representative time period that includes all typical workload patterns. Indexes flagged as “Mostly Scanned” may not be providing the seek performance they were designed for and warrant investigation into whether the key columns are optimal.
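Once you have confirmed that an index is genuinely unused across a full workload cycle, a helper query along these lines (a sketch to review, not to run blindly) can generate candidate DROP INDEX statements:

-- Sketch: generate DROP INDEX statements for nonclustered indexes with no recorded reads
SELECT
    'DROP INDEX ' + QUOTENAME(i.name) + ' ON '
        + QUOTENAME(OBJECT_SCHEMA_NAME(i.object_id)) + '.' + QUOTENAME(OBJECT_NAME(i.object_id)) AS DropStatement
FROM sys.indexes i
LEFT JOIN sys.dm_db_index_usage_stats s
    ON s.object_id = i.object_id
    AND s.index_id = i.index_id
    AND s.database_id = DB_ID()
WHERE i.type_desc = 'NONCLUSTERED'
    AND i.is_primary_key = 0
    AND i.is_unique_constraint = 0
    AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
    AND ISNULL(s.user_seeks + s.user_scans + s.user_lookups, 0) = 0;
GO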

Managing Index Fragmentation

As data is inserted, updated, and deleted, indexes become fragmented over time. Fragmentation increases the number of pages SQL Server must read and can significantly impact query performance. Regular monitoring and maintenance help keep indexes performing optimally.

-- =============================================
-- SCRIPT 10: Check Index Fragmentation Levels
-- =============================================
SELECT 
    OBJECT_NAME(ips.object_id) AS TableName,
    i.name AS IndexName,
    ips.index_type_desc AS IndexType,
    ips.avg_fragmentation_in_percent AS FragmentationPercent,
    ips.page_count AS PageCount,
    ips.avg_page_space_used_in_percent AS AvgPageSpaceUsed,
    CASE 
        WHEN ips.avg_fragmentation_in_percent < 10 THEN 'No Action Needed'
        WHEN ips.avg_fragmentation_in_percent < 30 THEN 'Reorganize'
        ELSE 'Rebuild'
    END AS RecommendedAction
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
INNER JOIN sys.indexes i ON ips.object_id = i.object_id AND ips.index_id = i.index_id
WHERE ips.page_count > 1000
    AND ips.avg_fragmentation_in_percent > 10
ORDER BY ips.avg_fragmentation_in_percent DESC;
GO

The conventional wisdom suggests reorganizing indexes with fragmentation between 10% and 30%, and rebuilding those above 30%. However, this guidance originated when storage was predominantly spinning disk. With modern SSDs, fragmentation has less impact on read performance, though it still affects page density and storage efficiency. The page count filter of 1,000 pages excludes small indexes where fragmentation is largely irrelevant.
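Acting on those recommendations typically looks like the following, using the index created earlier as an example:

-- Reorganize for moderate fragmentation (roughly 10 to 30 percent)
ALTER INDEX IX_SalesOrders_OrderDate ON SalesOrders REORGANIZE;
GO

-- Rebuild for heavier fragmentation; ONLINE = ON requires Enterprise edition or Azure SQL
ALTER INDEX IX_SalesOrders_OrderDate ON SalesOrders REBUILD;
GO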

Detecting Implicit Conversions

Implicit conversions occur when SQL Server must convert data from one type to another to perform a comparison or join. These conversions can prevent index usage and significantly degrade performance. They often hide in plain sight, as the query appears syntactically correct but performs poorly due to type mismatches.

-- =============================================
-- SCRIPT 11: Find Queries with Implicit Conversions
-- =============================================
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT TOP 50
    DB_NAME(qt.dbid) AS DatabaseName,
    OBJECT_NAME(qt.objectid, qt.dbid) AS ObjectName,
    cp.usecounts AS ExecutionCount,
    qp.query_plan,
    qt.text AS QueryText
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) qt
WHERE qp.query_plan.exist('declare namespace 
    qplan="http://schemas.microsoft.com/sqlserver/2004/07/showplan";
    //qplan:ScalarOperator/qplan:Identifier/qplan:ColumnReference[@Column="CONVERT_IMPLICIT"]') = 1
    OR CAST(qp.query_plan AS NVARCHAR(MAX)) LIKE '%CONVERT_IMPLICIT%'
ORDER BY cp.usecounts DESC;
GO

This script searches the plan cache for queries whose execution plans contain CONVERT_IMPLICIT operations. The XQuery expression uses the official Showplan namespace to search the XML structure, while the LIKE clause serves as a fallback. Common causes include comparing varchar columns to nvarchar parameters, joining columns with mismatched types, and passing integer values to decimal columns. Fixing these typically involves correcting parameter types in application code or explicitly casting values in the query.
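As a simplified illustration, using the sample table's VARCHAR OrderStatus column and a hypothetical parameter, an NVARCHAR parameter compared against a VARCHAR column typically forces the column side to be converted, which can prevent an index on that column from being used for a seek:

-- Mismatched types: the VARCHAR column is implicitly converted to NVARCHAR
DECLARE @StatusN NVARCHAR(20) = N'Shipped';
SELECT COUNT(*) FROM SalesOrders WHERE OrderStatus = @StatusN;
GO

-- Matching types: no conversion, so an index on OrderStatus (if one existed) could be seeked
DECLARE @Status VARCHAR(20) = 'Shipped';
SELECT COUNT(*) FROM SalesOrders WHERE OrderStatus = @Status;
GO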

Monitoring Statistics Currency

SQL Server relies on statistics to estimate row counts and choose efficient execution plans. When statistics become stale, the optimizer may make poor decisions based on outdated information. Monitoring statistics age and modification counts helps identify when updates are needed.

-- =============================================
-- SCRIPT 12: Check When Statistics Were Last Updated
-- =============================================
SELECT 
    OBJECT_NAME(s.object_id) AS TableName,
    s.name AS StatisticsName,
    STATS_DATE(s.object_id, s.stats_id) AS LastUpdated,
    DATEDIFF(DAY, STATS_DATE(s.object_id, s.stats_id), GETDATE()) AS DaysOld,
    sp.rows AS RowsInTable,
    sp.modification_counter AS ModificationsSinceLastUpdate
FROM sys.stats s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
    AND sp.modification_counter > 0
ORDER BY sp.modification_counter DESC;
GO

The modification_counter reveals how many row changes have occurred since statistics were last updated. Tables with high modification counts relative to their size are prime candidates for statistics updates. While SQL Server’s auto-update statistics feature handles this automatically in most cases, large tables may benefit from more frequent manual updates, particularly after bulk data loads.
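A manual update after a bulk load is straightforward, for example:

-- Update all statistics on the table with a full scan
UPDATE STATISTICS SalesOrders WITH FULLSCAN;
GO

-- Or target a single statistics object with a sampled scan
UPDATE STATISTICS SalesOrders IX_SalesOrders_OrderDate WITH SAMPLE 25 PERCENT;
GO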

Cleaning Up

After completing your testing, remove the sample objects to restore your database to its original state.

-- =============================================
-- SCRIPT 13: Clean Up Sample Objects
-- =============================================
IF OBJECT_ID('SalesOrders', 'U') IS NOT NULL
    DROP TABLE SalesOrders;
GO

Building a Performance Tuning Practice

Effective performance tuning is an ongoing discipline rather than a one-time activity. Establishing regular monitoring practices helps catch problems before they impact users. Consider scheduling the diagnostic queries from this guide to run periodically, storing results in a tracking table to identify trends over time.
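A minimal sketch of that idea, using a hypothetical dbo.PerfSnapshot table, might look like this; the INSERT could then be scheduled through SQL Server Agent or your preferred job framework:

-- Hypothetical tracking table for periodic snapshots of top resource consumers
CREATE TABLE dbo.PerfSnapshot
(
    SnapshotID INT IDENTITY(1,1) PRIMARY KEY,
    CapturedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
    QueryText NVARCHAR(MAX),
    ExecutionCount BIGINT,
    TotalLogicalReads BIGINT,
    TotalCPUTime BIGINT
);
GO

-- Capture the current top consumers by logical reads
INSERT INTO dbo.PerfSnapshot (QueryText, ExecutionCount, TotalLogicalReads, TotalCPUTime)
SELECT TOP 20 qt.text, qs.execution_count, qs.total_logical_reads, qs.total_worker_time
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
ORDER BY qs.total_logical_reads DESC;
GO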

When approaching a new performance problem, start with the high-level resource consumption queries to identify which queries deserve attention. Then drill into execution plans for those specific queries to understand why they’re expensive. Look for table scans where seeks would be appropriate, missing indexes that would benefit the workload, and implicit conversions that prevent index usage.

Remember that every index you create imposes maintenance overhead on insert, update, and delete operations. The goal isn’t to create an index for every query pattern, but to find the right balance that optimizes overall workload performance. Sometimes a query that runs once a day doesn’t warrant an index that would slow down thousands of transactions per minute.

Finally, always test changes in a non-production environment before deployment. What looks like an obvious improvement in isolation may have unintended consequences when combined with the full application workload. Measure before and after, keep records of what you’ve changed, and be prepared to roll back if results don’t match expectations.


If you’re facing persistent performance challenges that require expert guidance, consider my Comprehensive Database Performance Health Check service. With over two decades of experience helping more than 400 customers optimize their SQL Server environments, I provide hands-on collaboration to identify bottlenecks and implement solutions, typically within just 2 to 4 hours. You can also follow me on Twitter for daily SQL Server tips and insights.

Reference: Pinal Dave (https://blog.sqlauthority.com)

First appeared on SQL SERVER – Execution Plans and Indexing Strategies – Quick Guide


Secure MCP Development with Rust and Gemini CLI


Leveraging Gemini CLI and the underlying Gemini LLM to build Model Context Protocol (MCP) AI applications in the Rust programming language with a local development environment.

Why not just use Python?

Python has traditionally been the main coding language for ML and AI tools. One of the strengths of the MCP protocol is that the actual implementation details are independent of the development language. The reality is that not every project is coded in Python, and MCP allows you to use the latest AI approaches with other coding languages.

What is this Tutorial Trying to Do?

Building on previous tutorials, the goal is to extend a Rust MCP server with basic support for API key enablement. The ultimate goal is to allow MCP servers to be deployed as unauthenticated Cloud Run endpoints while still being protected by an API key.

What is Rust?

Rust is a high-performance, memory-safe, compiled language.

Rust provides memory-safety guarantees beyond C/C++ and can also deliver exceptional performance, as it is compiled directly to native binaries.

Initial Environment Setup

The environment is meant to be run from a Bash-like shell. You can run this from a Linux VM, ChromeOS Linux VM, Firebase Studio environment, or any environment that provides a basic shell. You will also need a working Docker environment.

Rust Setup

Instructions to install Rust are available here:

Getting started

For a Linux-like environment, the command looks like this:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

Rust also depends on a working C compiler and OpenSSL setup. For a Debian 12 system, install the basic tools for development:

sudo apt install build-essential
sudo apt install libssl-dev
sudo apt install pkg-config
sudo apt-get install libudev-dev
sudo apt install make
sudo apt install git

Gemini CLI

If it is not pre-installed, you can install the Gemini CLI to interact with the source files and provide real-time assistance:

sudo npm install -g @google/gemini-cli

Note: if you are in a non-standard environment, you will need to make sure at least Node version 20 is available in order to run Gemini CLI.

Testing the Gemini CLI Environment

Once you have all the tools and the correct Node.js version in place, you can test the startup of Gemini CLI. You will need to authenticate with an API key or your Google Account:

gemini

Getting Started with Rust and MCP

When MCP was first released, there were several competing Rust frameworks that provided support for the protocol. Eventually, one officially supported SDK was consolidated to provide a standard package for building MCP applications with Rust. This SDK is more like a toolbox that provides many options: clients and servers, different transports, and even more advanced integration options.

The official MCP Rust SDK (rmcp) is available here:

GitHub - modelcontextprotocol/rust-sdk: The official Rust SDK for the Model Context Protocol

Where do I start?

The strategy for validating Rust for MCP development is an incremental, step-by-step approach.

First, the basic development environment is set up with the required system variables and a working Gemini CLI configuration.

A command line version of the System Information tool is built with Gemini CLI.

Then, a minimal Rust MCP server is built with the stdio transport, working directly with Gemini CLI in the local environment. This validates the connection from Gemini CLI to the locally compiled Rust process via MCP. The MCP client (Gemini CLI) and the compiled Rust MCP server binary both run in the same environment.

Setup the Basic Environment

At this point you should have a working Rust compiler and a working Gemini CLI installation. The next step is to clone the GitHub samples repository with support scripts:

cd ~
git clone https://github.com/xbill9/iap-https-rust

Then run init.sh from the cloned directory.

The script will attempt to determine your shell environment and set the correct variables:

cd iap-https-rust
source init.sh

If your session times out or you need to re-authenticate, you can run the set_env.sh script to reset your environment variables:

cd iap-https-rust
source set_env.sh

Variables like PROJECT_ID need to be set up for use in the various build scripts, so the set_env.sh script can be used to reset the environment if your session times out.

Minimal System Information Tool Build

The first step is to build the basic tool directly with Rust. This allows the tool to be debugged and tested locally before adding the MCP layer.

All of the sample code is in the stdiokey directory, which is shorthand for an stdio MCP server with an API key:

xbill@penguin:~/iap-https-rust/stdiokey$

First build the tool locally:

xbill@penguin:~/iap-https-rust/stdiokey$ make
Building the Rust project...
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.16s
xbill@penguin:~/iap-https-rust/stdiokey$ 

then lint check the code:

xbill@penguin:~/iap-https-rust/stdiokey$ make lint
Linting code...
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.15s
xbill@penguin:~/iap-https-rust/stdiokey$ 

and run local tests:


xbill@penguin:~/iap-https-rust/stdiokey$ make test
Running tests...
   Compiling sysutils-stdiokey-rust v0.2.0 (/home/xbill/iap-https-rust/stdiokey)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 1.56s
     Running unittests src/main.rs (target/debug/deps/sysutils_stdiokey_rust-e1e853f069b7cd67)

running 3 tests
test tests::test_schema_generation ... ok
test tests::test_disk_usage ... ok
test tests::test_local_system_info ... ok

test result: ok. 3 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.15s

xbill@penguin:~/iap-https-rust/stdiokey$ 

The last step is to build the production version:

xbill@penguin:~/iap-https-rust/stdiokey$ make release
Building Release...
    Finished `release` profile [optimized] target(s) in 0.15s
xbill@penguin:~/iap-https-rust/stdiokey$ 

Running the Tool Locally

Once the release version has been built, the resulting binary can be executed directly in the local environment.

A quick summary of local system info can be run right from the Makefile:

xbill@penguin:~/iap-https-rust/stdiokey$ make info
{"timestamp":"2026-02-08T22:31:20.660215Z","level":"ERROR","fields":{"message":"Application failed","error":"Authentication Required: Please provide the API Key using --key <KEY> or MCP_API_KEY environment variable"},"target":"sysutils_stdiokey_rust"}
make: *** [Makefile:26: info] Error 1
xbill@penguin:~/iap-https-rust/stdiokey$ 

This call failed because no API key was provided on the command line or in the current environment.

The tool will also fail if an invalid key is passed:

xbill@penguin:~/iap-https-rust/stdiokey$ export MCP_API_KEY=1234567890
xbill@penguin:~/iap-https-rust/stdiokey$ make info
{"timestamp":"2026-02-08T22:34:04.966794Z","level":"INFO","fields":{"message":"Fetching MCP API Key for project: 1056842563084"},"target":"sysutils_stdiokey_rust"}
{"timestamp":"2026-02-08T22:34:05.029982Z","level":"ERROR","fields":{"message":"Application failed","error":"Failed to fetch MCP API Key\n\nCaused by:\n 0: Failed to list API keys\n 1: Token retrieval failed: Connection failure: Hyper error: client error (Connect)\n "},"target":"sysutils_stdiokey_rust"}
make: *** [Makefile:26: info] Error 1
xbill@penguin:~/iap-https-rust/stdiokey$ 

Setting an API Key

During project setup, the init.sh script configures the Google Cloud environment and creates a sample key to secure the connection. To set this key in the current environment, use the set_key.sh script:

xbill@penguin:~/iap-https-rust/stdiokey$ source ../set_key.sh
--- Setting Google Cloud Project ID ---
Using Google Cloud project: comglitn
Checking for existing MCP API Key...
Using existing MCP API Key: projects/1056842563084/locations/global/keys/cbd6422f-e594-4536-9ad9-6f179f43f11b
Retrieving API Key string...
MCP API Key retrieved and exported.

To use with the 'manual' or 'local' variants, ensure this script was sourced:
source ./set_key.sh
cargo run --bin manual
--- Environment Checks ---
Not running in Google Cloud VM or Shell. Checking ADC...
Running on ChromeOS.
--- Initial Setup complete ---
xbill@penguin:~/iap-https-rust/stdiokey$

The tool can now execute:

xbill@penguin:~/iap-https-rust/stdiokey$ cargo run info
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.07s
     Running `target/debug/sysutils-stdiokey-rust info`
{"timestamp":"2026-02-08T23:07:46.935357Z","level":"INFO","fields":{"message":"Fetching MCP API Key for project: 1056842563084"},"target":"sysutils_stdiokey_rust"}
{"timestamp":"2026-02-08T23:07:49.712041Z","level":"INFO","fields":{"message":"Successfully fetched API key via gcloud"},"target":"sysutils_stdiokey_rust"}
System Information Report
=========================

MCP API Key Status
------------------
Provided Key: [FOUND]
Cloud Match: [MATCHED]

System Information
------------------
System Name: Debian GNU/Linux
Kernel Version: 6.6.99-09121-g16665fbb817c
OS Version: 12
Host Name: penguin

CPU Information
---------------
Number of Cores: 16

Memory Information
------------------
Total Memory: 6364 MB
Used Memory: 297 MB
Total Swap: 0 MB
Used Swap: 0 MB

Network Interfaces
------------------
lo : RX: 3897 bytes, TX: 3897 bytes (MAC: 00:00:00:00:00:00)
veth7ac2607 : RX: 126 bytes, TX: 1868 bytes (MAC: 7e:ab:4b:a7:6a:bf)
veth6af42f9 : RX: 126 bytes, TX: 1868 bytes (MAC: 1a:05:e1:da:02:b3)
br-e70a18428e21 : RX: 168 bytes, TX: 746 bytes (MAC: 2e:76:46:de:e4:6c)
eth0 : RX: 5884193 bytes, TX: 42076984 bytes (MAC: 00:16:3e:07:39:7b)
docker0 : RX: 0 bytes, TX: 0 bytes (MAC: 2a:e4:d7:54:a8:de)

xbill@penguin:~/iap-https-rust/stdiokey$ 

System Information with MCP STDIO Transport

One of the key features that the Rust rmcp SDK provides is an abstraction over various transport methods.

The high-level MCP tool implementation is the same no matter which low-level transport channel the MCP client uses to connect to an MCP server.

The simplest transport that the SDK supports is the stdio (stdin/stdout) transport, which connects to a locally running process. Both the MCP client and MCP server must be running in the same environment.

First, switch to the directory with the Rust stdio sample code and build the release binary:

xbill@penguin:~/iap-https-rust/stdiokey$ make release
Building Release...
    Finished `release` profile [optimized] target(s) in 0.18s
xbill@penguin:~/iap-https-rust/stdiokey$ 

You can validate the final result of the build by checking the compiled Rust binary:

xbill@penguin:~/iap-https-rust/stdiokey/target/release$ file sysutils-stdiokey-rust
sysutils-stdiokey-rust: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0, BuildID[sha1]=a9f3e7464e5ff32b4fbe55cc7659cb1c7835eaf9, stripped
xbill@penguin:~/iap-https-rust/stdiokey/target/release$ 

Connecting Gemini CLI to the MCP STDIO Server

To configure Gemini CLI as the MCP client, a sample settings.json is provided in the .gemini config directory:

{
  "mcpServers": {
    "sysutils-stdiokey-rust": {
        "command": "$HOME/iap-https-rust/stdiokey/target/release/sysutils-stdiokey-rust",
        "args": ["--prebuilt","--stdio"],
        "env": {
          "RUST_LOG": "trace",
          "MCP_API_KEY": "xxx"
        }
      }
    }
  }

This sample Gemini CLI config will fail, as the MCP_API_KEY is hard-coded to xxx:

✕ Error during discovery for MCP server 'sysutils-stdiokey-rust': MCP error -32000: Connection closed

 > /mcp list
Configured MCP servers:

🔴 sysutils-stdiokey-rust - Disconnected

Pass the API Key in Gemini Settings

The stdio server checks the API key if it is provided. The set_key.sh script sets the environment variable from the Google Cloud settings. A sample Gemini setup is provided for this scenario as well:

{
  "mcpServers": {
    "sysutils-stdiokey-rust": {
        "command": "$HOME/iap-https-rust/stdiokey/target/release/sysutils-stdiokey-rust",
        "args": ["--prebuilt","--stdio"],
        "env": {
          "RUST_LOG": "trace",
          "MCP_API_KEY": "$MCP_API_KEY"
        }
      }
    }
  }

Next Gemini CLI is used to check the MCP connection settings:

> /mcp list
Configured MCP servers:
🟢 sysutils-stdio-rust - Ready (2 tools)
  Tools:
  - disk_usage
  - local_system_info

The local MCP server (sysutils-stdiokey-rust) can now be used directly with Gemini CLI as an MCP client. This is the same Rust binary that was tested locally as a standalone build:

> run mcp tool local_system_info
✦ I will retrieve the system information using the local_system_info tool.

╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ ✓ local_system_info (sysutils-stdiokey-rust MCP Server) {} │
│ │
│ System Information Report │
│ ========================= │
│ │
│ Authentication: [VERIFIED] (Running as MCP Server) │
│ │
│ System Information │
│ --- │
│ System Name: Debian GNU/Linux │
│ Kernel Version: 6.6.99-09121-g16665fbb817c │
│ OS Version: 12 │
│ Host Name: penguin │
│ │
│ CPU Information │
│ --- │
│ Number of Cores: 16 │
│ │
│ Memory Information │
│ --- │
│ Total Memory: 6364 MB │
│ Used Memory: 678 MB │
│ Total Swap: 0 MB │
│ Used Swap: 0 MB │
│ │
│ Network Interfaces │
│ --- │
│ veth7ac2607 : RX: 126 bytes, TX: 1938 bytes (MAC: 7e:ab:4b:a7:6a:bf) │
│ br-e70a18428e21 : RX: 168 bytes, TX: 746 bytes (MAC: 2e:76:46:de:e4:6c) │
│ eth0 : RX: 6840392 bytes, TX: 44354526 bytes (MAC: 00:16:3e:07:39:7b) │
│ veth6af42f9 : RX: 126 bytes, TX: 1868 bytes (MAC: 1a:05:e1:da:02:b3) │
│ docker0 : RX: 0 bytes, TX: 0 bytes (MAC: 2a:e4:d7:54:a8:de) │
│ lo : RX: 3897 bytes, TX: 3897 bytes (MAC: 00:00:00:00:00:00) │
│ │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
✦ The system information has been retrieved and is displayed above. What would you like to do next?

Project Package Details

The stdiokey project has been published to crates.io:

crates.io: Rust Package Registry

Summary

The potential for using Rust for MCP development with Gemini CLI was validated with an incremental, step-by-step approach.

A minimal stdio-transport MCP server was built from Rust source code and validated with Gemini CLI running as an MCP client in the same local environment.

This approach can be extended to more complex deployments using other MCP transports and cloud-based options.
