TL;DR: Discover how Smart Suggestions (Slash Menu) and Mentions enhance the React Rich Text Editor’s workflow. The blog explains how slash-triggered commands improve formatting flow, how structured @ tagging strengthens accuracy, and how these features together support smoother content creation, stronger collaboration, and a more intuitive editing experience in modern applications.
Are you building a modern application that demands powerful, collaborative content tools? In today’s fast-paced digital landscape, content creation must be intuitive and efficient to meet workflow demands. The Syncfusion® Rich Text Editor makes content creation simple and efficient. Its Smart Suggestions and Mentions features improve formatting and collaboration, making it a great fit for blogs, forums, and messaging apps.
In this blog post, we’ll explore how Smart Suggestions and Mentions work, their key benefits, and share sample code to help you implement them.
Why Smart Suggestions and Mentions matter
Modern users expect:
Fast actions without hunting through toolbars.
Structured formatting with minimal effort.
Accurate tagging inside collaborative environments.
Smart Suggestions and Mentions help achieve all of this by providing context-aware menus right where users type.
Configuring Smart Suggestions (Slash Menu) in React Rich Text Editor
Smart Suggestions, also known as the Slash Menu, allow users to type / in the editor to open a quick command popup for actions such as applying headings, creating lists, or inserting media. This removes friction from formatting and makes content creation feel natural, especially for blogging and note-taking.
How Smart Suggestions work
Trigger: Type / inside the editor.
Options: A customizable list of commands (e.g., paragraph, headings, lists, media insertion, and more).
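A hedged sketch of how the Slash Menu might be wired up follows. The slashMenuSettings property, slashMenuItemSelect event, and executeCommand method are those named elsewhere in this post; the MeetingNotes command and the handler's argument shape (args.item.text, args.cancel) are assumptions for illustration.

```javascript
// Pure helper: map a selected Slash Menu item to the HTML it should insert.
// 'MeetingNotes' is a hypothetical custom command used for illustration.
function buildSlashCommandHtml(itemText) {
  if (itemText === 'MeetingNotes') {
    return [
      '<h3>Meeting Notes</h3>',
      '<ul><li>Attendees:</li><li>Agenda:</li><li>Action items:</li></ul>'
    ].join('');
  }
  return null; // let the editor handle built-in items (headings, lists, etc.)
}

// Sketch of wiring the helper into the editor (JSX shown as a comment):
//
// <RichTextEditorComponent
//   ref={rteRef}
//   slashMenuSettings={{
//     items: ['Paragraph', 'Heading 1', 'Heading 2', 'OrderedList',
//             'UnorderedList', 'Image', { text: 'MeetingNotes' }]
//   }}
//   slashMenuItemSelect={(args) => {
//     const html = buildSlashCommandHtml(args.item && args.item.text);
//     if (html) {
//       args.cancel = true; // suppress the default action
//       rteRef.current.executeCommand('insertHTML', html);
//     }
//   }}
// />
```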
This event handler runs when a user selects an item from the Slash Menu.
In the above code example, we check whether the selected command is MeetingNotes and use the executeCommand method to insert a predefined HTML snippet.
Developers can extend this logic to handle other custom actions, such as inserting templates, signatures, or dynamic content.
For example, a content creator can type / select Heading 1 to format a title, or choose MeetingNotes to insert a predefined note, streamlining their workflow as shown below.
Smart Suggestions menu displayed after typing “/” in the Rich Text Editor.
Benefits of Smart Suggestions
Here are the key benefits that make this feature efficient.
Faster formatting: Skip toolbars and format inline.
Contextual workflow: Suggestions appear exactly where users type.
Customizability: Tailor the Slash Menu to include app-specific commands, like inserting signatures or templates.
Ready to level up your editor? Explore the Smart Suggestions demo and documentation to start implementing and customizing it.
Configuring Mentions in React Rich Text Editor
The Mentions feature allows users to tag people, groups, or entities by typing @, triggering a suggestion list populated from a data source. This is perfect for collaborative applications like messaging apps, comment sections, or project management tools, ensuring accurate and efficient tagging.
How Mentions work
Trigger: Type @ followed by a character to display the suggestion list.
Data Source: A list of objects (e.g., employee records with names, emails, and profile images).
Customization:
Use itemTemplate to style the suggestion list.
Use displayTemplate to format tagged values.
Properties like suggestionCount, popupWidth, and allowSpaces provide further control.
Integrating Mentions in React Rich Text Editor
Below is a React example showing how to integrate and customize the Mentions feature using Syncfusion’s React Rich Text Editor.
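A sketch along these lines shows the idea. The MentionComponent properties (dataSource, fields, itemTemplate, displayTemplate, suggestionCount, popupWidth, allowSpaces) are the ones named in this post; the employee data, target selector, and template markup are illustrative.

```javascript
// Sample data source: employee records with names and emails.
const employees = [
  { name: 'Maria Smith', email: 'maria@example.com' },
  { name: 'Mark Johnson', email: 'mark@example.com' },
  { name: 'Anita Rao', email: 'anita@example.com' }
];

// Pure helper mirroring what the Mention popup does internally:
// filter the data source by the text typed after '@'.
function filterMentions(data, query) {
  const q = query.toLowerCase();
  return data.filter((item) => item.name.toLowerCase().startsWith(q));
}

// Sketch of the component wiring (JSX shown as a comment; the target id
// and template markup are illustrative):
//
// <MentionComponent
//   target="#rte_rte-edit-view"
//   dataSource={employees}
//   fields={{ text: 'name' }}
//   suggestionCount={8}
//   popupWidth="250px"
//   allowSpaces={true}
//   itemTemplate={(data) => (
//     <span>{data.name} - {data.email}</span>
//   )}
//   displayTemplate={(data) => (
//     <span className="mention-chip">@{data.name}</span>
//   )}
// />
```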
itemTemplate customizes the suggestion list to show both name and email.
Example: In a team messaging app, typing @Maria displays a suggestion list with Maria Smith - maria@example.com, ensuring accurate tagging.
Refer to the following image.
Mention suggestions are displayed after typing “@”
Benefits of Mentions
Collaboration: Simplifies tagging team members, improving communication in collaborative tools.
Accuracy: Selecting from a predefined list reduces typing errors and ensures correct tagging.
Enhanced UI: Customizable suggestion lists with images or status indicators improve the visual experience.
Note: Explore the Mentions demo and documentation for detailed steps on implementing and customizing this feature.
Real-world applications
The combination of Smart Suggestions and Mentions makes the Rich Text Editor ideal for:
Blogging platforms: Use Smart Suggestions to format posts quickly and Mentions to tag contributors.
Collaboration tools: Tag team members in comments or notes for seamless communication.
Support ticket systems: Assign tasks with Mentions and insert predefined responses with Smart Suggestions.
Frequently Asked Questions
What is the difference between Smart Suggestions (Slash Menu) and Mentions?
Smart Suggestions are triggered by typing / and help with formatting actions like adding headings, lists, or inserting templates. Mentions are triggered by typing @ and are used to tag people, entities, or items from a data source within the editor.
Can I create my own custom Smart Suggestion (Slash Menu) commands?
Yes. You can fully customize the Slash Menu by adding your own items with text, icons, descriptions, and actions. Using the slashMenuSettings.items property and handling the slashMenuItemSelect event, you can insert templates, dynamic HTML, signatures, or any custom content.
Does the Mentions feature work with dynamic data from APIs?
Absolutely. Mentions can use any data source: static arrays, remote data, REST APIs, or databases. Bind the dataSource of the MentionComponent to your dynamic data and map fields like text, value, or email using the fields property.
Can I customize how Mention items appear in the suggestion list?
Yes. Mentions support multiple presentation options: a custom itemTemplate controls how items appear in the suggestion list, and a custom displayTemplate controls how selected mentions appear inside the editor. You can include profile images, roles, email IDs, statuses, or any custom UI element.
Thanks for reading! The Smart Suggestions and Mentions features in Syncfusion Rich Text Editor transform content creation by making it faster, more intuitive, and collaborative.
Smart Suggestions reduce clicks with quick formatting commands.
Mentions ensure accurate, structured tagging in collaborative environments.
Both features are highly customizable, flexible, and ready for real-world applications. Try them out to elevate your content creation experience today!
If you’re a Syncfusion user, you can download the setup from the license and downloads page. Otherwise, you can download a free 30-day trial.
Today’s applications require robust security to ensure your application’s sensitive and confidential information is not compromised. This is exactly where access tokens and refresh tokens come in.
Typically, these tokens are generated based on the JWT (JSON Web Token) open standard. JWTs should be issued with a short expiry time – the shorter the lifetime, the smaller the window of exposure if a token is stolen. There must therefore be a way to refresh these tokens, re-authenticate the user, and generate new JWTs so the application can be used without interruption.
This article explains JWT-based authentication, access and refresh tokens, and how you can implement them in an ASP.NET Core application.
What do you need to use refresh tokens in ASP.NET Core?
Tokens are digitally encoded signatures that are used to authenticate and authorize access to protected resources in an application. JWT (JSON Web Token) is an open standard commonly used for exchanging information between two parties in a secure manner. Typically, a JWT token is used in ASP.NET Core applications to authenticate users and, if the authentication is successful, provide them access to protected resources in the application.
To understand how refresh tokens operate, it’s imperative that you have a thorough knowledge of how JWTs work. Because JWTs are digitally signed, the information they carry is trustworthy and verifiable. If you want to sign a JWT, you can either use a secret key (by leveraging the HMAC algorithm) or a public/private key pair (RSA or ECDSA).
What are access tokens?
An access token is a digital (cryptographic) key that provides secure access to API endpoints. A token-based authentication system uses access tokens to allow an application to access APIs on the server. After authentication with valid credentials is successful, access tokens are issued to the user.
The tokens are then passed as ‘bearer’ tokens in the request header while a user requests data from the server. As long as the token is valid, the server understands that the bearer is authorized to access the resource.
Since access tokens cannot be used for an extended period of time, you should leverage refresh tokens to obtain new access tokens without requiring the user to enter their credentials again. This is why most applications use refresh tokens to renew access to protected resources by reissuing an access token to the user.
What are refresh tokens? Why are they needed?
Since access tokens expire after a certain amount of time, refresh tokens are used to obtain new access tokens after the original has expired. This allows users to remain authenticated without having to log in to the application each time the access token expires – effectively, they are a ‘renewal’ mechanism.
Here are the benefits of refresh tokens at a glance:
Extended access: Refresh tokens allow you to access APIs and applications for prolonged periods without re-logins even after access tokens have expired.
Enhanced security: Because access tokens expire quickly, the damage from a stolen access token is limited; the longer-lived refresh token can be stored more securely and revoked if it’s compromised.
Improved user experience: The use of refresh tokens makes it easier for users to interact with apps without the need for re-entering the credentials.
How do refresh tokens work?
Here’s a simplified explanation of how refresh tokens work:
1. As a first step, the client sends the login credentials to the authentication component.
2. As soon as the user logs into the application, the credentials are sent to the authentication server for validation.
3. Assuming the authentication process completes successfully, the authentication component generates two tokens, i.e., an access token and a refresh token, and sends them to the client application.
4. From now on, the client application takes advantage of the access token to gain access to protected resources of the server application, i.e., the APIs or services.
5. The access token is verified and, if it’s valid, access to the protected resource is granted.
6. Steps 4 and 5 are repeated until the access token is no longer valid, i.e., after the access token expires.
7. Upon expiry of the access token, the client application requests a new access token from the server application using the refresh token.
8. The authentication component then generates two new tokens, i.e., an access token and a refresh token.
New Access Token and Refresh Token generated
9. Steps 4 to 8 are repeated until the refresh token expires.
10. Once the refresh token has expired, the client must be re-authenticated (i.e., the user logs in again), after which the authentication server generates a fresh pair of access and refresh tokens.
How to implement refresh tokens in ASP.NET Core: getting started
In this section, we’ll examine how we can implement refresh tokens in an ASP.NET Core application. We’ll build an ASP.NET Core Web API application to demonstrate how it all works and test the API endpoints using Postman.
In this example, we’ll use the following files:
LoginModel (This model is used to store user credentials to login to the application)
RegisterModel (This model stores user data required to register a new user)
TokenModel (This model contains the access and refresh token and is used to send these tokens in the response)
ApplicationUser (This class extends the functionality of the IdentityUser class of the ASP.NET Core Identity Framework)
ApplicationDbContext (This represents the DbContext used to interact with the underlying database)
MessageCode (This enum contains a list of message codes.)
MessageProvider (This record type contains a list of notification and error messages.)
JwtOptions (This type is used to read configuration data.)
Response (This represents the custom response format we’ll use for sending formatted response out of the controller action methods.)
IAuthenticationService (This interface defines the contract implemented by the AuthenticationService class.)
AuthenticationService (This class represents the Authentication Service that wraps all logic for registering a new user, logging in an existing user, refreshing tokens, etc.)
AuthenticationController (This represents the API that contains action methods to register a new user, login an existing user, refresh tokens, etc. It calls the methods of the AuthenticationService class to perform each of these operations.)
How to implement refresh tokens in ASP.NET Core: step-by-step guide
To build the application discussed in this article, follow these steps:
Create a new ASP.NET Core application
Install the NuGet packages
Create the models
Create the data context
Register the data context
Create the repositories
Add services to the container
How to create a new ASP.NET Core web API application
To create a new ASP.NET Core Web API project, run the following commands at the command prompt:
dotnet new sln --name RefreshTokenDemo
dotnet new webapi -f net10.0 --no-https --use-controllers --name RefreshTokenDemo
dotnet sln RefreshTokenDemo.sln add RefreshTokenDemo/RefreshTokenDemo.csproj
Install the NuGet package(s)
In this example, you’ll take advantage of JWT tokens for implementing authentication. You can use the Microsoft.AspNetCore.Authentication.JwtBearer NuGet package to work with JWT tokens in ASP.NET Core applications; this can be installed via the NuGet Package Manager or NuGet Package Manager Console.
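If you prefer the .NET CLI, the package can be added with a single command (run from the solution folder; the project path matches the one created above):

```shell
dotnet add RefreshTokenDemo/RefreshTokenDemo.csproj package Microsoft.AspNetCore.Authentication.JwtBearer
```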
Create the models
Create three record types – LoginModel, RegisterModel and TokenModel – as shown in the following code listing:
public record LoginModel
{
public string Username { get; set; }
public string Password { get; set; }
}
public record RegisterModel
{
public string Username { get; set; }
public string Email { get; set; }
public string Password { get; set; }
}
public record TokenModel
{
public string? AccessToken { get; set; }
public string? RefreshToken { get; set; }
}
While the LoginModel and RegisterModel types will be used to store login and registration data for the user, the TokenModel will be used to store the access and refresh tokens. Note the usage of the record type in the preceding code example.
In C#, a record is a class (or struct) primarily designed to store data when working with immutable data models. You can use a record type in place of a class or a struct when you want to create a data model with value-based equality and define a type that comprises immutable objects.
Next, create a new class named ApplicationUser. This extends the IdentityUser class to add custom properties to the default ASP.NET Core IdentityUser class:
using Microsoft.AspNetCore.Identity;
public class ApplicationUser : IdentityUser
{
public string? RefreshToken { get; set; }
public DateTime RefreshTokenExpiryTime { get; set; }
}
Create the MessageCode enum
Create an enum named MessageCode. This will contain the message codes (as integer constants) we’ll use in this example:
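The listing below reconstructs the enum from the message codes used in the MessageProvider switch that follows; LoginSuccess = 0 matches the text, while the remaining values simply follow in sequence (an assumption):

```csharp
public enum MessageCode
{
    LoginSuccess = 0,
    InvalidCredentials,
    UserAlreadyExists,
    UserCreationFailed,
    UserCreatedSuccessfully,
    InvalidRequest,
    InvalidTokenPair,
    RefreshTokenSuccess,
    UnexpectedError
}
```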
Next, create a record type called MessageProvider. This will be used to return a text message based on the value of the MessageCode enum as a parameter. Hence, if the value of the parameter is LoginSuccess (or integer value 0), the text “User logged in successfully.” will be returned:
public record MessageProvider
{
public static string GetMessage(MessageCode code)
{
switch (code)
{
case MessageCode.LoginSuccess:
return "User logged in successfully.";
case MessageCode.InvalidCredentials:
return "Invalid credentials.";
case MessageCode.UserAlreadyExists:
return "User already exists.";
case MessageCode.UserCreationFailed:
return "User creation failed.";
case MessageCode.UserCreatedSuccessfully:
return "User created successfully.";
case MessageCode.InvalidRequest:
return "Invalid request.";
case MessageCode.InvalidTokenPair:
return "Invalid access token or refresh token.";
case MessageCode.RefreshTokenSuccess:
return "Token refreshed successfully.";
case MessageCode.UnexpectedError:
return "An unexpected error occurred.";
default:
throw new ArgumentOutOfRangeException
("Invalid message code.");
}
}
}
Create the response type
In this example, we’ll use a custom response record type that can be used to send out responses from the controller in a pre-defined custom format. Create a new record type called Response and replace the auto-generated code with:
public record Response<T>
{
public string? Message { get; set; }
public T? Data { get; set; }
public HttpStatusCode StatusCode { get; set; }
public static Response<T> Create(
HttpStatusCode statusCode,
T? data = default,
MessageCode? messageCode = null)
{
return new Response<T>
{
StatusCode = statusCode,
Data = data,
Message = messageCode.HasValue
?
MessageProvider.GetMessage(messageCode.Value)
: null
};
}
}
The Response record type shown here is a generic wrapper. It contains a message, an HTTP status code, and an optional data payload, which will carry the controller-generated access and refresh tokens where applicable.
Create the JWT section in the configuration file
Create a new section in the appsettings.json file. This is to define the necessary security parameters for validating and generating JWT tokens in your ASP.NET Core API.
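A sketch of that section follows. Only SecretKey is referenced directly by the code in this article; the remaining keys (issuer, audience, and token lifetimes) are typical entries and are illustrative:

```json
{
  "JWT": {
    "SecretKey": "REPLACE-WITH-A-LONG-RANDOM-SECRET-AT-LEAST-32-CHARS",
    "ValidIssuer": "https://localhost:5000",
    "ValidAudience": "https://localhost:5000",
    "TokenValidityInMinutes": 15,
    "RefreshTokenValidityInDays": 7
  }
}
```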
Create the data context
Now that the models have been created, you can create the data context class for interacting with the underlying database. In Entity Framework Core, the data context acts as the bridge of communication between your application and the underlying database. It represents a session of connectivity with the database, enabling you to execute database operations without having to write raw SQL queries.
In this example, the data context class is named ApplicationDbContext. It extends the IdentityDbContext of the ASP.NET Core Identity Framework:
using Microsoft.AspNetCore.Identity.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore;
public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
{
public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options) : base(options)
{ }
protected override void OnModelCreating(ModelBuilder builder)
{
base.OnModelCreating(builder);
}
}
Create the authentication service
The AuthenticationService class encapsulates the process of token creation, token validation, and token refresh logic in one place. It implements the IAuthenticationService interface:
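A minimal version of the interface, derived from the signatures of the three service methods shown later in this section, looks like this:

```csharp
public interface IAuthenticationService
{
    Task<Response<object>> LoginAsync(LoginRequest request,
        CancellationToken cancellationToken = default);
    Task<Response<object>> RegisterAsync(RegisterRequest request,
        CancellationToken cancellationToken = default);
    Task<Response<object>> RefreshTokensAsync(RefreshTokenRequest request,
        CancellationToken cancellationToken = default);
}
```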
The AuthenticationService class uses constructor injection to obtain an instance of the UserManager class (part of the ASP.NET Core Identity Framework) and the JwtOptions record, which reads the required configuration data:
public sealed class AuthenticationService : IAuthenticationService
{
private readonly UserManager<ApplicationUser> _userManager;
private readonly JwtOptions _jwtOptions;
public AuthenticationService(
UserManager<ApplicationUser> userManager,
IOptions<JwtOptions> jwtOptions)
{
_jwtOptions = jwtOptions.Value ??
throw new ArgumentNullException(nameof(jwtOptions));
_userManager = userManager ??
throw new ArgumentNullException(nameof(userManager));
if (string.IsNullOrWhiteSpace(_jwtOptions.SecretKey))
{
throw new InvalidOperationException
("The Secret Key is not configured.");
}
}
}
The AuthenticationService class contains three async methods: LoginAsync, RegisterAsync and RefreshTokensAsync. Each of these methods is called from the controller class. The source code of these three methods is:
public async Task<Response<object>> LoginAsync(LoginRequest request,
CancellationToken cancellationToken = default)
{
var user = await _userManager.FindByNameAsync(request.Username);
if (user == null || !await _userManager.CheckPasswordAsync(user, request.Password))
{
return Response<object>.Create(
HttpStatusCode.BadRequest,
null,
MessageCode.InvalidCredentials);
}
var tokens = await GenerateTokensAsync(user, cancellationToken);
return Response<object>.Create(
HttpStatusCode.OK,
new { tokens.AccessToken, tokens.RefreshToken },
MessageCode.LoginSuccess);
}
public async Task<Response<object>> RegisterAsync(RegisterRequest request,
CancellationToken cancellationToken = default)
{
var existingUser = await _userManager.FindByNameAsync(request.Username);
if (existingUser != null)
{
return Response<object>.Create(
HttpStatusCode.BadRequest,
null,
MessageCode.UserAlreadyExists);
}
var user = new ApplicationUser
{
Email = request.Email,
SecurityStamp = Guid.NewGuid().ToString(),
UserName = request.Username
};
var result = await _userManager.CreateAsync(user, request.Password);
if (!result.Succeeded)
{
return Response<object>.Create(
HttpStatusCode.BadRequest,
null,
MessageCode.UserCreationFailed);
}
return Response<object>.Create(
HttpStatusCode.OK,
null,
MessageCode.UserCreatedSuccessfully);
}
public async Task<Response<object>> RefreshTokensAsync(RefreshTokenRequest request,
CancellationToken cancellationToken = default)
{
var principal = GetPrincipalFromExpiredToken(request.AccessToken ?? string.Empty);
var username = principal.Identity?.Name;
var user = await _userManager.Users
.FirstOrDefaultAsync(
u => u.UserName == username && u.RefreshToken == request.RefreshToken,
cancellationToken);
if (user == null || user.RefreshTokenExpiryTime <= DateTime.UtcNow)
{
return Response<object>.Create(
HttpStatusCode.BadRequest,
null,
MessageCode.InvalidTokenPair);
}
var tokens = await GenerateTokensAsync(user, cancellationToken);
return Response<object>.Create(
HttpStatusCode.OK,
new { tokens.AccessToken, tokens.RefreshToken },
MessageCode.RefreshTokenSuccess);
}
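The service methods above also call two private helpers, GenerateTokensAsync and GetPrincipalFromExpiredToken. A typical implementation is sketched below; this is an assumption, not the repository's exact code, and the 15-minute and 7-day lifetimes are illustrative values.

```csharp
// Assumed sketch of the private helpers called by the service methods above.
// Requires: System.IdentityModel.Tokens.Jwt, Microsoft.IdentityModel.Tokens,
// System.Security.Claims, System.Security.Cryptography, System.Text.
private async Task<TokenModel> GenerateTokensAsync(
    ApplicationUser user, CancellationToken cancellationToken)
{
    var claims = new List<Claim>
    {
        new Claim(ClaimTypes.Name, user.UserName!),
        new Claim(JwtRegisteredClaimNames.Jti, Guid.NewGuid().ToString())
    };

    var key = new SymmetricSecurityKey(
        Encoding.UTF8.GetBytes(_jwtOptions.SecretKey));

    // Short-lived access token (15 minutes is an assumed value).
    var accessToken = new JwtSecurityToken(
        claims: claims,
        expires: DateTime.UtcNow.AddMinutes(15),
        signingCredentials: new SigningCredentials(
            key, SecurityAlgorithms.HmacSha256));

    // The refresh token is an opaque random value stored on the user row.
    var refreshToken = Convert.ToBase64String(
        RandomNumberGenerator.GetBytes(64));
    user.RefreshToken = refreshToken;
    user.RefreshTokenExpiryTime = DateTime.UtcNow.AddDays(7); // assumed lifetime
    await _userManager.UpdateAsync(user);

    return new TokenModel
    {
        AccessToken = new JwtSecurityTokenHandler().WriteToken(accessToken),
        RefreshToken = refreshToken
    };
}

// Validates an expired access token's signature while deliberately
// ignoring its lifetime, so the user's identity can be recovered
// during the refresh operation.
private ClaimsPrincipal GetPrincipalFromExpiredToken(string token)
{
    var validationParameters = new TokenValidationParameters
    {
        ValidateIssuer = false,
        ValidateAudience = false,
        ValidateLifetime = false, // the token is expected to be expired
        IssuerSigningKey = new SymmetricSecurityKey(
            Encoding.UTF8.GetBytes(_jwtOptions.SecretKey))
    };
    return new JwtSecurityTokenHandler().ValidateToken(
        token, validationParameters, out _);
}
```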
The complete source code for the AuthenticationService class is available in the GitHub repository.
How to create migrations using Entity Framework (EF) Core
In Entity Framework (EF) Core, migrations enable schema versioning for your database. You can either create or update the schema from your application using C# models (such as the ApplicationUser model in this example).
Once the migration has been executed successfully and you’ve applied the changes to the database, a new database with the name you specified in the configuration file – along with the associated identity database tables such as AspNetUsers and AspNetRoles – will be created automatically.
To create a migration in EF Core, run the Add-Migration command in the Package Manager Console window:
Add-Migration RefreshTokenDemoMigration
You can also create a migration by running the following command at the .NET CLI:
dotnet ef migrations add RefreshTokenDemoMigration
Once you run the migration, a new folder called Migrations will be created in the project. To apply the migration you created against the underlying database, run the Update-Database command at the Package Manager Console window:
Update-Database
Once you’ve executed the command, the changes will be applied against the underlying database. A new database will be created, as well as the tables created per your model design. The database will be named whatever you specified in the connection string.
Create the authentication controller
The AuthenticationController contains action methods that can be called to register a new user, log in an existing user, and regenerate both access and refresh tokens once the former has expired. The actual logic for each of these actions is wrapped inside the AuthenticationService class to ensure your controller is lean, clean, and maintainable.
The following code shows the AuthenticationController class and its action methods. Note how the instance of type IAuthenticationService is injected using constructor injection:
[Route("api/[controller]")]
[ApiController]
public class AuthenticationController : ControllerBase
{
private readonly IAuthenticationService _authenticationService;
public AuthenticationController(IAuthenticationService authService)
{
_authenticationService = authService;
}
[HttpPost("login")]
public async Task<IActionResult> Login([FromBody] LoginRequest request)
{
if (!ModelState.IsValid)
{
var response = Response<object>.Create(
System.Net.HttpStatusCode.BadRequest,
null,
MessageCode.InvalidCredentials);
return BadRequest(response);
}
var responseFromService = await _authenticationService.LoginAsync(request);
if (responseFromService != null)
{
if (responseFromService.StatusCode == System.Net.HttpStatusCode.BadRequest)
{
return BadRequest(responseFromService);
}
}
return Ok(responseFromService);
}
[HttpPost("register")]
public async Task<IActionResult> Register([FromBody] RegisterRequest request)
{
if (!ModelState.IsValid)
{
var response = Response<object>.Create(
System.Net.HttpStatusCode.BadRequest,
null,
MessageCode.UserCreationFailed);
return BadRequest(response);
}
var responseFromService = await _authenticationService.RegisterAsync(request);
if (responseFromService != null)
{
if (responseFromService.StatusCode == System.Net.HttpStatusCode.BadRequest)
{
return BadRequest(responseFromService);
}
}
return Ok(responseFromService);
}
[HttpPost("refresh-token")]
public async Task<IActionResult> RefreshToken([FromBody] RefreshTokenRequest request)
{
var responseFromService = await _authenticationService.RefreshTokensAsync(request);
if (responseFromService != null)
{
if (responseFromService.StatusCode == System.Net.HttpStatusCode.BadRequest)
{
return BadRequest(responseFromService);
}
}
return Ok(responseFromService);
}
}
What is the Program.cs file?
The Program.cs file serves as the entry point for your ASP.NET Core application, analogous to the Main() method in a console application. This file contains code that bootstraps the web host, configures the services you need, and sets up the HTTP request processing pipeline.
For example, the following statement in the Program.cs file loads configuration data, environment variables, sets up the web host, and prepares the dependency injection container for registering the services you’ll need:
var builder = WebApplication.CreateBuilder(args);
The next section in the Program.cs file registers services with the dependency injection container. First, register the data context and the ASP.NET Core Identity system; the latter is required to provide user management capabilities in your application. Next, add an instance of type IAuthenticationService as a scoped service so that you can access it throughout the application. Finally, a Configure<JwtOptions> call takes advantage of the Options pattern to bind the “JWT” section of appsettings.json to the JwtOptions record type we created earlier.
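Putting those registrations together, a sketch of the relevant Program.cs section might look like the following (the “DefaultConnection” string name and the SQL Server provider are assumptions; the exact code is in the GitHub repository):

```csharp
// Assumed Program.cs registrations matching the types in this article;
// the "DefaultConnection" name and SQL Server provider are illustrative.
builder.Services.AddDbContext<ApplicationDbContext>(options =>
    options.UseSqlServer(
        builder.Configuration.GetConnectionString("DefaultConnection")));

// Register the ASP.NET Core Identity system for user management.
builder.Services.AddIdentity<ApplicationUser, IdentityRole>()
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders();

// Bind the "JWT" configuration section to the JwtOptions record
// via the Options pattern.
builder.Services.Configure<JwtOptions>(
    builder.Configuration.GetSection("JWT"));

// Register the authentication service as a scoped dependency.
builder.Services.AddScoped<IAuthenticationService, AuthenticationService>();
```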
The complete source code of the Program.cs file is available in the GitHub repository for your reference.
How to execute the application using Postman
In this example, we’ll use Postman to test the API endpoints. Postman is a powerful, versatile API testing platform that lets you create, test, document, and manage your APIs. With it, you can send HTTP requests using verbs such as GET, POST, PUT, PATCH, and DELETE, and work with a wide variety of data formats. You can also use Postman to handle authentication, create automated test scripts, and even create mock servers for testing purposes.
When the application is launched, you’ll be able to invoke the API endpoints from Postman. The first thing you should do is register a new user by invoking the api/Authentication/Register endpoint and specifying the new user’s username, password, and email address in the request body:
New user registered successfully
Once a new user has been registered, you should be able to invoke the api/Authentication/Login endpoint to login to the application by specifying the user’s credentials in the request body. If the request is valid, an access token and a refresh token will be returned in the response:
Invoking the Login endpoint of the AuthenticationService in Postman
If you pass the access token generated here as a bearer token in the Authorization header of a request to the HTTP GET endpoint of the WeatherForecast controller, the authentication system will validate the token. If it’s valid, you’ll be able to see data returned in the response:
WeatherForecast data returned as a response
The WeatherForecast controller is created by default when you create a new ASP.NET Core Web API project in Visual Studio.
If you invoke the same endpoint after the expiry of the access token, the HTTP GET endpoint of the WeatherForecast controller will return an HTTP 401 response. This implies that the token is no longer valid, so the request has not been authenticated and the user is no longer authorized to access this endpoint.
At this point, you’ll need a valid access token to access the endpoint again. To obtain one, you should pass the access token and the refresh token generated when you invoked the api/Authentication/Login endpoint earlier:
Invoking the api/Authentication/refresh-token endpoint with the access token and refresh token in the body of the request generates new access and refresh tokens.
Final thoughts
In this article, we’ve examined the approaches you should take to implement refresh tokens to secure your APIs reliably – all while providing end users with a seamless experience.
By enabling your application to refresh tokens when they expire, many of the issues associated with traditional static tokens can be addressed, and this approach can be effectively used in a distributed application. When your application can recreate the tokens used to authenticate users, you can enforce a one-time-use policy – and even revoke tokens on-demand.
The official Azure SQL Dev’s Corner blog recently wrote about how to enable soft deletes in Azure SQL using row-level security, and it’s a nice, clean, short tutorial. I like posts like that because the feature is pretty cool and accomplishes a real business goal. It’s always tough deciding where to draw the line on how much to include in a blog post, so I forgive them for not including one vital caveat with this feature.
Row-level security can make queries go single-threaded.
This isn’t a big deal when your app is brand new, but over time, as your data gets bigger, this is a performance killer.
Setting Up the Demo
To illustrate it, I’ll copy a lot of code from their post, but I’ll use the big Stack Overflow database. After running the below code, I’m going to have two Users tables with soft deletes set up: a regular dbo.Users one with no security, and a dbo.Users_Secured one with row-level security so folks can’t see the IsDeleted = 1 rows if they don’t have permissions.
USE StackOverflow;
GO
/* The Stack database doesn't ship with soft deletes,
so we have to add an IsDeleted column to implement it.
Fortunately this is a metadata-only operation, and the
table isn't rewritten. All rows just instantly get a
0 default value. */
ALTER TABLE dbo.Users ADD IsDeleted BIT NOT NULL DEFAULT 0;
GO
/* Copy the Users table into a new Secured one: */
CREATE TABLE [dbo].[Users_Secured](
[Id] [int] IDENTITY(1,1) NOT NULL,
[AboutMe] [nvarchar](max) NULL,
[Age] [int] NULL,
[CreationDate] [datetime] NOT NULL,
[DisplayName] [nvarchar](40) NOT NULL,
[DownVotes] [int] NOT NULL,
[EmailHash] [nvarchar](40) NULL,
[LastAccessDate] [datetime] NOT NULL,
[Location] [nvarchar](100) NULL,
[Reputation] [int] NOT NULL,
[UpVotes] [int] NOT NULL,
[Views] [int] NOT NULL,
[WebsiteUrl] [nvarchar](200) NULL,
[AccountId] [int] NULL,
[IsDeleted] [bit] NOT NULL,
CONSTRAINT [PK_Users_Secured_Id] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF,
DATA_COMPRESSION = PAGE) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
ALTER TABLE [dbo].[Users_Secured] ADD DEFAULT ((0)) FOR [IsDeleted]
GO
SET IDENTITY_INSERT dbo.Users_Secured ON;
GO
INSERT INTO dbo.Users_Secured (Id, AboutMe, Age, CreationDate,
DisplayName, DownVotes, EmailHash, LastAccessDate,
Location, Reputation, UpVotes, Views, WebsiteUrl,
AccountId, IsDeleted)
SELECT Id, AboutMe, Age, CreationDate,
DisplayName, DownVotes, EmailHash, LastAccessDate,
Location, Reputation, UpVotes, Views, WebsiteUrl,
AccountId, IsDeleted
FROM dbo.Users;
GO
SET IDENTITY_INSERT dbo.Users_Secured OFF;
GO
/* DropIndexes is a helper stored procedure (from the demo database tooling)
   that clears out nonclustered indexes so we start from a clean slate: */
DropIndexes @TableName = 'Users';
GO
CREATE LOGIN TodoDbUser WITH PASSWORD = 'Long@12345';
GO
CREATE USER TodoDbUser FOR LOGIN TodoDbUser;
GO
GRANT SELECT, INSERT, UPDATE ON dbo.Users TO TodoDbUser;
GO
/* TodoDbUser also needs access to the secured table, since that's
   the one the security policy filters: */
GRANT SELECT, INSERT, UPDATE ON dbo.Users_Secured TO TodoDbUser;
GO
CREATE FUNCTION dbo.fn_SoftDeletePredicate(@IsDeleted BIT)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
SELECT 1 AS fn_result
WHERE
(
DATABASE_PRINCIPAL_ID() = DATABASE_PRINCIPAL_ID('TodoDbUser')
AND @IsDeleted = 0
)
OR DATABASE_PRINCIPAL_ID() <> DATABASE_PRINCIPAL_ID('TodoDbUser');
GO
CREATE SECURITY POLICY dbo.Users_Secured_SoftDeleteFilterPolicy
ADD FILTER PREDICATE dbo.fn_SoftDeletePredicate(IsDeleted)
ON dbo.Users_Secured
WITH (STATE = ON);
GO
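Before we get into performance, it's worth proving the filter actually works. This quick check isn't in the original post, and it assumes TodoDbUser has SELECT permission on dbo.Users_Secured:

```sql
/* Soft-delete one row, then compare what each principal sees.
   (A sketch to verify the policy, not part of the original post.) */
UPDATE dbo.Users_Secured SET IsDeleted = 1 WHERE Id = 26837;
GO
/* As a sysadmin, the predicate's OR branch lets us see everything: */
SELECT COUNT(*) AS VisibleRows FROM dbo.Users_Secured;

/* Impersonate the restricted user: the IsDeleted = 1 row vanishes. */
EXECUTE AS USER = 'TodoDbUser';
SELECT COUNT(*) AS VisibleRows FROM dbo.Users_Secured;
REVERT;
GO
/* Undo the soft delete so the later demos aren't affected: */
UPDATE dbo.Users_Secured SET IsDeleted = 0 WHERE Id = 26837;
GO
```

The second COUNT should come back one row short of the first, because the filter predicate silently strips the soft-deleted row from TodoDbUser's result set.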
Now let’s start querying the two tables to see the performance problem.
Querying by the Primary Key: Still Fast
The Azure post kept things simple by not using indexes, so we’ll start that way too. I’ll turn on actual execution plans, grab a single row from each table, and compare the plans:
SELECT * FROM dbo.Users
WHERE Id = 26837;
SELECT * FROM dbo.Users_Secured
WHERE Id = 26837;
If all you’re doing is getting one row, and you know the Id of the row you’re looking for, you’re fine. SQL Server dives into that one row, fetches it for you, and doesn’t need multiple CPU cores to accomplish the goal. Their actual execution plans look identical at first glance:
If you hover your mouse over the Users_Secured table operation, you’ll notice an additional predicate that we didn’t ask for: row-level security is automatically checking the IsDeleted column for us:
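Conceptually, the filter predicate behaves as if SQL Server had rewritten our query to join against the inline table-valued function on every row it touches. This is a mental model of what the engine does, not the literal text it executes:

```sql
/* A mental model of the query after row-level security kicks in --
   the engine applies the predicate function to each candidate row: */
SELECT u.*
FROM dbo.Users_Secured AS u
CROSS APPLY dbo.fn_SoftDeletePredicate(u.IsDeleted) AS p
WHERE u.Id = 26837;
```

That hidden extra work is cheap for a single-row seek, but it's exactly the piece that bites us in the next section.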
Querying Without Indexes: Starts to Get Slower
Let’s find the top-ranked people in Las Vegas:
SELECT TOP 101 *
FROM dbo.Users
WHERE Location = N'Las Vegas, NV'
ORDER BY Reputation DESC;
SELECT TOP 101 *
FROM dbo.Users_Secured
WHERE Location = N'Las Vegas, NV'
ORDER BY Reputation DESC;
Their actual execution plans show the top query at about 1.4 seconds for the unsecured table, and the bottom query at about 3 seconds for the secured table:
The reason isn’t security per se: the reason is that the row-level security function inhibits parallelism. The top query plan went parallel, and the bottom query did not. If you click on the secured table’s SELECT icon, the plan’s properties will explain that the row-level security function can’t be parallelized:
That’s not good.
When you’re using the database’s built-in row-level security functions, it’s more important than ever to do a good job of indexing. Thankfully, the query plan has a missing index recommendation to help, so let’s dig into it.
Missing Index (Impact 99.6592):
CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
ON [dbo].[Users_Secured] ([Location])
INCLUDE ([AboutMe],[Age],[CreationDate],[DisplayName],[DownVotes],
[EmailHash],[LastAccessDate],[Reputation],[UpVotes],[Views],
[WebsiteUrl],[AccountId],[IsDeleted])
The index simply ignores the IsDeleted and Reputation columns, even though they’d both be useful to have in the key! The missing index recommendations focus strictly on the WHERE clause filters the query passed in, not on the filters SQL Server is applying behind the scenes for row-level security. Ouch.
Let’s do what a user would do: try creating the recommended index on both tables – even though the number of include columns is ridiculous – and then try again:
CREATE NONCLUSTERED INDEX Location_Includes
ON [dbo].[Users] ([Location])
INCLUDE ([AboutMe],[Age],[CreationDate],[DisplayName],[DownVotes],
[EmailHash],[LastAccessDate],[Reputation],[UpVotes],[Views],
[WebsiteUrl],[AccountId],[IsDeleted]);
GO
CREATE NONCLUSTERED INDEX Location_Includes
ON [dbo].[Users_Secured] ([Location])
INCLUDE ([AboutMe],[Age],[CreationDate],[DisplayName],[DownVotes],
[EmailHash],[LastAccessDate],[Reputation],[UpVotes],[Views],
[WebsiteUrl],[AccountId],[IsDeleted]);
GO
SELECT TOP 101 *
FROM dbo.Users
WHERE Location = N'Las Vegas, NV'
ORDER BY Reputation DESC;
SELECT TOP 101 *
FROM dbo.Users_Secured
WHERE Location = N'Las Vegas, NV'
ORDER BY Reputation DESC;
GO
Our actual execution plans are back to looking identical:
Neither of them requires parallelism: SQL Server can dive straight into Las Vegas, read all of the folks there, filter out the appropriate IsDeleted rows, and then sort the remainder, all on one CPU core, in about a millisecond. The cost is that we literally doubled the table’s size, because the missing index recommendation included every single column in the table!
A More Realistic Single-Column Index
When faced with an index recommendation that includes all of the table’s columns, most DBAs would either lop off all the includes and just use the keys, or hand-review the query and craft a better index themselves. Let’s create new indexes with only the key column Microsoft recommended, then drop the old ones:
CREATE INDEX Location ON dbo.Users(Location);
DROP INDEX Location_Includes ON dbo.Users;
CREATE INDEX Location ON dbo.Users_Secured(Location);
DROP INDEX Location_Includes ON dbo.Users_Secured;
GO
SELECT TOP 101 *
FROM dbo.Users
WHERE Location = N'Las Vegas, NV'
ORDER BY Reputation DESC;
SELECT TOP 101 *
FROM dbo.Users_Secured
WHERE Location = N'Las Vegas, NV'
ORDER BY Reputation DESC;
GO
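One more variation worth experimenting with: since row-level security filters on IsDeleted behind the scenes, we could put that column into the index key ourselves, and carry the ORDER BY column along as an include. This is my own sketch, not something the post or the missing index DMVs recommend:

```sql
/* A hand-crafted alternative (a sketch to test, not a recommendation
   from the missing index DMVs): key on both the visible filter column
   and the hidden RLS filter column, and include the sort column. */
CREATE INDEX Location_IsDeleted
    ON dbo.Users_Secured (Location, IsDeleted)
    INCLUDE (Reputation);
```

Because the query does SELECT *, SQL Server still has to do key lookups with this narrow index, so measure both variants against your own workload before committing to one.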
Summary: Single-Threaded is Bad, but Indexes Help.
The database’s built-in row-level security is a really cool (albeit underused) feature to help you accomplish business goals faster, without trying to roll your own code. Yes, it does have limitations, like inhibiting parallelism and making indexing more challenging, but don’t let that stop you from investigating it. Just know you’ll have to spend a little more time doing performance tuning down the road.
In this case, we’re indexing not to reduce reads, but to avoid doing a lot of work on a single CPU core. Our secured table still can’t go parallel, but thanks to the indexes, the penalty of row-level security disappears for this particular query.
Experienced readers will notice that there are a lot of topics I didn’t cover in this post: whether to index for the IsDeleted column, the effect of residual predicates on IsDeleted and Reputation, and how CPU and storage are affected. However, just as Microsoft left off the parallelism thing to keep their blog post tightly scoped, I gotta keep mine scoped too! This is your cue to pick up this blog post with anything you’re passionate about, and extend it to cover the topics you wanna teach today.
This year, Azure Cosmos DB Conf will feature 21 speakers from across the globe, bringing together Microsoft engineers, community leaders, architects, and developers to share how they are building modern applications with Azure Cosmos DB. Attendees will hear directly from experts using Azure Cosmos DB to power real systems—from AI agent memory architectures and retrieval-augmented generation pipelines to globally distributed event-driven microservices and cost-efficient high-scale workloads.
You can also expect talks exploring Open Source DocumentDB and Azure DocumentDB (with MongoDB compatibility), demonstrating how developers can build portable architectures that run both on-premises and in the cloud while maintaining full compatibility with the MongoDB developer ecosystem.
Still curious about what you’ll see? Below, you can watch a short recap from Azure Cosmos DB Conf 2025 to get a sense of the technical depth and real‑world focus that shape the conference.
An Expanded Azure Cosmos DB Conf, Powered by AMD
This year’s event is our biggest yet. Thanks to our partnership with AMD, Azure Cosmos DB Conf has expanded from three hours to five hours of live programming, giving us more time for deep technical sessions, product insights, and real‑world engineering stories. That includes a behind‑the‑scenes look at how Azure Cosmos DB runs at planetary scale, with Andrew Liu, Principal GPM, Azure Cosmos DB at Microsoft, walking through the internal architecture that powers request processing, replication, and high availability on AMD‑powered infrastructure across Azure datacenters. From data placement and partitioning to quorum‑based replication and the mechanics behind Request Units and serverless execution, this session trades slides for whiteboards and focuses on how the system actually works under the hood.
Keynote: Azure Cosmos DB Platform Evolution and Real‑World Learnings
Evolving the Azure Cosmos DB Platform
In the opening keynote, Kirill Gavrylyuk, Vice President of Azure Cosmos DB at Microsoft, will highlight how the platform continues to evolve to support modern production workloads. Over the past year, Azure Cosmos DB has delivered meaningful improvements across AI‑driven applications, retrieval workloads, performance, reliability, and developer productivity—including advances in vector indexing, full‑text and hybrid search, workload control, and security and backup capabilities that help teams build faster and operate with confidence at scale.
From Production Systems to Open Architectures
The keynote will also highlight how developers are applying these capabilities across the broader Azure Cosmos DB ecosystem in real production environments. Kirill will share success stories from teams using Azure Cosmos DB, Azure DocumentDB (with MongoDB compatibility), and the open source DocumentDB project, part of the Linux Foundation, to meet different architectural and operational requirements, including cloud and hybrid deployments, AI applications, real time analytics, and mission critical workloads. These examples reflect how developers choose the right option for their scenario while maintaining consistent performance characteristics, scalability, and operational reliability as systems grow in complexity and scale.
A look at some of the 25 minute sessions at Azure Cosmos DB Conf 2026
Azure Cosmos DB Conf 2026 includes sessions from developers and engineers who are building modern systems with Azure Cosmos DB today. Here are just a few of the talks you’ll see during the event.
AI, agents, and intelligent retrieval
Modern AI applications require more than a vector database—they need persistent memory, coordination between agents, and scalable retrieval systems. Several sessions at Azure Cosmos DB Conf 2026 will explore how developers are using Azure Cosmos DB to power these new AI architectures.
Farah Abdou, Lead Machine Learning Engineer, SmartServe — Cutting AI Agent Costs with Azure Cosmos DB: The Agent Memory Fabric
In this session, Farah Abdou will demonstrate how Azure Cosmos DB can serve as a unified Agent Memory Fabric for multi-agent AI systems. By combining vector search for semantic caching, Change Feed for event-driven coordination, and optimistic concurrency for conflict prevention, this architecture enables faster agent collaboration while reducing operational complexity.
Varun Kumar Kotte, Senior Machine Learning Engineer, Adobe — Production RAG on Azure Cosmos DB: Version-Aware Enterprise QA
Varun Kumar Kotte will present a production architecture behind Adobe’s AI Assistant, showing how Azure Cosmos DB supports version-aware retrieval across large enterprise documentation sets. The talk explores how RAG pipelines can maintain semantic accuracy while serving hundreds of thousands of documents with low-latency responses.
Performance, cost, and operating at scale
Running Azure Cosmos DB workloads at scale requires strong visibility into performance, partitioning strategy, and RU consumption. These sessions focus on diagnosing real-world issues and optimizing systems as workloads grow.
Patrick Oguaju, Software Developer, Next — Designing Cost-Efficient, High-Scale Systems with Azure Cosmos DB
Patrick Oguaju will walk through how a production system was redesigned to dramatically reduce Azure Cosmos DB costs without sacrificing performance or reliability. The session covers practical data modeling patterns, indexing tradeoffs, and observability techniques used to identify hidden cost drivers. You’ll see how small design decisions—such as partitioning strategy and query patterns—can have a significant impact on throughput consumption. By the end of the talk, you’ll have a clearer understanding of how to diagnose cost issues and apply proven techniques to keep your own workloads efficient at scale.
Anurag Dutt, Multi-Cloud Engineer — From Rising RU Costs to Stable Performance: A Practical Cosmos DB Case Study
Anurag Dutt will examine a real high-volume workload and demonstrate how issues like hot partitions, cross-partition queries, and inefficient indexing were identified and corrected. The session walks through the decision-making process behind each optimization and the resulting improvements in cost and latency.
Distributed systems and event-driven architecture
Azure Cosmos DB often acts as the operational backbone for distributed systems that coordinate work across services and process real-time events.
Eric Boyd, Founder & CEO, responsiveX — Distributed Locks, Sagas, and Coordination with Cosmos DB
Eric Boyd will explore how Azure Cosmos DB can be used as a coordination layer for distributed workflows. The session covers lock and lease patterns, saga orchestration, and strategies for handling retries, contention, and race conditions across multiple regions. He also shares practical guidance on when to centralize coordination versus when to push responsibility to services themselves. Attendees will leave with concrete patterns they can apply to build more resilient, globally distributed systems without introducing unnecessary coupling or operational overhead.
Tural Suleymani, Engineering Manager, VOE Consulting — Designing High-Scale Event-Driven Microservices with Azure Cosmos DB
Tural Suleymani will share lessons learned from building event-driven microservices using Azure Cosmos DB Change Feed as the backbone for domain events. The session demonstrates patterns for building scalable, loosely coupled systems while maintaining observability and resilience.
Developer productivity and modern development workflows
Developer tooling and workflows play a critical role in building reliable distributed systems. These sessions explore how modern development environments and secure architectures help teams move faster.
Sajeetharan Sinnathurai, Principal Program Manager, Microsoft — Setting Up Your Azure Cosmos DB Development Environment (and Supercharging It with AI)
Sajeetharan Sinnathurai will show how to configure a complete Azure Cosmos DB development workflow using the emulator, VS Code tooling, testing strategies, and AI coding assistants. The session demonstrates how developers can accelerate development while maintaining best practices for Cosmos DB applications.
Pamela Fox, Principal Cloud Advocate (Python), Microsoft — Know Your User: Identity-Aware MCP Servers with Cosmos DB
Pamela Fox will demonstrate how to build a Python MCP server that authenticates users with Microsoft Entra ID and securely stores per-user data in Azure Cosmos DB. The session highlights identity-first architectures for AI systems and modern cloud applications.
Migration and hybrid architectures
Modernizing applications often requires bridging existing systems with new cloud-native platforms. Azure Cosmos DB Conf 2026 includes sessions exploring migration strategies and hybrid deployment models.
Sergiy Smyrnov, Senior Specialist, Data & AI Global Black Belt, Microsoft — From JOINs to JSON: Migrating a Real-World ASP.NET App to Cosmos DB with GitHub Copilot
Sergiy Smyrnov will demonstrate how an AI-assisted workflow can analyze a relational schema, generate a migration plan, and convert an ASP.NET application to run on Azure Cosmos DB for NoSQL. Using the classic AdventureWorks database and a real ASP.NET application, the session walks through the full migration journey—from relational modeling and schema analysis to provisioning Azure Cosmos DB infrastructure and rewriting the application data layer.
Khelan Modi, Program Manager, Microsoft — One Codebase, Any Cloud: Building a Retail Database with OSS and Azure
Khelan Modi will demonstrate how developers can build applications using MongoDB-compatible APIs that run both on-premises and in the cloud. The session walks through a retail application architecture—including product catalog, inventory, orders, and vector-powered recommendations—built once and deployed using open-source DocumentDB and Azure DocumentDB. Khelan will show how the same drivers, queries, and application code can run across both environments without modification, enabling portable architectures while still benefiting from Azure’s managed capabilities.
Explore the full lineup
These sessions are just a sample of what you’ll find at Azure Cosmos DB Conf 2026. With 21 speakers from around the world, the conference offers a broad look at how developers are using Azure Cosmos DB, open-source DocumentDB, and Azure DocumentDB to build intelligent, distributed, and modern applications.
Visit the Azure Cosmos DB Conf 2026 website for the full speaker list, session details, conference news, and the latest event updates.
Be sure to register now for the live event on April 28, 2026. Registering will help keep you up to speed with email updates from the Azure Cosmos DB team, including speaker announcements, session updates, and other conference news.