If you’re building apps with agents such as Codex or Claude, this article is for you – I’m going to show you how to use agent skills to make them smarter, so they write better code, faster.
We’ll look at how to install and use these agent skills in Xcode, Claude Code, Codex, Gemini, and more. We’ll look at where to find great agent skills for app development using my new Swift Agent Skills GitHub repository, and how to evaluate which agent skills will work well. And I’m also going to show you how agent skills are different from AGENTS files, and why you might want both.
The result will be that using AI to write, review, or test code will be so much better, whether you’re using SwiftUI, Swift concurrency, SwiftData, Swift Testing, or something else.
If you missed my previous video about how to build apps with AI, I suggest you start there – it walks you through using Xcode, Claude Code, Codex, and more, with lots of tips for getting the most from Xcode, configuring your terminal, getting the most from ChatGPT, and more.
If you’d prefer to watch this as a video, you can find it below. Alternatively, scroll past the video to read the article instead.
Still here? Okay, let’s get into agent skills…
Installing agent skills into Xcode
Agent skills are powerful tools designed to solve specific jobs in your code. I’ve written agent skills to improve your SwiftUI code, to make the most of Swift Testing, to optimize your Swift concurrency code, and even to make sure you’re using SwiftData effectively – each skill does one thing, and does it well.
Before we get into how agent skills work, I want you to see them in action because they are really transformative.
If you’re installing skills from the command line for tools like Claude Code or Codex, it’s easy. But when using Xcode, it might take a few more steps.
A biweekly livestream about what it's actually like to build software with AI. Real developer stories, honest tool assessments, agentic workflow deep dives. Every other Thursday at 11 AM ET — live on YouTube, X, and LinkedIn.
If you've spent any time building software in the last couple of years, you've felt the shift. AI coding assistants, agentic workflows, LLM-powered UI — it's not a distant future anymore. It's your pull request queue, your design review, your sprint planning. AI is now part of everyday software development.
But most of what we read about AI in software development is either breathless hype or dismissive cynicism. What's harder to find are honest accounts from developers actually in the trenches — shipping real products, hitting real walls, figuring it out as they go.
That's exactly why we're launching the Pragmatic AI in .NET Show.
Each episode will feature developers sharing what and how they're building with AI — the wins, the surprises, and the moments where AI didn't quite do what they expected. We'll dig into the latest developer AI tools, explore agentic workflows, and have frank conversations about where this technology helps and where it still has a way to go.
No hype. No demos that only work in ideal conditions. Just developers talking honestly about what it's like to build software today.
The Developer Landscape Has Genuinely Changed
Let's be honest about where we are. A few years ago, AI in a developer's workflow mostly meant autocomplete that sometimes got lucky. Today, it's something qualitatively different.
AI can now scaffold entire app features, generate test suites, catch bugs during code review, and help developers think through architecture decisions — all before lunch. Tools like GitHub Copilot, Claude Code, Cursor, Codex, and newer agentic frameworks are becoming a real part of how many teams ship software.
But "the landscape has changed" doesn't tell the whole story. The more interesting question is: changed in what ways, for whom, and at what cost?
Here's what we're actually seeing in the .NET community:
AI tools genuinely accelerate certain kinds of work — boilerplate, CRUD operations, test generation, documentation.
They also introduce new categories of problems: hallucinated APIs, subtle logic errors that pass code review, and over-reliance on generated code that developers don't fully understand.
The developers getting the most value aren't treating AI as a replacement for judgment — they're treating it as a highly capable but fallible collaborator.
And the craft of prompting, reviewing AI output, and integrating it into a real codebase is itself a skill that takes time to develop.
None of this is a reason to stay on the sidelines. But it is a reason to be thoughtful.
The Realities of Building Software with AI
There's a version of the AI narrative that goes: describe what you want, AI builds it, and you ship. If you've tried this on anything beyond a toy project, you know it's more complicated than that.
The reality is messier and more interesting. AI can dramatically speed up parts of your workflow while introducing friction in others. It works best when developers have clarity on what is being built — AI can amplify intent, for better or worse. A vague prompt yields vague code.
"AI is a force multiplier. But it multiplies whatever developers bring to the table — clear thinking, good architecture, solid testing habits. The fundamentals still matter."
Agentic workflows — where AI doesn't just respond to individual prompts but takes sequences of actions toward a goal — are genuinely exciting and genuinely tricky. Getting an agent to reliably navigate a real codebase, understand conventions, and make changes that don't break things downstream is an active area of work, not a solved problem.
We want to build a space where developers can talk honestly about all of this. Where's the leverage? Where are the landmines? What does it actually look like to integrate AI into a professional .NET development workflow?
What We'll Cover
Each episode of the Pragmatic AI in .NET Show will dig into:
Real developer stories — folks building actual products, not demo apps.
The latest in developer AI tools — what's new, what's worth attention, and honest assessments of what's still rough around the edges.
Agentic workflows — practical exploration of autonomous AI patterns and where they fit in a .NET context.
The meta-skills — prompting, reviewing, integrating, and knowing when not to use AI.
We're intentionally keeping the format conversational. This isn't a polished tutorial series. It's more like pulling up a chair with developers who are figuring this out alongside you.
Why This Matters for the .NET Community
The .NET ecosystem is in an interesting moment. C#/.NET and the broader Microsoft stack have always attracted developers who care about building things that work — reliably, at scale, over time. That ethos doesn't go out the window just because AI is in the picture.
If anything, it makes the conversation more important. How do you maintain code quality when a significant chunk of your codebase is AI-generated? How do you onboard new developers when your workflows have changed? How do you make good architectural decisions when AI can scaffold almost anything?
These are the conversations we want to have. And we think the .NET community — pragmatic by nature — is exactly the right place to have them.
At Uno Platform, we spend a lot of time thinking about how to make cross-platform .NET development faster and more accessible. AI tools are a big part of that picture — MCP tools that give AI "eyes and hands" for app interactivity, smarter design-to-code workflows, and AI-assisted debugging. Good tooling and good judgment work together.
Join Us On The Show
The Pragmatic AI in .NET Show kicks off this Thursday at 11 AM ET. We'd love to have you there.
Whether you're already deep in AI-powered workflows or just starting to explore what's possible, there's something in this for you. Come for the developer stories. Stay for the honest conversation about what building software actually looks like right now.
If you have a story to share — something you've built, a workflow that surprised you, a tool that changed how you work — we want to hear from you. Reach out at info@platform.uno.
TL;DR: Discover how Smart Suggestions (Slash Menu) and Mentions enhance the React Rich Text Editor’s workflow. The blog explains how slash-triggered commands improve formatting flow, how structured @ tagging strengthens accuracy, and how these features together support smoother content creation, stronger collaboration, and a more intuitive editing experience in modern applications.
Are you building a modern application that demands powerful, collaborative content tools? In today’s fast-paced digital landscape, content creation must be intuitive and efficient to meet workflow demands. The Syncfusion® Rich Text Editor makes content creation simple and efficient. Its Smart Suggestions and Mentions features improve formatting and collaboration, making it a great fit for blogs, forums, and messaging apps.
In this blog post, we’ll explore how Smart Suggestions and Mentions work, their key benefits, and share sample code to help you implement them.
Why Smart Suggestions and Mentions matter
Modern users expect:
Fast actions without hunting through toolbars.
Structured formatting with minimal effort.
Accurate tagging inside collaborative environments.
Smart Suggestions and Mentions help achieve all of this by providing context-aware menus right where users type.
Configuring Smart Suggestions (Slash Menu) in React Rich Text Editor
Smart Suggestions, also known as the Slash Menu, allow users to type / in the editor to open a quick command popup for actions such as applying headings, creating lists, or inserting media. This removes friction from formatting and makes content creation feel natural, especially for blogging and note-taking.
How Smart Suggestions work
Trigger: Type / inside the editor.
Options: A customizable list of commands (e.g., Paragraph, Headings, lists, media insertion, and more).
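Since the full code listing isn’t reproduced here, the selection logic can be sketched in plain JavaScript. The event name (slashMenuItemSelect) and the executeCommand method come from the surrounding text; the MeetingNotes template HTML is an assumption for illustration:

```javascript
// Sketch of the logic inside a slashMenuItemSelect handler.
// The MeetingNotes template below is hypothetical; in the real editor
// you would pass the returned HTML to executeCommand('insertHTML', ...).
function buildSlashCommandHtml(selectedItemText) {
  if (selectedItemText === 'MeetingNotes') {
    // Predefined snippet inserted when the custom command is chosen
    return '<h3>Meeting Notes</h3><ul><li>Attendees</li><li>Agenda</li><li>Action items</li></ul>';
  }
  // For built-in items (Paragraph, Heading 1, lists, ...), return null
  // and let the editor apply its default formatting command.
  return null;
}
```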
The slashMenuItemSelect event handler runs when a user selects an item from the Slash Menu. Inside it, we check whether the selected command is MeetingNotes and, if so, use the executeCommand method to insert a predefined HTML snippet.
Developers can extend this logic to handle other custom actions, such as inserting templates, signatures, or dynamic content.
For example, a content creator can type / and select Heading 1 to format a title, or choose MeetingNotes to insert a predefined note, streamlining their workflow as shown below.
Smart Suggestions menu displayed after typing “/” in the Rich Text Editor.
Benefits of Smart Suggestions
Here are the key benefits that make this feature efficient:
Faster formatting: Skip toolbars and format inline.
Contextual workflow: Suggestions appear exactly where users type.
Customizability: Tailor the Slash Menu to include app-specific commands, like inserting signatures or templates.
Ready to level up your editor? Explore the Smart Suggestions demo and documentation to start implementing and customizing it.
Configuring Mentions in React Rich Text Editor
The Mentions feature allows users to tag people, groups, or entities by typing @, triggering a suggestion list populated from a data source. This is perfect for collaborative applications like messaging apps, comment sections, or project management tools, ensuring accurate and efficient tagging.
How Mentions work
Trigger: Type @ followed by a character to display the suggestion list.
Data Source: A list of objects (e.g., employee records with names, emails, and profile images).
Customization:
Use itemTemplate to style the suggestion list.
Use displayTemplate to format tagged values.
Properties like suggestionCount, popupWidth, and allowSpaces provide further control.
Integrating Mentions in React Rich Text Editor
Below is a React example showing how to integrate and customize the Mentions feature using Syncfusion’s React Rich Text Editor.
itemTemplate customizes the suggestion list to show both name and email.
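For illustration, a mention data source and the two templates might be defined as follows. The field names mirror the Maria Smith example below, the ${field} placeholder syntax follows the Syncfusion template convention, and the helper function is a simplified stand-in for the library’s template engine:

```javascript
// Hypothetical mention data source; names and emails match the example below.
const employees = [
  { name: 'Maria Smith', email: 'maria@example.com' },
  { name: 'John Carter', email: 'john@example.com' }
];

// itemTemplate: how each suggestion renders in the popup list.
const itemTemplate = '<div><span>${name}</span> - <span>${email}</span></div>';

// displayTemplate: how a selected mention renders inside the editor.
const displayTemplate = '<span class="mention">@${name}</span>';

// Simplified stand-in for the template engine's placeholder substitution.
function renderTemplate(template, data) {
  return template.replace(/\$\{(\w+)\}/g, (_, key) => data[key]);
}
```

Typing @Maria would filter the data source by name and render the matching entry through itemTemplate.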
Example: In a team messaging app, typing @Maria displays a suggestion list with Maria Smith - maria@example.com, ensuring accurate tagging.
Refer to the following image.
Mention suggestions are displayed after typing “@”
Benefits of Mentions
Collaboration: Simplifies tagging team members, improving communication in collaborative tools.
Accuracy: Selecting from a predefined list reduces typing errors and ensures correct tagging.
Enhanced UI: Customizable suggestion lists with images or status indicators improve the visual experience.
Note: Explore the Mentions demo and documentation for detailed steps on implementing and customizing this feature.
Real-world applications
The combination of Smart Suggestions and Mentions makes the Rich Text Editor ideal for:
Blogging platforms: Use Smart Suggestions to format posts quickly and Mentions to tag contributors.
Collaboration tools: Tag team members in comments or notes for seamless communication.
Support ticket systems: Assign tasks with Mentions and insert predefined responses with Smart Suggestions.
Frequently Asked Questions
What is the difference between Smart Suggestions (Slash Menu) and Mentions?
Smart Suggestions are triggered by typing / and help with formatting actions like adding headings, lists, or inserting templates. Mentions are triggered by typing @ and are used to tag people, entities, or items from a data source within the editor.
Can I create my own custom Smart Suggestion (Slash Menu) commands?
Yes. You can fully customize the Slash Menu by adding your own items with text, icons, descriptions, and actions. Using the slashMenuSettings.items property and handling the slashMenuItemSelect event, you can insert templates, dynamic HTML, signatures, or any custom content.
Does the Mentions feature work with dynamic data from APIs?
Absolutely. Mentions can use any data source: static arrays, remote data, REST APIs, or databases. Bind the dataSource of the MentionComponent to your dynamic data and map fields like text, value, or email using the fields property.
Can I customize how Mention items appear in the suggestion list?
Yes. Mentions support multiple presentation options: a custom itemTemplate for list appearance and a custom displayTemplate for how selected mentions appear inside the editor. You can include profile images, roles, email IDs, statuses, or any custom UI element.
Thanks for reading! The Smart Suggestions and Mentions features in Syncfusion Rich Text Editor transform content creation by making it faster, more intuitive, and collaborative.
Smart Suggestions reduce clicks with quick formatting commands.
Mentions ensure accurate, structured tagging in collaborative environments.
Both features are highly customizable, flexible, and ready for real-world applications. Try them out to elevate your content creation experience today!
If you’re a Syncfusion user, you can download the setup from the license and downloads page. Otherwise, you can download a free 30-day trial.
Today’s applications require robust security to ensure that sensitive and confidential information is not compromised. This is exactly where access tokens and refresh tokens come in.
Typically, these tokens are generated based on the JWT open standard. JWTs should be generated with a short expiry time – the shorter the expiry time, the safer they are. There needs to be a way to refresh these tokens: re-authenticate the user and generate new JWTs so the user can continue to use the application uninterrupted.
This article explains JWT-based authentication, access and refresh tokens, and how you can implement them in an ASP.NET Core application.
What do you need to use refresh tokens in ASP.NET Core?
Tokens are digitally encoded signatures that are used to authenticate and authorize access to protected resources in an application. JWT (JSON Web Token) is an open standard commonly used for exchanging information between two parties in a secure manner. Typically, a JWT token is used in ASP.NET Core applications to authenticate users and, if the authentication is successful, provide them access to protected resources in the application.
To understand how refresh tokens operate, it’s imperative that you have a thorough knowledge of how JWTs work. Because a JWT is digitally signed, the information it carries is trustworthy and verifiable. To sign a JWT, you can use either a secret key (leveraging the HMAC algorithm) or a public/private key pair (RSA or ECDSA).
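As a sketch, signing a JWT with a symmetric HMAC key in C# uses the System.IdentityModel.Tokens.Jwt package; the secret key and claim values below are placeholders, not values from this article:

```csharp
using System;
using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using System.Text;
using Microsoft.IdentityModel.Tokens;

// Placeholder secret; in a real app this comes from configuration,
// never from source code. HS256 needs a key of at least 32 bytes.
var key = new SymmetricSecurityKey(
    Encoding.UTF8.GetBytes("a-demo-secret-key-at-least-32-bytes!"));

var token = new JwtSecurityToken(
    claims: new[] { new Claim(ClaimTypes.Name, "alice") },
    expires: DateTime.UtcNow.AddMinutes(15),   // short expiry, as recommended
    signingCredentials: new SigningCredentials(
        key, SecurityAlgorithms.HmacSha256));

// Serialize to the familiar three-part header.payload.signature string.
string jwt = new JwtSecurityTokenHandler().WriteToken(token);
```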
What are access tokens?
An access token is a digital (cryptographic) key that provides secure access to API endpoints. A token-based authentication system uses access tokens to allow an application to access APIs on the server. After authentication with valid credentials is successful, access tokens are issued to the user.
The tokens are then passed as ‘bearer’ tokens in the request header while a user requests data from the server. As long as the token is valid, the server understands that the bearer is authorized to access the resource.
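For example, a request to a protected endpoint carries the token in the Authorization header (the host, path, and token value below are placeholders):

```http
GET /api/orders HTTP/1.1
Host: example.com
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
```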
Since access tokens cannot be used for an extended period of time, you should leverage refresh tokens to keep a user authenticated without requiring them to enter their credentials again. This is why most applications use refresh tokens to renew access to protected resources by reissuing an access token to the user.
What are Refresh Tokens? Why are they needed?
Since access tokens expire after a certain amount of time, refresh tokens are used to obtain new access tokens after the original has expired. This allows users to remain authenticated without having to log in to the application each time the access token expires – effectively, they are a ‘renewal’ mechanism.
Here are the benefits of refresh tokens at a glance:
Extended access: Refresh tokens allow you to access APIs and applications for prolonged periods without re-logins even after access tokens have expired.
Enhanced security: Because access tokens expire quickly, the damage a stolen access token can do is limited; the long-lived refresh token can be stored more securely and revoked if compromised.
Improved user experience: The use of refresh tokens makes it easier for users to interact with apps without the need for re-entering the credentials.
How do refresh tokens work?
Here’s a simplified explanation of how refresh tokens work:
1. The client sends the user’s login credentials to the authentication component.
2. The credentials are forwarded to the authentication server for validation.
3. If authentication succeeds, the authentication component generates two tokens, an access token and a refresh token, and sends them to the client application.
4. From then on, the client application uses the access token to access the protected resources of the server application, i.e., the APIs or services.
5. The access token is verified and, if it’s valid, access to the protected resource is granted.
6. Steps 4 and 5 are repeated until the access token expires.
7. Upon expiry of the access token, the client application requests a new access token from the server application using the refresh token.
8. The authentication component generates two new tokens, i.e., a new access token and a new refresh token.
New access token and refresh token generated
9. Steps 4 to 8 are repeated until the refresh token expires.
10. Once the refresh token has expired, it cannot be renewed; the client must be re-authenticated, after which the authentication server issues a fresh pair of access and refresh tokens.
How to implement refresh tokens in ASP.NET Core: getting started
In this section, we’ll examine how we can implement refresh tokens in an ASP.NET Core application. We’ll build an ASP.NET Core Web API application to demonstrate how it all works and test the API endpoints using Postman.
In this example, we’ll use the following files:
LoginModel (This model is used to store user credentials to login to the application)
RegisterModel (This model stores user data required to register a new user)
TokenModel (This model contains the access and refresh token and is used to send these tokens in the response)
ApplicationUser (This class extends the functionality of the IdentityUser class of the ASP.NET Core Identity Framework)
ApplicationDbContext (This represents the DbContext used to interact with the underlying database)
MessageCode (This record type contains a list of message codes.)
MessageProvider (This record type contains a list of notification and error messages.)
JwtOptions (This type is used to read configuration data.)
Response (This represents the custom response format we’ll use for sending formatted response out of the controller action methods.)
IAuthenticationService
AuthenticationService (This class represents the Authentication Service that wraps all logic for registering a new user, logging in an existing user, refreshing tokens, etc.)
AuthenticationController (This represents the API that contains action methods to register a new user, login an existing user, refresh tokens, etc. It calls the methods of the AuthenticationService class to perform each of these operations.)
How to implement refresh tokens in ASP.NET Core: step-by-step guide
To build the application discussed in this article, follow these steps:
Create a new ASP.NET Core application
Install the NuGet packages
Create the models
Create the data context
Register the data context
Create the repositories
Add services to the container
How to create a new ASP.NET Core web API application
To create a new ASP.NET Core Web API project, run the following commands at the command prompt:
dotnet new sln --name RefreshTokenDemo
dotnet new webapi -f net10.0 --no-https --use-controllers --name RefreshTokenDemo
dotnet sln RefreshTokenDemo.sln add RefreshTokenDemo/RefreshTokenDemo.csproj
Install the NuGet package(s)
In this example, you’ll take advantage of JWTs for implementing authentication. You can use the Microsoft.AspNetCore.Authentication.JwtBearer NuGet package to work with JWTs in ASP.NET Core applications; it can be installed via the NuGet Package Manager, the NuGet Package Manager Console, or by running the following command:
dotnet add package Microsoft.AspNetCore.Authentication.JwtBearer
Create the models
Create three record types – LoginModel, RegisterModel and TokenModel – as shown in the following code listing:
public record LoginModel
{
public string Username { get; set; }
public string Password { get; set; }
}
public record RegisterModel
{
public string Username { get; set; }
public string Email { get; set; }
public string Password { get; set; }
}
public record TokenModel
{
public string? AccessToken { get; set; }
public string? RefreshToken { get; set; }
}
The LoginModel and RegisterModel types will be used to store login and registration data for the user, while the TokenModel will be used to store the access and refresh tokens. Note the usage of the record type in the preceding code example.
In C#, a record is a class (or struct) primarily designed to store data when working with immutable data models. You can use a record type in place of a class or a struct when you want to create a data model with value-based equality and define a type that comprises immutable objects.
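A quick illustration of that value-based equality, using a throwaway record unrelated to the article’s models:

```csharp
var a = new Point(1, 2);
var b = new Point(1, 2);

// Records compare by value: two instances with the same property
// values are equal, whereas two class instances would not be.
Console.WriteLine(a == b);            // True
Console.WriteLine(a with { X = 3 });  // Point { X = 3, Y = 2 }

public record Point(int X, int Y);
```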
Next, create a new class named ApplicationUser. This extends the IdentityUser class to add custom properties to the default ASP.NET Core IdentityUser class:
using Microsoft.AspNetCore.Identity;
public class ApplicationUser : IdentityUser
{
public string? RefreshToken { get; set; }
public DateTime RefreshTokenExpiryTime { get; set; }
}
Create the MessageCode enum
Create an enum named MessageCode. This will contain the message codes (as integer constants) we’ll use in this example:
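The enum’s members can be inferred from the MessageProvider switch that follows; LoginSuccess = 0 is confirmed by the text below, while the ordering of the remaining members is an assumption:

```csharp
// Message codes used throughout the example. Member names are taken
// from the MessageProvider switch; ordering after LoginSuccess = 0
// is assumed.
public enum MessageCode
{
    LoginSuccess = 0,
    InvalidCredentials,
    UserAlreadyExists,
    UserCreationFailed,
    UserCreatedSuccessfully,
    InvalidRequest,
    InvalidTokenPair,
    RefreshTokenSuccess,
    UnexpectedError
}
```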
Next, create a record type called MessageProvider. This will be used to return a text message based on the value of the MessageCode enum as a parameter. Hence, if the value of the parameter is LoginSuccess (or integer value 0), the text “User logged in successfully.” will be returned:
public record MessageProvider
{
public static string GetMessage(MessageCode code)
{
switch (code)
{
case MessageCode.LoginSuccess:
return "User logged in successfully.";
case MessageCode.InvalidCredentials:
return "Invalid credentials.";
case MessageCode.UserAlreadyExists:
return "User already exists.";
case MessageCode.UserCreationFailed:
return "User creation failed.";
case MessageCode.UserCreatedSuccessfully:
return "User created successfully.";
case MessageCode.InvalidRequest:
return "Invalid request.";
case MessageCode.InvalidTokenPair:
return "Invalid access token or refresh token.";
case MessageCode.RefreshTokenSuccess:
return "Token refreshed successfully.";
case MessageCode.UnexpectedError:
return "An unexpected error occurred.";
default:
throw new ArgumentOutOfRangeException
(nameof(code), "Invalid message code.");
}
}
}
Create the response type
In this example, we’ll use a custom response record type that can be used to send out responses from the controller in a pre-defined custom format. Create a new record type called Response and replace the auto-generated code with:
public record Response<T>
{
public string? Message { get; set; }
public T? Data { get; set; }
public HttpStatusCode StatusCode { get; set; }
public static Response<T> Create(
HttpStatusCode statusCode,
T? data = default,
MessageCode? messageCode = null)
{
return new Response<T>
{
StatusCode = statusCode,
Data = data,
Message = messageCode.HasValue
?
MessageProvider.GetMessage(messageCode.Value)
: null
};
}
}
The Response record type shown here is a generic wrapper. It contains a Message field for the text sent back from the controller’s action methods, an HTTP status code, and a Data field that will optionally carry the generated access and refresh tokens.
Create the JWT section in the configuration file
Create a new section in the appsettings.json file. This is to define the necessary security parameters for validating and generating JWT tokens in your ASP.NET Core API.
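A sketch of that section is shown below. Only SecretKey is referenced by the code in this article; the issuer, audience, and validity keys are typical JWT parameters and should be treated as assumptions, as should their values:

```json
{
  "JWT": {
    "SecretKey": "your-long-random-secret-key-here",
    "ValidIssuer": "https://localhost:5000",
    "ValidAudience": "https://localhost:5000",
    "TokenValidityInMinutes": 15,
    "RefreshTokenValidityInDays": 7
  }
}
```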
Create the data context
Now that the models have been created, you can create the data context class for interacting with the underlying database. In Entity Framework Core, the data context acts as the bridge of communication between your application and the underlying database. It represents a session of connectivity with the database, enabling you to execute database operations without having to write raw SQL queries.
In this example, the data context class is named ApplicationDbContext. It extends the IdentityDbContext of the ASP.NET Core Identity Framework:
using Microsoft.AspNetCore.Identity.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore;
public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
{
public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options) : base(options)
{ }
protected override void OnModelCreating(ModelBuilder builder)
{
base.OnModelCreating(builder);
}
}
Create the authentication service
The AuthenticationService class encapsulates the process of token creation, token validation, and token refresh logic in one place. It implements the IAuthenticationService interface:
The AuthenticationService class uses constructor injection to obtain an instance of the UserManager class, which is part of the ASP.NET Core Identity framework, and the JwtOptions record, which reads the required configuration data:
public sealed class AuthenticationService : IAuthenticationService
{
private readonly UserManager<ApplicationUser> _userManager;
private readonly JwtOptions _jwtOptions;
public AuthenticationService(
UserManager<ApplicationUser> userManager,
IOptions<JwtOptions> jwtOptions)
{
_jwtOptions = jwtOptions.Value ??
throw new ArgumentNullException(nameof(jwtOptions));
_userManager = userManager ??
throw new ArgumentNullException(nameof(userManager));
if (string.IsNullOrWhiteSpace(_jwtOptions.SecretKey))
{
throw new InvalidOperationException
("The Secret Key is not configured.");
}
}
}
The AuthenticationService class contains three async methods: LoginAsync, RegisterAsync and RefreshTokensAsync. Each of these methods is called from the controller class. The source code of these three methods is:
public async Task<Response<object>> LoginAsync(LoginRequest request,
CancellationToken cancellationToken = default)
{
var user = await _userManager.FindByNameAsync(request.Username);
if (user == null || !await _userManager.CheckPasswordAsync(user, request.Password))
{
return Response<object>.Create(
HttpStatusCode.BadRequest,
null,
MessageCode.InvalidCredentials);
}
var tokens = await GenerateTokensAsync(user, cancellationToken);
return Response<object>.Create(
HttpStatusCode.OK,
new { tokens.AccessToken, tokens.RefreshToken },
MessageCode.LoginSuccess);
}
public async Task<Response<object>> RegisterAsync(RegisterRequest request,
CancellationToken cancellationToken = default)
{
var existingUser = await _userManager.FindByNameAsync(request.Username);
if (existingUser != null)
{
return Response<object>.Create(
HttpStatusCode.BadRequest,
null,
MessageCode.UserAlreadyExists);
}
var user = new ApplicationUser
{
Email = request.Email,
SecurityStamp = Guid.NewGuid().ToString(),
UserName = request.Username
};
var result = await _userManager.CreateAsync(user, request.Password);
if (!result.Succeeded)
{
return Response<object>.Create(
HttpStatusCode.BadRequest,
null,
MessageCode.UserCreationFailed);
}
return Response<object>.Create(
HttpStatusCode.OK,
null,
MessageCode.UserCreatedSuccessfully);
}
public async Task<Response<object>> RefreshTokensAsync(RefreshTokenRequest request,
CancellationToken cancellationToken = default)
{
var principal = GetPrincipalFromExpiredToken(request.AccessToken ?? string.Empty);
var username = principal.Identity?.Name;
var user = await _userManager.Users
.FirstOrDefaultAsync(
u => u.UserName == username && u.RefreshToken == request.RefreshToken,
cancellationToken);
if (user == null || user.RefreshTokenExpiryTime <= DateTime.UtcNow)
{
return Response<object>.Create(
HttpStatusCode.BadRequest,
null,
MessageCode.InvalidTokenPair);
}
var tokens = await GenerateTokensAsync(user, cancellationToken);
return Response<object>.Create(
HttpStatusCode.OK,
new { tokens.AccessToken, tokens.RefreshToken },
MessageCode.RefreshTokenSuccess);
}
The complete source code for the AuthenticationService class is available in the GitHub repository.
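The helper methods referenced above (GenerateTokensAsync and GetPrincipalFromExpiredToken) are in the repository. The refresh-token half of GenerateTokensAsync can be sketched as a cryptographically random opaque string, which would then be assigned to user.RefreshToken along with a new RefreshTokenExpiryTime before calling _userManager.UpdateAsync(user); the 64-byte length is an assumption:

```csharp
using System;
using System.Security.Cryptography;

// Sketch: a refresh token is just an opaque, unguessable string.
// GenerateTokensAsync would store it on the user record together with
// its expiry time so RefreshTokensAsync can validate it later.
static string GenerateRefreshToken()
{
    var randomBytes = RandomNumberGenerator.GetBytes(64);
    return Convert.ToBase64String(randomBytes);
}
```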
How to create migrations using Entity Framework (EF) Core
In Entity Framework (EF) Core, migrations enable schema versioning for your database. You can either create or update the schema from your application using C# models (such as the ApplicationUser model in this example).
Once the migration has been executed successfully and you’ve applied the changes to the database, a new database with the name you specified in the configuration file – along with the associated identity database tables such as AspNetUsers and AspNetRoles – will be created automatically.
To create a migration in EF Core, run the Add-Migration command in the Package Manager Console window:
Add-Migration RefreshTokenDemoMigration
You can also create a migration by running the following command at the .NET CLI:
dotnet ef migrations add RefreshTokenDemoMigration
Once you run the migration, a new folder called Migrations will be added to the project. To apply the migration you created against the underlying database, run the Update-Database command in the Package Manager Console window:
Update-Database
Once you’ve executed the command, the changes will be applied against the underlying database. A new database will be created, as well as the tables created per your model design. The database will be named whatever you specified in the connection string.
Create the authentication controller
The AuthenticationController contains action methods that can be called to register a new user, log in an existing user, and regenerate both the access and refresh tokens once the access token has expired. The actual logic for each of these actions is wrapped inside the AuthenticationService class to keep your controller lean, clean, and maintainable.
The following code shows the AuthenticationController class and its action methods. Note how the instance of type IAuthenticationService is injected using constructor injection:
[Route("api/[controller]")]
[ApiController]
public class AuthenticationController : ControllerBase
{
    private readonly IAuthenticationService _authenticationService;

    public AuthenticationController(IAuthenticationService authService)
    {
        _authenticationService = authService;
    }

    [HttpPost("login")]
    public async Task<IActionResult> Login([FromBody] LoginRequest request)
    {
        if (!ModelState.IsValid)
        {
            var response = Response<object>.Create(
                System.Net.HttpStatusCode.BadRequest,
                null,
                MessageCode.InvalidCredentials);
            return BadRequest(response);
        }

        var responseFromService = await _authenticationService.LoginAsync(request);
        if (responseFromService != null &&
            responseFromService.StatusCode == System.Net.HttpStatusCode.BadRequest)
        {
            return BadRequest(responseFromService);
        }

        return Ok(responseFromService);
    }

    [HttpPost("register")]
    public async Task<IActionResult> Register([FromBody] RegisterRequest request)
    {
        if (!ModelState.IsValid)
        {
            var response = Response<object>.Create(
                System.Net.HttpStatusCode.BadRequest,
                null,
                MessageCode.UserCreationFailed);
            return BadRequest(response);
        }

        var responseFromService = await _authenticationService.RegisterAsync(request);
        if (responseFromService != null &&
            responseFromService.StatusCode == System.Net.HttpStatusCode.BadRequest)
        {
            return BadRequest(responseFromService);
        }

        return Ok(responseFromService);
    }

    [HttpPost("refresh-token")]
    public async Task<IActionResult> RefreshToken([FromBody] RefreshTokenRequest request)
    {
        var responseFromService = await _authenticationService.RefreshTokensAsync(request);
        if (responseFromService != null &&
            responseFromService.StatusCode == System.Net.HttpStatusCode.BadRequest)
        {
            return BadRequest(responseFromService);
        }

        return Ok(responseFromService);
    }
}
}
What is the Program.cs file?
The Program.cs file serves as the entry point for your ASP.NET Core application, analogous to the Main() function in a console application. This file contains code that bootstraps the web host, configures the services you need, and sets up the HTTP request processing pipeline.
For example, the following statement in the Program.cs file loads configuration data and environment variables, sets up the web host, and prepares the dependency injection container for registering the services you'll need:
var builder = WebApplication.CreateBuilder(args);
The next section of the Program.cs file registers services in the dependency injection (DI) container. First, register the ASP.NET Core Identity system, which provides the user-management capabilities the application needs. Then register an instance of type IAuthenticationService as a scoped service so it can be injected wherever the application needs it.
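A sketch of those registrations, assuming the default ASP.NET Core Identity APIs; AppDbContext is a placeholder name for your EF Core DbContext, which isn't shown in this article:

```csharp
// Register ASP.NET Core Identity with EF Core stores.
// AppDbContext is a hypothetical name for your DbContext.
builder.Services
    .AddIdentity<ApplicationUser, IdentityRole>()
    .AddEntityFrameworkStores<AppDbContext>()
    .AddDefaultTokenProviders();

// Register the authentication service with a scoped lifetime,
// so each HTTP request gets its own instance.
builder.Services.AddScoped<IAuthenticationService, AuthenticationService>();
```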
In the following code snippet, the statement Configure&lt;JwtOptions&gt; takes advantage of the Options pattern to automatically bind the "JWT" section from appsettings.json to the JwtOptions record type we created earlier.
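That binding might look like the snippet below; the section name "JWT" must match the key in appsettings.json:

```csharp
// Bind the "JWT" section of appsettings.json to the JwtOptions record,
// making it available via IOptions<JwtOptions> through the DI container.
builder.Services.Configure<JwtOptions>(
    builder.Configuration.GetSection("JWT"));
```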
The complete source code of the Program.cs file is available in the GitHub repository for your reference.
How to execute the application using Postman
In this example, we’ll use Postman to test the API endpoints. Postman is a powerful, versatile API testing platform that lets you create, test, document, and manage your APIs. With it, you can send HTTP requests using verbs such as GET, POST, PUT, PATCH, and DELETE, and work with a wide variety of data formats. You can also use Postman to handle authentication, create automated test scripts, and even create mock servers for testing purposes.
When the application is launched, you’ll be able to invoke the API endpoints from Postman. The first thing you should do is register a new user by invoking the api/Authentication/Register endpoint and specifying the new user’s username, password, and email address in the request body:
New user registered successfully
Once a new user has been registered, you should be able to invoke the api/Authentication/Login endpoint to login to the application by specifying the user’s credentials in the request body. If the request is valid, an access token and a refresh token will be returned in the response:
Invoking the Login endpoint of the AuthenticationService in Postman
If you pass the access token generated here as a Bearer token in the Authorization header of the request when invoking the HTTP GET endpoint of the WeatherForecast controller, the authentication system will validate the token. If it's valid, you'll see data returned in the response:
WeatherForecast data returned as a response
The WeatherForecast controller is created by default when you create a new ASP.NET Core Web API project in Visual Studio.
If you invoke the same endpoint after the access token has expired, the HTTP GET endpoint of the WeatherForecast controller will return an HTTP 401 response. This means the token is no longer valid, so the request has not been authenticated and the user is no longer authorized to access the endpoint.
At this point, you'll need a valid access token to call the endpoint again. To get one, pass the access token and the refresh token generated when you invoked the api/Authentication/Login endpoint earlier:
How to invoke the api/Authentication/refresh-token endpoint and pass the access token and refresh token in the body of the request. This generates new access and refresh tokens.
Final thoughts
In this article, we’ve examined the approaches you should take to implement refresh tokens to secure your APIs with high reliability – all while providing end users with the most seamless experience.
By enabling your application to refresh tokens when they expire, you address many of the issues associated with traditional static tokens, and the approach works well in distributed applications. When your application can regenerate the tokens used to authenticate users, you can enforce a one-time-use policy – and even revoke tokens on demand.
The official Azure SQL Dev’s Corner blog recently wrote about how to enable soft deletes in Azure SQL using row-level security, and it’s a nice, clean, short tutorial. I like posts like that because the feature is pretty cool and accomplishes a real business goal. It’s always tough deciding where to draw the line on how much to include in a blog post, so I forgive them for not including one vital caveat with this feature.
Row-level security can make queries go single-threaded.
This isn’t a big deal when your app is brand new, but over time, as your data gets bigger, this is a performance killer.
Setting Up the Demo
To illustrate the problem, I'll copy a lot of code from their post, but I'll use the big Stack Overflow database. After running the code below, I'll have two Users tables with soft deletes set up: a regular dbo.Users with no security, and a dbo.Users_Secured with row-level security so folks can't see the IsDeleted = 1 rows if they don't have permissions.
USE StackOverflow;
GO
/* The Stack database doesn't ship with soft deletes,
so we have to add an IsDeleted column to implement it.
Fortunately this is a metadata-only operation, and the
table isn't rewritten. All rows just instantly get a
0 default value. */
ALTER TABLE dbo.Users ADD IsDeleted BIT NOT NULL DEFAULT 0;
GO
/* Copy the Users table into a new Secured one: */
CREATE TABLE [dbo].[Users_Secured](
[Id] [int] IDENTITY(1,1) NOT NULL,
[AboutMe] [nvarchar](max) NULL,
[Age] [int] NULL,
[CreationDate] [datetime] NOT NULL,
[DisplayName] [nvarchar](40) NOT NULL,
[DownVotes] [int] NOT NULL,
[EmailHash] [nvarchar](40) NULL,
[LastAccessDate] [datetime] NOT NULL,
[Location] [nvarchar](100) NULL,
[Reputation] [int] NOT NULL,
[UpVotes] [int] NOT NULL,
[Views] [int] NOT NULL,
[WebsiteUrl] [nvarchar](200) NULL,
[AccountId] [int] NULL,
[IsDeleted] [bit] NOT NULL,
CONSTRAINT [PK_Users_Secured_Id] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF,
DATA_COMPRESSION = PAGE) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
ALTER TABLE [dbo].[Users_Secured] ADD DEFAULT ((0)) FOR [IsDeleted]
GO
SET IDENTITY_INSERT dbo.Users_Secured ON;
GO
INSERT INTO dbo.Users_Secured (Id, AboutMe, Age, CreationDate,
DisplayName, DownVotes, EmailHash, LastAccessDate,
Location, Reputation, UpVotes, Views, WebsiteUrl,
AccountId, IsDeleted)
SELECT Id, AboutMe, Age, CreationDate,
DisplayName, DownVotes, EmailHash, LastAccessDate,
Location, Reputation, UpVotes, Views, WebsiteUrl,
AccountId, IsDeleted
FROM dbo.Users;
GO
SET IDENTITY_INSERT dbo.Users_Secured OFF;
GO
DropIndexes @TableName = 'Users';
GO
CREATE LOGIN TodoDbUser WITH PASSWORD = 'Long@12345';
GO
CREATE USER TodoDbUser FOR LOGIN TodoDbUser;
GO
GRANT SELECT, INSERT, UPDATE ON dbo.Users TO TodoDbUser;
GO
CREATE FUNCTION dbo.fn_SoftDeletePredicate(@IsDeleted BIT)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
    SELECT 1 AS fn_result
    WHERE
        (
            DATABASE_PRINCIPAL_ID() = DATABASE_PRINCIPAL_ID('TodoDbUser')
            AND @IsDeleted = 0
        )
        OR DATABASE_PRINCIPAL_ID() <> DATABASE_PRINCIPAL_ID('TodoDbUser');
GO
CREATE SECURITY POLICY dbo.Users_Secured_SoftDeleteFilterPolicy
ADD FILTER PREDICATE dbo.fn_SoftDeletePredicate(IsDeleted)
ON dbo.Users_Secured
WITH (STATE = ON);
GO
Now let’s start querying the two tables to see the performance problem.
Querying by the Primary Key: Still Fast
The Azure post kept things simple by not using indexes, so we’ll start that way too. I’ll turn on actual execution plans and get a single row, and compare the differences between the tables:
SELECT * FROM dbo.Users
WHERE Id = 26837;
SELECT * FROM dbo.Users_Secured
WHERE Id = 26837;
If all you’re doing is getting one row, and you know the Id of the row you’re looking for, you’re fine. SQL Server dives into that one row, fetches it for you, and doesn’t need multiple CPU cores to accomplish the goal. Their actual execution plans look identical at first glance:
If you hover your mouse over the Users_Secured table operation, you’ll notice an additional predicate that we didn’t ask for: row-level security is automatically checking the IsDeleted column for us:
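Conceptually, the engine rewrites your query to apply the predicate function to every row, roughly as if you'd written the following yourself. This is a simplification for illustration, not the literal internal rewrite:

```sql
/* Roughly what the filter predicate does behind the scenes (simplified): */
SELECT u.*
FROM dbo.Users_Secured AS u
WHERE u.Id = 26837
  AND EXISTS (SELECT 1 FROM dbo.fn_SoftDeletePredicate(u.IsDeleted));
```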
Querying Without Indexes: Starts to Get Slower
Let’s find the top-ranked people in Las Vegas:
SELECT TOP 101 *
FROM dbo.Users
WHERE Location = N'Las Vegas, NV'
ORDER BY Reputation DESC;
SELECT TOP 101 *
FROM dbo.Users_Secured
WHERE Location = N'Las Vegas, NV'
ORDER BY Reputation DESC;
Their actual execution plans show the top query at about 1.4 seconds for the unsecured table, and the bottom query at about 3 seconds for the secured table:
The reason isn’t security per se: the reason is that the row-level security function inhibits parallelism. The top query plan went parallel, and the bottom query did not. If you click on the secured table’s SELECT icon, the plan’s properties will explain that the row-level security function can’t be parallelized:
That’s not good.
When you’re using the database’s built-in row-level security functions, it’s more important than ever to do a good job of indexing. Thankfully, the query plan has a missing index recommendation to help, so let’s dig into it.
Missing Index (Impact 99.6592):
CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
ON [dbo].[Users_Secured] ([Location])
INCLUDE ([AboutMe],[Age],[CreationDate],[DisplayName],[DownVotes],
[EmailHash],[LastAccessDate],[Reputation],[UpVotes],[Views],
[WebsiteUrl],[AccountId],[IsDeleted])
The recommended index ignores the IsDeleted and Reputation columns, even though they'd both be useful to have in the key! Missing index recommendations focus squarely on the WHERE clause filters the query passed in, not on the filters SQL Server applies behind the scenes for row-level security. Ouch.
Let’s do what a user would do: try creating the recommended index on both tables – even though the number of include columns is ridiculous – and then try again:
CREATE NONCLUSTERED INDEX Location_Includes
ON [dbo].[Users] ([Location])
INCLUDE ([AboutMe],[Age],[CreationDate],[DisplayName],[DownVotes],
[EmailHash],[LastAccessDate],[Reputation],[UpVotes],[Views],
[WebsiteUrl],[AccountId],[IsDeleted]);
GO
CREATE NONCLUSTERED INDEX Location_Includes
ON [dbo].[Users_Secured] ([Location])
INCLUDE ([AboutMe],[Age],[CreationDate],[DisplayName],[DownVotes],
[EmailHash],[LastAccessDate],[Reputation],[UpVotes],[Views],
[WebsiteUrl],[AccountId],[IsDeleted]);
GO
SELECT TOP 101 *
FROM dbo.Users
WHERE Location = N'Las Vegas, NV'
ORDER BY Reputation DESC;
SELECT TOP 101 *
FROM dbo.Users_Secured
WHERE Location = N'Las Vegas, NV'
ORDER BY Reputation DESC;
GO
Our actual execution plans are back to looking identical:
Neither of them requires parallelism: we can seek into Las Vegas, read all of the folks there, filter out the appropriate IsDeleted rows, and then sort the remainder, all on one CPU core, in a millisecond. The cost is that we literally doubled the table's size, because the missing index recommendation included every single column in the table!
A More Realistic Single-Column Index
When faced with an index recommendation that includes all of the table’s columns, most DBAs would either lop off all the includes and just use the keys, or hand-review the query to hand-craft a recommended index. Let’s start by dropping the old indexes, and creating new ones with only the key column that Microsoft had recommended:
CREATE INDEX Location ON dbo.Users(Location);
DROP INDEX Location_Includes ON dbo.Users;
CREATE INDEX Location ON dbo.Users_Secured(Location);
DROP INDEX Location_Includes ON dbo.Users_Secured;
GO
SELECT TOP 101 *
FROM dbo.Users
WHERE Location = N'Las Vegas, NV'
ORDER BY Reputation DESC;
SELECT TOP 101 *
FROM dbo.Users_Secured
WHERE Location = N'Las Vegas, NV'
ORDER BY Reputation DESC;
GO
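For reference, one possible hand-crafted index is sketched below. It keys on Location and Reputation so SQL Server can seek to the location and read rows already sorted by reputation, and it includes IsDeleted so the row-level security predicate can be evaluated without a key lookup. This is my own sketch, not something from the Azure post, so test it against your own workload:

```sql
/* Hand-crafted alternative: key on the WHERE and ORDER BY columns,
   and include IsDeleted so the RLS predicate is covered too. */
CREATE INDEX Location_Reputation
    ON dbo.Users_Secured (Location, Reputation DESC)
    INCLUDE (IsDeleted);
```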
Summary: Single-Threaded is Bad, but Indexes Help.
The database’s built-in row-level security is a really cool (albeit underused) feature to help you accomplish business goals faster, without trying to roll your own code. Yes, it does have limitations, like inhibiting parallelism and making indexing more challenging, but don’t let that stop you from investigating it. Just know you’ll have to spend a little more time doing performance tuning down the road.
In this case, we’re indexing not to reduce reads, but to avoid doing a lot of work on a single CPU core. Our secured table still can’t go parallel, but thanks to the indexes, the penalty of row-level security disappears for this particular query.
Experienced readers will notice that there are a lot of topics I didn’t cover in this post: whether to index for the IsDeleted column, the effect of residual predicates on IsDeleted and Reputation, and how CPU and storage are affected. However, just as Microsoft left off the parallelism thing to keep their blog post tightly scoped, I gotta keep mine scoped too! This is your cue to pick up this blog post with anything you’re passionate about, and extend it to cover the topics you wanna teach today.