Announcing Pragmatic AI in .NET Show


A biweekly livestream about what it's actually like to build software with AI. Real developer stories, honest tool assessments, agentic workflow deep dives. Every other Thursday at 11 AM ET — live on YouTube, X, and LinkedIn.

If you've spent any time building software in the last couple of years, you've felt the shift. AI coding assistants, agentic workflows, LLM-powered UI — it's not a distant future anymore. It's your pull request queue, your design review, your sprint planning. AI is now part of everyday software development.

But most of what we read about AI in software development is either breathless hype or dismissive cynicism. What's harder to find are honest accounts from developers actually in the trenches — shipping real products, hitting real walls, figuring it out as they go.

That's exactly why we're launching the Pragmatic AI in .NET Show.

The Show

Introducing the Pragmatic AI in .NET Show

Every Other Thursday · 11 AM ET

Each episode will feature developers sharing what and how they're building with AI — the wins, the surprises, and the moments where AI didn't quite do what they expected. We'll dig into the latest developer AI tools, explore agentic workflows, and have frank conversations about where this technology helps and where it still has a way to go.

No hype. No demos that only work in ideal conditions. Just developers talking honestly about what it's like to build software today.

The Landscape

The Developer Landscape Has Genuinely Changed

Let's be honest about where we are. A few years ago, AI in a developer's workflow mostly meant autocomplete that sometimes got lucky. Today, it's something qualitatively different.

AI can now scaffold entire app features, generate test suites, catch bugs during code review, and help developers think through architecture decisions — all before lunch. Tools like GitHub Copilot, Claude Code, Cursor, Codex, and newer agentic frameworks are becoming a real part of how many teams ship software.

But "the landscape has changed" doesn't tell the whole story. The more interesting question is: changed in what ways, for whom, and at what cost?

Here's what we're actually seeing in the .NET community:

  • AI tools genuinely accelerate certain kinds of work — boilerplate, CRUD operations, test generation, documentation.
  • They also introduce new categories of problems: hallucinated APIs, subtle logic errors that pass code review, and over-reliance on generated code that developers don't fully understand.
  • The developers getting the most value aren't treating AI as a replacement for judgment — they're treating it as a highly capable but fallible collaborator.
  • And the craft of prompting, reviewing AI output, and integrating it into a real codebase is itself a skill that takes time to develop.

None of this is a reason to stay on the sidelines. But it is a reason to be thoughtful.

Reality

The Realities of Building Software with AI

There's a version of the AI narrative that goes like this: describe what you want, AI builds it, and you ship. If you've tried this on anything beyond a toy project, you know it's more complicated than that.

The reality is messier and more interesting. AI can dramatically speed up parts of your workflow while introducing friction in others. It works best when developers have clarity on what is being built — AI can amplify intent, for better or worse. A vague prompt yields vague code.

"AI is a force multiplier. But it multiplies whatever developers bring to the table — clear thinking, good architecture, solid testing habits. The fundamentals still matter."

Agentic workflows — where AI doesn't just respond to individual prompts but takes sequences of actions toward a goal — are genuinely exciting and genuinely tricky. Getting an agent to reliably navigate a real codebase, understand conventions, and make changes that don't break things downstream is an active area of work, not a solved problem.

We want to build a space where developers can talk honestly about all of this. Where's the leverage? Where are the landmines? What does it actually look like to integrate AI into a professional .NET development workflow?

Topics

What We'll Cover

Each episode of the Pragmatic AI in .NET Show will dig into:

  • Real developer stories — folks building actual products, not demo apps.
  • The latest in developer AI tools — what's new, what's worth attention, and honest assessments of what's still rough around the edges.
  • Agentic workflows — practical exploration of autonomous AI patterns and where they fit in a .NET context.
  • The meta-skills — prompting, reviewing, integrating, and knowing when not to use AI.

We're intentionally keeping the format conversational. This isn't a polished tutorial series. It's more like pulling up a chair with developers who are figuring this out alongside you.

Why .NET

Why This Matters for the .NET Community

The .NET ecosystem is in an interesting moment. C#/.NET and the broader Microsoft stack have always attracted developers who care about building things that work — reliably, at scale, over time. That ethos doesn't go out the window just because AI is in the picture.

If anything, it makes the conversation more important. How do you maintain code quality when a significant chunk of your codebase is AI-generated? How do you onboard new developers when your workflows have changed? How do you make good architectural decisions when AI can scaffold almost anything?

These are the conversations we want to have. And we think the .NET community — pragmatic by nature — is exactly the right place to have them.

At Uno Platform, we spend a lot of time thinking about how to make cross-platform .NET development faster and more accessible. AI tools are a big part of that picture — MCP tools that give AI "eyes and hands" for app interactivity, smarter design-to-code workflows, and AI-assisted debugging. Good tooling and good judgment work together.

Join Us

Join Us On The Show

The Pragmatic AI in .NET Show kicks off this Thursday at 11 AM ET. We'd love to have you there.

Whether you're already deep in AI-powered workflows or just starting to explore what's possible, there's something in this for you. Come for the developer stories. Stay for the honest conversation about what building software actually looks like right now.

First Three Guests

Kevin Griffin
.NET Foundation President & Consultant
Jonathan "J" Tower
Founder at TrailHead Technology Partners
Eric D. Boyd
Founder/CEO of ResponsiveX
Be a Guest

If you have a story to share — something you've built, a workflow that surprised you, a tool that changed how you work — we want to hear from you. Reach out at info@platform.uno.

See you on the show.

The post Announcing Pragmatic AI in .NET Show appeared first on Uno Platform.


Tool Approval and Human-in-the-Loop in Microsoft Agent Framework


Learn how to implement tool approval and human-in-the-loop patterns with the Microsoft Agent Framework in C# to keep AI agents safe, auditable, and under human control.


How to Add Mentions and Slash Commands to a React Rich Text Editor


TL;DR: Discover how Smart Suggestions (Slash Menu) and Mentions enhance the React Rich Text Editor’s workflow. The blog explains how slash-triggered commands improve formatting flow, how structured @ tagging strengthens accuracy, and how these features together support smoother content creation, stronger collaboration, and a more intuitive editing experience in modern applications.

Are you building a modern application that demands powerful, collaborative content tools? In today’s fast-paced digital landscape, content creation must be intuitive and efficient to meet workflow demands. The Syncfusion® Rich Text Editor makes content simple and efficient. Its Smart Suggestions and Mentions features improve formatting and collaboration, making it a great fit for blogs, forums, and messaging apps.

In this blog post, we’ll explore how Smart Suggestions and Mentions work, their key benefits, and share sample code to help you implement them.

Why Smart Suggestions and Mentions matter

Modern users expect:

  • Fast actions without hunting through toolbars.
  • Structured formatting with minimal effort.
  • Accurate tagging inside collaborative environments.

Smart Suggestions and Mentions help achieve all of this by providing context-aware menus right where users type.

Configuring Smart Suggestions (Slash Menu) in React Rich Text Editor

Smart Suggestions, also known as the Slash Menu, allow users to type / in the editor to open a quick command popup for actions such as applying headings, creating lists, or inserting media. This removes friction from formatting and makes content creation feel natural, especially for blogging and note-taking.

How Smart Suggestions work

  • Trigger: Type / inside the editor.
  • Options: A customizable list of commands (e.g., Paragraph, Headings, lists, media insertion, and more).
  • Customization: Configure via the slashMenuSettings property:
    • Enable or disable the feature with the enable property.
    • Define custom items using the items property.
    • Handle custom actions with the slashMenuItemSelect event.

Enabling Smart Suggestions in React Rich Text Editor

Here’s a quick example showing how to enable and customize the Slash Menu using Syncfusion’s React Rich Text Editor.

/**
   * Initialize Rich Text Editor from React element
   */
import {
    HtmlEditor,
    Image,
    Table,
    Inject,
    Link,
    QuickToolbar,
    RichTextEditorComponent,
    Toolbar,
    EmojiPicker,
    PasteCleanup,
    SlashMenu
} from '@syncfusion/ej2-react-richtexteditor';
import * as React from 'react';
import './App.css';

function App() {
    let editorObj;
    // Multi-line HTML must use a template literal (backticks), not single quotes.
    const meetingNotes = `
        <p><strong>Meeting Notes</strong></p>
        <table class="e-rte-table" style="width: 100%; min-width: 0px; height: 150px;">
            <tbody>
                <tr style="height: 20%;">
                    <td style="width: 50%;"><strong>Attendees</strong></td>
                    <td style="width: 50%;"><br></td>
                </tr>
                <tr style="height: 20%;">
                    <td style="width: 50%;"><strong>Date & Time</strong></td>
                    <td style="width: 50%;"><br></td>
                </tr>
                <tr style="height: 20%;">
                    <td style="width: 50%;"><strong>Agenda</strong></td>
                    <td style="width: 50%;"><br></td>
                </tr>
                <tr style="height: 20%;">
                    <td style="width: 50%;"><strong>Discussed Items</strong></td>
                    <td style="width: 50%;"><br></td>
                </tr>
                <tr style="height: 20%;">
                    <td style="width: 50%;"><strong>Action Items</strong></td>
                    <td style="width: 50%;"><br></td>
                </tr>
            </tbody>
        </table>
    `;

    const toolbarSettings = {
        items: [
            'Bold', 'Italic', 'Underline', 'StrikeThrough', '|',
            'FontName', 'FontSize', 'FontColor', 'BackgroundColor', '|',
            'Formats', 'Alignments', 'Blockquote', '|', 'NumberFormatList', 'BulletFormatList', '|','CreateLink', 'Image', 'CreateTable', '|', 'EmojiPicker', '|',
            'SourceCode', '|', 'Undo', 'Redo'
        ]
    };
    // Define custom Slash Menu items 
    const slashMenuSettings = {
        enable: true,
        items: [
            'Paragraph', 'Heading 1', 'Heading 2', 'Heading 3', 'Heading 4',
            'OrderedList', 'UnorderedList', 'Blockquote', 'Link', 'Image',
            'Table', 'Emojipicker',
            {
                text: 'Meeting notes',
                description: 'Insert a meeting note template.',
                iconCss: 'e-icons e-description',
                type: 'Custom',
                command: 'MeetingNotes'
            }
        ]
    };

    // Handle custom Slash Menu item selection 

    function onSlashMenuItemSelect(args) {
        if (args.itemData.command === 'MeetingNotes') {
            // Insert custom meeting note 
            editorObj.executeCommand('insertHTML', meetingNotes);
        }
    }

    return (
        <RichTextEditorComponent
            ref={(scope) => { editorObj = scope; }}
            placeholder="Type '/' and choose format"
            toolbarSettings={toolbarSettings}
            slashMenuSettings={slashMenuSettings}
            slashMenuItemSelect={onSlashMenuItemSelect}
        >
            <Inject
                services={[
                    HtmlEditor,
                    SlashMenu,
                    Toolbar,
                    Link,
                    QuickToolbar,
                    Image,
                    PasteCleanup,
                    Table,
                    EmojiPicker
                ]}
            />
        </RichTextEditorComponent>
    );
}

export default App;

Code explanation

  • slashMenuSettings
    • The enable: true property activates the Slash Menu; the SlashMenu service must also be injected into the editor, as shown in the Inject component.
    • The items array defines the available commands, including default options like Paragraph and Heading 1, as well as custom items (e.g., MeetingNotes).
  • slashMenuItemSelect
    • This event handler runs when a user selects an item from the Slash Menu.
    • In the above code example, we check whether the selected command is MeetingNotes and use the executeCommand method to insert a predefined HTML snippet.
    • Developers can extend this logic to handle other custom actions, such as inserting templates, signatures, or dynamic content.

For example, a content creator can type / and select Heading 1 to format a title, or choose MeetingNotes to insert a predefined note, streamlining their workflow as shown below.

Smart Suggestions menu displayed after typing “/” in the Rich Text Editor.

Benefits of Smart Suggestions

Here are the key benefits that make this feature efficient:

  • Faster formatting: Skip toolbars and format inline.
  • Contextual workflow: Suggestions appear exactly where users type.
  • Customizability: Tailor the Slash Menu to include app-specific commands, like inserting signatures or templates.

Ready to level up your editor? Explore the Smart Suggestions demo and documentation to start implementing and customizing it.

Configuring Mentions in React Rich Text Editor

The Mentions feature allows users to tag people, groups, or entities by typing @, triggering a suggestion list populated from a data source. This is perfect for collaborative applications like messaging apps, comment sections, or project management tools, ensuring accurate and efficient tagging.

How Mentions work

  • Trigger: Type @ followed by a character to display the suggestion list.
  • Data Source: A list of objects (e.g., employee records with names, emails, and profile images).
  • Customization:
    • Use itemTemplate to style the suggestion list.
    • Use displayTemplate to format tagged values.
    • Properties like suggestionCount, popupWidth, and allowSpaces provide further control.

Integrating Mentions in React Rich Text Editor

Below is a React example showing how to integrate and customize the Mentions feature using Syncfusion’s React Rich Text Editor.

/**
   * Initialize Rich Text Editor from React element
   */
import { 
    HtmlEditor,
    Image,
    Table,
    Inject,
    Link,
    QuickToolbar,
    RichTextEditorComponent,
    Toolbar,
    EmojiPicker,
    PasteCleanup
} from '@syncfusion/ej2-react-richtexteditor';
import { MentionComponent } from '@syncfusion/ej2-react-dropdowns';
import * as React from 'react';
import './App.css';

function App() {
    // Sample data for Mentions 
    const mentionData = [
        { Name: 'Selma Rose', EmailId: 'selma@example.com' },
        { Name: 'Maria Smith', EmailId: 'maria@example.com' },
        { Name: 'John Doe', EmailId: 'john@example.com' }
    ];

    const fieldsData = { text: 'Name' };
    const toolbarSettings = {
        items: [
            'Bold', 'Italic', 'Underline', 'StrikeThrough', '|',
            'FontName', 'FontSize', 'FontColor', 'BackgroundColor', '|',
            'Formats', 'Alignments', 'Blockquote', '|', 'NumberFormatList', 'BulletFormatList', '|', 'CreateLink', 'Image', 'CreateTable', '|', 'EmojiPicker', '|',
            'SourceCode', '|', 'Undo', 'Redo'
        ]
    };

    return (
        <div>
            <RichTextEditorComponent
                id="mentionRTE"
                toolbarSettings={toolbarSettings}
                height={400}
                placeholder="Type @ to tag a name"
            >
                <Inject
                    services={[
                        HtmlEditor,
                        Toolbar,
                        Link,
                        QuickToolbar,
                        Image,
                        PasteCleanup,
                        Table,
                        EmojiPicker
                    ]}
                />
            </RichTextEditorComponent>
            <MentionComponent
                target="#mentionRTE_rte-edit-view"
                dataSource={mentionData}
                fields={fieldsData}
                suggestionCount={5}
                popupWidth="250px"
                itemTemplate="<span>${Name} - ${EmailId}</span>"
            />
        </div>
    );
}

export default App;

Code explanation

  • mentionData: A sample dataset containing objects with Name and EmailId fields for populating the suggestion list.
  • fieldsData: Specifies which field (Name) should be displayed in the suggestion list.
  • MentionComponent:
    • The target property binds the Mentions feature to the Rich Text Editor’s editable area (e.g., #mentionRTE_rte-edit-view).
    • The dataSource and fields properties link the component to the dataset.
    • The suggestionCount property limits the number of suggestions displayed.
    • The popupWidth property sets the width of the suggestion list.
    • The itemTemplate property customizes the suggestion list to show both name and email.

Example: In a team messaging app, typing @Maria displays a suggestion list with Maria Smith - maria@example.com, ensuring accurate tagging.

Refer to the following image.

Mention suggestions are displayed after typing “@” in the Rich Text Editor.

Benefits of Mentions

  • Collaboration: Simplifies tagging team members, improving communication in collaborative tools.
  • Accuracy: Selecting from a predefined list reduces typing errors and ensures correct tagging.
  • Enhanced UI: Customizable suggestion lists with images or status indicators improve the visual experience.

Note: Explore the Mentions demo and documentation for detailed steps on implementing and customizing this feature.

Real-world applications

The combination of Smart Suggestions and Mentions makes the Rich Text Editor ideal for:

  • Blogging platforms: Use Smart Suggestions to format posts quickly and Mentions to tag contributors.
  • Collaboration tools: Tag team members in comments or notes for seamless communication.
  • Support ticket systems: Assign tasks with Mentions and insert predefined responses with Smart Suggestions.

Frequently Asked Questions

What is the difference between Smart Suggestions (Slash Menu) and Mentions?

Smart Suggestions are triggered by typing / and help with formatting actions like adding headings, lists, or inserting templates. Mentions are triggered by typing @ and are used to tag people, entities, or items from a data source within the editor.

Can I create my own custom Smart Suggestion (Slash Menu) commands?

Yes. You can fully customize the Slash Menu by adding your own items with text, icons, descriptions, and actions. Using the slashMenuSettings.items property and handling the slashMenuItemSelect event, you can insert templates, dynamic HTML, signatures, or any custom content.

Does the Mentions feature work with dynamic data from APIs?

Absolutely. Mentions can use any data source: static arrays, remote data, REST APIs, or databases. Bind the dataSource of the MentionComponent to your dynamic data and map fields like text, value, or email using the fields property.

Can I customize how Mention items appear in the suggestion list?

Yes. Mentions support multiple presentation options: a custom itemTemplate for the suggestion list's appearance and a custom displayTemplate for how selected mentions appear inside the editor. You can include profile images, roles, email IDs, statuses, or any custom UI element.


Conclusion

Thanks for reading! The Smart Suggestions and Mentions features in Syncfusion Rich Text Editor transform content creation by making it faster, more intuitive, and collaborative.

  • Smart Suggestions reduce clicks with quick formatting commands.
  • Mentions ensure accurate, structured tagging in collaborative environments.

Both features are highly customizable, flexible, and ready for real-world applications.
Try them out to elevate your content creation experience today!

If you’re a Syncfusion user, you can download the setup from the license and downloads page. Otherwise, you can download a free 30-day trial.

You can also contact us through our support forum, support portal, or feedback portal for queries. We are always happy to assist you!


How to use refresh tokens in ASP.NET Core – a complete guide

1 Share

Today’s applications require robust security to ensure your application’s sensitive and confidential information is not compromised. This is exactly where access tokens and refresh tokens come in.

Typically, these tokens are generated based on the JWT open standard. JWTs should be issued with a short expiry time – the shorter the expiry, the safer they are. There then needs to be a way to refresh these tokens, re-authenticate the user, and generate new JWTs so the application can be used uninterrupted.

This article explains JWT-based authentication, access and refresh tokens, and how you can implement them in an ASP.NET Core application.

What do you need to use refresh tokens in ASP.NET Core?

To work with the code examples illustrated in this article, you need Visual Studio 2022 – download it here if you haven’t done so already. You’ll also need the .NET SDK: https://dotnet.microsoft.com/download/archives

What is JWT-based authentication?

Tokens are digitally encoded signatures that are used to authenticate and authorize access to protected resources in an application. JWT (JSON Web Token) is an open standard commonly used for exchanging information between two parties in a secure manner. Typically, a JWT token is used in ASP.NET Core applications to authenticate users and, if the authentication is successful, provide them access to protected resources in the application.

To understand how refresh tokens operate, you need a solid grasp of how JWTs work. Because JWTs are digitally signed, the information they carry is trustworthy and verifiable. To sign a JWT, you can use either a secret key (with the HMAC algorithm) or a public/private key pair (RSA or ECDSA).
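To make the structure concrete, here is a small sketch that decodes a JWT's payload and checks its exp claim. It is written in JavaScript purely for illustration (the server side in this article is C#), and the demo token is a throwaway, unsigned example. Keep in mind that decoding is not verification: the signature must still be validated with the signing key before the claims are trusted.

```javascript
// Decode a JWT's payload segment. JWTs are three base64url-encoded
// segments (header.payload.signature) joined by dots.
function decodeJwtPayload(jwt) {
    const parts = jwt.split(".");
    if (parts.length !== 3) throw new Error("Not a JWT");
    // base64url uses '-' and '_' in place of '+' and '/' (RFC 7515).
    const base64 = parts[1].replace(/-/g, "+").replace(/_/g, "/");
    return JSON.parse(Buffer.from(base64, "base64").toString("utf8"));
}

// Check the standard `exp` claim (seconds since the Unix epoch).
function isExpired(jwt, nowSeconds = Math.floor(Date.now() / 1000)) {
    const { exp } = decodeJwtPayload(jwt);
    return typeof exp === "number" && exp <= nowSeconds;
}

// Build a throwaway, unsigned token just to demonstrate decoding:
const header = Buffer.from(JSON.stringify({ alg: "none", typ: "JWT" })).toString("base64url");
const payload = Buffer.from(JSON.stringify({ sub: "alice", exp: 1000 })).toString("base64url");
const demoToken = `${header}.${payload}.`;

console.log(decodeJwtPayload(demoToken).sub); // "alice"
console.log(isExpired(demoToken, 2000));      // true (exp is in the past)
```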

What are access tokens?

An access token is a digital (cryptographic) key that provides secure access to API endpoints. A token-based authentication system uses access tokens to allow an application to access APIs on the server. After authentication with valid credentials is successful, access tokens are issued to the user.

The tokens are then passed as ‘bearer’ tokens in the request header while a user requests data from the server. As long as the token is valid, the server understands that the bearer is authorized to access the resource.
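As a tiny illustration of that request shape (the endpoint and token values below are placeholders, not part of this article's sample app), passing a bearer token is just a matter of setting the Authorization header:

```javascript
// Attach an access token to a set of request headers as a bearer token.
function withBearer(accessToken, headers = {}) {
    return { ...headers, Authorization: `Bearer ${accessToken}` };
}

const headers = withBearer("eyJhbGciOiJIUzI1NiJ9.e30.sig", { Accept: "application/json" });
console.log(headers.Authorization); // "Bearer eyJhbGciOiJIUzI1NiJ9.e30.sig"

// In a real client this accompanies every request to a protected resource, e.g.:
// fetch("https://api.example.com/orders", { headers: withBearer(currentAccessToken) });
```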

A graph showing what access tokens are.

Since access tokens cannot be used for an extended period of time, you should leverage refresh tokens to obtain new access tokens without requiring the user to log in again. This is why most applications use refresh tokens to renew access to protected resources by reissuing an access token to the user.

What are refresh tokens? Why are they needed?

Since access tokens expire after a certain amount of time, refresh tokens are used to obtain new access tokens after the original has expired. This allows users to remain authenticated without having to log in to the application each time the access token expires – effectively, they are a ‘renewal’ mechanism.

Here are the benefits of refresh tokens at a glance:

  • Extended access: Refresh tokens allow you to access APIs and applications for prolonged periods without re-logins even after access tokens have expired.

  • Enhanced security: Because access tokens expire quickly, a stolen access token is only useful for a short window; the longer-lived refresh token can be stored and transmitted more carefully.

  • Improved user experience: The use of refresh tokens makes it easier for users to interact with apps without the need for re-entering the credentials.

How do refresh tokens work?

Here’s a simplified explanation of how refresh tokens work:

  1. As a first step, the client sends the user’s login credentials to the authentication component.

  2. The authentication server validates those credentials.

  3. Assuming the authentication process completes successfully, the authentication component generates two tokens, i.e., an access token and a refresh token, and sends them to the client application.

  4. From now on, the client application takes advantage of the access token to gain access to protected resources of the server application, i.e., the APIs or services.

  5. ​The access token is verified and, if it’s valid, access to the protected resource is granted.

  6. Steps 4 and 5 are repeated until the access token is no longer valid, i.e., after the access token expires.

  7. Upon expiry of the access token, the client application requests a new access token from the server application using the refresh token.

  8. Lastly, the authentication component generates two new tokens, i.e., an access token and a refresh token, and sends them to the client.

An image showing the new access token and refresh token being generated.

  9. Steps 4 to 8 are repeated until the refresh token expires.

  10. As soon as the refresh token has expired, the client must be re-authenticated, and the authentication component generates a new access token and refresh token once again.
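The steps above can be sketched as client-side logic. This is an illustrative outline in JavaScript, shown synchronously for readability (a real client would make async HTTP calls), with api.get and api.refresh standing in for the server endpoints rather than this article's actual API surface:

```javascript
// Steps 4-9 as a retry-on-401 pattern: present the access token, and if
// the server rejects it as expired, trade the refresh token for a fresh
// token pair and retry once. `api` is a placeholder HTTP layer.
function callProtected(api, tokens, resource) {
    try {
        // Steps 4-5: present the current access token.
        return { result: api.get(resource, tokens.accessToken), tokens };
    } catch (err) {
        // Steps 6-7: only an expired/unauthorized token triggers a refresh.
        if (err.status !== 401) throw err;
        // Step 8: the server issues a new access token AND a new refresh token.
        const fresh = api.refresh(tokens.accessToken, tokens.refreshToken);
        // Retry with the new access token; hand the new pair back to the caller.
        return { result: api.get(resource, fresh.accessToken), tokens: fresh };
    }
}
```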

How to implement refresh tokens in ASP.NET Core: getting started

In this section, we’ll examine how we can implement refresh tokens in an ASP.NET Core application. We’ll build an ASP.NET Core Web API application to demonstrate how it all works and test the API endpoints using Postman.

In this example, we’ll use the following files:

  • LoginModel (This model is used to store user credentials to login to the application)

  • RegisterModel (This model stores user data required to register a new user)

  • TokenModel (This model contains the access and refresh token and is used to send these tokens in the response)

  • ApplicationUser (This class extends the functionality of the IdentityUser class of the ASP.NET Core Identity Framework)

  • ApplicationDbContext (This represents the DbContext used to interact with the underlying database)

  • MessageCode (This enum contains a list of message codes.)

  • MessageProvider (This record type contains a list of notification and error messages.)

  • JwtOptions (This type is used to read configuration data.)

  • Response (This represents the custom response format we’ll use for sending formatted response out of the controller action methods.)

  • IAuthenticationService (This interface defines the contract implemented by the AuthenticationService class.)

  • AuthenticationService (This class represents the Authentication Service that wraps all logic for registering a new user, logging in an existing user, refreshing tokens, etc.)

  • AuthenticationController (This represents the API that contains action methods to register a new user, login an existing user, refresh tokens, etc. It calls the methods of the AuthenticationService class to perform each of these operations.)


How to implement refresh tokens in ASP.NET Core: step-by-step guide

To build the application discussed in this article, follow these steps:

  1. Create a new ASP.NET Core application

  2. Install the NuGet packages

  3. Create the models

  4. Create the data context

  5. Register the data context

  6. Create the repositories

  7. Add services to the container

How to create a new ASP.NET Core web API application

To create a new ASP.NET Core Web API project, run the following commands at the command prompt:

dotnet new sln --name RefreshTokenDemo
dotnet new webapi -f net10.0 --no-https --use-controllers --name RefreshTokenDemo
dotnet sln RefreshTokenDemo.sln add RefreshTokenDemo/RefreshTokenDemo.csproj

Install the NuGet package(s)

In this example, you’ll take advantage of JWT tokens for implementing authentication. You can use the Microsoft.AspNetCore.Authentication.JwtBearer NuGet package to work with JWT tokens in ASP.NET Core applications; this can be installed via the NuGet Package Manager or NuGet Package Manager Console.

You’ll need to install the following packages:

Microsoft.AspNetCore.Authentication.JwtBearer
Microsoft.EntityFrameworkCore.SqlServer
Microsoft.AspNetCore.Identity.EntityFrameworkCore
Microsoft.EntityFrameworkCore.Tools
Microsoft.EntityFrameworkCore.Design

To do this, run the following commands in the NuGet Package Manager Console Window:

Install-Package Microsoft.AspNetCore.Authentication.JwtBearer
Install-Package Microsoft.EntityFrameworkCore.SqlServer
Install-Package Microsoft.AspNetCore.Identity.EntityFrameworkCore
Install-Package Microsoft.EntityFrameworkCore.Tools
Install-Package Microsoft.EntityFrameworkCore.Design

Alternatively, you can install these packages by executing the following commands in a terminal window:

dotnet add package Microsoft.AspNetCore.Authentication.JwtBearer
dotnet add package Microsoft.EntityFrameworkCore.SqlServer
dotnet add package Microsoft.AspNetCore.Identity.EntityFrameworkCore
dotnet add package Microsoft.EntityFrameworkCore.Tools
dotnet add package Microsoft.EntityFrameworkCore.Design

Create the models

Create three record types – LoginModel, RegisterModel and TokenModel – as shown in the following code listing:

public record LoginModel
{
   public string Username { get; set; }
   public string Password { get; set; }
}

public record RegisterModel
{
   public string Username { get; set; }
   public string Email { get; set; }
   public string Password { get; set; }
}

public record TokenModel
{
    public string? AccessToken { get; set; }
    public string? RefreshToken { get; set; }
}

While the LoginModel and RegisterModel types will be used to store login and registration data for the user, the TokenModel will be used to store the access and refresh tokens. Note the usage of the record type in the preceding code example.

In C#, a record is a class (or struct) primarily designed to store data when working with immutable data models. You can use a record type in place of a class or a struct when you want to create a data model with value-based equality and define a type that comprises immutable objects.

Next, create a new class named ApplicationUser. This extends the IdentityUser class to add custom properties to the default ASP.NET Core IdentityUser class:

using Microsoft.AspNetCore.Identity;
public class ApplicationUser : IdentityUser
{
    public string? RefreshToken { get; set; }
    public DateTime RefreshTokenExpiryTime { get; set; }
}

Create the MessageCode enum

Create an enum named MessageCode. This will contain the message codes (as integer constants) we’ll use in this example:

public enum MessageCode
{
    LoginSuccess,
    InvalidCredentials,
    UserAlreadyExists,
    UserCreationFailed,
    UserCreatedSuccessfully,
    InvalidRequest,
    InvalidTokenPair,
    AccessTokenSuccess,
    RefreshTokenSuccess,
    UnexpectedError
}

Create the MessageProvider type

Next, create a record type called MessageProvider. This will be used to return a text message based on the value of the MessageCode enum as a parameter. Hence, if the value of the parameter is LoginSuccess (or integer value 0), the text “User logged in successfully.” will be returned:

public record MessageProvider
{
    public static string GetMessage(MessageCode code)
    {
        switch (code)
        {
            case MessageCode.LoginSuccess:
                return "User logged in successfully.";
            case MessageCode.InvalidCredentials:
                return "Invalid credentials.";
            case MessageCode.UserAlreadyExists:
                return "User already exists.";
            case MessageCode.UserCreationFailed:
                return "User creation failed.";
            case MessageCode.UserCreatedSuccessfully:
                return "User created successfully.";
            case MessageCode.InvalidRequest:
                return "Invalid request.";
            case MessageCode.InvalidTokenPair:
                return "Invalid access token or refresh token.";
            case MessageCode.AccessTokenSuccess:
                return "Access token generated successfully.";
            case MessageCode.RefreshTokenSuccess:
                return "Token refreshed successfully.";
            case MessageCode.UnexpectedError:
                return "An unexpected error occurred.";
            default:
                throw new ArgumentOutOfRangeException(
                    nameof(code), "Invalid message code.");
        }
    }
}

Note that every member of the MessageCode enum needs a corresponding case here (including AccessTokenSuccess); otherwise the default branch throws.

Create the response type

In this example, we’ll use a custom response record type to send responses from the controller in a pre-defined format. Create a new record type called Response and replace the auto-generated code with the following (HttpStatusCode comes from the System.Net namespace):

public record Response<T>
{
    public string? Message { get; set; }
    public T? Data { get; set; }
    public HttpStatusCode StatusCode { get; set; }

    public static Response<T> Create(
        HttpStatusCode statusCode,
        T? data = default,
        MessageCode? messageCode = null)
    {
        return new Response<T>
        {
            StatusCode = statusCode,
            Data = data,
            Message = messageCode.HasValue
                ? MessageProvider.GetMessage(messageCode.Value)
                : null
        };
    }
}

The Response record type shown here is a generic wrapper. It contains the message to be sent back from the controller’s action methods, an HTTP status code, and an optional data payload that will carry the controller-generated access and refresh tokens.

Create the JWT section in the configuration file

Create a new section in the appsettings.json file. This defines the security parameters needed to generate and validate JWTs in your ASP.NET Core API.

"JWT": {
  "ValidAudience": "http://localhost:4200",
  "ValidIssuer": "http://localhost:5000",
  "SecretKey": "Specify your custom secret key here",
  "AccessTokenValidityInMinutes": 1,
  "RefreshTokenValidityInMinutes": 60
}

This configuration data will be read in the Program.cs file using the JwtOptions record type we’ll now create.

Create the JwtOptions type

The JwtOptions record type is used to read the configuration data required to create and manage the tokens.

public sealed record JwtOptions
{
  public string SecretKey { get; init; } = string.Empty;
  public string ValidIssuer { get; init; } = string.Empty;
  public string ValidAudience { get; init; } = string.Empty;
  public int AccessTokenValidityInMinutes { get; init; } = 0;
  public int RefreshTokenValidityInMinutes { get; init; } = 0;
}

This type is used in the AuthenticationService class in this application.


Create the data context

Now that the models have been created, you can create the data context class for interacting with the underlying database. In Entity Framework Core, the data context acts as the bridge between your application and the database: it represents a session of connectivity, enabling you to execute database operations without writing raw SQL queries.

In this example, the data context class is named ApplicationDbContext. It extends the IdentityDbContext of the ASP.NET Core Identity Framework:

using Microsoft.AspNetCore.Identity.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore;
public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
{
   public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options) : base(options)
   { }

   protected override void OnModelCreating(ModelBuilder builder)
   {
       base.OnModelCreating(builder);
   }
}

Create the authentication service

The AuthenticationService class encapsulates the process of token creation, token validation, and token refresh logic in one place. It implements the IAuthenticationService interface:

public interface IAuthenticationService
{
    Task<Response<object>> LoginAsync(
        LoginRequest request,
        CancellationToken cancellationToken = default);
    Task<Response<object>> RegisterAsync(
        RegisterRequest request,
        CancellationToken cancellationToken = default);
    Task<Response<object>> RefreshTokensAsync(
        RefreshTokenRequest request,
        CancellationToken cancellationToken = default);
}

The AuthenticationService class uses constructor injection to obtain a UserManager<ApplicationUser> instance from the ASP.NET Core Identity framework, along with the JwtOptions record that supplies the required configuration data:

public sealed class AuthenticationService : IAuthenticationService
{
    private readonly UserManager<ApplicationUser> _userManager;
    private readonly JwtOptions _jwtOptions;

    public AuthenticationService(
        UserManager<ApplicationUser> userManager,
        IOptions<JwtOptions> jwtOptions)
    {
        _jwtOptions = jwtOptions.Value ?? 
        throw new ArgumentNullException(nameof(jwtOptions));
        _userManager = userManager ?? 
        throw new ArgumentNullException(nameof(userManager));

        if (string.IsNullOrWhiteSpace(_jwtOptions.SecretKey))
        {
            throw new InvalidOperationException
            ("The Secret Key is not configured.");
        }
    }
}

The AuthenticationService class contains three async methods: LoginAsync, RegisterAsync and RefreshTokensAsync. Each of these methods is called from the controller class. The source code of these three methods is:

public async Task<Response<object>> LoginAsync(LoginRequest request,
    CancellationToken cancellationToken = default)
{
    var user = await _userManager.FindByNameAsync(request.Username);
    if (user == null || !await _userManager.CheckPasswordAsync(user, request.Password))
    {
        return Response<object>.Create(
            HttpStatusCode.BadRequest,
            null,
            MessageCode.InvalidCredentials);
    }

    var tokens = await GenerateTokensAsync(user, cancellationToken);

    return Response<object>.Create(
        HttpStatusCode.OK,
        new { tokens.AccessToken, tokens.RefreshToken },
        MessageCode.LoginSuccess);
}

public async Task<Response<object>> RegisterAsync(RegisterRequest request,
    CancellationToken cancellationToken = default)
{
    var existingUser = await _userManager.FindByNameAsync(request.Username);
    if (existingUser != null)
    {
        return Response<object>.Create(
            HttpStatusCode.BadRequest,
            null,
            MessageCode.UserAlreadyExists);
    }

    var user = new ApplicationUser
    {
        Email = request.Email,
        SecurityStamp = Guid.NewGuid().ToString(),
        UserName = request.Username
    };

    var result = await _userManager.CreateAsync(user, request.Password);

    if (!result.Succeeded)
    {
        return Response<object>.Create(
            HttpStatusCode.BadRequest,
            null,
            MessageCode.UserCreationFailed);
    }

    return Response<object>.Create(
        HttpStatusCode.OK,
        null,
        MessageCode.UserCreatedSuccessfully);
}

public async Task<Response<object>> RefreshTokensAsync(RefreshTokenRequest request,
    CancellationToken cancellationToken = default)
{
    var principal = GetPrincipalFromExpiredToken(request.AccessToken ?? string.Empty);
    var username = principal.Identity?.Name;

    var user = await _userManager.Users
        .FirstOrDefaultAsync(
            u => u.UserName == username && u.RefreshToken == request.RefreshToken,
            cancellationToken);

    if (user == null || user.RefreshTokenExpiryTime <= DateTime.UtcNow)
    {
        return Response<object>.Create(
            HttpStatusCode.BadRequest,
            null,
            MessageCode.InvalidTokenPair);
    }

    var tokens = await GenerateTokensAsync(user, cancellationToken);

    return Response<object>.Create(
        HttpStatusCode.OK,
        new { tokens.AccessToken, tokens.RefreshToken },
        MessageCode.RefreshTokenSuccess);
}

The complete source code for the AuthenticationService class is available in the GitHub repository.
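The helper methods GenerateTokensAsync and GetPrincipalFromExpiredToken are only shown in the repository. For context, GetPrincipalFromExpiredToken typically validates the expired access token’s signature while deliberately skipping lifetime validation, so the user’s claims can still be read. A sketch under that assumption (the repository’s implementation may differ; this uses the System.IdentityModel.Tokens.Jwt and Microsoft.IdentityModel.Tokens packages):

```csharp
private ClaimsPrincipal GetPrincipalFromExpiredToken(string token)
{
    var tokenValidationParameters = new TokenValidationParameters
    {
        ValidateAudience = false,
        ValidateIssuer = false,
        ValidateIssuerSigningKey = true,
        IssuerSigningKey = new SymmetricSecurityKey(
            Encoding.UTF8.GetBytes(_jwtOptions.SecretKey)),
        // Crucial: accept the token even though it has expired,
        // because we only need its claims to locate the user.
        ValidateLifetime = false
    };

    var tokenHandler = new JwtSecurityTokenHandler();
    var principal = tokenHandler.ValidateToken(
        token, tokenValidationParameters, out var securityToken);

    // Reject tokens signed with an unexpected algorithm.
    if (securityToken is not JwtSecurityToken jwtToken ||
        !jwtToken.Header.Alg.Equals(
            SecurityAlgorithms.HmacSha256,
            StringComparison.InvariantCultureIgnoreCase))
    {
        throw new SecurityTokenException("Invalid token");
    }

    return principal;
}
```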

How to create migrations using Entity Framework (EF) Core

In Entity Framework (EF) Core, migrations enable schema versioning for your database. You can either create or update the schema from your application using C# models (such as the ApplicationUser model in this example).

Once the migration has been created and you’ve applied the changes to the database, a new database with the name you specified in the configuration file – along with the associated identity tables such as AspNetUsers and AspNetRoles – will be created automatically.

To create a migration in EF Core, run the Add-Migration command in the Package Manager Console window:

Add-Migration RefreshTokenDemoMigration

You can also create a migration by running the following command with the .NET CLI:

dotnet ef migrations add RefreshTokenDemoMigration

Once you run the command, a new folder called Migrations will be added to the project. To apply the migration you created against the underlying database, run the Update-Database command in the Package Manager Console window (or dotnet ef database update from the .NET CLI):

Update-Database

Once you’ve executed the command, the changes will be applied against the underlying database. A new database will be created, as well as the tables created per your model design. The database will be named whatever you specified in the connection string.

Create the authentication controller

The AuthenticationController contains action methods that can be called to register a new user, log in an existing user, and regenerate both the access and refresh tokens when the former has expired. The actual logic for each of these actions is wrapped inside the AuthenticationService class to keep your controller lean, clean, and maintainable.

The following code shows the AuthenticationController class and its action methods. Note how the instance of type IAuthenticationService is injected using constructor injection:

[Route("api/[controller]")]
[ApiController]
public class AuthenticationController : ControllerBase
{
    private readonly IAuthenticationService _authenticationService;

    public AuthenticationController(IAuthenticationService authService)
    {
        _authenticationService = authService;
    }

    [HttpPost("login")]
    public async Task<IActionResult> Login([FromBody] LoginRequest request)
    {
        if (!ModelState.IsValid)
        {
            var response = Response<object>.Create(
                System.Net.HttpStatusCode.BadRequest,
                null,
                MessageCode.InvalidCredentials);

            return BadRequest(response);
        }

        var responseFromService = await _authenticationService.LoginAsync(request);

        if (responseFromService.StatusCode == System.Net.HttpStatusCode.BadRequest)
        {
            return BadRequest(responseFromService);
        }

        return Ok(responseFromService);
    }

    [HttpPost("register")]
    public async Task<IActionResult> Register([FromBody] RegisterRequest request)
    {
        if (!ModelState.IsValid)
        {
            var response = Response<object>.Create(
                System.Net.HttpStatusCode.BadRequest,
                null,
                MessageCode.UserCreationFailed);

            return BadRequest(response);
        }

        var responseFromService = await _authenticationService.RegisterAsync(request);

        if (responseFromService.StatusCode == System.Net.HttpStatusCode.BadRequest)
        {
            return BadRequest(responseFromService);
        }

        return Ok(responseFromService);
    }

    [HttpPost("refresh-token")]
    public async Task<IActionResult> RefreshToken([FromBody] RefreshTokenRequest request)
    {
        var responseFromService = await _authenticationService.RefreshTokensAsync(request);

        if (responseFromService.StatusCode == System.Net.HttpStatusCode.BadRequest)
        {
            return BadRequest(responseFromService);
        }

        return Ok(responseFromService);
    }
}

What is the Program.cs file?

The Program.cs file serves as the entry point for your ASP.NET Core application, analogous to the Main() method in a console application. This file contains code that bootstraps the web host, configures the services you need, and sets up the HTTP request processing pipeline.

For example, the following statement in the Program.cs file loads configuration data and environment variables, sets up the web host, and prepares the dependency injection container for registering the services you’ll need:

var builder = WebApplication.CreateBuilder(args);

The next section in the Program.cs file registers services with the dependency injection container. For example, the following code registers the ApplicationDbContext, configured to use SQL Server with the connection string from the configuration file:

builder.Services.AddDbContext<ApplicationDbContext>(options =>
{
    options.UseSqlServer(
        builder.Configuration.GetConnectionString("DefaultConnection"));
});

Next, you should use the following piece of code in the Program.cs file to register the ASP.NET Core Identity system in the DI container. This is required to provide user management capabilities in your application.

builder.Services.AddIdentity<ApplicationUser, IdentityRole>()
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders();

The following code snippet shows how you can add an instance of type IAuthenticationService as a scoped service so that you can access it in the application:

builder.Services.AddScoped<IAuthenticationService, AuthenticationService>();

In the following code snippet, the statement Configure<JwtOptions> takes advantage of the Options Pattern to automatically bind the “JWT” section from the appsettings.json to the JwtOptions record type we created earlier:

builder.Services.Configure<JwtOptions>(
    builder.Configuration.GetSection("JWT"));
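One more registration belongs in Program.cs but isn’t shown here: wiring up JWT bearer authentication itself, so incoming access tokens are validated against the issuer, audience, and secret key configured above. A typical sketch (the sample project’s exact options may differ):

```csharp
// Assumes the "JWT" configuration section and JwtOptions type defined earlier.
var jwt = builder.Configuration.GetSection("JWT").Get<JwtOptions>()!;

builder.Services.AddAuthentication(options =>
{
    options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
    options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
})
.AddJwtBearer(options =>
{
    options.TokenValidationParameters = new TokenValidationParameters
    {
        ValidateIssuer = true,
        ValidateAudience = true,
        ValidateLifetime = true,
        ValidateIssuerSigningKey = true,
        ValidIssuer = jwt.ValidIssuer,
        ValidAudience = jwt.ValidAudience,
        IssuerSigningKey = new SymmetricSecurityKey(
            Encoding.UTF8.GetBytes(jwt.SecretKey)),
        // Remove the default five-minute grace period so the short-lived
        // demo tokens expire on schedule.
        ClockSkew = TimeSpan.Zero
    };
});
```

After builder.Build(), remember that app.UseAuthentication() must appear before app.UseAuthorization() in the pipeline.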

The complete source code of the Program.cs file is available in the GitHub repository for your reference.

How to execute the application using Postman

In this example, we’ll use Postman to test the API endpoints. Postman is a powerful, versatile API testing platform that lets you create, test, document, and manage your APIs. With it, you can send HTTP requests using verbs such as GET, POST, PUT, PATCH, and DELETE, and work with a wide variety of data formats. You can also use Postman to handle authentication, create automated test scripts, and even create mock servers for testing purposes.

When the application is launched, you’ll be able to invoke the API endpoints from Postman. The first thing you should do is register a new user by invoking the api/Authentication/Register endpoint and specifying the new user’s username, password, and email address in the request body:

New user registered successfully

Once a new user has been registered, you should be able to invoke the api/Authentication/Login endpoint to login to the application by specifying the user’s credentials in the request body. If the request is valid, an access token and a refresh token will be returned in the response:

Invoking the Login endpoint of the AuthenticationService in Postman

If you pass the access token generated here as a Bearer token in the Authorization header of a request to the HTTP GET endpoint of the WeatherForecast controller, the authentication system will validate it. If the token is valid, you’ll see data returned in the response:

WeatherForecast data returned as a response

The WeatherForecast controller is created by default when you create a new ASP.NET Core Web API project in Visual Studio.

If you invoke the same endpoint after the expiry of the access token, the HTTP GET endpoint of the WeatherForecast controller will return an HTTP 401 response. This implies that the token is no longer valid, so the request has not been authenticated and the user is no longer authorized to access this endpoint.

At this point, you’ll need a fresh access token to call the endpoint again. To get one, invoke the api/Authentication/refresh-token endpoint, passing the access token and the refresh token that were generated when you invoked the api/Authentication/Login endpoint earlier:

Invoking the api/Authentication/refresh-token endpoint with the access token and refresh token in the request body generates new access and refresh tokens.

Final thoughts

In this article, we’ve examined how to implement refresh tokens to secure your APIs reliably, while keeping the sign-in experience seamless for end users.

By enabling your application to refresh tokens when they expire, you avoid many of the issues associated with long-lived static tokens, and the approach works well in distributed applications. Because your application regenerates the tokens used to authenticate users, you can enforce a one-time-use policy – and even revoke tokens on demand.
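Revocation, for instance, can be as simple as clearing the stored refresh token so no future refresh request can match it. A sketch using the same UserManager (this Revoke helper is hypothetical, not part of the article’s code):

```csharp
// Hypothetical helper inside AuthenticationService.
public async Task RevokeRefreshTokenAsync(string username)
{
    var user = await _userManager.FindByNameAsync(username);
    if (user == null) return;

    // With the stored token cleared, RefreshTokensAsync can never
    // match this user again until a fresh login issues a new token.
    user.RefreshToken = null;
    user.RefreshTokenExpiryTime = DateTime.MinValue;
    await _userManager.UpdateAsync(user);
}
```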

The post How to use refresh tokens in ASP.NET Core – a complete guide appeared first on Simple Talk.


Row-Level Security Can Slow Down Queries. Index For It.


The official Azure SQL Dev’s Corner blog recently wrote about how to enable soft deletes in Azure SQL using row-level security, and it’s a nice, clean, short tutorial. I like posts like that because the feature is pretty cool and accomplishes a real business goal. It’s always tough deciding where to draw the line on how much to include in a blog post, so I forgive them for not including one vital caveat with this feature.

Row-level security can make queries go single-threaded.

This isn’t a big deal when your app is brand new, but over time, as your data gets bigger, this is a performance killer.

Setting Up the Demo

To illustrate it, I’ll copy a lot of code from their post, but I’ll use the big Stack Overflow database. After running the below code, I’m going to have two Users tables with soft deletes set up: a regular dbo.Users one with no security, and a dbo.Users_Secured one with row-level security so folks can’t see the IsDeleted = 1 rows if they don’t have permissions.

USE StackOverflow;
GO
/* The Stack database doesn't ship with soft deletes,
so we have to add an IsDeleted column to implement it. 
Fortunately this is a metadata-only operation, and the
table isn't rewritten. All rows just instantly get a
0 default value. */
ALTER TABLE dbo.Users ADD IsDeleted BIT NOT NULL DEFAULT 0;
GO
/* Copy the Users table into a new Secured one: */
CREATE TABLE [dbo].[Users_Secured](
    [Id] [int] IDENTITY(1,1) NOT NULL,
    [AboutMe] [nvarchar](max) NULL,
    [Age] [int] NULL,
    [CreationDate] [datetime] NOT NULL,
    [DisplayName] [nvarchar](40) NOT NULL,
    [DownVotes] [int] NOT NULL,
    [EmailHash] [nvarchar](40) NULL,
    [LastAccessDate] [datetime] NOT NULL,
    [Location] [nvarchar](100) NULL,
    [Reputation] [int] NOT NULL,
    [UpVotes] [int] NOT NULL,
    [Views] [int] NOT NULL,
    [WebsiteUrl] [nvarchar](200) NULL,
    [AccountId] [int] NULL,
    [IsDeleted] [bit] NOT NULL,
 CONSTRAINT [PK_Users_Secured_Id] PRIMARY KEY CLUSTERED 
(
    [Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, 
    IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, 
    ALLOW_PAGE_LOCKS = ON, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF, 
    DATA_COMPRESSION = PAGE) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
ALTER TABLE [dbo].[Users_Secured] ADD  DEFAULT ((0)) FOR [IsDeleted]
GO
SET IDENTITY_INSERT dbo.Users_Secured ON;
GO
INSERT INTO dbo.Users_Secured (Id, AboutMe, Age, CreationDate, 
    DisplayName, DownVotes, EmailHash, LastAccessDate, 
    Location, Reputation, UpVotes, Views, WebsiteUrl, 
    AccountId, IsDeleted)
SELECT Id, AboutMe, Age, CreationDate, 
    DisplayName, DownVotes, EmailHash, LastAccessDate, 
    Location, Reputation, UpVotes, Views, WebsiteUrl, 
    AccountId, IsDeleted
FROM dbo.Users;
GO
SET IDENTITY_INSERT dbo.Users_Secured OFF;
GO
DropIndexes @TableName = 'Users';
GO





CREATE LOGIN TodoDbUser WITH PASSWORD = 'Long@12345';
GO
CREATE USER TodoDbUser FOR LOGIN TodoDbUser;
GO
GRANT SELECT, INSERT, UPDATE ON dbo.Users TO TodoDbUser;
GO


CREATE FUNCTION dbo.fn_SoftDeletePredicate(@IsDeleted BIT)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
    SELECT 1 AS fn_result
    WHERE
        (
            DATABASE_PRINCIPAL_ID() = DATABASE_PRINCIPAL_ID('TodoDbUser')
            AND @IsDeleted = 0
        )
        OR DATABASE_PRINCIPAL_ID() <> DATABASE_PRINCIPAL_ID('TodoDbUser');
GO

CREATE SECURITY POLICY dbo.Users_Secured_SoftDeleteFilterPolicy
ADD FILTER PREDICATE dbo.fn_SoftDeletePredicate(IsDeleted)
ON dbo.Users_Secured
WITH (STATE = ON);
GO

Now let’s start querying the two tables to see the performance problem.

Querying by the Primary Key: Still Fast

The Azure post kept things simple by not using indexes, so we’ll start that way too. I’ll turn on actual execution plans and get a single row, and compare the differences between the tables:

SELECT * FROM dbo.Users
    WHERE Id = 26837;

SELECT * FROM dbo.Users_Secured
    WHERE Id = 26837;

If all you’re doing is getting one row, and you know the Id of the row you’re looking for, you’re fine. SQL Server dives into that one row, fetches it for you, and doesn’t need multiple CPU cores to accomplish the goal. Their actual execution plans look identical at first glance:

Single row fetch

If you hover your mouse over the Users_Secured table operation, you’ll notice an additional predicate that we didn’t ask for: row-level security is automatically checking the IsDeleted column for us:

Checking security
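To watch that predicate work from the restricted login’s point of view, you can impersonate it. This is a sketch in the same T-SQL as the demo; it assumes TodoDbUser has also been granted SELECT on dbo.Users_Secured (the grants earlier only covered dbo.Users) and that at least one row has been flagged as deleted:

```sql
-- Flag one row as soft-deleted so there's something to hide:
UPDATE dbo.Users_Secured SET IsDeleted = 1 WHERE Id = 26837;
GO
EXECUTE AS USER = 'TodoDbUser';
-- The filter predicate silently removes IsDeleted = 1 rows:
SELECT * FROM dbo.Users_Secured WHERE Id = 26837; -- returns no rows
REVERT;
GO
```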

Querying Without Indexes: Starts to Get Slower

Let’s find the top-ranked people in Las Vegas:

SELECT TOP 101 *
    FROM dbo.Users
    WHERE Location = N'Las Vegas, NV'
    ORDER BY Reputation DESC;

SELECT TOP 101 *
    FROM dbo.Users_Secured
    WHERE Location = N'Las Vegas, NV'
    ORDER BY Reputation DESC;

Their actual execution plans show the top query at about 1.4 seconds for the unsecured table, and the bottom query at about 3 seconds for the secured table:

Las Vegas, baby

The reason isn’t security per se: the reason is that the row-level security function inhibits parallelism. The top query plan went parallel, and the bottom query did not. If you click on the secured table’s SELECT icon, the plan’s properties will explain that the row-level security function can’t be parallelized:

No parallelism

That’s not good.

When you’re using the database’s built-in row-level security functions, it’s more important than ever to do a good job of indexing. Thankfully, the query plan has a missing index recommendation to help, so let’s dig into it.

The Missing Index Recommendation Problems

Those of you who’ve been through my Fundamentals of Index Tuning class will have learned how Microsoft comes up with missing index recommendations, but I’mma be honest, dear reader, the quality of this one surprises even me:

Missing Index (Impact 99.6592): 
CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>] 
ON [dbo].[Users_Secured] ([Location]) 
INCLUDE ([AboutMe],[Age],[CreationDate],[DisplayName],[DownVotes],
[EmailHash],[LastAccessDate],[Reputation],[UpVotes],[Views],
[WebsiteUrl],[AccountId],[IsDeleted])

The index simply ignores the IsDeleted and Reputation columns, even though they’d both be useful to have in the key! Missing index recommendations are squarely focused on the WHERE clause filters the query passed in, not on the filters SQL Server applies behind the scenes for row-level security. Ouch.

Let’s do what a user would do: try creating the recommended index on both tables – even though the number of include columns is ridiculous – and then try again:

CREATE NONCLUSTERED INDEX Location_Includes 
    ON [dbo].[Users] ([Location]) 
    INCLUDE ([AboutMe],[Age],[CreationDate],[DisplayName],[DownVotes],
    [EmailHash],[LastAccessDate],[Reputation],[UpVotes],[Views],
    [WebsiteUrl],[AccountId],[IsDeleted]);
GO
CREATE NONCLUSTERED INDEX Location_Includes 
    ON [dbo].[Users_Secured] ([Location]) 
    INCLUDE ([AboutMe],[Age],[CreationDate],[DisplayName],[DownVotes],
    [EmailHash],[LastAccessDate],[Reputation],[UpVotes],[Views],
    [WebsiteUrl],[AccountId],[IsDeleted]);
GO
SELECT TOP 101 *
    FROM dbo.Users
    WHERE Location = N'Las Vegas, NV'
    ORDER BY Reputation DESC;

SELECT TOP 101 *
    FROM dbo.Users_Secured
    WHERE Location = N'Las Vegas, NV'
    ORDER BY Reputation DESC;
GO

Our actual execution plans are back to looking identical:

With a covering index

Neither of them require parallelism because we can dive into Las Vegas, and read all of the folks there, filtering out the appropriate IsDeleted rows, and then sort the remainder, all on one CPU core, in a millisecond. The cost is just that we literally doubled the table’s size because the missing index recommendation included every single column in the table!

A More Realistic Single-Column Index

When faced with an index recommendation that includes all of the table’s columns, most DBAs would either lop off all the includes and just use the keys, or hand-review the query to hand-craft a recommended index. Let’s start by dropping the old indexes, and creating new ones with only the key column that Microsoft had recommended:

CREATE INDEX Location ON dbo.Users(Location);
DROP INDEX Location_Includes ON dbo.Users;
CREATE INDEX Location ON dbo.Users_Secured(Location);
DROP INDEX Location_Includes ON dbo.Users_Secured;
GO
SELECT TOP 101 *
    FROM dbo.Users
    WHERE Location = N'Las Vegas, NV'
    ORDER BY Reputation DESC;

SELECT TOP 101 *
    FROM dbo.Users_Secured
    WHERE Location = N'Las Vegas, NV'
    ORDER BY Reputation DESC;
GO

Both queries now perform identically, as their actual execution plans show:

Key lookup plan 1

Summary: Single-Threaded is Bad, but Indexes Help.

The database’s built-in row-level security is a really cool (albeit underused) feature to help you accomplish business goals faster, without trying to roll your own code. Yes, it does have limitations, like inhibiting parallelism and making indexing more challenging, but don’t let that stop you from investigating it. Just know you’ll have to spend a little more time doing performance tuning down the road.

In this case, we’re indexing not to reduce reads, but to avoid doing a lot of work on a single CPU core. Our secured table still can’t go parallel, but thanks to the indexes, the penalty of row-level security disappears for this particular query.

Experienced readers will notice that there are a lot of topics I didn’t cover in this post: whether to index for the IsDeleted column, the effect of residual predicates on IsDeleted and Reputation, and how CPU and storage are affected. However, just as Microsoft left off the parallelism thing to keep their blog post tightly scoped, I gotta keep mine scoped too! This is your cue to pick up this blog post with anything you’re passionate about, and extend it to cover the topics you wanna teach today.


A Look Ahead at Azure Cosmos DB Conf 2026: From AI Agents to Global Scale


Azure Cosmos DB Conf 2026 – April 28, 9:00 AM to 2:00 PM PST

Join us for Azure Cosmos DB Conf 2026, a free global, virtual developer event focused on building modern applications with Azure Cosmos DB.

This year, Azure Cosmos DB Conf will feature 21 speakers from across the globe, bringing together Microsoft engineers, community leaders, architects, and developers to share how they are building modern applications with Azure Cosmos DB. Attendees will hear directly from experts using Azure Cosmos DB to power real systems—from AI agent memory architectures and retrieval-augmented generation pipelines to globally distributed event-driven microservices and cost-efficient high-scale workloads.

You can also expect talks exploring Open Source DocumentDB and Azure DocumentDB (with MongoDB compatibility), demonstrating how developers can build portable architectures that run both on-premises and in the cloud while maintaining full compatibility with the MongoDB developer ecosystem.

Still curious about what you’ll see? Below, you can watch a short recap from Azure Cosmos DB Conf 2025 to get a sense of the technical depth and real‑world focus that shape the conference.

An Expanded Azure Cosmos DB Conf, Powered by AMD


This year’s event is our biggest yet. Thanks to our partnership with AMD, Azure Cosmos DB Conf has expanded from three hours to five hours of live programming, giving us more time for deep technical sessions, product insights, and real‑world engineering stories. That includes a behind‑the‑scenes look at how Azure Cosmos DB runs at planetary scale, with Andrew Liu, Principal GPM, Azure Cosmos DB at Microsoft, walking through the internal architecture that powers request processing, replication, and high availability on AMD‑powered infrastructure across Azure datacenters. From data placement and partitioning to quorum‑based replication and the mechanics behind Request Units and serverless execution, this session trades slides for whiteboards and focuses on how the system actually works under the hood.

Keynote: Azure Cosmos DB Platform Evolution and Real‑World Learnings

Evolving the Azure Cosmos DB Platform

In the opening keynote, Kirill Gavrylyuk, Vice President of Azure Cosmos DB at Microsoft, will highlight how the platform continues to evolve to support modern production workloads. Over the past year, Azure Cosmos DB has delivered meaningful improvements across AI‑driven applications, retrieval workloads, performance, reliability, and developer productivity—including advances in vector indexing, full‑text and hybrid search, workload control, and security and backup capabilities that help teams build faster and operate with confidence at scale.

From Production Systems to Open Architectures

The keynote will also highlight how developers are applying these capabilities across the broader Azure Cosmos DB ecosystem in real production environments. Kirill will share success stories from teams using Azure Cosmos DB, Azure DocumentDB (with MongoDB compatibility), and the open-source DocumentDB project, part of the Linux Foundation, to meet different architectural and operational requirements, including cloud and hybrid deployments, AI applications, real-time analytics, and mission-critical workloads. These examples reflect how developers choose the right option for their scenario while maintaining consistent performance characteristics, scalability, and operational reliability as systems grow in complexity and scale.

A look at some of the 25-minute sessions at Azure Cosmos DB Conf 2026

Azure Cosmos DB Conf 2026 includes sessions from developers and engineers who are building modern systems with Azure Cosmos DB today. Here are just a few of the talks you’ll see during the event.

AI, agents, and intelligent retrieval

Modern AI applications require more than a vector database—they need persistent memory, coordination between agents, and scalable retrieval systems. Several sessions at Azure Cosmos DB Conf 2026 will explore how developers are using Azure Cosmos DB to power these new AI architectures.

Farah Abdou, Lead Machine Learning Engineer, SmartServe — Cutting AI Agent Costs with Azure Cosmos DB: The Agent Memory Fabric

In this session, Farah Abdou will demonstrate how Azure Cosmos DB can serve as a unified Agent Memory Fabric for multi-agent AI systems. By combining vector search for semantic caching, Change Feed for event-driven coordination, and optimistic concurrency for conflict prevention, this architecture enables faster agent collaboration while reducing operational complexity.
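The optimistic-concurrency leg of that pattern is easy to sketch. The `MemoryStore` below is an in-memory stand-in for a Cosmos DB container (the real SDK expresses the same idea through ETags and if-match conditions); every name in it is illustrative, not the session's actual code.

```python
import itertools

class ConflictError(Exception):
    """Raised when a write's ETag no longer matches the stored document."""

class MemoryStore:
    """In-memory stand-in for a Cosmos DB container with ETag checks."""
    def __init__(self):
        self._docs = {}                      # id -> (etag, body)
        self._etags = itertools.count(1)

    def read(self, doc_id):
        etag, body = self._docs.get(doc_id, (None, {}))
        return etag, dict(body)

    def replace(self, doc_id, body, if_match):
        current_etag, _ = self._docs.get(doc_id, (None, None))
        if current_etag != if_match:
            raise ConflictError(doc_id)      # another agent wrote first
        new_etag = str(next(self._etags))
        self._docs[doc_id] = (new_etag, dict(body))
        return new_etag

def append_memory(store, doc_id, note, max_retries=5):
    """Read-modify-write with retry: conflicts become retries, not lost updates."""
    for _ in range(max_retries):
        etag, body = store.read(doc_id)
        body.setdefault("notes", []).append(note)
        try:
            return store.replace(doc_id, body, if_match=etag)
        except ConflictError:
            continue                         # someone else wrote; re-read and retry
    raise RuntimeError("gave up after repeated conflicts")
```

The point of the pattern is that concurrent agents never silently overwrite each other's memory: a stale ETag turns into a re-read and retry instead of a lost update.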

Varun Kumar Kotte, Senior Machine Learning Engineer, Adobe — Production RAG on Azure Cosmos DB: Version-Aware Enterprise QA

Varun Kumar Kotte will present a production architecture behind Adobe’s AI Assistant, showing how Azure Cosmos DB supports version-aware retrieval across large enterprise documentation sets. The talk explores how RAG pipelines can maintain semantic accuracy while serving hundreds of thousands of documents with low-latency responses.
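One way to read "version-aware retrieval" — purely a sketch of the idea, not Adobe's implementation — is that candidate chunks are filtered down to each document's newest version before any similarity ranking happens, so answers never mix stale and current documentation:

```python
def latest_version_chunks(chunks):
    """Keep only chunks belonging to each document's highest version."""
    latest = {}
    for c in chunks:
        doc = c["doc_id"]
        if doc not in latest or c["version"] > latest[doc]:
            latest[doc] = c["version"]
    # Drop every chunk that belongs to a superseded version.
    return [c for c in chunks if c["version"] == latest[c["doc_id"]]]
```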

Performance, cost, and operating at scale

Running Azure Cosmos DB workloads at scale requires strong visibility into performance, partitioning strategy, and RU consumption. These sessions focus on diagnosing real-world issues and optimizing systems as workloads grow.

Patrick Oguaju, Software Developer, Next — Designing Cost-Efficient, High-Scale Systems with Azure Cosmos DB

Patrick Oguaju will walk through how a production system was redesigned to dramatically reduce Azure Cosmos DB costs without sacrificing performance or reliability. The session covers practical data modeling patterns, indexing tradeoffs, and observability techniques used to identify hidden cost drivers. You’ll see how small design decisions—such as partitioning strategy and query patterns—can have a significant impact on throughput consumption. By the end of the talk, you’ll have a clearer understanding of how to diagnose cost issues and apply proven techniques to keep your own workloads efficient at scale.
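A toy model makes the partitioning point concrete. In Cosmos DB, a query that carries its partition key can be routed to a single physical partition, while one that doesn't must fan out to every partition and consume RUs on each; the sketch below models only that routing decision, with illustrative names throughout.

```python
def route_query(partitions, partition_key_value=None):
    """Return the list of physical partitions a query must touch."""
    if partition_key_value is not None:
        # Key present in the filter: hash to exactly one partition.
        return [hash(partition_key_value) % partitions]
    # No partition key in the filter: fan out to every partition.
    return list(range(partitions))
```

Even in this toy form, the cost asymmetry is visible: a keyed lookup touches one partition regardless of scale, while a cross-partition query grows linearly with the container's physical partition count.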

Anurag Dutt, Multi-Cloud Engineer — From Rising RU Costs to Stable Performance: A Practical Cosmos DB Case Study

Anurag Dutt will examine a real high-volume workload and demonstrate how issues like hot partitions, cross-partition queries, and inefficient indexing were identified and corrected. The session walks through the decision-making process behind each optimization and the resulting improvements in cost and latency.
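Diagnosing a hot partition often starts with nothing fancier than counting requests per partition-key value and flagging outliers. A minimal sketch (illustrative, not the tooling from the talk):

```python
from collections import Counter

def find_hot_partitions(request_log, threshold=0.3):
    """Flag partition-key values receiving an outsized share of requests.

    request_log is a sequence of partition-key values, one per request;
    keys at or above `threshold` of total traffic are reported as hot.
    """
    counts = Counter(request_log)
    total = len(request_log)
    return {key for key, n in counts.items() if n / total >= threshold}
```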

Distributed systems and event-driven architecture

Azure Cosmos DB often acts as the operational backbone for distributed systems that coordinate work across services and process real-time events.

Eric Boyd, Founder & CEO, responsiveX — Distributed Locks, Sagas, and Coordination with Cosmos DB

Eric Boyd will explore how Azure Cosmos DB can be used as a coordination layer for distributed workflows. The session covers lock and lease patterns, saga orchestration, and strategies for handling retries, contention, and race conditions across multiple regions. He also shares practical guidance on when to centralize coordination versus when to push responsibility to services themselves. Attendees will leave with concrete patterns they can apply to build more resilient, globally distributed systems without introducing unnecessary coupling or operational overhead.
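The lease half of that pattern can be sketched with a TTL: a lock is just a record that expires unless its owner renews it, so a crashed holder can never block the system forever. The in-memory `LeaseStore` below stands in for lock documents with a time-to-live in Cosmos DB; names and shapes are illustrative.

```python
import time

class LeaseStore:
    """In-memory sketch of TTL-based distributed leases."""
    def __init__(self, clock=time.monotonic):
        self._leases = {}   # resource -> (owner, expires_at)
        self._clock = clock  # injectable for testing

    def try_acquire(self, resource, owner, ttl_seconds):
        """Take the lease if it is free, expired, or already ours."""
        now = self._clock()
        holder = self._leases.get(resource)
        if holder and holder[1] > now and holder[0] != owner:
            return False                      # live lease held by someone else
        self._leases[resource] = (owner, now + ttl_seconds)
        return True

    def release(self, resource, owner):
        """Release only if we still hold the lease."""
        holder = self._leases.get(resource)
        if holder and holder[0] == owner:
            del self._leases[resource]
```

In a real multi-region deployment the acquire step must itself be a conditional write (create-if-absent or ETag-guarded replace), otherwise two callers can both believe they won the race.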

Tural Suleymani, Engineering Manager, VOE Consulting — Designing High-Scale Event-Driven Microservices with Azure Cosmos DB

Tural Suleymani will share lessons learned from building event-driven microservices using Azure Cosmos DB Change Feed as the backbone for domain events. The session demonstrates patterns for building scalable, loosely coupled systems while maintaining observability and resilience.
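Conceptually, the change feed is an ordered log of writes plus a per-consumer position. The sketch below models only that contract — the real SDK's change feed processor manages continuation tokens and leases for you, and all names here are illustrative:

```python
def drain_feed(feed, checkpoint, handler):
    """Process items appended since the last checkpoint, then advance it.

    feed:       append-only list standing in for a container's change feed
    checkpoint: mutable dict holding this consumer's position
    handler:    callback per item, e.g. publish a domain event or update a view
    """
    position = checkpoint.get("position", 0)
    for item in feed[position:]:
        handler(item)
    checkpoint["position"] = len(feed)
    return checkpoint
```

Because the checkpoint only advances after the handler runs, each consumer sees every write at least once and in order, which is the property event-driven designs lean on.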

Developer productivity and modern development workflows

Developer tooling and workflows play a critical role in building reliable distributed systems. These sessions explore how modern development environments and secure architectures help teams move faster.

Sajeetharan Sinnathurai, Principal Program Manager, Microsoft — Setting Up Your Azure Cosmos DB Development Environment (and Supercharging It with AI)

Sajeetharan Sinnathurai will show how to configure a complete Azure Cosmos DB development workflow using the emulator, VS Code tooling, testing strategies, and AI coding assistants. The session demonstrates how developers can accelerate development while maintaining best practices for Cosmos DB applications.

Pamela Fox, Principal Cloud Advocate (Python), Microsoft — Know Your User: Identity-Aware MCP Servers with Cosmos DB

Pamela Fox will demonstrate how to build a Python MCP server that authenticates users with Microsoft Entra ID and securely stores per-user data in Azure Cosmos DB. The session highlights identity-first architectures for AI systems and modern cloud applications.
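The storage side of an identity-first design reduces to one rule: every read and write is scoped by the authenticated user's id, which in Cosmos DB terms also makes a natural partition key. A hedged in-memory sketch of that rule (not the session's code):

```python
class UserScopedStore:
    """Per-user data access: every operation is scoped to the caller's id."""
    def __init__(self):
        self._items = {}   # (user_id, item_id) -> body

    def upsert(self, user_id, item_id, body):
        # The user id is part of the key, so cross-user writes are impossible.
        self._items[(user_id, item_id)] = body

    def list_for(self, user_id):
        # Only this user's partition is ever visible to a query.
        return [b for (u, _), b in self._items.items() if u == user_id]
```

Making the user id the partition key gives the same guarantee operationally: a correctly scoped query physically cannot read another user's data, and point reads stay single-partition.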

Migration and hybrid architectures

Modernizing applications often requires bridging existing systems with new cloud-native platforms. Azure Cosmos DB Conf 2026 includes sessions exploring migration strategies and hybrid deployment models.

Sergiy Smyrnov, Senior Specialist, Data & AI Global Black Belt, Microsoft — From JOINs to JSON: Migrating a Real-World ASP.NET App to Cosmos DB with GitHub Copilot

Sergiy Smyrnov will demonstrate how an AI-assisted workflow can analyze a relational schema, generate a migration plan, and convert an ASP.NET application to run on Azure Cosmos DB for NoSQL. Using the classic AdventureWorks database and a real ASP.NET application, the session walks through the full migration journey—from relational modeling and schema analysis to provisioning Azure Cosmos DB infrastructure and rewriting the application data layer.
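The core modeling move in such a migration is replacing JOINs with embedding: rows joined at query time in the relational schema become one denormalized JSON document at write time. A small illustrative sketch (the names are invented, not from the session):

```python
def embed_order(order, lines_by_order, products_by_id):
    """Collapse an order row + its line rows + product rows into one document."""
    return {
        "id": order["id"],
        "customerId": order["customer_id"],
        "lines": [
            {
                # What a JOIN fetched per query is now copied in per write.
                "sku": products_by_id[line["product_id"]]["sku"],
                "qty": line["qty"],
            }
            for line in lines_by_order.get(order["id"], [])
        ],
    }
```

The trade is deliberate: reads become single point lookups with no JOINs, at the cost of duplicating product data that must be refreshed if it changes.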

Khelan Modi, Program Manager, Microsoft — One Codebase, Any Cloud: Building a Retail Database with OSS and Azure

Khelan Modi will demonstrate how developers can build applications using MongoDB-compatible APIs that run both on-premises and in the cloud. The session walks through a retail application architecture—including product catalog, inventory, orders, and vector-powered recommendations—built once and deployed using open-source DocumentDB and Azure DocumentDB. Khelan will show how the same drivers, queries, and application code can run across both environments without modification, enabling portable architectures while still benefiting from Azure’s managed capabilities.

Explore the full lineup

These sessions are just a sample of what you’ll find at Azure Cosmos DB Conf 2026. With 21 speakers from around the world, the conference offers a broad look at how developers are using Azure Cosmos DB, open-source DocumentDB, and Azure DocumentDB to build intelligent, distributed, and modern applications.

Visit the Azure Cosmos DB Conf 2026 website for the full speaker list, session details, conference news, and the latest event updates.

Be sure to register now for the live event on April 28, 2026. Registering will help keep you up to speed with email updates from the Azure Cosmos DB team, including speaker announcements, session updates, and other conference news.

We look forward to seeing you there!

The post A Look Ahead at Azure Cosmos DB Conf 2026: From AI Agents to Global Scale appeared first on Azure Cosmos DB Blog.
