Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Now Available: Monthly Subscriptions with a 12-Month Commitment

Figure: Screens showing the monthly subscriptions with a 12-month commitment feature, payment schedule, and commitment details for App Store subscriptions.

Today, we’re introducing a new way for people to pay for your auto-renewable subscriptions on the App Store: monthly subscriptions with a 12-month commitment. This new payment option lets you offer subscribers more affordable pricing. People can cancel their subscription at any time; cancelling stops the subscription from renewing once they’ve completed the payments required by their commitment.

To provide transparency, people can easily view the number of completed and remaining payments for the subscription in their Apple Account. Apple will also send email and, if opted in, push notifications ahead of their renewal date to remind them of their upcoming purchase.

Starting today, you can configure this type of subscription in App Store Connect and test it in Xcode. Monthly subscriptions with a 12-month commitment are available to people in the United States and Singapore on iOS 26.4, iPadOS 26.4, macOS Tahoe 26.4, and visionOS 26.4 or later, and will be available worldwide with the release of iOS 26.5, iPadOS 26.5, macOS Tahoe 26.5, and visionOS 26.5 in May.

Learn about configuring subscriptions

Learn about auto-renewable subscriptions


Combining API versioning with OpenAPI in .NET 10 applications


This is a guest blog from Sander ten Brinke, a Microsoft MVP and Senior Software Engineer, with a passion for building scalable and maintainable applications.

A lot has changed for ASP.NET Core when it comes to building APIs over the last couple of years. The introduction of Minimal APIs, alongside controllers, has made it easier than ever to get started with building APIs, with .NET 10’s support for built-in request validation making it an even stronger contender.

Even though building APIs has become easier, one aspect that remains crucial is API versioning. Proper versioning ensures that your API can evolve without breaking existing clients. API versioning has always been supported thanks to libraries like Asp.Versioning. But with the release of Microsoft.AspNetCore.OpenApi, Microsoft’s own OpenAPI library for ASP.NET Core, implementing versioning has changed — especially if you want an officially supported approach.

Microsoft’s package sets up OpenAPI with versioning in mind (due to the URL being /openapi/v1.json by default), but ASP.NET Core doesn’t come with extensive built-in API versioning support. Since the release of Microsoft.AspNetCore.OpenApi in .NET 9, there have been a lot of questions online about how to integrate versioning without having to write a lot of custom code, duplicate OpenAPI and versioning definitions, and more.

In this post, we’ll walk through how to implement API versioning in .NET 10 applications — covering both controllers and Minimal APIs — while keeping your OpenAPI documentation accurate and up to date for each version. We’ll take the officially supported approach, keeping duplicate code and configuration to a minimum.

We’ll start with implementing API versioning without OpenAPI, and then we’ll integrate OpenAPI into our versioned API setup, showing how to generate separate OpenAPI documents for each API version. Finally, we’ll add SwaggerUI support using Swashbuckle.AspNetCore.SwaggerUI and Scalar support with Scalar.AspNetCore to visualize our versioned API documentation and discuss how to maintain it as your API evolves. By presenting API versioning in a step-based approach, it becomes clear what code changes each step requires.

To do this, we’ll use the brand new Asp.Versioning v10 package, also known as ASP.NET API Versioning, which is the first version to officially support both .NET 10 and the new built-in OpenAPI support, making the integration cleaner and simpler than ever before.

The importance of API versioning

But first, let’s make sure we understand the importance of API versioning. If you already understand API versioning, you can skip ahead to the next sections.

API versioning is essential for maintaining backward compatibility as your API evolves. It allows you to introduce new features, fix bugs, and make changes without disrupting existing clients. There are several strategies for versioning APIs, including:

  • URL Path Versioning (e.g., /api/v1/resource)
  • Query String Versioning (e.g., /api/resource?version=1.0)
  • Header Versioning (e.g., X-API-Version: 1.0)
  • Media Type Versioning (e.g., Accept: application/json; v=1.0)
    • This is less common in ASP.NET Core applications due to the need for custom media type formatters, but it is still a valid and widely-used approach in the industry. GitHub is a well-known example of an API that uses media type versioning.

The examples above use a v prefix with either a major version number or a major-minor version number. However, there are other versioning formats you can use, such as date-based versioning (e.g., 2026-03-01), status-based versioning (e.g., v1-beta), and more. The versioning format is entirely up to you, and choosing the right versioning strategy depends on your specific use case and client requirements.
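To make these strategies concrete, here is how a request for version 1.0 of a hypothetical /api/users endpoint looks on the wire under each of them (the path and header names are illustrative):

```http
# URL path versioning
GET /api/v1/users

# Query string versioning
GET /api/users?api-version=1.0

# Header versioning
GET /api/users
X-API-Version: 1.0

# Media type versioning
GET /api/users
Accept: application/json; v=1.0
```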

While you could implement API versioning yourself, using a library like Asp.Versioning simplifies the process significantly, providing built-in support for various versioning strategies and seamless integration with ASP.NET Core.

Note

This post focuses on Asp.Versioning v10.0.0, the first release to officially support both ASP.NET Core 10 and the new built-in OpenAPI library. The prior stable version, Asp.Versioning v8.x.x, will work with .NET 10 via implicit roll-forward, but v10 is purpose-built for the new OpenAPI integration and brings improvements and bug fixes — so it’s the recommended choice for .NET 10 applications.

We’ll look at how to set this up in both Minimal APIs and controllers in a bit. First, let’s explore why OpenAPI is important in this context.

About the code samples

All complete code samples in this post are formatted as file-based apps, a feature introduced in C# 14 and .NET 10 that lets you run .NET applications from a single .cs file — no project file required! Copy any complete sample to a .cs file and run it with dotnet <filename>.cs. The #:sdk and #:package directives at the top of each sample automatically configure the required SDK and NuGet packages. Make sure you have the .NET 10 SDK installed!

The changes to OpenAPI in .NET 9 and 10

Since .NET 9, Microsoft.AspNetCore.OpenApi has become the default way to generate OpenAPI documentation for ASP.NET Core applications, replacing Swashbuckle.AspNetCore. Setting it up is straightforward, and it seems geared for versioning out of the box, as the URL for accessing the OpenAPI document includes a version segment by default: /openapi/v1.json.

Note about OpenAPI tools

While Swashbuckle and NSwag are still viable and widely-used options for OpenAPI documentation in .NET, this post focuses on the newer built-in OpenAPI support.

If you haven’t set up OpenAPI in your .NET 9/10 application yet, here’s a quick example of how to do it:

#:property PublishAot=false
#:sdk Microsoft.NET.Sdk.Web
#:package Microsoft.AspNetCore.OpenApi@10.0.4

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddOpenApi();

var app = builder.Build();

// This sets up the OpenAPI endpoint at /openapi/v1.json
// If you'd prefer YAML, you can request the document with a .yaml extension instead
app.MapOpenApi();

app.MapGet("/users", () =>
{
    var users = new List<UserDto>
    {
        new(1, "Ada Lovelace", "ada@example.com"),
        new(2, "Grace Hopper", "grace@example.com"),
        new(3, "Conner Pilot", "copilot@example.com"),
    };

    return TypedResults.Ok<List<UserDto>>(users);
})
.WithName("GetUsers");

app.Run();

record UserDto(int Id, string Name, string Email);

Requesting /openapi/v1.json then returns a document like this:

{
  "openapi": "3.1.1",
  "info": {
    "title": "UsersApi | v1",
    "version": "1.0.0"
  },
  "servers": [
    {
      "url": "http://localhost:5055/"
    }
  ],
  "paths": {
    "/users": {
      "get": {
        "tags": ["UsersApi"],
        "operationId": "GetUsers",
        "responses": {
          "200": {
            "description": "OK",
            "content": {
              "application/json": {
                "schema": {
                  "type": "array",
                  "items": {
                    "$ref": "#/components/schemas/UserDto"
                  }
                }
              }
            }
          }
        }
      }
    }
  },
  "components": {
    "schemas": {
      "UserDto": {
        "required": ["id", "name", "email"],
        "type": "object",
        "properties": {
          "id": {
            "pattern": "^-?(?:0|[1-9]\\d*)$",
            "type": ["integer", "string"],
            "format": "int32"
          },
          "name": {
            "type": "string"
          },
          "email": {
            "type": "string"
          }
        }
      }
    }
  },
  "tags": [
    {
      "name": "UsersApi"
    }
  ]
}

You can also customize the OpenAPI document by using transformers and enrich your operations by using TypedResults. To learn about this and other approaches, check out the official documentation for Microsoft.AspNetCore.OpenApi.
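As a quick illustration, a document transformer can be registered through an overload of AddOpenApi. The sketch below assumes the Microsoft.OpenApi.Models types that Microsoft.AspNetCore.OpenApi builds on, and the contact details are placeholder values:

```csharp
using Microsoft.OpenApi.Models;

builder.Services.AddOpenApi(options =>
{
    // Document transformers run each time the OpenAPI document is generated.
    options.AddDocumentTransformer((document, context, cancellationToken) =>
    {
        document.Info.Contact = new OpenApiContact
        {
            Name = "API Support",         // placeholder value
            Email = "support@example.com" // placeholder value
        };
        return Task.CompletedTask;
    });
});
```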

An introduction to API versioning with Asp.Versioning

Before we dive into the specifics of how to set up API versioning with OpenAPI in .NET 10, let’s briefly introduce the Asp.Versioning library. This library provides a comprehensive solution for API versioning in ASP.NET Core applications, supporting various versioning strategies and seamless integration with both Minimal APIs and controllers. It has been widely adopted in the .NET community, with 800 million downloads across all of its packages combined!

Asp.Versioning is a collection of several libraries that you can use to add versioning for controllers, Minimal APIs, OData, and more. In this post, we’ll focus only on controllers and Minimal APIs, which require the following packages:

  • Controllers: Asp.Versioning.Mvc and Asp.Versioning.Mvc.ApiExplorer
  • Minimal APIs: Asp.Versioning.Http and Asp.Versioning.Mvc.ApiExplorer

The library has an interesting history, being part of the .NET Foundation and developed by Chris Martinez while he worked at Microsoft.

Note

Looking for the source code? All of the code in this post, and more, can be found in my sample repository on GitHub. For more samples, check out Asp.Versioning’s official samples.

To demonstrate how to set up API versioning in .NET 10, let’s create a simple sample application that includes a minimal amount of code required to get started. This will use query string versioning for the sake of simplicity, but you can easily swap to another strategy if you prefer, which will be covered later.

API versioning for controllers

Controllers use the Asp.Versioning.Mvc package, which provides a set of attributes and conventions to define API versions. You can specify the version for each controller or action using attributes like [ApiVersion("1.0")] and [ApiVersion("2.0")].

First, you have to set up the required services:

#:property PublishAot=false
#:sdk Microsoft.NET.Sdk.Web
#:package Asp.Versioning.Mvc@10.0.0

using Asp.Versioning;
using Microsoft.AspNetCore.Mvc;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();

builder.Services.AddApiVersioning()
    .AddMvc();

var app = builder.Build();

app.MapControllers();
app.Run();

// For a file-based app, controller classes go below app.Run()
// (see the next code snippet)

Then you need to add the versioning attributes to your controllers. A solid approach is to have one controller per version:

[ApiController]
[Route("api/users")]
[ApiVersion("1.0")]
public class UsersV1Controller : ControllerBase
{
    [HttpGet]
    public ActionResult<UserV1[]> Get()
    {
        return Ok(new[]
        {
            new UserV1(1, "John Doe"),
            new UserV1(2, "Alice Dewett"),
        });
    }
}

[ApiController]
[Route("api/users")]
[ApiVersion("2.0")]
public class UsersV2Controller : ControllerBase
{
    [HttpGet]
    public ActionResult<UserV2[]> Get()
    {
        return Ok(new[]
        {
            new UserV2(1, "John Doe", new DateOnly(1990, 1, 1)),
            new UserV2(2, "Alice Dewett", new DateOnly(1992, 2, 2)),
        });
    }
}

public record UserV1(int Id, string Name);
public record UserV2(int Id, string Name, DateOnly BirthDate);

Asp.Versioning supports query string versioning by default. You can now reach these endpoints by going to api/users?api-version=1.0 for the first version, and api/users?api-version=2.0 for the second version!
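With the app running, the two versions return different shapes, matching the UserV1 and UserV2 records (the localhost port and response formatting here are illustrative):

```http
GET http://localhost:5000/api/users?api-version=1.0
# → [{ "id": 1, "name": "John Doe" }, { "id": 2, "name": "Alice Dewett" }]

GET http://localhost:5000/api/users?api-version=2.0
# → [{ "id": 1, "name": "John Doe", "birthDate": "1990-01-01" }, ...]
```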

API versioning for Minimal APIs

Minimal APIs use the Asp.Versioning.Http package instead of Asp.Versioning.Mvc. This package provides extension methods to define API versions directly on the route groups. Before you do that, though, you’ll need to call NewVersionedApi to create a new API versioning group, which will allow you to define multiple versions in the route group.

#:property PublishAot=false
#:sdk Microsoft.NET.Sdk.Web
#:package Asp.Versioning.Http@10.0.0

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddApiVersioning();

var app = builder.Build();

var usersApi = app.NewVersionedApi("Users");

var usersv1 = usersApi.MapGroup("api/users").HasApiVersion("1.0");
var usersv2 = usersApi.MapGroup("api/users").HasApiVersion("2.0");

usersv1.MapGet("", () => TypedResults.Ok(new[]
{
    new UserV1(1, "John Doe"),
    new UserV1(2, "Alice Dewett"),
}));

usersv2.MapGet("", () => TypedResults.Ok(new[]
{
    new UserV2(1, "John Doe", new DateOnly(1990, 1, 1)),
    new UserV2(2, "Alice Dewett", new DateOnly(1992, 2, 2)),
}));

app.Run();

record UserV1(int Id, string Name);
record UserV2(int Id, string Name, DateOnly BirthDate);

Just like with controllers, you can reach these endpoints by going to api/users?api-version=1.0 for the first version, and api/users?api-version=2.0 for the second version!

How should you organize your API versions?

Adding all the API groups and versions in the Program.cs file can quickly become unmanageable as your API grows. I really like the approach from one of Asp.Versioning’s example projects, which keeps Program.cs focused and easy to scan.

app.MapUsers().ToV1().ToV2().ToV3();
app.MapScores().ToV1().ToV2().ToV3();

Here, MapUsers() and MapScores() are extension methods that call app.NewVersionedApi(), and ToV1(), ToV2(), etc. are extension methods that define the versioned route groups and endpoints. This way, you can keep your Program.cs file clean and organized, and you can easily find and manage your API versions.
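A sketch of what such extension methods might look like for the Users endpoints from earlier; MapUsers, ToV1, and ToV2 are our own names, not Asp.Versioning APIs:

```csharp
using Asp.Versioning.Builder;

public static class UsersEndpoints
{
    // One versioned API group that all /api/users versions hang off.
    public static IVersionedEndpointRouteBuilder MapUsers(this IEndpointRouteBuilder app)
        => app.NewVersionedApi("Users");

    public static IVersionedEndpointRouteBuilder ToV1(this IVersionedEndpointRouteBuilder usersApi)
    {
        var group = usersApi.MapGroup("api/users").HasApiVersion("1.0");
        group.MapGet("", () => TypedResults.Ok(new[] { new UserV1(1, "John Doe") }));
        return usersApi; // return the builder so calls can be chained
    }

    public static IVersionedEndpointRouteBuilder ToV2(this IVersionedEndpointRouteBuilder usersApi)
    {
        var group = usersApi.MapGroup("api/users").HasApiVersion("2.0");
        group.MapGet("", () => TypedResults.Ok(new[] { new UserV2(1, "John Doe", new DateOnly(1990, 1, 1)) }));
        return usersApi;
    }
}
```

Program.cs then only needs app.MapUsers().ToV1().ToV2();.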

For controller-based projects, Asp.Versioning supports convention-based versioning such as Version by .NET Namespace. See the documentation and example in the repo for more information.

Changing the versioning strategy

In the examples above, we used query string versioning for simplicity, but Asp.Versioning supports various versioning strategies, and you can easily switch between them by configuring the API versioning options. Let’s take a look at implementing URL and header versioning.

URL versioning

To swap to URL versioning, you need to change AddApiVersioning:

builder.Services.AddApiVersioning(options =>
{
    // API versioning by URL segment (api/v1/users)
    options.ApiVersionReader = new UrlSegmentApiVersionReader();
});

Now, you can use api/v1/users in the URL to go to the first version of the API, and api/v2/users to go to the second version!

URL versioning is a popular choice because it makes the version explicit in the URL, so it’s easy to see which version you’re calling. However, clients need to update their URLs when a new version is released. Another downside is that it isn’t “truly” RESTful: a URL should identify a resource, and a new version of a resource is still the same resource, so ideally the URL shouldn’t change. That said, this is a common and widely accepted approach that works well in many scenarios, so it’s a good option to consider.

Header versioning

If you want to use header versioning, you can change the setup to this:

builder.Services.AddApiVersioning(options =>
{
    // API versioning by header (X-API-Version: 1.0)
    options.ApiVersionReader = new HeaderApiVersionReader("X-API-Version");
});

Header, query string, and media type versioning are more RESTful: the URL identifies the resource, and the version travels in the header, query string, or content type. Clients can call the same URL regardless of the version and simply specify the version they want to use.

This approach, just like query string versioning, makes it possible for a client to forget to specify a version. To deal with this, you can set a default API version, which will be used when no version is specified, by setting the DefaultApiVersion property in the API versioning options:

builder.Services.AddApiVersioning(options =>
{
    // Set the default API version to 1.0 explicitly
    // This is already set to 1.0 by default, but shown here for demonstration
    options.DefaultApiVersion = new ApiVersion(1, 0);

    // If the user does not specify a version, you can let the API use the default version
    // This is disabled by default.
    // Enabling this feature is a trade-off between convenience and explicitness.
    // Changing the default version could break clients that aren't using versioning.
    // Consider your API's audience and usage patterns when deciding to enable this.
    options.AssumeDefaultVersionWhenUnspecified = true;
});

It’s also possible to combine multiple versioning strategies, for example allowing clients to use either query or header versioning, which can be useful for supporting different types of clients. To do this, you can use the ApiVersionReader.Combine method:

builder.Services.AddApiVersioning(options =>
{
    options.ApiVersionReader = ApiVersionReader.Combine(
        new QueryStringApiVersionReader("api-version"),
        new HeaderApiVersionReader("X-API-Version")
    );
});

And now that you understand the basics of API versioning, it’s time to put OpenAPI and API versioning together! Keep in mind that there are many more features to explore in Asp.Versioning, so make sure to check out the official documentation and the samples!

Combining API versioning with OpenAPI in .NET 10

Note

Asp.Versioning.OpenApi v10.0.0-rc.1 is currently in Release Candidate. See the release notes for details.

This section will also cover both controllers and Minimal APIs. As discussed at the beginning of this post, Asp.Versioning v10.0.0 introduces a new package that can be used to integrate API versioning with OpenAPI in a clean and simple way, without having to write a lot of custom code or duplicate configuration: Asp.Versioning.OpenApi. This package is required for both controllers and Minimal APIs, and it provides a set of extension methods to generate OpenAPI documentation for each API version.

We’ll update the samples we created in the previous sections to include OpenAPI documentation for each version of the API with the query string versioning strategy. We’ll also focus on one document per version, which is the recommended approach for versioning your OpenAPI documentation, as it allows clients to easily access the documentation for the specific version of the API they are using, without having to filter through a single document that contains all versions.

Setting up API versioning with OpenAPI for controllers

Combining OpenAPI with API versioning for controllers requires the following changes to the setup:

  • You must call AddApiExplorer after AddApiVersioning to ensure that the API versioning information is included in the OpenAPI document.
    • The API Explorer is ASP.NET Core’s built-in service for discovering and describing the API endpoints in your application. By adding it after AddApiVersioning, you ensure that the versioning information is included in the API descriptions, which is crucial for generating accurate OpenAPI documentation.
  • You must call AddOpenApi from the Asp.Versioning namespace after activating API versioning to ensure that you use the correct variant of AddOpenApi that integrates with API versioning.
  • We call WithDocumentPerVersion() after MapOpenApi() to generate a separate OpenAPI document for each API version. This saves us from calling AddOpenApi() once per version, which would mean updating both the controller attributes and the OpenAPI configuration every time a new version is added.

#:property PublishAot=false
#:sdk Microsoft.NET.Sdk.Web
#:package Asp.Versioning.Mvc@10.0.0
#:package Asp.Versioning.Mvc.ApiExplorer@10.0.0
#:package Asp.Versioning.OpenApi@10.0.0-rc.1

using Asp.Versioning;
using Microsoft.AspNetCore.Mvc;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();

// We don't need to customize the API versioning options for this example as we are using query string versioning.
builder.Services.AddApiVersioning()
.AddApiExplorer(options =>
{
    // Calling "AddApiExplorer" is required for OpenAPI versioning to work correctly.
    // Without this, the generated OpenAPI documents will not be versioned.

    // GroupNameFormat specifies the format of the API version.
    // Without this, versioning will use the literal group names. In our case, that would be 1.0.
    // For compatibility with the "default" /openapi/v1.json behavior from Microsoft.AspNetCore.OpenApi, we use v'VVV' so we can retrieve it using v1.json.
    // See https://github.com/dotnet/aspnet-api-versioning/wiki/Version-Format#custom-api-version-format-strings for more information about formatting API versions.
    options.GroupNameFormat = "'v'VVV";
})
.AddMvc()
// You must call "AddOpenApi" after "AddApiVersioning" to ensure you use Asp.Versioning's variant.
// This variant of "AddOpenApi" is required to properly integrate with API versioning and generate versioned OpenAPI documents.
// You can call an overload of "AddOpenApi" to customize the OpenAPI generation, just like you would with Microsoft.AspNetCore.OpenApi's "AddOpenApi".
.AddOpenApi();

var app = builder.Build();

// WithDocumentPerVersion() is an extension method provided by the Asp.Versioning.OpenApi package.
// It configures the OpenAPI endpoint to generate a separate document for each API version.
// This allows clients to retrieve documentation specific to the version of the API they are using.
// This approach is preferable compared to having to call "services.AddOpenApi()" multiple times for each version, which can lead to maintenance issues and potential misconfigurations when adding new versions.
app.MapOpenApi().WithDocumentPerVersion();

app.MapControllers();

app.Run();

// For a file-based app, paste the controller classes from
// the "API versioning for controllers" section below app.Run()

You can now retrieve your versioned OpenAPI documents at /openapi/v1.json and /openapi/v2.json for the first and second version of the API, respectively!

Setting up API versioning with OpenAPI for Minimal APIs

Next up, Minimal APIs! Luckily, the code is exactly the same as for controllers, except that we don’t need to call AddMvc. In case you do want to see it:

#:property PublishAot=false
#:sdk Microsoft.NET.Sdk.Web
#:package Asp.Versioning.Http@10.0.0
#:package Asp.Versioning.Mvc.ApiExplorer@10.0.0
#:package Asp.Versioning.OpenApi@10.0.0-rc.1

using Asp.Versioning;

var builder = WebApplication.CreateBuilder(args);

// We don't need to customize the API versioning options for this example as we are using query string versioning.
builder.Services.AddApiVersioning()
.AddApiExplorer(options =>
{
    // Calling "AddApiExplorer" is required for OpenAPI versioning to work correctly.
    // Without this, the generated OpenAPI documents will not be versioned.

    // GroupNameFormat specifies the format of the API version.
    // Without this, versioning will use the literal group names. In our case, that would be 1.0.
    // For compatibility with the "default" /openapi/v1.json behavior from Microsoft.AspNetCore.OpenApi, we use v'VVV' so we can retrieve it using v1.json.
    // See https://github.com/dotnet/aspnet-api-versioning/wiki/Version-Format#custom-api-version-format-strings for more information about formatting API versions.
    options.GroupNameFormat = "'v'VVV";
})
// You must call "AddOpenApi" after "AddApiVersioning" to ensure you use Asp.Versioning's variant.
// This variant of "AddOpenApi" is required to properly integrate with API versioning and generate versioned OpenAPI documents.
// You can call an overload of "AddOpenApi" to customize the OpenAPI generation, just like you would with Microsoft.AspNetCore.OpenApi's "AddOpenApi".
.AddOpenApi();

var app = builder.Build();

// WithDocumentPerVersion() is an extension method provided by the Asp.Versioning.OpenApi package.
// It configures the OpenAPI endpoint to generate a separate document for each API version.
// This allows clients to retrieve documentation specific to the version of the API they are using.
// This approach is preferable compared to having to call "services.AddOpenApi()" multiple times for each version, which can lead to maintenance issues and potential misconfigurations when adding new versions.
app.MapOpenApi().WithDocumentPerVersion();

// Paste the API endpoints and records from the "API versioning for Minimal APIs" section here,
// then add `app.Run();` at the end.

Now you know how to set up API versioning with OpenAPI for both controllers and Minimal APIs in .NET 10!

Adding SwaggerUI and Scalar support for versioned APIs

Now that we have our versioned OpenAPI documents, we can add support for visualizing them using tools like SwaggerUI and Scalar. Both of these tools allow you to visualize your API documentation in a user-friendly way, making it easier for developers to understand and interact with your API.

SwaggerUI used to be included by default in ASP.NET Core applications thanks to Swashbuckle.AspNetCore, a NuGet package included in ASP.NET Core project templates. This is no longer the case since ASP.NET Core 9 with the introduction of Microsoft.AspNetCore.OpenApi.

Scalar is a newer tool that provides a more modern and customizable interface for visualizing OpenAPI documentation, and it can be added to your project using the Scalar.AspNetCore NuGet package. Performance-wise, Scalar is more efficient than SwaggerUI, but both tools are great options for visualizing your API documentation, and the choice between them depends on your specific needs and preferences.

Adding SwaggerUI support

To add SwaggerUI support for your versioned APIs, you can use the Swashbuckle.AspNetCore.SwaggerUI package, which provides middleware to serve the SwaggerUI interface. Unlike the full Swashbuckle.AspNetCore package, this only includes the UI component and does not include OpenAPI document generation, as we are using Microsoft.AspNetCore.OpenApi for that. You can configure this package to point to your versioned OpenAPI documents, allowing developers to easily explore and test your API endpoints.

The setup required is the same for both controllers and Minimal APIs. We’ll cover the setup for Minimal APIs, but the same code can be used for controllers as well.

#:property PublishAot=false
#:sdk Microsoft.NET.Sdk.Web
#:package Asp.Versioning.Http@10.0.0
#:package Asp.Versioning.Mvc.ApiExplorer@10.0.0
#:package Asp.Versioning.OpenApi@10.0.0-rc.1
#:package Swashbuckle.AspNetCore.SwaggerUI@10.1.4

using Asp.Versioning;
using Asp.Versioning.ApiExplorer;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddApiVersioning()
.AddApiExplorer(options =>
{
    options.GroupNameFormat = "'v'VVV";
})
.AddOpenApi();

var app = builder.Build();

app.MapOpenApi().WithDocumentPerVersion();

// Paste the API endpoints and records from the "API versioning for Minimal APIs" section here

// UseSwaggerUI MUST come after MapOpenApi() and the API endpoint definitions.
app.UseSwaggerUI(options =>
{
    // We reverse the list of API versions so the newest version is rendered first
    foreach (var description in app.DescribeApiVersions().Reverse())
    {
        options.SwaggerEndpoint(
            $"/openapi/{description.GroupName}.json",
            description.GroupName.ToUpperInvariant());
    }
});

app.Run();

We’ve now added SwaggerUI support by calling app.UseSwaggerUI() and configuring it to point to our versioned OpenAPI documents, based on the API versions described in our application, which we retrieve using app.DescribeApiVersions(). We can visit the SwaggerUI interface at /swagger to explore and test our API endpoints!

SwaggerUI showing versioned API documentation with two API versions in the dropdown.

Figure: SwaggerUI with versioned API documentation

Adding Scalar support

Next, we can add Scalar support for our versioned APIs using the Scalar.AspNetCore package. This package provides middleware to serve the Scalar interface, which can be configured to point to your versioned OpenAPI documents, similar to how we set up SwaggerUI.

Again, the setup is the same for both controllers and Minimal APIs. We’ll cover the setup for Minimal APIs, but the same code can be used for controllers as well.

#:property PublishAot=false
#:sdk Microsoft.NET.Sdk.Web
#:package Asp.Versioning.Http@10.0.0
#:package Asp.Versioning.Mvc.ApiExplorer@10.0.0
#:package Asp.Versioning.OpenApi@10.0.0-rc.1
#:package Scalar.AspNetCore@2.13.0

using Asp.Versioning;
using Asp.Versioning.ApiExplorer;
using Scalar.AspNetCore;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddApiVersioning()
.AddApiExplorer(options =>
{
    options.GroupNameFormat = "'v'VVV";
})
.AddOpenApi();

var app = builder.Build();

app.MapOpenApi().WithDocumentPerVersion();

// Paste the API endpoints and records from the "API versioning for Minimal APIs" section here

// MapScalarApiReference sets up the Scalar UI at /scalar
// AddDocuments registers all known API versions so Scalar shows a dropdown to switch between them.
// You can enrich your OpenAPI document with Scalar specific integrations if you wish.
// To learn more: https://scalar.com/products/api-references/integrations/aspnetcore/openapi-extensions
app.MapScalarApiReference(options =>
{
    var descriptions = app.DescribeApiVersions();

    for (var i = 0; i < descriptions.Count; i++)
    {
        var description = descriptions[i];
        var isDefault = i == descriptions.Count - 1;

        // isDefault is used to mark the default API version in Scalar.
        // This decides which version is selected by default when users visit the Scalar UI.
        options.AddDocument(description.GroupName, description.GroupName, isDefault: isDefault);
    }
});

app.Run();

With app.MapScalarApiReference(), we register Scalar and feed it the same versioned documents via app.DescribeApiVersions(). Visit /scalar to browse and test your endpoints. Scalar is also highly configurable — select Configure in the top-right corner to tweak the theme, layout, and more.

Scalar UI showing two versions of the API in the dropdown menu.

Figure: Scalar with versioned API documentation

Can't decide between SwaggerUI and Scalar?

If you’re having trouble deciding between SwaggerUI and Scalar, you can actually use both! Both tools can be configured to point to your versioned OpenAPI documents, allowing developers to choose their preferred interface for exploring and testing your API endpoints. You can set up SwaggerUI at /swagger and Scalar at /scalar, giving developers the flexibility to use the tool they are most comfortable with.
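Building on the two previous samples, serving both UIs side by side is just a matter of registering each one against the same versioned documents (this sketch assumes the same versioning and OpenAPI setup as before):

```csharp
app.MapOpenApi().WithDocumentPerVersion();

// SwaggerUI at /swagger, newest version listed first
app.UseSwaggerUI(options =>
{
    foreach (var description in app.DescribeApiVersions().Reverse())
    {
        options.SwaggerEndpoint(
            $"/openapi/{description.GroupName}.json",
            description.GroupName.ToUpperInvariant());
    }
});

// Scalar at /scalar, pointing at the same documents
app.MapScalarApiReference(options =>
{
    foreach (var description in app.DescribeApiVersions())
    {
        options.AddDocument(description.GroupName, description.GroupName);
    }
});
```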

We now have a complete setup for API versioning with OpenAPI in .NET 10, along with support for visualizing our API documentation using both SwaggerUI and Scalar!

Migrating from Asp.Versioning v8 to v10

You might encounter some breaking changes during the migration process, just like I did. This commit highlights some of the changes I had to make to get my sample application working with the new version. The most significant change is that the Asp.Versioning.OpenApi package is now required for both controllers and Minimal APIs, and that AddOpenApi() must be called from the Asp.Versioning namespace instead of the Microsoft.AspNetCore namespace, after activating API versioning.

During this migration, I actually found a bug in the new version of Asp.Versioning that caused the OpenAPI document to not generate correctly for Minimal APIs, so I created a PR for this. For more information about changes between versions, check out the changes from v8 to v10 in my sample repository and the official documentation for Asp.Versioning!

How Asp.Versioning v10 improves the setup of API versioning

This post has covered the new way of setting up API versioning with OpenAPI in .NET 10 using Asp.Versioning v10.0.0, and claimed that this new approach reduces duplicate code and is easier to set up.

To understand what I mean by this, let’s compare the new approach to how you would set up API versioning with OpenAPI in Asp.Versioning v8.x.x:

Asp.Versioning v8.x.x:

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddOpenApi("v1");
builder.Services.AddOpenApi("v2");

builder.Services.AddApiVersioning()
.AddApiExplorer(options =>
{
    options.GroupNameFormat = "'v'VVV";
});

var app = builder.Build();

// Code for the API endpoints using app.NewVersionedApi() can be placed here.

app.MapOpenApi();

Asp.Versioning v10.x.x:

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddApiVersioning()
.AddApiExplorer(options =>
{
    options.GroupNameFormat = "'v'VVV";
})
.AddOpenApi();

var app = builder.Build();

// Code for the API endpoints using app.NewVersionedApi() can be placed here.

app.MapOpenApi().WithDocumentPerVersion();

The main difference is that in the new version, you only need to call AddOpenApi() once, instead of calling it separately for each version. This reduces duplicate code between your API endpoints, where you already define your API versions. The combination of Asp.Versioning’s AddOpenApi() and WithDocumentPerVersion() achieves this behavior.

However, this is just the beginning. When you consider SwaggerUI/Scalar, tools many people use in their API development process, a lot more work was needed for Asp.Versioning v8 to get these tools working with versioned OpenAPI documents. Several OpenAPI transformers were needed to manually add the versioning information to the OpenAPI document. You can see the code that was needed to get SwaggerUI working with versioned OpenAPI documents using Asp.Versioning v8. These workarounds are no longer necessary as this is now included in Asp.Versioning v10.

Adding API linting to your versioned OpenAPI documents

We’ve covered API versioning, integration with OpenAPI, and how to visualize your versioned API documentation using SwaggerUI and Scalar. What’s next? Well, you can add API linting to your versioned OpenAPI documents to ensure that they adhere to best practices and organizational standards. This can be done using tools like Spectral and oasdiff.

Spectral is a powerful tool for linting your OpenAPI documents. By defining custom rules — or using community-built ones — you can enforce consistency across your API development process. This turns your “guidelines” into enforceable rules, which can be a game-changer for teams looking to maintain high-quality APIs.

You can, for example, add custom rules for validating that all APIs implement API versioning in a specific way. If a team forgets to add API versioning, Spectral can catch this during the pull request review process, preventing unversioned APIs from being released.
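For example, a minimal Spectral ruleset along these lines could enforce that every path carries a version segment. The rule name and regex are assumptions for illustration; tune them to your own URL conventions.

```yaml
# .spectral.yaml - hypothetical ruleset enforcing URL-segment versioning.
extends: ["spectral:oas"]
rules:
  paths-must-be-versioned:
    description: Every path must start with a version segment such as /v1.
    severity: error
    given: $.paths[*]~   # the path keys themselves
    then:
      function: pattern
      functionOptions:
        match: "^/v[0-9]+(/|$)"
```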

Next, there’s tooling like oasdiff, which allows you to compare different versions of your OpenAPI documents to identify changes, additions, or removals in your API. This is especially useful for detecting unintended breaking changes between API versions, prompting the developer to introduce a new API version instead of breaking the existing one.

By integrating oasdiff into your CI/CD pipeline, you can let the pull request review process fail once a breaking change is detected, and instruct the contributor to use API versioning instead!
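A CI step along these lines could gate pull requests on breaking changes. This is a hypothetical GitHub Actions fragment; the document paths are placeholders, and the flags should be checked against the oasdiff version you use.

```yaml
# Hypothetical CI fragment: fail the build when oasdiff detects a breaking change.
- name: Check for breaking OpenAPI changes
  run: |
    # Compare the base branch's OpenAPI document with the PR's version.
    # --fail-on ERR makes oasdiff exit non-zero on breaking changes.
    oasdiff breaking openapi/base-v1.json openapi/pr-v1.json --fail-on ERR
```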

Finishing up

I hope you enjoyed this post! Whenever I had to implement API versioning with OpenAPI in .NET in the past, I often got caught up in the intricacies, so I’m glad I was able to write a setup that works for modern projects, and I hope you found it useful, too! If you have any questions or feedback, feel free to reach out or leave a comment below. Happy coding!

ASP.NET Core Community Standup

Catch the recent interview on the ASP.NET Core Community Standup: Combining API Versioning with OpenAPI.

Author: Sander ten Brinke is a Microsoft MVP and Senior Software Engineer, with a passion for building scalable and maintainable applications. With over 10 years of experience in the industry, he has worked on a wide range of projects, from small startups to large enterprises. He focuses on .NET and Azure, but his interests extend beyond these technologies too, and he enjoys sharing his knowledge through blogging, speaking at conferences, and contributing to open source software, like some of the OpenAPI features he added to ASP.NET Core 10!

The post Combining API versioning with OpenAPI in .NET 10 applications appeared first on .NET Blog.


Building Chat Applications with the .NET MAUI Chat (Conversational UI) Control


The .NET MAUI Conversational UI (Chat) component allows integrating chat experiences into your mobile and desktop applications. Learn how to work with this chat component, explore its use cases and integrate LLM models.

One of the best ways to connect with a customer is through conversations, whether with a real person or through an AI agent that provides information about the company or solves problems using a knowledge base.

If you are creating .NET MAUI applications, you should know that the Progress Telerik suite includes the .NET MAUI Conversational UI (Chat) component, which offers several features that let you create chat-based applications quickly.

Throughout this post, we will create an app that uses AI models to provide tips on healthy eating. Let’s see how to do it!

Understanding the RadChat Control from Telerik for .NET MAUI

The RadChat control from Telerik for .NET MAUI is a production-ready component that allows users to interact through a conversational interface. It handles text messages, attachments and voice-to-text integration, and offers full customization so you can adapt it to your own style, among many other features.

The control is composed of different graphical elements that we can control and modify through code, as shown in the following image:

RadChat Component Visual Structure

Some typical use cases for the control include:

  • AI chatbots
  • Real-time customer support
  • Messaging applications
  • Virtual assistants
  • Image analysis with vision AI models
  • Among many others

Let’s see how to implement the control in a real application.

Setting Up a .NET MAUI Project to Integrate Chat Conversations

The first and most important thing when using the chat component in our applications is to follow the official installation guide, which shows different ways to set up your environment.

Additionally, if you want to use any AI model, you need to set up the project by installing the corresponding NuGet packages. As a personal preference, I always like to use Microsoft.Extensions.AI, as it greatly simplifies handling requests to both OpenAI and Azure OpenAI. For this, install the following packages:

  • Azure.AI.OpenAI
  • Microsoft.Extensions.AI
  • Microsoft.Extensions.AI.OpenAI

Finally, to facilitate handling with the MVVM pattern, I recommend installing the community toolkit through the following package:

  • CommunityToolkit.Mvvm

With the packages installed, let’s implement the control.

Integrating Chat Functionality into a .NET MAUI Project

To use the RadChat control in a .NET MAUI application, add the RadChat tag to your page as shown in the following example:

<ContentPage ...
    xmlns:telerik="http://schemas.telerik.com/2022/xaml/maui">

    <telerik:RadChat x:Name="chat" />

</ContentPage>

By including the above code, we will immediately see the chat control in the emulator:

Basic RadChat Interface

Interacting with Chat Messages in a .NET MAUI App

To work with chat messages, you should know that ChatMessage is the base class for handling messages. It contains the property Author, which defines the information of a participant as it will appear in the UI; Author exposes data such as Name, Avatar and Data.

It is possible to extend this class as the control itself does through TextMessage, which inherits from ChatMessage by adding the property Text.

Knowing this, we will create a list of TextMessage to maintain the conversation history, as well as define the different roles of type Author that will be used in the conversation. For the example, I have created the class ChatViewModel, which looks as follows:

public partial class ChatViewModel : ObservableObject
{
    [ObservableProperty]
    private ObservableCollection<TextMessage> items = [];

    public Author Me { get; }
    public Author Bot { get; }

    public ChatViewModel()
    {
        Me = new Author { Name = "You" };
        Bot = new Author { Name = "NutriBot" };

        Items.Add(new TextMessage
        {
            Author = Bot,
            Text = "Hello!  I'm NutriBot, your AI-powered nutrition assistant.\n\n" +
                    "I can help you with:\n" +
                    "Analyzing how healthy a food is\n" +
                    "Analyzing photos of food or nutrition labels\n" +
                    "Providing recommendations for a healthy diet\n" +
                    "Evaluating recipes\n\n" +
                    "Send me a message or a photo to get started!"
        });            
    }
}

On the other hand, the RadChat control has the property Author, which allows specifying who is interacting, in addition to ItemsSource, responsible for managing the message history:

<telerik:RadChat
    x:Name="chat"
    Author="{Binding Me}"
    ItemsSource="{Binding Items}" />

For the previous viewmodel to work correctly, remember to configure both the code-behind of your page and the dependency injection as follows:

MauiProgram.cs

public static class MauiProgram
{
    public static MauiApp CreateMauiApp()
    {
        var builder = MauiApp.CreateBuilder();
        ...
        builder.Services.AddTransient<ChatViewModel>();            
        ...
        return builder.Build();
    }
}

MainPage.xaml.cs

public MainPage(ChatViewModel viewModel)
{
    InitializeComponent();
    BindingContext = viewModel;
}

Executing the previous code allows you to see the welcome message in the chat history:

Displaying chat app welcome message

Receiving and Returning Chat Messages

So far, we have a list of messages in the application; however, there is no real interaction, meaning that messages are not sent or received back. The control has some commands by default that can help us with this task:

  • SendMessageCommand: executes when a message is sent
  • PickFileCommand: executes when attempting to attach a file
  • PickPhotoCommand: executes when wanting to attach a photo
  • TakePhotoCommand: executes when the camera opens to take a photo

There are other commands for working with attachments, but these are the main ones.

Let’s implement the functionality to send and receive messages in the viewmodel, creating a Message property that binds the user’s input, along with a SendMessage method that will be linked to SendMessageCommand as follows:

public partial class ChatViewModel : ObservableObject
{
    ...
    [ObservableProperty]
    private string message = string.Empty;
    ...
    
    [RelayCommand]
    private async Task SendMessage()
    {
        var messageText = Message;
        Message = string.Empty;

        Items.Add(new TextMessage { Author = Me, Text = messageText });            

        Items.Add(new TextMessage { Author = Bot, Text = "Message received!" });
    }
}

In the UI page, you need to bind this pair of elements using Message and SendMessageCommand:

<telerik:RadChat ...
    Message="{Binding Message}"
    SendMessageCommand="{Binding SendMessageCommand}" />

With these new changes, we can see that there is a response in the chat window after an interaction:

Sending and receiving messages using predefined commands

Now let’s see how to connect an AI model so that we can have more realistic conversations.

Connecting an LLM Model to a Chat App in .NET MAUI

To use AI in our application, I have created a service-like class that manages the prompt, model information and methods related to obtaining results from the LLM.

It is worth noting that I have created some variables to store the model connection data, solely for demonstration purposes. Ideally, this data should be stored securely on the device or handled through an external service.

The class looks like this:

public class ChatService
{
    private const string Endpoint = "your-endpoint";
    private const string DeploymentName = "your-deployment-name";
    private const string ApiKey = "your-api-key";

    private readonly IChatClient _chatClient;
    private readonly List<ChatMessage> _history;

    private const string SystemPrompt = """
        You are an expert and friendly nutritionist. Your role is to:
        - Analyze food photographs and nutrition labels (nutrition facts)
        - Evaluate how healthy a food is for a balanced diet
        - Provide healthy eating recommendations
        - Suggest healthier alternatives when necessary
        - Analyze recipes and give your professional opinion
        - Answer questions about nutrition, diets, and food wellness
        
        When analyzing an image:
        1. Identify the food or nutrition label
        2. Give a rating from 1-10 on how healthy it is
        3. Explain the nutritional pros and cons
        4. Suggest improvements or alternatives
        
        Always respond in a concise but informative manner.
        Don't use emojis.
        """;

    public ChatService()
    {
        _chatClient = new AzureOpenAIClient(
                new Uri(Endpoint),
                new ApiKeyCredential(ApiKey))
            .GetChatClient(DeploymentName)
            .AsIChatClient();

        _history =
        [
            new(ChatRole.System, SystemPrompt)
        ];
    }

    /// <summary>
    /// Sends a text-only message and returns the AI response.
    /// </summary>
    public async Task<string> SendMessageAsync(string userMessage)
    {
        _history.Add(new(ChatRole.User, userMessage));

        var response = await _chatClient.GetResponseAsync(_history);
        var assistantMessage = response.Text ?? string.Empty;

        _history.AddMessages(response);

        return assistantMessage;
    }

    /// <summary>
    /// Sends a message with an image for vision analysis and returns the AI response.
    /// </summary>
    public async Task<string> SendMessageWithImageAsync(string userMessage, byte[] imageBytes, string mimeType)
    {        
        var contents = new List<AIContent>
        {
            new TextContent(string.IsNullOrWhiteSpace(userMessage)
                ? "Analyze this image from a nutritional perspective. How healthy is it?"
                : userMessage),
            new DataContent(imageBytes, mimeType)
        };

        var message = new ChatMessage(ChatRole.User, contents);
        _history.Add(message);

        var response = await _chatClient.GetResponseAsync(_history);
        var assistantMessage = response.Text ?? string.Empty;

        _history.AddMessages(response);

        return assistantMessage;
    }
}

In the code above, you can see that SendMessageAsync is defined to send text-only messages, while SendMessageWithImageAsync sends text along with images, which prepares us for the following sections.

To use it, we must update the viewmodel code to receive the instance of the new service:

public partial class ChatViewModel : ObservableObject
{
    private readonly ChatService _chatService;
    ...
    public ChatViewModel(ChatService chatService)
    {
        _chatService = chatService;
        ...
    }

    [RelayCommand]
    private async Task SendMessage()
    {
        ...
        try
        {
            Items.Add(new TextMessage { Author = Me, Text = messageText });

            var response = await _chatService.SendMessageAsync(messageText);

            Items.Add(new TextMessage { Author = Bot, Text = response });
        }
        catch (Exception ex)
        {
            Items.Add(new TextMessage
            {
                Author = Bot,
                Text = $"⚠️ Error processing your message: {ex.Message}"
            });
        }
    }
}

In the previous update, you can also see that I have changed the method SendMessage, adding a try-catch for any error that might occur, in addition to using the service to obtain a response from the LLM model. You should also add the new service to the dependency container in MauiProgram.cs:

builder.Services.AddSingleton<ChatService>();

With the previous changes, we will see a more realistic response created thanks to an LLM model:

Interacting with an llm model

Attaching Elements to the Conversation

Next, we will see how we can attach images to the conversation, with the purpose of querying the AI model for information about them. The first thing we will do is create a model that represents the attachments:

public partial class AttachedFileData : ObservableObject
{
    [ObservableProperty]
    private string name = string.Empty;

    [ObservableProperty]
    private long size;

    [ObservableProperty]
    private Func<Task<Stream>> getStream = () => Task.FromResult<Stream>(Stream.Null);

    [ObservableProperty]
    private byte[]? imageBytes;

    [ObservableProperty]
    private string? mimeType;
}

With the class that defines an attachment ready, we can create an ObservableCollection of AttachedFileData, which will allow us to display attachments in a special section of the chat window.

public partial class ChatViewModel : ObservableObject
{
    ...
    [ObservableProperty]
    private ObservableCollection<AttachedFileData> attachedFiles = [];
    ...
    [RelayCommand]
    private async Task AttachFile(IList<AttachedFileData>? filesToAttach)
    {
        if (filesToAttach is null) return;
        
        foreach (var file in filesToAttach)
        {
            AttachedFiles.Add(file);                
        }
        
        filesToAttach.Clear();
    }
}

In the code above, we have also defined a method called AttachFile, which allows us to add the attachments to the list. In the chat control, we need to perform three operations.

  1. Activate the button to attach files via IsMoreButtonVisible
  2. Bind AttachFilesCommand to the viewmodel method
  3. Bind AttachedFilesSource to AttachedFiles

We can see this below:

<telerik:RadChat ...
    AttachFilesCommand="{Binding AttachFilesCommand}"
    AttachedFilesSource="{Binding AttachedFiles}"
    IsMoreButtonVisible="True" />

With the previous modifications, we will see a new button for attaching files. Although we might think this is enough for the LLM to respond correctly to messages with images, if we query something with an attached image, the model will reply that it is unaware of the attached file:

Attaching files without successful llm response

This happens because we have not added the attached file to the message list. To achieve this, several steps need to be followed:

  1. Detect when the collection of files changes:
public ChatViewModel(ChatService chatService)
{
   ...    
    AttachedFiles.CollectionChanged += OnAttachedFilesChanged;
}
  2. Create the event handler for when the collection changes. This will allow adding the corresponding information according to the type of file:
private async void OnAttachedFilesChanged(object? sender, NotifyCollectionChangedEventArgs e)
{
    if (e.Action == NotifyCollectionChangedAction.Add && e.NewItems != null)
    {
        foreach (AttachedFileData file in e.NewItems)
        {
            await LoadFileData(file);
        }
    }
}

private async Task LoadFileData(AttachedFileData file)
{
    try
    {
        // Load the image bytes from the stream
        using var stream = await file.GetStream();
        using var ms = new MemoryStream();
        await stream.CopyToAsync(ms);
        file.ImageBytes = ms.ToArray();

        // Try to determine MIME type from file extension
        var extension = Path.GetExtension(file.Name).ToLowerInvariant();
        file.MimeType = extension switch
        {
            ".jpg" or ".jpeg" => "image/jpeg",
            ".png" => "image/png",
            ".gif" => "image/gif",
            ".webp" => "image/webp",
            _ => "image/jpeg"
        };
                        
    }
    catch (Exception ex)
    {
        Items.Add(new TextMessage
        {
            Author = Bot,
            Text = $"⚠️ Error loading file {file.Name}: {ex.Message}"
        });
    }
}
  3. In addition to the above, we need to modify the method for sending messages. Before doing that, to display the new messages in the history, we need to create a new class to handle the image information. In our case, it is as follows:
public partial class ChatMessageItem : ObservableObject
{
    [ObservableProperty]
    private object? author;

    [ObservableProperty]
    private string text = string.Empty;

    [ObservableProperty]
    private byte[]? imageData;

    [ObservableProperty]
    private string? imageMimeType;

    [ObservableProperty]
    private string? imageFileName;
}

With this change, we can now update the method that sends the messages, adding the necessary code for a message to contain information about the image. Make sure to change all references from TextMessage to ChatMessageItem:

[RelayCommand]
private async Task SendMessage()
{
    var messageText = Message;
    var filesToSend = AttachedFiles.ToList();
    Message = string.Empty;

    try
    {        
        if (filesToSend.Count > 0)
        {
            var file = filesToSend[0];
            
            Items.Add(new ChatMessageItem
            {
                Author = Me,
                Text = string.IsNullOrWhiteSpace(messageText)
                    ? " Analyzing this image..."
                    : messageText,
                ImageData = file.ImageBytes,
                ImageMimeType = file.MimeType,
                ImageFileName = file.Name
            });
            
            var response = await _chatService.SendMessageWithImageAsync(
                string.IsNullOrWhiteSpace(messageText)
                    ? "Analyze this image from a nutritional perspective."
                    : messageText,
                file.ImageBytes!,
                file.MimeType ?? "image/jpeg");

            Items.Add(new ChatMessageItem { Author = Bot, Text = response });
        }
        else
        {
           ...
        }
    }
    ...
}

If you try to run the application at this moment, you will encounter an exception like the following:

`Unable to convert item of type MauiRadChatTests.ChatMessageItem to Telerik.Maui.Controls.Chat.ChatItem. You need to set the ItemConverter of the RadChat`.

The message is very descriptive about what we need to do: assign a value to ItemConverter. Let’s do that next.

Displaying Attachments in the Conversation History

To understand this section, you should know that the control RadChat works internally with its own types, such as ChatItem, ChatAttachedFile, etc. However, in an MVVM architecture, the viewmodel should not know or depend on specific types in the UI. This is why the control mandates the use of some converters to perform a conversion between business objects and graphical control elements.

ItemConverter is a property we need to assign with a class that implements the IChatItemConverter interface. Its purpose is to convert a data model ChatMessageItem (or the type you have defined) into the Telerik UI type ChatItem, so that binding to the ItemsSource property can be done correctly.

The method ConvertToChatItem is used every time RadChat needs to render an element of the ItemsSource collection, while ConvertToDataItem is used when RadChat wants to create a new item automatically, such as when the user presses Send. We return null from it because we will handle everything from the viewmodel.

In the following example, you can see how we check whether the message is a text message or an image, and based on that return either a ChatAttachmentsMessage or a TextMessage:

public class ChatItemConverter : IChatItemConverter
{
    public ChatItem ConvertToChatItem(object dataItem, ChatItemConverterContext context)
    {
        var item = (ChatMessageItem)dataItem;

        var vm = (ChatViewModel)context.Chat.BindingContext;
        var author = item.Author == vm.Bot ? vm.Bot : context.Chat.Author;
        
        if (item.ImageData != null && item.ImageData.Length > 0)
        {            
            var imageSource = ImageSource.FromStream(() => new MemoryStream(item.ImageData));
            
            var attachment = new ChatAttachment
            {
                FileName = item.ImageFileName ?? "image.jpg",
                FileSize = item.ImageData.Length,
                Data = imageSource,
                GetFileStream = () => Task.FromResult<Stream>(new MemoryStream(item.ImageData))
            };

            var attachmentMessage = new ChatAttachmentsMessage
            {
                Data = dataItem,
                Author = author,
                Text = item.Text,
                Attachments = new List<ChatAttachment> { attachment }
            };

            return attachmentMessage;
        }
        
        var textMessage = new TextMessage
        {
            Data = dataItem,
            Author = author,
            Text = item.Text
        };

        return textMessage;
    }

    public object? ConvertToDataItem(object message, ChatItemConverterContext context)
    {        
        return null;
    }
}

In the above code, note that we load the image through ImageSource.FromStream, which is necessary to bind an Image control to an image. This converter needs to be assigned to the ItemConverter property, as we saw in the exception description:

<ContentPage
    ...
    xmlns:converters="clr-namespace:MauiRadChatTests">


    <ContentPage.Resources>
        <converters:ChatItemConverter x:Key="ChatItemConverter" />
    </ContentPage.Resources>

    <telerik:RadChat ...       
        ItemConverter="{StaticResource ChatItemConverter}" />
</ContentPage>

Now, if we tried to run the application again, we would receive the following error:

The AttachedFileConverter is null. This converter must be set, so that the RadChat can automatically convert IFileInfo instances to a business object that represents an attached file, and add them to the AttachedFilesSource collection. Alternatively, you can add attachments objects in your view model via the AttachFilesCommand or AttachFiles event.

The previous error tells us that we need a second converter to convert a business object and add it to the collection of attachments. This means creating a class that implements the IChatAttachedFileConverter interface, responsible for translating our file model AttachedFileData to Telerik's ChatAttachedFile.

In our case, the converter looks like this:

public class AttachedFileConverter : IChatAttachedFileConverter
{
    private static AttachedFileConverter? _instance;
    public static AttachedFileConverter Instance => _instance ??= new AttachedFileConverter();

    public ChatAttachedFile ConvertToChatAttachedFile(object dataItem, ChatAttachedFileConverterContext context)
    {
        var data = (AttachedFileData)dataItem;
        var chatAttachedFile = new ChatAttachedFile
        {
            Data = data,
            FileName = data.Name,
            FileSize = data.Size
        };
        return chatAttachedFile;
    }

    public object ConvertToDataItem(Telerik.Maui.Controls.IFileInfo fileToAttach, ChatAttachedFileConverterContext context)
    {
        return CreateAttachedFileData(fileToAttach);
    }

    internal static AttachedFileData CreateAttachedFileData(Telerik.Maui.Controls.IFileInfo file)
    {
        return new AttachedFileData
        {
            Name = file.FileName,
            Size = file.FileSize,
            GetStream = file.OpenReadAsync,
        };
    }
}

In the control, we must assign the instance of the class to AttachedFileConverter, preferably reusing a single instance through x:Static:

<telerik:RadChat ...
    AttachedFileConverter="{x:Static converters:AttachedFileConverter.Instance}"/>

With the previous changes in place, we can now run the application, which returns information for a combined image and text query:

RadChat control in action receiving text and images

With this, we have created a useful, real-world application based on a chat component, developed quickly and easily thanks to the Telerik suite of controls for .NET MAUI.

Conclusion

Throughout this post, we have explored the .NET MAUI Conversational UI (Chat) component, which allows integrating chat experiences into your applications. We have seen how to configure it, the parts that make it up, common use cases and how to integrate LLM models, among other topics.

This is just the beginning, as in the official documentation you can find other relevant topics about customizing the control. See you in the next post.

Want to Try It Yourself?

The Telerik UI for .NET MAUI component library comes with a free 30-day trial. So go ahead!

Try Now


Your WordPress Expert in the Terminal: Try the Studio Code Beta


Studio Code is now in beta, and you can try it today — even though we’re still actively building it.

That’s intentional. We wanted to get it into your hands early, gather feedback, and shape the next phase of development together rather than polish it in a vacuum and call it done. Consider this the beginning of that conversation.

To try it, install Studio CLI (either from the desktop app or directly from your terminal) and then run studio code.

What is Studio Code?

Studio Code is a CLI tool — your WordPress expert in the terminal. Think of it like having a senior WordPress developer available as a command: one that reads your codebase, edits files, runs commands, can spin up local sites, and knows WordPress best practices deeply.

It’s like Claude Code or Cursor CLI, but specifically for WordPress. In fact, we’re leveraging the incredible tech of Claude Code to make Studio Code a powerful WordPress coding tool.

A terminal window with Studio Code open and a prompt saying "What can you do?" and the agent's response

General-purpose coding agents don’t have the tools to act on a WordPress site out of the box — they can’t spin up a local environment, run WP-CLI commands, validate block markup against the real editor, or screenshot the result to check their own work. 

Studio Code can. It’s purpose-built for WordPress: it understands block themes, knows WP-CLI inside and out, validates block markup against the real editor, and works the same feedback loop a developer would — run something, check the output, iterate until it looks right, and ship.

You describe what you want in natural language; Studio Code builds it.

What it can actually do

The honest answer: quite a lot, and it’s getting more abilities by the day.

Build a complete WordPress site from a description or reference. You give it a site concept — a bakery, a portfolio, a nonprofit landing page — or a reference URL, and it designs and builds a full block theme: layout, typography, color palette, and page content. It picks fonts, writes CSS, creates the pages, checks the visual output with a screenshot, and fixes what’s broken.

Manage your local WordPress sites. Create sites, start and stop them, install plugins, activate themes, set options, create posts and menus all through natural language. It uses WP-CLI under the hood, but you don’t have to.

Write and validate block content. Block markup has to be structurally valid or WordPress will reject it in the editor. Studio Code validates every block it generates against the real block editor before inserting it — running each block through its save() function in an actual browser.

Validate performance. Is your site fast? Run the /need-for-speed skill to run a performance audit on your local site, and you’ll get specific, actionable recommendations to speed it up.

Preview and publish to WordPress.com. Once you’re happy with your local build, you can generate a hosted preview site link and push to or pull from WordPress.com, where your site will be backed by fully managed hosting, built-in security, and 24/7 expert support.

Clean up your WordPress category taxonomy. Audit your existing categories, merge duplicates, retire dead ones, create missing ones, and re-categorize posts — all through natural language. It exports your content and applies AI-driven structure, but you don’t have to touch a single category setting yourself.

a terminal window asking Studio Code to create a preview site for a client

Why we’re shipping it now

We’re in the middle of building this, and we think that’s important to say out loud.

The core experience works. People are using it to build real sites, prototype ideas quickly, and skip the scaffolding work that eats time without adding value. We’ve seen it go from a brief description to a fully designed, content-filled WordPress site in a few minutes — start to finish.

But there’s more to build as AI gets better every day. We’re refining the design intelligence, improving how it handles complex layouts, and expanding what it can do with existing sites. So we’re doing the thing we believe in: shipping early, being honest about where it is, and building in public.

During the beta, we decided to keep the Studio Code experience free. That may change in the future, and we want your feedback before we lock that in.

Give it a spin

Once you have the Studio CLI installed, simply run studio code to start using the beta.

We want to know what works, what doesn’t, and what you wish it could do. Open a GitHub issue with your thoughts, feedback, bug reports, and enhancement requests, and check out the documentation for more tips.






Give your AI agent direct, structured GitLab access with glab CLI


When teams use GitLab Duo, Claude, Cursor, and other AI assistants, more of the development workflow runs through an AI agent acting on your behalf — reading issues, reviewing merge requests, running pipelines, and helping you ship faster. Most developers are already using glab from the terminal to interact with GitLab. Combining the two is a natural next step.

The problem is that without the right tools, AI agents are essentially guessing when it comes to your GitLab projects. They might hallucinate the details of an issue they've never seen, summarize a merge request based on stale training data rather than its actual state, or require you to manually copy context from a browser tab and paste it into a chat window just to get started. Every one of those workarounds is friction: it slows you down, introduces the possibility of error, and puts a hard ceiling on what your agent can actually do on your behalf. The GitLab CLI (glab) changes that by giving agents a direct, reliable interface to your projects.

With glab, your agent fetches what it needs directly from GitLab, acts on it, and reports back — so you spend less time relaying information and more time on the work that matters.

In this tutorial, you'll learn how to use glab to give AI agents structured, reliable access to your GitLab projects. You'll also discover how that unlocks a faster, more capable development workflow.

How to connect your AI agent to GitLab through MCP

The most direct way to supercharge your AI workflow is to give your AI agent native access to glab through Model Context Protocol (MCP).

MCP is an open standard that lets AI tools discover and use external capabilities at runtime. Once connected, your AI assistant can read issues, comment on merge requests, check pipeline status, and write back to GitLab, all without copying anything from the UI or writing a single API call yourself.

To get started, run:

# Start the glab MCP server
glab mcp serve

Once your MCP client is configured, your AI can answer questions like "What's the status of my open MRs?" or "Are there any failing pipelines on main?" by querying GitLab directly, not scraping the web UI, not relying on stale training data. See the full setup docs for configuration steps for Claude Code, Cursor, and other editors.

One detail worth knowing: glab automatically adds --output json when invoked through MCP, for any command that supports it. Your agent gets clean, structured data without you needing to think about output formats. And because glab uses the official MCP SDK, it stays compatible as the protocol evolves.

We've also been deliberate about which commands are exposed through MCP. Commands that require interactive terminal input are intentionally excluded, so your agent never gets stuck waiting for input that will never come. What's exposed is what actually works reliably in an agent context.
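To make the setup concrete, here is a minimal MCP client configuration sketch. The exact file name, location, and schema depend on your client (Claude Code, Cursor, and others each have their own), and the server name "gitlab" is just an arbitrary label I chose; only the `glab mcp serve` command itself comes from the article, so check the setup docs for your client's precise format.

```json
{
  "mcpServers": {
    "gitlab": {
      "command": "glab",
      "args": ["mcp", "serve"]
    }
  }
}
```

With a config like this registered, the client launches `glab mcp serve` on demand and the agent discovers the exposed GitLab commands at runtime.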

Let your AI participate in code review

Most developers have a backlog of MRs waiting for review. It's one of the most time-consuming parts of the job and one of the best places to put AI to work. With glab, your agent doesn't just observe your review queue, it can work through it with you.

See exactly what still needs addressing

Start with this:

glab mr view 2677 --comments --unresolved --output json

This command returns the full MR: metadata, description, and every unresolved discussion, as a single structured JSON payload. Hand that to your AI and it has everything it needs: which threads are open, what the reviewer asked for, and in what context. No tab-switching, no copy-pasting individual comments.

{
  "id": 2677,
  "title": "feat: add OAuth2 support",
  "state": "opened",
  "author": { "username": "jdwick" },
  "labels": ["backend", "needs-review"],
  "blocking_discussions_resolved": false,
  "discussions": [
    {
      "id": "3107030349",
      "resolved": false,
      "notes": [
        {
          "author": { "username": "dmurphy" },
          "body": "This error handling will swallow panics — consider wrapping with recover()",
          "created_at": "2026-03-14T09:23:11.000Z"
        }
      ]
    },
    {
      "id": "3107030412",
      "resolved": false,
      "notes": [
        {
          "author": { "username": "sreeves" },
          "body": "Token refresh logic needs a test for the expired token case",
          "created_at": "2026-03-14T10:05:44.000Z"
        }
      ]
    }
  ]
}

Instead of reading through every thread yourself, you ask your agent "what do I still need to fix in MR 2677?" and get back a prioritized summary with suggested changes. This all happens from a single command.
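As a sketch of what an agent (or a plain script) can do with that payload, the jq filter below pulls out just the unresolved feedback. The payload is a trimmed copy of the example above saved to a file; using jq here is my assumption, not something glab requires.

```shell
# Save a trimmed copy of the MR payload shown above, then filter it down
# to the unresolved reviewer comments an agent still needs to act on.
cat > mr.json <<'EOF'
{
  "id": 2677,
  "discussions": [
    {"id": "3107030349", "resolved": false,
     "notes": [{"body": "This error handling will swallow panics"}]},
    {"id": "3107030412", "resolved": false,
     "notes": [{"body": "Token refresh logic needs a test for the expired token case"}]},
    {"id": "3107030500", "resolved": true,
     "notes": [{"body": "Fixed in latest push"}]}
  ]
}
EOF

# One line per open thread: discussion id, then the first comment body.
jq -r '.discussions[] | select(.resolved | not) | "\(.id): \(.notes[0].body)"' mr.json
```

In a live setup you would pipe `glab mr view 2677 --comments --unresolved --output json` straight into the same filter instead of going through a file.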

Close the loop programmatically

Once your AI has helped you address the feedback, it can resolve discussions:

# List all discussions — structured, ready for the agent to process
glab mr note list 456 --output json
[
  {
    "id": 3107030349,
    "body": "This error handling will swallow panics — consider wrapping with recover()",
    "author": { "username": "dmurphy" },
    "resolved": false,
    "resolvable": true
  },
  {
    "id": 3107030412,
    "body": "Token refresh logic needs a test for the expired token case",
    "author": { "username": "sreeves" },
    "resolved": false,
    "resolvable": true
  }
]

# Resolve a discussion once the feedback is addressed
glab mr note resolve 456 3107030349

# Reopen if something needs another look
glab mr note reopen 456 3107030349

Note IDs are visible directly in the GitLab UI and API, no extra lookup needed. Your agent can work through the full list, verify each fix, and resolve as it goes.
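Putting those pieces together, here is a hedged sketch of the loop an agent could run: list the notes, filter to unresolved resolvable ones, and resolve each in turn. To keep the sketch runnable without a live GitLab instance, it prints the `glab mr note resolve` command for each id rather than executing it, and the input is a trimmed copy of the note-list payload above.

```shell
# Dry-run sketch: emit one `glab mr note resolve` call per open discussion.
# Swap `echo` for direct execution once the fixes are actually in place.
cat > notes.json <<'EOF'
[
  {"id": 3107030349, "resolved": false, "resolvable": true},
  {"id": 3107030412, "resolved": false, "resolvable": true},
  {"id": 3107030600, "resolved": true,  "resolvable": true}
]
EOF

jq -r '.[] | select((.resolved | not) and .resolvable) | .id' notes.json \
  | while read -r note_id; do
      echo "glab mr note resolve 456 $note_id"
    done
```

The dry-run form is also a reasonable guardrail for agents: you can review the exact commands before letting anything mutate the MR.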

Talk to your AI about your code more effectively

Even if you're not running an MCP server, there's a simpler shift that makes a huge difference: using glab to feed your AI better information.

Think about the last time you asked an AI assistant to help triage issues or debug a failing pipeline. You probably copied some text from the GitLab UI and pasted it into the chat. Here's what your agent is actually working with when you do that:

open issues: 12 • milestone: 17.10 • label: bug, needs-triage ...

Compare that to what it gets with glab:

[
  {
    "iid": 902,
    "title": "Pipeline fails on merge to main",
    "labels": ["bug", "needs-triage"],
    "milestone": { "title": "17.10" },
    "assignees": []
  },
  ...
]

Structured, typed, complete; no ambiguity, no parsing guesswork. That's the difference between an agent that can act and one that has to ask follow-up questions.

If you're using the MCP server, you get this automatically: glab adds --output json for any command that supports it. If you're working directly from the terminal, just add the flag yourself:

# Pull open issues for triage
glab issue list --label "needs-triage" --output json

# Check pipeline status
glab ci status --output json

# Get full MR details
glab mr view 456 --output json

We've significantly expanded JSON output support in recent releases. It now covers CI status, milestones, labels, releases, schedules, cluster agents, work items, MR approvers, repo contributors, and more. If glab can retrieve it, your AI can consume it cleanly.

A real workflow

$ glab issue list --label "needs-triage" --milestone "17.10" --output json
Agent: I found 2 unassigned bugs in the 17.10 milestone that need triage:
1. #902 — Pipeline fails on merge to main (opened 5 days ago)
2. #903 — Auth token not refreshing on expiry (opened 4 days ago)
Both are unassigned. Want me to draft triage notes and suggest assignees based on recent commit history?

Your agent is never limited to built-in commands

glab's first-class commands cover the most common workflows, but your agent is never limited to them. Through glab api, it has authenticated access to the full GitLab REST and GraphQL API surface, using the same session, with no extra credentials or configuration required.

This is a meaningful differentiator. Most CLI tools stop at what their commands expose. With glab, if GitLab's API supports it, your agent can do it. It's always working from a trusted, authenticated context.

A practical example: fetching just the list of changed files in an MR before deciding which diffs to pull in full:

# Get changed file paths — lightweight, no diff content yet
glab api "/projects/$CI_PROJECT_ID/merge_requests/$CI_MERGE_REQUEST_IID/diffs?per_page=100" \
| jq '.[].new_path'
"internal/auth/token.go"
"internal/auth/token_test.go"
"internal/oauth/refresh.go"

# Then fetch only the specific file your agent needs
glab api "/projects/$CI_PROJECT_ID/merge_requests/$CI_MERGE_REQUEST_IID/diffs?per_page=100" \
| jq '.[] | select(.new_path == "path/to/file.go")'

For anything the REST API doesn't cover (epics, certain work item queries, complex cross-project data), glab api graphql gives you the full GraphQL interface:

glab api graphql -f query='
{
  project(fullPath: "gitlab-org/gitlab") {
    mergeRequest(iid: "12345") {
      title
      reviewers { nodes { username } }
    }
  }
}'
{
  "data": {
    "project": {
      "mergeRequest": {
        "title": "feat: add OAuth2 support",
        "reviewers": {
          "nodes": [
            { "username": "dmurphy" },
            { "username": "sreeves" }
          ]
        }
      }
    }
  }
}

Your agent has a single, authenticated entry point to everything GitLab exposes without the token juggling, separate API clients, or configuration overhead.

What's coming and your feedback

Two improvements we're actively working on will make glab even more useful for agent workflows:

Agent-aware help text. Today, --help output is written for humans at a terminal. We're updating it to surface the non-interactive alternative for every interactive command, flag which commands support --output json, and generally make help a useful resource for agents discovering capabilities at runtime — not just humans.

Better machine-readable errors. When something goes wrong today, agents get the same human-readable error messages as terminal users. We're changing that so errors in JSON mode return structured output, giving your agent the information it needs to handle failures gracefully, retry intelligently, or surface the right context back to you.

Both of these are in active development. If you're already using glab with an AI tool, you're exactly the audience we want feedback from.

  • What friction are you hitting? Commands that don't behave well in agent contexts, error messages that aren't actionable, gaps in JSON output coverage. We want to know.
  • What workflows have you unlocked? Real usage patterns help us prioritize what to build next.

Join the discussion in our feedback issue — that's where we're shaping the roadmap for agent-friendliness, and where your input will have the most direct impact. If you've found a specific gap, open an issue. If you've got a fix in mind, contributions are welcome. Visit CONTRIBUTING.md to get started.

The GitLab CLI has always been about giving developers more control over their workflow. As AI becomes a bigger part of how we all work, that means making glab the best possible interface between your AI tools and your GitLab projects. We're just getting started and we'd love to build the next part with you.


Do Component Libraries Still Matter in the Age of AI?


You can have both AI code generation and a solid component library at the foundation. Here are the top six reasons to use both in an integrated approach to code.

One of the hottest hot takes that I see floating around the developer space these days is “Why would I ever use a component library anymore when I can just generate all the components myself with AI?” On the surface, it seems like a fairly reasonable question to ask.

After all, one of the main reasons why developers reached for a component library in the past was to avoid the work of building complex components. And that’s fair! As someone who once had to build a color picker from scratch, I get it—I’m not particularly keen on ever repeating that experience, either.

But now, that’s not really a pain point anymore. If I don’t want to build that color picker myself, I can just ask my new best friend Claude (or Copilot or Cursor or whatever) to do it for me. In a matter of minutes, I can have my shiny new component with no need to wrangle the code myself. That’s the obvious answer … right?

Well, maybe not. While it’s certainly possible to have every component in your application generated by your AI tool of choice, I’d argue that it’s not the best solution. Instead, I’d say that your code generation will be better if you use AI with component libraries. This doesn’t have to be an either/or situation. Why choose when you can have both? Here are my top six reasons why component libraries should remain a core part of your frontend infrastructure, even if you’re a full-on vibe coder:

1. Abundant Reference Material

AI cannot generate from nothing: the code it creates for you is based on the code samples that it has access to. And you know what creates thousands of standardized examples of the exact same components used in all kinds of different situations? Yep: component libraries.

When tons of developers are using the same set of components, there’s more sample data for the AI to learn from. That means it will be able to reproduce better patterns, make more effective choices and generate fewer mistakes. And this goes beyond just the stuff other developers have built using these components. Any worthwhile component library will also come with pages and pages of documentation. Feature overviews, configuration details, API guides, troubleshooting, recommendations, official demos and sample code—what more could your coding agent ask for?

Well, now that you’ve asked … what about third-party reference material? Larger and more popular component libraries will also have unofficial “documentation” in the form of technical blogs, walkthroughs, videos and sample repos that were created by the community. The longer the component library has been around, the more content will have been created—and the more content the AI has to reference, the better the output.

2. Better Accessibility

Let’s be honest for a moment: AI code generating tools do not excel at accessibility. They’re getting better, and I think it’s even fair to say that there will be a time (hopefully in the not-too-distant future) where they might even be good at it. But that day is not today. Until that gap closes, we (the human developers) are still responsible for making the applications and software we build as accessible as possible.

One of the best ways that we can do that—and one that the article linked above calls out specifically—is to leverage libraries that have accessibility built-in on the component level. While it’s possible to attempt to include accessibility constraints and instruction in your prompts, giving an AI code generator a set of accessible primitives to work with will get you a lot further.

The deepest solution is architectural. Instead of relying on every prompt to produce correct primitives, use libraries that encode accessibility into their API contracts.

3. Cleaner (and Fewer) Lines of Code

One of the best parts about using prebuilt components has always been the functionality that comes already baked in. It’s not hard to build a basic data grid that displays the content—the difficulty comes with the virtualization, sorting / filtering / grouping, exporting and more. Each additional feature that you have to build (or generate) is extra code in your codebase that has to be maintained and managed indefinitely.

Using a component library means you can take advantage of code that someone else is maintaining—and isn’t that the dream, ultimately? Your lines of code go down because it takes fewer lines of code to pass true into the predefined sort property than it does to build a sorting function. “But wait!” you cry. “Why do I care how many lines of code something is if I can just make the AI write those lines of code for me?” Well, unfortunately, you do still have to understand, review and maintain those AI-generated lines of code. Nobody is really reading that 2,000+ line PR, but they might read a 200 line one—isn’t that what you’d rather deal with? Less stuff generated from scratch means less to review—and fewer chances for errors to slip in, in the first place.

4. Cost Effectiveness

You know what comes with fewer things to generate and fewer revision cycles to correct errors? Fewer tokens. Most AI coding tools charge by token consumption, which means every line of generated code, new prompt and iteration has real cost attached. Complex components with lots of edge cases can take several revision cycles before they’re production-ready, and those cycles add up.

Using a component library helps cut that cost at multiple points. To begin with, you’ll write shorter prompts and (as mentioned earlier) generate less code because the AI is handling configuration and composition code rather than implementation code. Telling a well-documented component what to do is a much shorter prompt than asking an AI to build that functionality from scratch. Because the output is more predictable, you’ll also spend fewer tokens on corrections. That may not make much of a difference for a single component, but it absolutely scales when you’re doing it across an entire application.

5. Easier Human Revision

Back to human review—AI won’t get it right every time (at least, not yet), which means there will always be places a human needs to step in and make corrections. We already know that it’s harder to open up a document you’re not familiar with and orient yourself in someone else’s code, but that mental lift gets a little easier if you are at least familiar with the tools being used.

Component libraries offer the benefit of consistent patterns: the same properties, the same naming structures, the same mental model. AI-generated components won’t have this kind of standardization, especially if they were generated over time by different developers using different tools. If the AI is generating code using the same set of components every time—and you’re already familiar with those components—you’re going to be able to get up to speed quicker and spend less time figuring out what the AI generated.

6. Alignment with Design

There’s a solid chance that your development team and your design team are both using generative AI tools in your work—which can lead to even more painful gaps during the handoff. However, if you’re both telling the AI to work with a specific component library, you can start to bring those disparate experiences closer together. Designers might even be able to start vibe coding prototypes they could hand off to a developer!

Alternately, you could also feed your design system tokens (different kind of tokens) into the AI tool to help it create work in alignment with what already exists. If you’re using a tool like Progress ThemeBuilder, designers can go in and customize exactly how each component should look and behave, and then developers can apply that exported CSS to their AI generated layouts—assuming they’re built with the same components.

Build on Top of a Strong Foundation

You can always prompt your AI generator of choice and hope for the best—but from our experience building UI controls, we’ve found that the results are better when you can provide your model with the right building blocks. That’s why we’ve created a suite of AI code generation tools that leverage the Progress Kendo UI and Telerik component libraries, allowing you to generate new layouts, pages and features built with components you know you can trust.

We believe that AI code generation isn’t a replacement for UI controls that are secure, accessible and human-built. Rather than taking the approach that AI should be used instead, our approach is to see where it can be integrated—to expand the ease of use and capabilities of what we’ve already built.

After all, why throw away a decade of knowledge about building UI controls? For us, it’s the ideal foundation to build on top of.

Try the UI Generator
