
Implementing Level of Authentication (LoA) with ASP.NET Core Identity and Duende


This post shows how to implement an application which requires a user to authenticate using passkeys. The identity provider returns three claims to prove the authentication level (loa), the identity level (loi), and the amr claim showing the authentication method used.

Code: https://github.com/swiss-ssi-group/swiyu-passkeys-idp-loi-loa

Blogs in this series:

The amr claim and the loa claim return similar values. The amr claim contains the values the identity provider implementation, here ASP.NET Core Identity, uses for the amr specification. It could be used for validating the authentication method, but each IDP uses different values and the level of security is unclear. Due to this, the loa claim can be used instead. This claim returns the level of authentication, from least secure to most secure. The most secure authentication is passkeys or public/private key certificate authentication. Levels below 300 should NOT be used for most use cases. A possible Consts class for these claim values is sketched after the list.

loa (Level of Authentication)

loa.400: passkeys, public/private key certificate authentication
loa.300: authenticator apps, OpenID verifiable credentials (E-ID, swiyu)
loa.200: SMS, email, TOTP, 2-step verification
loa.100: single factor, SAS keys, API keys, passwords, OTP
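
The claim types and values are shared between the projects through a Consts class. The post does not show it; a minimal sketch of what it could look like, assuming the loa.400/loi.100 naming used above (the actual definitions are in the linked repository):

// Sketch of the shared constants used in this post; the values
// follow the levels listed above.
public static class Consts
{
    // Claim types
    public const string LOA = "loa"; // level of authentication
    public const string LOI = "loi"; // level of identification

    // Level of authentication values, least to most secure
    public const string LOA_100 = "loa.100"; // passwords, API keys, OTP
    public const string LOA_200 = "loa.200"; // SMS, email, TOTP, 2-step
    public const string LOA_300 = "loa.300"; // authenticator apps, verifiable credentials
    public const string LOA_400 = "loa.400"; // passkeys, certificates

    // Level of identification values
    public const string LOI_100 = "loi.100";
}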

Setup

The solution is implemented using Aspire from Microsoft. It consists of three applications: the STS, an OpenID Connect server implemented using Duende IdentityServer with an identity provider built on ASP.NET Core Identity; the web application, implemented using Blazor; and an API which requires DPoP access tokens and a phishing-resistant level of authentication. The web application authenticates as a confidential OpenID Connect client with PKCE and OAuth PAR.

OpenID Connect web client

The Blazor application uses two NuGet packages to implement the OIDC authentication client.

  • Duende.AccessTokenManagement.OpenIdConnect
  • Microsoft.AspNetCore.Authentication.OpenIdConnect

The application uses OpenID Connect to authenticate and secure, HTTP-only cookies to store the session. A client secret is used as this is only a demo; client assertions should be used in production applications. The client requests and uses DPoP access tokens.

var oidcConfig = builder.Configuration.GetSection("OpenIDConnectSettings");

builder.Services.AddAuthentication(options =>
{
    options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
    options.DefaultSignOutScheme = OpenIdConnectDefaults.AuthenticationScheme;
})
.AddCookie(options =>
{
    options.Cookie.Name = "__Host-idp-swiyu-passkeys-web";
    options.Cookie.SameSite = SameSiteMode.Lax;
})
.AddOpenIdConnect(options =>
{
    builder.Configuration.GetSection("OpenIDConnectSettings").Bind(options);

    options.SignInScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    options.ResponseType = OpenIdConnectResponseType.Code;

    options.SaveTokens = true;
    options.GetClaimsFromUserInfoEndpoint = true;
    options.MapInboundClaims = false;

    options.ClaimActions.MapUniqueJsonKey("loa", "loa");
    options.ClaimActions.MapUniqueJsonKey("loi", "loi");
    options.ClaimActions.MapUniqueJsonKey(JwtClaimTypes.Email, JwtClaimTypes.Email);

    options.Scope.Add("scope2");
    options.TokenValidationParameters = new TokenValidationParameters
    {
        NameClaimType = "name"
    };
});

// Load the ES384 key pair used for the DPoP proof key
var privatePem = File.ReadAllText(Path.Combine(
	builder.Environment.ContentRootPath, "ecdsa384-private.pem"));
var publicPem = File.ReadAllText(Path.Combine(
	builder.Environment.ContentRootPath, "ecdsa384-public.pem"));

var ecdsaCertificate = X509Certificate2
	.CreateFromPem(publicPem, privatePem);

var ecdsaCertificateKey = new ECDsaSecurityKey(
	ecdsaCertificate.GetECDsaPrivateKey());

// add automatic token management
builder.Services.AddOpenIdConnectAccessTokenManagement(options =>
{
    var jwk = JsonWebKeyConverter.ConvertFromSecurityKey(ecdsaCertificateKey);
    jwk.Alg = "ES384";
    options.DPoPJsonWebKey = DPoPProofKey
		.ParseOrDefault(JsonSerializer.Serialize(jwk));
});

builder.Services.AddUserAccessTokenHttpClient("dpop-api-client", 
	configureClient: client =>
	{
		client.BaseAddress = new("https+http://apiservice");
	});
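
The named client can then be resolved through IHttpClientFactory; the delegating handler added by AddUserAccessTokenHttpClient attaches and renews the DPoP access token automatically. A minimal sketch of a consumer (the WeatherApiClient name and the endpoint path are assumptions for illustration):

// Hypothetical consumer of the named "dpop-api-client".
public class WeatherApiClient(IHttpClientFactory httpClientFactory)
{
    public async Task<string> GetWeatherAsync()
    {
        // The handler attaches the user's DPoP access token to the request.
        var client = httpClientFactory.CreateClient("dpop-api-client");

        // The path is an assumption; use whatever the API exposes.
        return await client.GetStringAsync("/weatherforecast");
    }
}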

OpenID Connect Server using Identity & Duende

The OpenID Connect server is implemented using Duende IdentityServer. The client requires DPoP and uses OAuth PAR (Pushed Authorization Requests). I added the profile claims to the ID token; this can be removed, but the Blazor client application would then need to load the claims from the userinfo endpoint. The client should use a client assertion in a production application, and the scope2 scope together with the ApiResource definition is added as a demo. This is validated in the API. A sketch of the scope and resource definitions follows the client configuration below.

// interactive client using code flow + pkce + par + DPoP
new Client
{
    ClientId = "web-client",
    ClientSecrets = { new Secret("super-secret-$123".Sha256()) },

    RequireDPoP = true,
    RequirePushedAuthorization = true,

    AllowedGrantTypes = GrantTypes.Code,
    AlwaysIncludeUserClaimsInIdToken = true,

    RedirectUris = { "https://localhost:7019/signin-oidc" },
    FrontChannelLogoutUri = "https://localhost:7019/signout-oidc",
    PostLogoutRedirectUris = { "https://localhost:7019/signout-callback-oidc" },

    AllowOfflineAccess = true,
    AllowedScopes = { "openid", "profile", "scope2" }
},
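
The scope2 and ApiResource definitions are not shown in the post; a minimal sketch of what they could look like, assuming the dpop-api audience validated in the API below (the actual configuration is in the linked repository):

// Sketch of the scope and resource definitions for the demo API.
public static IEnumerable<ApiScope> ApiScopes =>
[
    new ApiScope("scope2")
];

public static IEnumerable<ApiResource> ApiResources =>
[
    new ApiResource("dpop-api")
    {
        Scopes = { "scope2" },
        // Forward the custom claims into access tokens for this API
        UserClaims = { "loa", "loi", "amr" }
    }
];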

The index.html.cs file contains the additional claims implementation. The “loa” and the “loi” claims are added here, depending on the level of authentication and the level of identification. As the User.Claims collection is immutable, the user is signed out and signed back in with the new claims. The amr claim is also recreated because ASP.NET Core Identity sets an incorrect value for passkeys.

if (!string.IsNullOrEmpty(Input.Passkey?.CredentialJson))
{
    // When performing passkey sign-in, don't perform form validation.
    ModelState.Clear();

    result = await _signInManager.PasskeySignInAsync(Input.Passkey.CredentialJson);
    if (result.Succeeded)
    {
        user = await _userManager.GetUserAsync(User);

        // Sign out first to clear the existing cookie
        await _signInManager.SignOutAsync();

        // Create additional claims
        var additionalClaims = new List<Claim>
        {
            new Claim(Consts.LOA, Consts.LOA_400),
            new Claim(Consts.LOI, Consts.LOI_100),
            // ASP.NET Core bug workaround:
            // https://github.com/dotnet/aspnetcore/issues/64881
            new Claim(JwtClaimTypes.AuthenticationMethod, Amr.Pop)
        };

        // Sign in again with the additional claims
        await _signInManager.SignInWithClaimsAsync(user!, isPersistent: false, additionalClaims);
    }
}

The ProfileService class implements the IProfileService interface from Duende. It is registered with the IdentityServer services (see the registration sketch after the class) and adds the different claims for the different callers.

public class ProfileService : IProfileService
{
    public Task GetProfileDataAsync(ProfileDataRequestContext context)
    {
        // context.Subject is the user for whom the result is being made
        // context.Subject.Claims is the claims collection from the user's session cookie at login time
        // context.IssuedClaims is the collection of claims that your logic has decided to return in the response

        if (context.Caller == IdentityServerConstants.ProfileDataCallers.ClaimsProviderAccessToken)
        {
            // Access token - add custom claims
            AddCustomClaims(context);
        }

        if (context.Caller == IdentityServerConstants.ProfileDataCallers.ClaimsProviderIdentityToken)
        {
            // Identity token - add custom claims and standard profile claims
            AddCustomClaims(context);
            AddProfileClaims(context);
        }

        if (context.Caller == IdentityServerConstants.ProfileDataCallers.UserInfoEndpoint)
        {
            // UserInfo endpoint - add custom claims and standard profile claims
            AddCustomClaims(context);
            AddProfileClaims(context);
        }

        return Task.CompletedTask;
    }

    public Task IsActiveAsync(IsActiveContext context)
    {
        context.IsActive = true;
        return Task.CompletedTask;
    }

    private void AddCustomClaims(ProfileDataRequestContext context)
    {
        // Add OID claim
        var oid = context.Subject.Claims.FirstOrDefault(t => t.Type == "oid");
        if (oid != null)
        {
            context.IssuedClaims.Add(new Claim("oid", oid.Value));
        }

        // Add LOA (Level of Authentication) claim
        var loa = context.Subject.Claims.FirstOrDefault(t => t.Type == Consts.LOA);
        if (loa != null)
        {
            context.IssuedClaims.Add(new Claim(Consts.LOA, loa.Value));
        }

        // Add LOI (Level of Identification) claim
        var loi = context.Subject.Claims.FirstOrDefault(t => t.Type == Consts.LOI);
        if (loi != null)
        {
            context.IssuedClaims.Add(new Claim(Consts.LOI, loi.Value));
        }

        // Add AMR (Authentication Method Reference) claim
        var amr = context.Subject.Claims.FirstOrDefault(t => t.Type == JwtClaimTypes.AuthenticationMethod);
        if (amr != null)
        {
            context.IssuedClaims.Add(new Claim(JwtClaimTypes.AuthenticationMethod, amr.Value));
        }
    }

    private void AddProfileClaims(ProfileDataRequestContext context)
    {
        // Add Name claim (required for User.Identity.Name to work)
        var name = context.Subject.Claims.FirstOrDefault(t => t.Type == JwtClaimTypes.Name);
        if (name != null)
        {
            context.IssuedClaims.Add(new Claim(JwtClaimTypes.Name, name.Value));
        }

        var email = context.Subject.Claims.FirstOrDefault(t => t.Type == JwtClaimTypes.Email);
        if (email != null)
        {
            context.IssuedClaims.Add(new Claim(JwtClaimTypes.Email, email.Value));
        }
    }
}
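
The registration of the profile service is not shown in the post; with Duende IdentityServer it is typically added when building the IdentityServer services, roughly like this (a sketch; the Config.* names are assumptions):

// Sketch: register the custom profile service with Duende IdentityServer.
builder.Services.AddIdentityServer()
    .AddInMemoryIdentityResources(Config.IdentityResources)
    .AddInMemoryApiScopes(Config.ApiScopes)
    .AddInMemoryClients(Config.Clients)
    .AddAspNetIdentity<ApplicationUser>()
    .AddProfileService<ProfileService>();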

The result can be displayed in the Blazor application. The default Microsoft claims mapping is disabled (MapInboundClaims = false). The level of authentication and the level of identification values are displayed in the UI. When clicking the Weather tab, an HTTP request is sent to the API using the DPoP access token.

DPoP API requires passkeys user authentication

The API uses the following NuGet packages to implement the JWT and DPoP security requirements.

  • Microsoft.AspNetCore.Authentication.JwtBearer
  • Duende.AspNetCore.Authentication.JwtBearer

The AddJwtBearer method is used to validate the DPoP access token together with the Duende client library extensions. The ApiResource audience is validated as well as the standard DPoP requirements.

builder.Services.AddAuthentication("Bearer")
    .AddJwtBearer(options =>
    {
        options.Authority = "https://localhost:5001";
        options.Audience = "dpop-api";

        options.TokenValidationParameters.ValidateAudience = true;
        options.TokenValidationParameters.ValidateIssuer = true;
        options.TokenValidationParameters.ValidAudience = "dpop-api";

        options.MapInboundClaims = false;
        options.TokenValidationParameters.ValidTypes = ["at+jwt"];
    });

// layers DPoP onto the "Bearer" scheme above
builder.Services.ConfigureDPoPTokensForScheme("Bearer", opt =>
{
    opt.ValidationMode = ExpirationValidationMode.IssuedAt; // IssuedAt is the default.
});

builder.Services.AddAuthorization();

builder.Services.AddSingleton<IAuthorizationHandler, AuthzLoaLoiHandler>();

builder.Services.AddAuthorizationBuilder()
    .AddPolicy("authz_checks", policy => policy
        .RequireAuthenticatedUser()
        .AddRequirements(new AuthzLoaLoiRequirement()));

The AuthzLoaLoiHandler is used to validate the loa and, later, the loi claims. The API returns a 403 if the user that acquired the access token did not use a phishing-resistant authentication method.

using Microsoft.AspNetCore.Authorization;

public class AuthzLoaLoiHandler : AuthorizationHandler<AuthzLoaLoiRequirement>
{
    protected override Task HandleRequirementAsync(AuthorizationHandlerContext context, 
         AuthzLoaLoiRequirement requirement)
    {
        var loa = context.User.FindFirst(c => c.Type == "loa");
        var loi = context.User.FindFirst(c => c.Type == "loi");

        if (loa is null || loi is null)
        {
            return Task.CompletedTask;
        }

        // Let's require passkeys to use this API.
        // DPoP is required to use the API.
        if (loa.Value != "loa.400")
        {
            return Task.CompletedTask;
        }

        context.Succeed(requirement);

        return Task.CompletedTask;
    }
}
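
The AuthzLoaLoiRequirement referenced above carries no data and just marks the policy; a minimal sketch:

// Marker requirement evaluated by the AuthzLoaLoiHandler.
public class AuthzLoaLoiRequirement : IAuthorizationRequirement
{
}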

Links

https://github.com/dotnet/aspnetcore/issues/64881

https://openid.net/specs/openid-connect-eap-acr-values-1_0-final.html

https://datatracker.ietf.org/doc/html/rfc8176

https://learn.microsoft.com/en-us/aspnet/core/security/authentication/claims

https://damienbod.com/2025/07/02/implement-asp-net-core-openid-connect-with-keykloak-to-implement-level-of-authentication-loa-requirements/




New efficiency upgrades in Red Hat Advanced Cluster Management for Kubernetes 2.15


If you’re a platform engineer or SRE, you know that managing infrastructure and efficiently managing it are two very different things. You’ve been able to run virtual machines (VMs) alongside containers in Red Hat Advanced Cluster Management (RHACM) for a while now. But as your fleet grows, finding that one specific VM acting up in a haystack of clusters can feel like a scavenger hunt you didn't sign up for.

Red Hat Advanced Cluster Management for Kubernetes 2.15 redefines your daily workflow instead of just adding features. We’ve taken the capabilities you rely on and made them easier to use so you can stop hunting and start solving.

Here are three ways RHACM 2.15 helps your fleet operations.

1. Stop digging for your VMs

In version 2.15, we overhauled the experience with a new fleet virtualization perspective.

  • See the forest and the trees: A new tree view lets you visualize your entire fleet hierarchy instantly.
  • Cut the noise: We added filtering by namespace, project, or folder so you can drill down to the exact workload you need without the clutter.
  • Fix it faster: Once you find that VM, you can access it using VNC or SSH or migrate it live to another cluster to balance your fleet, all without leaving the console.
  • Ops with clarity: You can see key facts like node, IP address, and storage class in a table and take multi-select actions like start, stop, restart, or pause.

The benefit: You spend less time clicking through menus (context switching) and more time managing your fleet.

2. Stop guessing on resources (and wasting money)

We all have that fear of under-provisioning a workload and causing an outage. So, what do we do? We over-provision "just in case." But across a massive fleet, those extra CPU cores and gigabytes of RAM add up to a seriously wasted budget.

RHACM 2.15 improves on right-sizing recommendations (Tech Preview) by analyzing your real-time resource consumption against what you originally requested. This information shows you where you are over-provisioned or under-utilized.

The benefit: You get data-driven recommendations to optimize your infrastructure usage without risking performance.

3. Scale GitOps to the edge

Managing ten clusters is hard. Managing hundreds across edge locations with spotty internet is a nightmare. Traditional push-based GitOps models often fail on air-gapped or restricted networks. Other pull models require a heavier footprint, which does not work well at small edge sites.

We’re introducing the Argo CD agent (Tech Preview) to change how this works. Instead of the control plane forcing updates out to remote clusters, or pulls requiring a heavy resource footprint, a lightweight agent on the edge reaches back to the hub and handles the full commit/reconciliation round trip. This "pull model" is perfect for retail or manufacturing sites where the network is unreliable.

If you run large fleets, our global hub now supports managed cluster migration (general availability), letting you rebalance managed clusters and their workloads across hubs.

The benefit: You can scale your GitOps workflows to thousands of clusters, even in the most challenging edge environments, without losing control.

Ready to upgrade your workflow?

Infrastructure is complex, but managing it doesn't have to be a struggle. Red Hat Advanced Cluster Management for Kubernetes 2.15 is available now to help you see more, click less, and get your nights and weekends back.

Explore the Red Hat Advanced Cluster Management product page and the technical documentation to learn more.

The post New efficiency upgrades in Red Hat Advanced Cluster Management for Kubernetes 2.15 appeared first on Red Hat Developer.


File logging in ASP.NET Core made easy with Serilog


Learn how to add file logging to an ASP.NET Core app using Serilog, including setup, configuration, and writing logs to daily rolling files.
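
The article walks through the full setup; a minimal sketch of daily rolling file logging with Serilog in ASP.NET Core (using the Serilog.AspNetCore package; the file path is an assumption) looks roughly like this:

using Serilog;

var builder = WebApplication.CreateBuilder(args);

// Write logs to a new file each day, e.g. logs/log-20250101.txt
Log.Logger = new LoggerConfiguration()
    .WriteTo.Console()
    .WriteTo.File("logs/log-.txt", rollingInterval: RollingInterval.Day)
    .CreateLogger();

builder.Host.UseSerilog();

var app = builder.Build();
app.MapGet("/", () => "Hello, Serilog!");
app.Run();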

The page File logging in ASP.NET Core made easy with Serilog appeared on Round The Code.


Collection Expression Arguments in C# 15+


There is a nice proposal which would make collection expressions a bit better: giving users the ability to pass arguments to the creation of the collection.
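
The syntax is still being designed and may change; one of the shapes discussed in the proposal uses a with(...) element at the start of the collection expression to pass constructor arguments, roughly like this (proposed, not shipped):

// Proposed syntax from the collection expression arguments proposal;
// the exact shape may change before it ships.
List<int> numbers = [with(capacity: 16), 1, 2, 3];

// Passing a comparer to a HashSet via the collection expression:
HashSet<string> names = [with(StringComparer.OrdinalIgnoreCase), "Ada", "ada"];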


Using REGEXP_LIKE in SQL Server 2025


Explore the power of REGEXP_LIKE in SQL Server 2025 and how it enhances SQL query performance beyond the LIKE predicate.
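
As a quick illustration of the idea, REGEXP_LIKE can express matches a single LIKE predicate cannot; a sketch of calling it from .NET (the table, column, and connection string are made up, and the function requires SQL Server 2025):

using Microsoft.Data.SqlClient;

// Hypothetical example: find rows whose Email matches a regular expression.
const string connectionString =
    "Server=localhost;Database=Demo;Trusted_Connection=True;TrustServerCertificate=True";

const string query = """
    SELECT EmployeeId, Email
    FROM dbo.Employees
    WHERE REGEXP_LIKE(Email, '^[A-Za-z0-9._%+-]+@contoso\.com$');
    """;

using var connection = new SqlConnection(connectionString);
connection.Open();
using var command = new SqlCommand(query, connection);
using var reader = command.ExecuteReader();
while (reader.Read())
{
    Console.WriteLine($"{reader.GetInt32(0)}: {reader.GetString(1)}");
}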

The post Using REGEXP_LIKE in SQL Server 2025 appeared first on MSSQLTips.com.


AI At the Edge with Raspberry Pi

1 Share

As the digital world marches ceaselessly forward, one can’t help but marvel at the myriad ways technology continues to interlace itself into our daily lives. Today, I found myself mesmerized by the power of simple, inexpensive computing devices like Raspberry Pi and their outsize potential in reshaping our interactions with technology. What dawned on me wasn’t merely about the capacity for technical innovation but also about accessibility, and the profound impact these devices could have on learning environments.

This video is from Eli the Computer Guy.

Take a hypothetical example of a small, unassuming piece of hardware like the Raspberry Pi. On the surface, it’s a credit-card-sized computer that might seem unremarkable. Yet, when leveraged appropriately, it transforms into an educational powerhouse. Imagine it as a tiny seed capable of growing into a vast tree of knowledge when planted in the fertile soil of curiosity and guided learning.

Picture this: a classroom where each student has access to a Raspberry Pi. It’s a scenario where the barriers to tech education—cost, accessibility, and complexity—are dramatically lowered. The Pi, with its minimal cost and open-source nature, offers a playground not just for coding and programming, which are vital skills for the future, but also for exploring the vast potential of artificial intelligence (AI), all from a device that fits in the palm of your hand.

Consider its implications in learning programming and AI. With just a few lines of Python code, students can manipulate the physical world: they could program the Pi to turn lights on and off, interact with the internet, control robots, or gather environmental data. This hands-on approach demystifies technology, turning abstract concepts into tangible outcomes that students can see and touch.

Moreover, when connected to different sensors and modules, the Raspberry Pi serves as a bridge linking digital commands to physical actions. This capability could revolutionize education in fields like robotics, environmental science, and more, making these arenas immensely more accessible and engaging.

Now, let’s scale that idea up. These small devices could serve not just individual classrooms but could be integrated into public education systems in underprivileged areas. In regions where educational resources are scarce, the Raspberry Pi could become a linchpin in democratizing access to tech education, providing a low-cost, scalable solution to bring essential skills to the masses.

Furthermore, when networked together or connected to the internet, these devices could enable collaborative projects that cross geographical and socio-economic barriers. Students from different parts of the world could work together to solve problems, share data, and learn from each other, all facilitated by this modest yet powerful tool.

The philosophical takeaway here is profound. In a world where technology often seems to drive a further wedge into the disparities of education and economic opportunity, devices like the Raspberry Pi remind us of the potential for technology to level the playing field. As we forge ahead, our focus should not only be on advancing technological capabilities but also on ensuring these advances are accessible to all. This approach not only enriches individual lives but also uplifts entire communities by providing the tools needed to build a self-sustaining cycle of education and innovation.
