Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Ubisoft Closes Game Studio Where Workers Voted to Unionize Two Weeks Ago

Ubisoft announced Wednesday it will close its studio in Halifax, Nova Scotia — two weeks after 74% of its staff voted to unionize. This means laying off the 71 people at the studio, reports the gaming news site Aftermath:

[Communications Workers of America's Canadian affiliate, CWA Canada] said in a statement to Aftermath the union will "pursue every legal recourse to ensure that the rights of these workers are respected and not infringed in any way." The union said in a news release that it's illegal in Canada for companies to close businesses because of unionization. That's not necessarily what happened here, according to the news release, but the union is "demanding information from Ubisoft about the reason for the sudden decision to close."

"We will be looking for Ubisoft to show us that this had nothing to do with the employees joining a union," former Ubisoft Halifax programmer and bargaining committee member Jon Huffman said in a statement. "The workers, their families, the people of Nova Scotia, and all of us who love video games made in Canada, deserve nothing less...."

Before joining Ubisoft, the studio was best known for its work on the Rocksmith franchise; under Ubisoft, it focused squarely on mobile games. Ubisoft Halifax was quickly removed from the Ubisoft website on Wednesday...

Read more of this story at Slashdot.


Carola Lilienthal and Henning Schwentner: Domain-Driven Transformation - Episode 384


Carola Lilienthal is an architect and coach at Workplace Solutions. She is the author of Sustainable Software Architecture and shares her knowledge at international conferences.

Henning Schwentner is a software architect, coach, and consultant at WPS – Workplace Solutions where he helps teams modernize legacy systems. He is a thought leader in DDD and software architecture, and he has also authored Domain Storytelling.

Carola's LinkedIn 

Henning's LinkedIn 

Want to Learn More?
Visit AzureDevOps.Show for show notes and additional episodes.





Download audio: https://traffic.libsyn.com/clean/secure/azuredevops/Episode_384.mp3?dest-id=768873

300 - From Vibe coding to Software engineering


Fragmented is changing. New direction, new cohost. Kaushik explains the pivot
from Android to AI development and introduces Iury Souza.

From vibe coding to software engineering — one episode at a time.

Full shownotes at fragmentedpodcast.com.

Contact us

Co-hosts:





Download audio: https://cdn.simplecast.com/audio/20f35050-e836-44cd-8f7f-fd13e8cb2e44/episodes/63d7400c-f368-4af6-a6ae-cf09f46e6349/audio/56b5d652-11b6-4964-8281-146f3e9380dc/default_tc.mp3?aid=rss_feed&feed=LpAGSLnY

497: Turning Machine Code into C with AI


In this episode James and Frank dive into the practical realities of using AI in everyday development—arguing that AI shines in brownfield (existing) code because it respects your architecture, while greenfield work rewards iterative prompting. They unpack model quirks: context-window limits, hallucinations, and why trying different models matters. The heart of the show is Frank’s nerdy delight: feeding a 64KB EEPROM through a disassembler and having Sonnet decompile it into readable C, exposing a PID autopilot and hardware checks—proof that AI can accelerate reverse engineering and embedded work. Along the way they share hands-on tips (trim and clean context, use disassembly first, tweak prompts), and fun examples of AI-generated icons and AppleScript. A must-listen for devs curious how AI can supercharge real projects.

Follow Us

⭐⭐ Review Us ⭐⭐

Machine transcription available on http://mergeconflict.fm

Support Merge Conflict

Links:





Download audio: https://aphid.fireside.fm/d/1437767933/02d84890-e58d-43eb-ab4c-26bcc8524289/785deb28-cef6-47e3-82b2-628941e3b7ea.mp3

Two regimes of Git


Using Git for CI is not the same as Tactical Git.

Git is such a versatile tool that when discussing it, interlocutors may often talk past each other. One person's use is so different from the way the next person uses it that every discussion is fraught with risk of misunderstandings. This happens to me a lot, because I use Git in two radically different ways, depending on context.

Should you rebase? Merge? Squash? Cherry-pick?

Often, being more explicit about a context can help address confusion.

I know of at least two ways of using Git that differ so much from each other that I think we may term them two different regimes. The rules I follow in one regime don't all apply in the other, and vice versa.

In this article I'll describe both regimes.

Collaboration

Most people use Git because it facilitates collaboration. Like other source-control systems, it's a way to share a code base with coworkers, or open-source contributors. Continuous Integration is a subset of this category, and to my knowledge still the best way to collaborate.

When I work in this regime, I follow one dominant rule: Once history is shared with others, it should be considered immutable. When you push to a shared instance of the repository, other people may pull your changes. Changing the history after having shared it is going to confuse most Git clients. It's much easier to abstain from editing shared history.

What if you shared something that contains an error? Then fix the error and push that update, too. Sometimes, you can use git revert for this.
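
A minimal sketch of that flow, assuming the faulty commit has already been pushed:

git revert <sha-of-bad-commit>  # creates a new commit that undoes the change
git push                        # history is only added to, never rewritten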

A special case is reserved for mistakes that involve leaking security-sensitive data. If you accidentally share a password, a revert doesn't rectify the problem. The data is still in the history, so this is a singular case where I know of no better remedy than rewriting history. That is, however, quite bothersome, because you now need to communicate to every other collaborator that this is going to happen, and that they may be best off making a new clone of the repository. If there's a better way to address such situations, I don't know of it, but would be happy to learn.

Another consequence of the Collaboration regime follows from the way pull requests are typically implemented. In GitHub, sending a pull request is a two-step process: First you push a branch, and then you click a button to send the pull request. I usually use the GitHub web user interface to review my own pull-request branch before pushing the button. Occasionally I spot an error. At this point I consider the branch 'unshared', so I may decide to rewrite the history of that branch and force-push it. Once, however, I've clicked the button and sent the pull request, I consider the branch shared, and the same rules apply: Rewriting history is not allowed.

One implication of this is that the set of Git actions you need to know is small: You can effectively get by with git add, commit, pull, push, and possibly a few more.

Many of the 'advanced' Git features, such as rebase and squash, allow you to rewrite history, so aren't allowed in this regime.

Tactical Git

As far as I can tell, Git wasn't originally created for this second use case, but it turns out that it's incredibly useful for local management of code files. This is what I've previously described as Tactical Git.

Once you realize that you have a version-control system at your fingertips, the opportunities are manifold. You can perform experiments in a branch that only exists on your machine. You may, for example, test alternative API design ideas, implementations, etc. There's no reason to litter the code base with commented-out code because you're afraid that you'll need something later. Just commit it on a local branch. If it later turns out that the experiment didn't turn out to your liking, commit it anyway, but then check out master. You'll leave the experiment on your local machine, and it's there if you need it later.
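
As a sketch, that command-line flow could look like this (the branch name is only an example):

git checkout -b explore-alternative-api  # local-only experiment branch
git commit -am "Spike: alternative API design"
git checkout master                      # back to the main line; the experiment stays on its branch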

You can even use failed experiments as evidence that a particular idea has undesirable consequences. Have you ever been in a situation where a coworker suggests a new way of doing things? You may have previously responded that you've already tried that, and it didn't work. How well did that answer go over with your coworker?

He or she probably wasn't convinced.

What if, however, you've kept that experiment on your own machine? Now you can say: "Not only have I already tried this, but I'm happy to share the relevant branch with you."

You can see an example of that in listing 8.10 in Code That Fits in Your Head. This code listing is based on a side-branch never merged into master. If you have the book, you also have access to the entire Git repository, and you can check for yourself that commit 0bb8068 is a dead-end branch named explode-maitre-d-arguments.

Under the Tactical Git regime, you can also go back and edit mistakes when working on code that you haven't yet shared. I use micro-commits, so I tend to check in small commits often. Sometimes, as I'm working with the code, I notice that I made a mistake a few commits ago. Since I'm a neat freak, I often use interactive rebase to go back and correct my mistakes before sharing the history with anyone else. I don't do that to look perfect, but rather to leave behind a legible trail of changes. If I already know that I made a mistake before I've shared my code with anyone else, there's no reason to burden others with both the mistake and its rectification.
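
A sketch of one way to do this, assuming the history hasn't been shared yet:

git commit --fixup <sha-of-the-commit-with-the-mistake>
git rebase -i --autosquash <sha-of-the-commit-with-the-mistake>~1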

In general, I aim to leave as nice a Git history as possible. This is not only for my collaborators' sake, but for my own, too. Legible Git histories and micro-commits make it easier to troubleshoot later, as this story demonstrates.

The toolset useful for Tactical Git is different than for collaboration. You still use add and commit, of course, but I also use (interactive) rebase often, as well as stash and branch. Only rarely do I need cherry-pick, but it's useful when I do need it.

Conclusion

When discussing good Git practices, it's easy to misunderstand each other because there's more than one way to use Git. I know of at least two radically different modes: Collaboration and Tactical Git. The rules that apply under the Collaboration regime should not all be followed slavishly when in the Tactical Git regime. Specifically, the rule about rewriting history is almost turned on its head. Under the Collaboration regime, do not rewrite Git history; under the Tactical Git regime, rewriting history is encouraged.


This blog is totally free, but if you like it, please consider supporting it.

Implementing Level of Authentication (LoA) with ASP.NET Core Identity and Duende


This post shows how to implement an application which requires a user to authenticate using passkeys. The identity provider returns three claims to prove the authentication level (loa) and the identity level (loi), plus the amr claim showing the authentication method used.

Code: https://github.com/swiss-ssi-group/swiyu-passkeys-idp-loi-loa

Blogs in this series:

The amr claim and the loa claim return similar values. The amr claim carries the identity provider's implementation of the amr specification, in this case ASP.NET Core Identity's. It could be used to validate the authentication method, but each IdP uses different values and the relative strength of each method is unclear. The loa claim avoids this: it returns the level of authentication, from least secure to most secure. The most secure levels are passkeys and public/private key certificate authentication. Levels below 300 should NOT be used for most use cases.

loa (Level of Authentication)

loa.400 : passkeys, (public/private key certificate authentication)
loa.300 : authenticator apps, OpenID verifiable credentials (E-ID, swiyu)
loa.200 : SMS, email, TOTP, 2-step
loa.100 : single factor, SAS key, API Keys, passwords, OTP

Setup

The solution is implemented using Aspire from Microsoft. It consists of three applications: the STS, an OpenID Connect server implemented using Duende IdentityServer and an identity provider using ASP.NET Core Identity; the web application, implemented using Blazor; and an API which requires DPoP access tokens and a phishing-resistant level of authentication. The web application authenticates using a confidential OpenID Connect client with PKCE and OAuth PAR. A sketch of the Aspire AppHost wiring is shown below.
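
A minimal sketch of what the AppHost wiring could look like; the project and resource names are assumed for illustration (the actual names are in the linked repository):

// Hypothetical Aspire AppHost wiring; resource and project names are assumed.
var builder = DistributedApplication.CreateBuilder(args);

var sts = builder.AddProject<Projects.StsServer>("stsserver");
var api = builder.AddProject<Projects.ApiService>("apiservice");

builder.AddProject<Projects.WebClient>("webclient")
    .WithReference(api)  // enables the https+http://apiservice address used by the web client
    .WithReference(sts);

builder.Build().Run();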

OpenID Connect web client

The Blazor application uses two NuGet packages to implement the OIDC authentication client.

  • Duende.AccessTokenManagement.OpenIdConnect
  • Microsoft.AspNetCore.Authentication.OpenIdConnect

The application uses OpenID Connect to authenticate and secure, HTTP-only cookies to store the session. A client secret is used because this is only a demo; client assertions should be used in production applications. The client requests and uses DPoP access tokens.

var oidcConfig = builder.Configuration.GetSection("OpenIDConnectSettings");

builder.Services.AddAuthentication(options =>
{
    options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
    options.DefaultSignOutScheme = OpenIdConnectDefaults.AuthenticationScheme;
})
.AddCookie(options =>
{
    options.Cookie.Name = "__Host-idp-swiyu-passkeys-web";
    options.Cookie.SameSite = SameSiteMode.Lax;
})
.AddOpenIdConnect(options =>
{
    builder.Configuration.GetSection("OpenIDConnectSettings").Bind(options);

    options.SignInScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    options.ResponseType = OpenIdConnectResponseType.Code;

    options.SaveTokens = true;
    options.GetClaimsFromUserInfoEndpoint = true;
    options.MapInboundClaims = false;

    options.ClaimActions.MapUniqueJsonKey("loa", "loa");
    options.ClaimActions.MapUniqueJsonKey("loi", "loi");
    options.ClaimActions.MapUniqueJsonKey(JwtClaimTypes.Email, JwtClaimTypes.Email);

    options.Scope.Add("scope2");
    options.TokenValidationParameters = new TokenValidationParameters
    {
        NameClaimType = "name"
    };
});

var privatePem = File.ReadAllText(Path.Combine(
	builder.Environment.ContentRootPath, "ecdsa384-private.pem"));
var publicPem = File.ReadAllText(Path.Combine(
	builder.Environment.ContentRootPath, "ecdsa384-public.pem"));
	
var ecdsaCertificate = X509Certificate2
	.CreateFromPem(publicPem, privatePem);
	
var ecdsaCertificateKey = new ECDsaSecurityKey(
	ecdsaCertificate.GetECDsaPrivateKey());

// add automatic token management
builder.Services.AddOpenIdConnectAccessTokenManagement(options =>
{
    var jwk = JsonWebKeyConverter.ConvertFromSecurityKey(ecdsaCertificateKey);
    jwk.Alg = "ES384";
    options.DPoPJsonWebKey = DPoPProofKey
		.ParseOrDefault(JsonSerializer.Serialize(jwk));
});

builder.Services.AddUserAccessTokenHttpClient("dpop-api-client", 
	configureClient: client =>
	{
		client.BaseAddress = new("https+http://apiservice");
	});

OpenID Connect Server using Identity & Duende

The OpenID Connect server is implemented using Duende IdentityServer. The client requires DPoP and uses OAuth PAR (Pushed Authorization Requests). I added the profile claims to the ID token; this can be removed, but the Blazor client application would then need to load the claims from the userinfo endpoint. The client should use a client assertion in a production application, and the scope2 scope together with the ApiResource definition is added as a demo. This is validated in the API.

// interactive client using code flow + pkce + par + DPoP
new Client
{
    ClientId = "web-client",
    ClientSecrets = { new Secret("super-secret-$123".Sha256()) },

    RequireDPoP = true,
    RequirePushedAuthorization = true,

    AllowedGrantTypes = GrantTypes.Code,
    AlwaysIncludeUserClaimsInIdToken = true,

    RedirectUris = { "https://localhost:7019/signin-oidc" },
    FrontChannelLogoutUri = "https://localhost:7019/signout-oidc",
    PostLogoutRedirectUris = { "https://localhost:7019/signout-callback-oidc" },

    AllowOfflineAccess = true,
    AllowedScopes = { "openid", "profile", "scope2" }
},

The index.html.cs file contains the additional claims implementation. The "loa" and "loi" claims are added here, depending on the level of authentication and the level of identification. Because the User.Claims collection is immutable, the user is signed out and signed in again with the additional claims. The amr claim is also recreated because ASP.NET Core Identity sets an incorrect value for passkeys.

if (!string.IsNullOrEmpty(Input.Passkey?.CredentialJson))
{
    // When performing passkey sign-in, don't perform form validation.
    ModelState.Clear();

    result = await _signInManager.PasskeySignInAsync(Input.Passkey.CredentialJson);
    if (result.Succeeded)
    {
        user = await _userManager.GetUserAsync(User);

        // Sign out first to clear the existing cookie
        await _signInManager.SignOutAsync();

        // Create additional claims
        var additionalClaims = new List<Claim>
        {
            new Claim(Consts.LOA, Consts.LOA_400),
            new Claim(Consts.LOI, Consts.LOI_100),
            // ASP.NET Core bug workaround:
            // https://github.com/dotnet/aspnetcore/issues/64881
            new Claim(JwtClaimTypes.AuthenticationMethod, Amr.Pop)
        };

        // Sign in again with the additional claims
        await _signInManager.SignInWithClaimsAsync(user!, isPersistent: false, additionalClaims);
    }
}
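
The Consts values used above are not shown in the post; a minimal sketch consistent with the loa/loi levels listed earlier (claim type names and values assumed) could look like this:

// Hypothetical constants class; names and values assumed from the levels described above.
public static class Consts
{
    public const string LOA = "loa";
    public const string LOA_400 = "loa.400"; // passkeys, public/private key certificate authentication
    public const string LOA_300 = "loa.300"; // authenticator apps, OpenID verifiable credentials
    public const string LOA_200 = "loa.200"; // SMS, email, TOTP, 2-step
    public const string LOA_100 = "loa.100"; // single factor: passwords, API keys, OTP

    public const string LOI = "loi";
    public const string LOI_100 = "loi.100"; // level of identification used in this demo
}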

The ProfileService class implements the IProfileService interface from Duende and is registered in the services (a registration sketch follows the class below). It adds the different claims to the different caller profiles.

public class ProfileService : IProfileService
{
    public Task GetProfileDataAsync(ProfileDataRequestContext context)
    {
        // context.Subject is the user for whom the result is being made
        // context.Subject.Claims is the claims collection from the user's session cookie at login time
        // context.IssuedClaims is the collection of claims that your logic has decided to return in the response

        if (context.Caller == IdentityServerConstants.ProfileDataCallers.ClaimsProviderAccessToken)
        {
            // Access token - add custom claims
            AddCustomClaims(context);
        }

        if (context.Caller == IdentityServerConstants.ProfileDataCallers.ClaimsProviderIdentityToken)
        {
            // Identity token - add custom claims and standard profile claims
            AddCustomClaims(context);
            AddProfileClaims(context);
        }

        if (context.Caller == IdentityServerConstants.ProfileDataCallers.UserInfoEndpoint)
        {
            // UserInfo endpoint - add custom claims and standard profile claims
            AddCustomClaims(context);
            AddProfileClaims(context);
        }

        return Task.CompletedTask;
    }

    public Task IsActiveAsync(IsActiveContext context)
    {
        context.IsActive = true;
        return Task.CompletedTask;
    }

    private void AddCustomClaims(ProfileDataRequestContext context)
    {
        // Add OID claim
        var oid = context.Subject.Claims.FirstOrDefault(t => t.Type == "oid");
        if (oid != null)
        {
            context.IssuedClaims.Add(new Claim("oid", oid.Value));
        }

        // Add LOA (Level of Authentication) claim
        var loa = context.Subject.Claims.FirstOrDefault(t => t.Type == Consts.LOA);
        if (loa != null)
        {
            context.IssuedClaims.Add(new Claim(Consts.LOA, loa.Value));
        }

        // Add LOI (Level of Identification) claim
        var loi = context.Subject.Claims.FirstOrDefault(t => t.Type == Consts.LOI);
        if (loi != null)
        {
            context.IssuedClaims.Add(new Claim(Consts.LOI, loi.Value));
        }

        // Add AMR (Authentication Method Reference) claim
        var amr = context.Subject.Claims.FirstOrDefault(t => t.Type == JwtClaimTypes.AuthenticationMethod);
        if (amr != null)
        {
            context.IssuedClaims.Add(new Claim(JwtClaimTypes.AuthenticationMethod, amr.Value));
        }
    }

    private void AddProfileClaims(ProfileDataRequestContext context)
    {
        // Add Name claim (required for User.Identity.Name to work)
        var name = context.Subject.Claims.FirstOrDefault(t => t.Type == JwtClaimTypes.Name);
        if (name != null)
        {
            context.IssuedClaims.Add(new Claim(JwtClaimTypes.Name, name.Value));
        }

        var email = context.Subject.Claims.FirstOrDefault(t => t.Type == JwtClaimTypes.Email);
        if (email != null)
        {
            context.IssuedClaims.Add(new Claim(JwtClaimTypes.Email, email.Value));
        }
    }
}
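
Registering the profile service with Duende IdentityServer is typically a one-liner on the IdentityServer builder. A sketch, with the rest of the IdentityServer configuration omitted and the ApplicationUser type assumed:

// Sketch: plug the custom ProfileService into Duende IdentityServer.
builder.Services
    .AddIdentityServer()
    .AddAspNetIdentity<ApplicationUser>() // assumed; the demo uses ASP.NET Core Identity
    .AddProfileService<ProfileService>();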

The result can be displayed in the Blazor application. The default claims mapping is disabled. The level of authentication and the level of identification values are displayed in the UI. When clicking the Weather tab, an HTTP request is sent to the API using the DPoP access token, as sketched below.
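
A sketch of that call from the Blazor app, using the "dpop-api-client" registered earlier so the DPoP access token is attached automatically (the endpoint path and DTO are assumed):

using System.Net.Http.Json;

// Sketch: the named client from AddUserAccessTokenHttpClient handles token acquisition and DPoP.
public class WeatherApiClient(IHttpClientFactory httpClientFactory)
{
    public async Task<WeatherForecast[]?> GetForecastAsync()
    {
        var client = httpClientFactory.CreateClient("dpop-api-client");
        return await client.GetFromJsonAsync<WeatherForecast[]>("/weatherforecast"); // path assumed
    }
}

public record WeatherForecast(DateOnly Date, int TemperatureC, string? Summary);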

DPoP API requires passkeys user authentication

The API uses the following NuGet packages to implement the JWT and DPoP security requirements.

  • Microsoft.AspNetCore.Authentication.JwtBearer
  • Duende.AspNetCore.Authentication.JwtBearer

The AddJwtBearer method is used to validate the access token, and the Duende client library extensions layer the DPoP validation on top. The ApiResource is validated as well as the standard DPoP requirements.

builder.Services.AddAuthentication("Bearer")
    .AddJwtBearer(options =>
    {
        options.Authority = "https://localhost:5001";
        options.Audience = "dpop-api";

        options.TokenValidationParameters.ValidateAudience = true;
        options.TokenValidationParameters.ValidateIssuer = true;
        options.TokenValidationParameters.ValidAudience = "dpop-api";

        options.MapInboundClaims = false;
        options.TokenValidationParameters.ValidTypes = ["at+jwt"];
    });

// layers DPoP onto the "token" scheme above
builder.Services.ConfigureDPoPTokensForScheme("Bearer", opt =>
{
    opt.ValidationMode = ExpirationValidationMode.IssuedAt; // IssuedAt is the default.
});

builder.Services.AddAuthorization();

builder.Services.AddSingleton<IAuthorizationHandler, AuthzLoaLoiHandler>();

builder.Services.AddAuthorizationBuilder()
    .AddPolicy("authz_checks", policy => policy
        .RequireAuthenticatedUser()
        .AddRequirements(new AuthzLoaLoiRequirement()));

The AuthzLoaLoiHandler is used to validate the loa claim and, later, the loi claim. The API returns a 403 if the user who acquired the access token did not use a phishing-resistant authentication method.

using Microsoft.AspNetCore.Authorization;

public class AuthzLoaLoiHandler : AuthorizationHandler<AuthzLoaLoiRequirement>
{
    protected override Task HandleRequirementAsync(AuthorizationHandlerContext context, 
         AuthzLoaLoiRequirement requirement)
    {
        var loa = context.User.FindFirst(c => c.Type == "loa");
        var loi = context.User.FindFirst(c => c.Type == "loi");

        if (loa is null || loi is null)
        {
            return Task.CompletedTask;
        }

        // Lets require passkeys to use this API
        // DPoP is required to use the API
        if (loa.Value != "loa.400")
        {
            return Task.CompletedTask;
        }

        context.Succeed(requirement);

        return Task.CompletedTask;
    }
}
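
The matching AuthzLoaLoiRequirement is not shown in the post; it is typically just a marker type:

using Microsoft.AspNetCore.Authorization;

// Marker requirement consumed by AuthzLoaLoiHandler; it carries no data of its own.
public class AuthzLoaLoiRequirement : IAuthorizationRequirement { }

The policy can then be applied to the API endpoint, for example (endpoint path assumed):

// Program.cs: protect the endpoint with the "authz_checks" policy.
app.MapGet("/weatherforecast", () => Results.Ok())
   .RequireAuthorization("authz_checks");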

Links

https://github.com/dotnet/aspnetcore/issues/64881

https://openid.net/specs/openid-connect-eap-acr-values-1_0-final.html

https://datatracker.ietf.org/doc/html/rfc8176

https://learn.microsoft.com/en-us/aspnet/core/security/authentication/claims

https://damienbod.com/2025/07/02/implement-asp-net-core-openid-connect-with-keykloak-to-implement-level-of-authentication-loa-requirements/


