Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Episode 504 - Feeling Demotivated In Your Career and How to Fix It w/ Emma Bostian


If you want to check out all the things torc.dev has going on, head to linktr.ee/taylordesseyn for more information on how to get plugged in!





Download audio: https://anchor.fm/s/ce6260/podcast/play/111853331/https%3A%2F%2Fd3ctxlq1ktw2nl.cloudfront.net%2Fstaging%2F2025-10-28%2Fc22ae8f4-529b-3be6-447b-cf9856fe93ea.mp3

From Cloud Native To AI Native: Where Are We Going?

At KubeCon + CloudNativeCon in Atlanta, the panel of experts (Kate Goldenring of Fermyon Technologies, Idit Levine of Solo.io, Shaun O'Meara of Mirantis, Sean O'Dell of Dynatrace and James Harmison of Red Hat) explored whether the cloud native era has evolved into an AI-native era — and what that shift means for infrastructure, security and development practices.

Has the cloud native era now fully morphed into the AI-native era? If so, what does that mean for the future of both cloud native and AI technology? These are the questions a panel of experts took up at KubeCon + CloudNativeCon North America in Atlanta earlier this month.

The occasion was one of The New Stack’s signature pancake breakfasts, sponsored by Dynatrace. TNS Founder and Publisher Alex Williams probed the panelists’ current obsessions in this time of fast-moving change.

For Jonathan Bryce, the new executive director of the Cloud Native Computing Foundation, inference is claiming a lot of his attention these days.

“What are the future AI-native companies going to look like? Because it’s not all going to be chatbots,” Bryce said. “If you just look at the fundamentals and how you build towards every form of AI productivity, you have to have models where you’re taking a large dataset, turning it into intelligence, and then you have to have the inference layer where you’re serving those models to answer questions, make predictions.

“And at some level, we sort of have skipped that layer,” he added, because the attention is now focused on chatbots and agents.

“Personally, I’ve always been a plumber, an infrastructure guy, and inference is my obsession.”

Inference is coming to the fore as organizations depend more on edge computing and on personalizing websites, said Kate Goldenring, senior software engineer at Fermyon Technologies. WebAssembly, the technology Fermyon focuses on, can help users who are finding they now need to make “extra hops,” as she put it, because of the new need for rapid inferencing.

“There [are] interfaces out there where you can basically package up your model with your WebAssembly component and then deploy that to some hardware with the GPU and directly do inferencing and other types of AI compute, and have that all bundled and secure,” Goldenring noted.

“Whenever you get a new technology, the next question is, how do we use it really, really quickly? And then the following question is, how [do] we do it securely? And WebAssembly provides the opportunity to do that by sandboxing those executions as well.”

Observability and Infrastructure

The issue of security brings up observability. The tsunami of data that AI uses and generates has major implications for how we approach observability in the AI-native era, according to panelist Sean O’Dell, principal product marketing manager at Dynatrace.

“If you’ve been training your data in a predictive manner for eight, nine, 10 years now, we have the ability to add a [large language model] and intelligence on top and over inference in that situation,” O’Dell said.

That “value add” carries pros and cons, he said. “It’s very nice to be able to at least say we have this information from an observability perspective. However, on the other side, it’s a lot of data. So now there’s a fundamental shift of, what do I need to get the right information about an end user?”

One of the biggest differences between the cloud native and the AI-native eras is infrastructure, suggested Shaun O’Meara, CTO of Mirantis. “One of the key things that [we] keep forgetting about all of this, the stuff has to run somewhere,” he said. “We have to orchestrate the infrastructure that all of these components run on top of.”

A big trend he’s noticing, he said, “is we’re moving away from the abstraction that we were beginning to accept as normal in cloud native. You know, we go to a public cloud. We run our workloads. We have no idea what infrastructure is underneath that. With … workloads [running on GPUs], we have to be aware of the deep infrastructure,” including network speed and performance.

“One of the things that behooves us as we start to look at all of these great tools that we’re running on top of these platforms, to remember to run them securely, to be efficient, to manage infrastructure efficiently.”

This, O’Meara said, “is going to be one of the key challenges of the next few years. We have a power problem. We’re running out of power to run these data centers, and we’re building them as fast as we can. We have to manage that infrastructure efficiently.”

Check out the full recording to hear how the panel digs into the questions, opportunities and challenges the “AI native” era will bring.

The post From Cloud Native To AI Native: Where Are We Going? appeared first on The New Stack.


Vertical Slice Architecture: Where Does the Shared Logic Live?


Have you ever thought, "How far can I really push Postgres with text?". Watch Aiven's Elephant in the Room livestream for live-coding, real-world examples, and practical insights on how developers can harness PostgreSQL's native text-search capabilities to build faster, smarter, and more efficient applications. Access the playback here.

Move faster and reduce risk. Teleport's vault-free PAM cuts access provisioning time by 10x by removing static credentials and manual tickets, using short-lived certificates and zero-trust, just-in-time access. Leave the vault behind — start for free.

Vertical Slice Architecture (VSA) seems like a breath of fresh air when you first encounter it. You stop jumping between seven layers to add a single field. You delete the dozens of projects in your solution. You feel liberated.

But when you start implementing more complex features, the cracks begin to show.

You build a CreateOrder slice. Then UpdateOrder. Then GetOrder. Suddenly, you notice the repetition. The address validation logic is in three places. The pricing algorithm is needed by both Cart and Checkout.

You feel the urge to create a Common project or SharedServices folder. This is the most critical moment in your VSA adoption.

Choose wrong, and you'll reintroduce the coupling you were trying to escape. Choose right, and you maintain the independence that makes VSA worthwhile.

Here's how I approach shared code in Vertical Slice Architecture.

The Guardrails vs. The Open Road

To understand why this is hard, we need to look at what we left behind.

Clean Architecture provides strict guardrails. It tells you exactly where code lives: Entities go in Domain, interfaces go in Application, implementations go in Infrastructure. It's safe. It prevents mistakes, but it also prevents shortcuts when they're appropriate.

Vertical Slice Architecture removes the guardrails. It says, "Organize code by feature, not technical concern". This gives you speed and flexibility, but it shifts the burden of discipline onto you.

So what can you do about it?

The Trap: The "Common" Junk Drawer

The path of least resistance is to create a project (or folder) named Shared, Common, or Utils.

This is almost always a mistake.

Imagine a Common.Services project with an OrderCalculationService class. It has a method for cart totals (used by Cart), another for historical revenue (used by Reporting), and a helper for invoice formatting (used by Invoices). Three unrelated concerns. Three different change frequencies. One class coupling them all together.
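
Here's a minimal sketch of what that kind of class ends up looking like (the types and method names are hypothetical):

// ❌ Bad Sharing: one "common" service coupling three unrelated concerns
// (CartItem and OrderSummary are illustrative types)
public record CartItem(decimal UnitPrice, int Quantity);
public record OrderSummary(DateTime CreatedAt, decimal Total);

public class OrderCalculationService
{
    // Changes when the Cart feature changes
    public decimal CalculateCartTotal(IEnumerable<CartItem> items) =>
        items.Sum(i => i.UnitPrice * i.Quantity);

    // Changes when Reporting changes
    public decimal CalculateHistoricalRevenue(IEnumerable<OrderSummary> orders, int year) =>
        orders.Where(o => o.CreatedAt.Year == year).Sum(o => o.Total);

    // Changes when Invoices changes
    public string FormatInvoiceAmount(decimal amount) => amount.ToString("C");
}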

A Common project inevitably becomes a junk drawer for anything you can't be bothered to name properly. It creates a tangled web of dependencies where unrelated features are coupled together because they happen to use the same helper method.

You've reintroduced the very coupling you tried to escape.

The Decision Framework

When I hit a potential sharing situation, I ask three questions:

1. Is this infrastructural or domain?

Infrastructure (database contexts, logging, HTTP clients) almost always gets shared. Domain concepts need more scrutiny.

2. How stable is this concept?

If it changes once a year, share it. If it changes with every feature request, keep it local.

3. Am I past the "Rule of Three"?

Duplicating the same code once is fine. However, creating three duplicates should raise an eyebrow. Don't abstract until you hit three.

We solve this by refactoring our code. Let's look at some examples.

The Three Tiers of Sharing

Instead of binary "Shared vs. Not Shared," think in three tiers.

Tier 1: Technical Infrastructure (Share Freely)

Pure plumbing that affects all slices equally: logging adapters, database connection factories, auth middleware, the Result pattern, validation pipelines.

Centralize this in a Shared.Kernel or Infrastructure project. Note that this can also be a folder within your solution. It rarely changes due to business requirements.

// ✅ Good Sharing: Technical Kernel
public readonly record struct Result
{
    public bool IsSuccess { get; }
    public string Error { get; }

    private Result(bool isSuccess, string error)
    {
        IsSuccess = isSuccess;
        Error = error;
    }

    public static Result Success() => new(true, string.Empty);
    public static Result Failure(string error) => new(false, error);
}
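
And for the "validation pipelines" part of that list, here's a minimal sketch of a shared cross-cutting decorator. I'm hand-rolling the IHandler and IValidator abstractions for illustration; in practice this often lives as a pipeline behavior in whatever mediator library you use:

// ✅ Good Sharing: cross-cutting validation decorator
// (IHandler and IValidator are illustrative abstractions)
public interface IHandler<TRequest>
{
    Task<Result> Handle(TRequest request, CancellationToken ct);
}

public interface IValidator<TRequest>
{
    Result Validate(TRequest request);
}

public class ValidationDecorator<TRequest> : IHandler<TRequest>
{
    private readonly IHandler<TRequest> _inner;
    private readonly IValidator<TRequest> _validator;

    public ValidationDecorator(IHandler<TRequest> inner, IValidator<TRequest> validator)
    {
        _inner = inner;
        _validator = validator;
    }

    public async Task<Result> Handle(TRequest request, CancellationToken ct)
    {
        // Fail fast before the slice's handler runs
        var validation = _validator.Validate(request);
        if (!validation.IsSuccess)
        {
            return validation;
        }

        return await _inner.Handle(request, ct);
    }
}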

Tier 2: Domain Concepts (Share and Push Logic Down)

This is one of the best places to share logic. Instead of scattering business rules across slices, push them into entities and value objects.

Here's an example:

// ✅ Good Sharing: Entity with Business Logic
public class Order
{
    public Guid Id { get; private set; }
    public OrderStatus Status { get; private set; }
    public List<OrderLine> Lines { get; private set; }

    public bool CanBeCancelled() => Status == OrderStatus.Pending;

    public Result Cancel()
    {
        if (!CanBeCancelled())
        {
            return Result.Failure("Only pending orders can be cancelled.");
        }

        Status = OrderStatus.Cancelled;
        return Result.Success();
    }
}

Now CancelOrder, GetOrder, and UpdateOrder all use the same business rules. The logic lives in one place.

This implies an important concept: different vertical slices can share the same domain model.
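
For example, a CancelOrder handler stays thin because the rule lives on the entity. This is only a sketch; the handler shape and the AppDbContext are assumptions, not prescriptions:

// Features/Orders/CancelOrder (sketch)
public class CancelOrderHandler
{
    private readonly AppDbContext _dbContext;

    public CancelOrderHandler(AppDbContext dbContext) => _dbContext = dbContext;

    public async Task<Result> Handle(Guid orderId, CancellationToken ct)
    {
        var order = await _dbContext.Orders.FindAsync(orderId);
        if (order is null)
        {
            return Result.Failure("Order not found.");
        }

        // The business rule is on the entity, not in the handler
        var result = order.Cancel();
        if (result.IsSuccess)
        {
            await _dbContext.SaveChangesAsync(ct);
        }

        return result;
    }
}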

Tier 3: Feature-Specific Logic (Keep It Local)

Logic shared between related slices, like CreateOrder and UpdateOrder, doesn't need to go global. Create a Shared folder (there's an exception to every rule) within the feature:

📂 Features
└──📂 Orders
    ├──📂 CreateOrder
    ├──📂 UpdateOrder
    ├──📂 GetOrder
    └──📂 Shared
        ├──📄 OrderValidator.cs
        └──📄 OrderPricingService.cs

This also has a hidden benefit. If you delete the Orders feature, the shared logic goes with it. No zombie code left behind.
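
As a sketch, the OrderValidator.cs from the tree above could hold the rules that both CreateOrder and UpdateOrder need (the specific rules and the OrderLine properties are illustrative):

// Features/Orders/Shared/OrderValidator.cs (sketch)
public class OrderValidator
{
    // Called by both the CreateOrder and UpdateOrder slices
    public Result ValidateLines(IReadOnlyCollection<OrderLine> lines)
    {
        if (lines.Count == 0)
        {
            return Result.Failure("An order must contain at least one line.");
        }

        if (lines.Any(l => l.Quantity <= 0))
        {
            return Result.Failure("Order line quantities must be positive.");
        }

        return Result.Success();
    }
}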

Let's explore some advanced scenarios most people overlook.

Cross-Feature Sharing

What about sharing code between unrelated features in Vertical Slice Architecture?

The CreateOrder slice needs to check if a customer exists. GenerateInvoice needs to calculate tax. Orders and Customers both need to format notification messages.

This doesn't fit neatly into a feature's Shared folder. So where does it go?

First, ask: do you actually need to share?

Most cross-feature "sharing" is just data access in disguise.

If CreateOrder needs customer data, it queries the database directly. It doesn't call into the Customers feature. Each slice owns its data access. The Customer entity is shared (it lives in Domain), but there's no shared service between them.
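
In code, that looks roughly like this (assuming an EF Core-style AppDbContext; the request type and property names are illustrative):

// Features/Orders/CreateOrder (sketch)
public class CreateOrderHandler
{
    private readonly AppDbContext _dbContext;

    public CreateOrderHandler(AppDbContext dbContext) => _dbContext = dbContext;

    public async Task<Result> Handle(CreateOrderRequest request, CancellationToken ct)
    {
        // Query the shared Customer entity directly; no call into the Customers feature
        var customerExists = await _dbContext.Customers
            .AnyAsync(c => c.Id == request.CustomerId, ct);

        if (!customerExists)
        {
            return Result.Failure("Customer does not exist.");
        }

        // ... create and persist the order (omitted)
        return Result.Success();
    }
}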

When you genuinely need shared logic, ask what it is:

  • Domain logic (business rules, calculations) → Domain/Services
  • Infrastructure (external APIs, formatting) → Infrastructure/Services
// Domain/Services/TaxCalculator.cs
public class TaxCalculator
{
    public decimal CalculateTax(Address address, decimal subtotal)
    {
        var rate = GetTaxRate(address.State, address.Country);
        return subtotal * rate;
    }

    // Illustrative rate lookup; a real implementation would read from a rate table or service
    private static decimal GetTaxRate(string state, string country) => 0.08m;
}

Both CreateOrder and GenerateInvoice can use it without coupling to each other.

Before creating any cross-feature service, ask: could this logic live on a domain entity instead? Most "shared business logic" is actually data access, domain logic that belongs on an entity, or premature abstraction.

If you need to trigger a side effect in another feature, I recommend using messaging and events. Alternatively, the feature you want to call into can expose a facade (public API) for that operation.
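
Here's what the messaging option can look like as a sketch, using a hypothetical IEventPublisher abstraction (it could be an in-process dispatcher or a message bus under the hood):

// Shared contract (sketch; names are hypothetical)
public record OrderCreated(Guid OrderId, Guid CustomerId);

public interface IEventPublisher
{
    Task PublishAsync<TEvent>(TEvent @event, CancellationToken ct = default);
}

// Inside the CreateOrder slice, after the order is persisted:
// await _publisher.PublishAsync(new OrderCreated(order.Id, order.CustomerId), ct);
//
// The Invoices feature subscribes to OrderCreated and reacts,
// without CreateOrder ever referencing Invoices code.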

When Duplication Is the Right Call

Sometimes "shared" code isn't actually shared. It just looks that way.

// Features/Orders/GetOrder
public record GetOrderResponse(Guid Id, decimal Total, string Status);

// Features/Orders/CreateOrder
public record CreateOrderResponse(Guid Id, decimal Total, string Status);

They're identical. The temptation to create a SharedOrderDto is overwhelming. Resist it.

Next week, GetOrder needs a tracking URL. But CreateOrder happens before shipping, so there's no URL yet. If you'd shared the DTO, you'd now have a nullable property that's confusingly empty half the time.

Duplication is cheaper than the wrong abstraction.

The Practical Structure

Here's what a mature Vertical Slice Architecture project looks like:

📂 src
└──📂 Features
│   ├──📂 Orders
│   │   ├──📂 CreateOrder
│   │   ├──📂 UpdateOrder
│   │   └──📂 Shared          # Order-specific sharing
│   ├──📂 Customers
│   │   ├──📂 GetCustomer
│   │   └──📂 Shared          # Customer-specific sharing
│   └──📂 Invoices
│       └──📂 GenerateInvoice
└──📂 Domain
│   ├──📂 Entities
│   ├──📂 ValueObjects
│   └──📂 Services            # Cross-feature domain logic
└──📂 Infrastructure
│   ├──📂 Persistence
│   └──📂 Services
└──📂 Shared
    └──📂 Behaviors
  • Features — Self-contained slices. Each owns its request/response models.
  • Features/[Name]/Shared — Local sharing between related slices.
  • Domain — Entities, value objects, and domain services. Shared business logic lives here.
  • Infrastructure — Technical concerns.
  • Shared — Cross-cutting behaviors only.

The Rules

After building several systems this way, here's what I've landed on:

  1. Features own their request/response models. No exceptions.

  2. Push business logic into the domain. Entities and value objects are the best place to share business rules.

  3. Keep feature-family sharing local. If only Order slices need it, keep it in Features/Orders/Shared (feel free to find a better name than Shared).

  4. Infrastructure is shared by default. Database contexts, HTTP clients, logging. These are technical concerns.

  5. Apply the Rule of Three. Don't extract until you have three real usages with identical, stable logic.

Takeaway

Vertical Slice Architecture asks: "What feature does this belong to?"

The shared code question is really asking: "What do I do when the answer is multiple features?"

Acknowledge that some concepts genuinely span features. Give them a home based on their nature (domain, infrastructure, or cross-cutting behavior). Resist the urge to share everything just because you could.

The goal isn't zero duplication. It's code that's easy to change when requirements change.

And requirements always change.

Thanks for reading.

And stay awesome!





Automatically Signing a Windows EXE with Azure Trusted Signing, dotnet sign, and GitHub Actions


[Image: WindowsEdgeLight on a Surface]

macOS Tahoe (in beta as of the time of this writing) has a new feature called Edge Light that basically puts a bright picture of an edge light around your screen and uses the power of OLED to give you a virtual ring light. So I was like, why can't we also have nice things? I wrote (vibed, with GitHub Copilot and Claude Sonnet 4.5) a Windows Edge Light App (source code at https://github.com/shanselman/WindowsEdgeLight and you can get the latest release at https://github.com/shanselman/WindowsEdgeLight/releases, or the app will check for new releases and autoupdate with Updatum).

However, as it is with all suss loose executables on the internet, when you run random stuff you'll often get the Windows Defender 'new phone, who dis' warning, which is scary. After several downloads and no viruses or complaints, my executable will eventually gain reputation with the Windows Defender SmartScreen service, but having a Code Signing Certificate is said to help with that. However, code signing certs are expensive and a hassle to manage and renew.

Someone told me that Azure Trusted Signing was somewhat less of a hassle - it's less, but it's still non-trivial. I read this post from Rick (his blog is gold and has been for years) earlier in the year and some of it was super useful and other stuff has been made simpler over time.

I wrote 80% of this blog post, but since I just spent an hour getting code signing to work and GitHub Copilot was going through and logging everything I did, I did use Claude 4.5 to help organize some of this. I have reviewed it all and re-written parts I didn't like, so any mistakes are mine.

Azure Trusted Signing is Microsoft's cloud-based code signing service. The highlights:

  • No hardware tokens - Everything happens in the cloud
  • Automatic certificate management - Certificates are issued and renewed automatically
  • GitHub Actions integration - Sign during your CI/CD pipeline. I used GH Actions.
  • Kinda Affordable - About $10/month for small projects. I would like it if this were $10 a year. This is cheaper than a yearly cert, but it'll add up after a while so I'm always looking for cheaper/easier options.
  • Trusted by Windows - Uses the same certificate authority as Microsoft's own apps, so you should get your EXE trusted faster

Prerequisites

Before starting, you'll need:

  1. Azure subscription
  2. Azure CLI - Install from here
  3. Identity validation documents - Driver's license or passport for individual developers. Note that I'm in the US, so your mileage may vary but I basically set up the account, scanned a QR code, took a picture of my license, then did a selfie, then waited.
  4. Windows PC - For local signing (optional), but I ended up using the dotnet sign tool
  5. GitHub repository - For automated signing (optional)

Part 1: Setting Up Azure Trusted Signing

Step 1: Register the Resource Provider

First, I need to enable the Azure Trusted Signing service in my subscription. This can be done in the Portal, or at the CLI.

# Login to Azure
az login
# Register the Microsoft.CodeSigning resource provider
az provider register --namespace Microsoft.CodeSigning
# Wait for registration to complete (takes 2-3 minutes)
az provider show --namespace Microsoft.CodeSigning --query "registrationState"

Wait until the output shows "Registered".

Step 2: Create a Trusted Signing Account

Now create the actual signing account. You can do this via Azure Portal or CLI.

Option A: Azure Portal (Easier for first-timers)

  1. Go to Azure Portal
  2. Search for "Trusted Signing Accounts"
  3. Click Create
  4. Fill in:
    • Subscription: Your subscription
    • Resource Group: Create new or use existing (e.g., "MyAppSigning")
    • Account Name: A unique name (e.g., "myapp-signing")
    • Region: Choose closest to you (e.g., "West US 2")
    • SKU: Basic (sufficient for most apps)
  5. Click Review + Create, then Create

Option B: Azure CLI (Faster if you are a CLI person or like to drive stick shift)

# Create a resource group
az group create --name MyAppSigning --location westus2
# Create the Trusted Signing account
az trustedsigning create \
  --resource-group MyAppSigning \
  --account-name myapp-signing \
  --location westus2 \
  --sku-name Basic

Important: Note your region endpoint. Common ones are:

  • East US: https://eus.codesigning.azure.net/
  • West US 2: https://wus2.codesigning.azure.net/
  • Your specific region: Check in Azure Portal under your account's Overview page

I totally flaked on this and messed around for 10 min before I realized that this URL matters and is specific to your account. Remember this endpoint.

Step 3: Complete Identity Validation

This is the most important step. Microsoft needs to verify you're a real person/organization.

  1. In Azure Portal, go to your Trusted Signing Account
  2. Click Identity validation in the left menu
  3. Click Add identity validation
  4. Choose validation type:
    • Individual: For solo developers (uses driver's license/passport)
    • Organization: For companies (uses business registration documents)
  5. For Individual validation:
    • Upload a clear photo of your government-issued ID
    • Provide your full legal name (must match ID exactly)
    • Provide your email address
  6. Submit and wait for approval

Approval Time:

  • Individual: Usually 1-3 business days
  • Organization: 3-5 business days
  • Me: This took about 4 hours, so again, YMMV. I used my personal account and my personal Azure (don't trust MSFT folks with unlimited Azure credits, I pay for my own) so they didn't know it was me. I went through the regular line, not the Pre-check line LOL.

You'll receive an email when approved. You cannot sign any code until this is approved.

Step 4: Create a Certificate Profile

Once your identity is validated, create a certificate profile. This is what actually issues the signing certificates.

  1. In your Trusted Signing Account, click Certificate profiles
  2. Click Add certificate profile
  3. Fill in:
    • Profile name: Descriptive name (e.g., "MyAppProfile")
    • Profile type: Choose Public Trust (required to prevent SmartScreen)
    • Identity validation: Select your approved identity
    • Certificate type: Code Signing
  4. Click Add

Important: Only "Public Trust" profiles prevent SmartScreen warnings. "Private Trust" is for internal apps only. This took me a second to realize also as it's not an intuitive name.

Step 5: Verify Your Setup

# List your Trusted Signing accounts
az trustedsigning show \
  --resource-group MyAppSigning \
  --account-name myapp-signing
# Should show status: "Succeeded"

Write down these values - you'll need them later:

  • Account Name: myapp-signing
  • Certificate Profile Name: MyAppProfile
  • Endpoint URL: https://wus2.codesigning.azure.net/ (or your region)
  • Subscription ID: Found in Azure Portal
  • Resource Group: MyAppSigning

Part 2: Local Code Signing

Now let's sign an executable on your machine. You don't NEED to do this, but I wanted to try it locally to avoid a bunch of CI/CD runs, and I wanted to right-click the EXE and see the cert in Properties before I took it all to the cloud. The nice part about this was that I didn't need to mess with any certificates.

Step 1: Assign Yourself the Signing Role

You need permission to actually use the signing service.

Option A: Azure Portal

  1. Go to your Trusted Signing Account
  2. Click Access control (IAM)
  3. Click Add → Add role assignment
  4. Search for and select Trusted Signing Certificate Profile Signer. This is important. I searched for "code" and found nothing. Search for "Trusted"
  5. Click Next
  6. Click Select members and find your user account
  7. Click Select, then Review + assign

Option B: Azure CLI

# Get your user object ID
$userId = az ad signed-in-user show --query id -o tsv
# Assign the role
az role assignment create \
  --role "Trusted Signing Certificate Profile Signer" \
  --assignee-object-id $userId \
  --scope /subscriptions/YOUR_SUBSCRIPTION_ID/resourceGroups/MyAppSigning/providers/Microsoft.CodeSigning/codeSigningAccounts/myapp-signing

Replace YOUR_SUBSCRIPTION_ID with your actual subscription ID.

Step 2: Login with the Correct Scope

This is crucial - you need to login with the specific codesigning scope.

# Logout first to clear old tokens
az logout
# Login with codesigning scope
az login --use-device-code --scope "https://codesigning.azure.net/.default"

This will give you a code to enter at https://microsoft.com/devicelogin. Follow the prompts.

Why device code flow? Because Azure CLI's default authentication can conflict with Visual Studio credentials in my experience. Device code flow is more reliable for code signing.

Step 3: Download the Sign Tool

Option A: Install Globally (Recommended for regular use)

# Install as a global tool (available everywhere)
dotnet tool install --global --prerelease sign
# Verify installation
sign --version

Option B: Install Locally (Project-specific)

# Install to current directory
dotnet tool install --tool-path . --prerelease sign
# Use with .\sign.exe

Which should I use?

  • Global: If you'll sign multiple projects or sign frequently
  • Local: If you want to keep the tool with a specific project or don't want it in your PATH

Step 4: Sign Your Executable

Note again that code signing URL is specific to you. The tscp is your Trusted Signing Certificate Profile name and the tsa is your Trusted Signing Account name. I set *.exe to sign all the EXEs in the folder and note that the -b base directory is an absolute path, not a relative one. For me it was d:\github\WindowsEdgeLight\publish, and your mileage will vary.

# Navigate to your project folder
cd C:\MyProject
# Sign the executable
.\sign.exe code trusted-signing `
  -b "C:\MyProject\publish" `
  -tse "https://wus2.codesigning.azure.net" `
  -tscp "MyAppProfile" `
  -tsa "myapp-signing" `
  *.exe `
  -v Information

Parameters explained:

  • -b: Base directory containing files to sign
  • -tse: Trusted Signing endpoint (your region)
  • -tscp: Certificate profile name
  • -tsa: Trusted Signing account name
  • *.exe: Pattern to match files to sign
  • -v: Verbosity level (Trace, Information, Warning, Error)

Expected output:

info: Signing WindowsEdgeLight.exe succeeded.
Completed in 2743 ms.

Step 5: Verify the Signature

You can do this in PowerShell:

# Check the signature
Get-AuthenticodeSignature ".\publish\MyApp.exe" | Format-List
# Look for:
# Status: Valid
# SignerCertificate: CN=Your Name, O=Your Name, ...
# TimeStamperCertificate: Should be present

Right-click the EXE → Properties → Digital Signatures tab:

  • You should see your signature
  • "This digital signature is OK"

Common Local Signing Issues

I hit all of these lol

Issue: "Please run 'az login' to set up account"

  • Cause: Not logged in with the right scope
  • Fix: Run az logout then az login --use-device-code --scope "https://codesigning.azure.net/.default"

Issue: "403 Forbidden"

  • Cause: Wrong endpoint, account name, or missing permissions
  • Fix:
    • Verify endpoint matches your region (wus2, eus, etc.)
    • Verify account name is exact (case-sensitive)
    • Verify you have "Trusted Signing Certificate Profile Signer" role

Issue: "User account does not exist in tenant"

  • Cause: Azure CLI trying to use Visual Studio credentials
  • Fix: Use device code flow (see Step 2)

Part 3: Automated Signing with GitHub Actions

This is where the magic happens. I want to automatically sign every release. I'm using GitVersion so I just need to tag a commit and GitHub Actions will kick off a run. You can go look at a real run in detail at https://github.com/shanselman/WindowsEdgeLight/actions/runs/19775054123

Step 1: Create a Service Principal

GitHub Actions needs its own identity to sign code. We'll create a service principal (like a robot account). This is VERY different than your local signing setup.

Important: You need Owner or User Access Administrator role on your subscription to do this. If you don't have it, ask your Azure admin or a friend.

# Create service principal with signing permissions
az ad sp create-for-rbac \
  --name "MyAppGitHubActions" \
  --role "Trusted Signing Certificate Profile Signer" \
  --scopes /subscriptions/YOUR_SUBSCRIPTION_ID/resourceGroups/MyAppSigning/providers/Microsoft.CodeSigning/codeSigningAccounts/myapp-signing \
  --json-auth

This outputs JSON like this:

{
  "clientId": "12345678-1234-1234-1234-123456789abc",
  "clientSecret": "super-secret-value-abc123",
  "tenantId": "87654321-4321-4321-4321-cba987654321",
  "subscriptionId": "abcdef12-3456-7890-abcd-ef1234567890"
}

SAVE THESE VALUES IMMEDIATELY! You can't retrieve the clientSecret again. This is super important.

Alternative: Azure Portal Method

If CLI doesn't work:

  1. Azure Portal → App registrations → New registration
  2. Name: "MyAppGitHubActions"
  3. Click Register
  4. Copy the Application (client) ID - this is AZURE_CLIENT_ID
  5. Copy the Directory (tenant) ID - this is AZURE_TENANT_ID
  6. Go to Certificates & secrets → New client secret
  7. Description: "GitHub Actions"
  8. Expiration: 24 months (max)
  9. Click Add and immediately copy the Value - this is AZURE_CLIENT_SECRET
  10. Go to your Trusted Signing Account → Access control (IAM)
  11. Add role assignment → Trusted Signing Certificate Profile Signer
  12. Select members → Search for "MyAppGitHubActions"
  13. Review + assign

Step 2: Add GitHub Secrets

Go to your GitHub repository:

  1. Settings → Secrets and variables → Actions
  2. Click New repository secret for each:
  • AZURE_CLIENT_ID - From service principal output or App registration
  • AZURE_CLIENT_SECRET - From service principal output or Certificates & secrets
  • AZURE_TENANT_ID - From service principal output or App registration
  • AZURE_SUBSCRIPTION_ID - Azure Portal → Subscriptions

Security Note: These secrets are encrypted and never visible in logs. Only your workflow can access them. You'll never see them again.

Step 3: Update Your GitHub Workflow

This is a little confusing as it's YAML, which is Satan's markup, but it's what we have sunk to as a society.

Note the dotnet-version below. Yours might be 8 or 9, etc. Also, I am building both x64 and ARM versions and I am using GitVersion so if you want a more complete build.yml, you can go here https://github.com/shanselman/WindowsEdgeLight/blob/master/.github/workflows/build.yml I am also zipping mine up and prepping my releases so my loose EXE lives in a ZIP file.

Add signing steps to your .github/workflows/build.yml:

name: Build and Sign
on:
  push:
    tags:
      - 'v*'
  workflow_dispatch:
permissions:
  contents: write
jobs:
  build:
    runs-on: windows-latest
    
    steps:
    - name: Checkout code
      uses: actions/checkout@v4
      with:
        fetch-depth: 0
      
    - name: Setup .NET
      uses: actions/setup-dotnet@v4
      with:
        dotnet-version: '10.0.x'
        
    - name: Restore dependencies
      run: dotnet restore MyApp/MyApp.csproj
    - name: Build
      run: |
        dotnet publish MyApp/MyApp.csproj `
          -c Release `
          -r win-x64 `
          --self-contained
    # === SIGNING STEPS START HERE ===
    
    - name: Azure Login
      uses: azure/login@v2
      with:
        creds: '{"clientId":"${{ secrets.AZURE_CLIENT_ID }}","clientSecret":"${{ secrets.AZURE_CLIENT_SECRET }}","subscriptionId":"${{ secrets.AZURE_SUBSCRIPTION_ID }}","tenantId":"${{ secrets.AZURE_TENANT_ID }}"}'
    - name: Sign executables with Trusted Signing
      uses: azure/trusted-signing-action@v0
      with:
        azure-tenant-id: ${{ secrets.AZURE_TENANT_ID }}
        azure-client-id: ${{ secrets.AZURE_CLIENT_ID }}
        azure-client-secret: ${{ secrets.AZURE_CLIENT_SECRET }}
        endpoint: https://wus2.codesigning.azure.net/
        trusted-signing-account-name: myapp-signing
        certificate-profile-name: MyAppProfile
        files-folder: ${{ github.workspace }}\MyApp\bin\Release\net10.0-windows\win-x64\publish
        files-folder-filter: exe
        files-folder-recurse: true
        file-digest: SHA256
        timestamp-rfc3161: http://timestamp.acs.microsoft.com
        timestamp-digest: SHA256
    
    # === SIGNING STEPS END HERE ===
        
    - name: Create Release
      if: startsWith(github.ref, 'refs/tags/')
      uses: softprops/action-gh-release@v2
      with:
        files: MyApp/bin/Release/net10.0-windows/win-x64/publish/MyApp.exe
      env:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

Key points:

  • endpoint: Use YOUR region's endpoint (wus2, eus, etc.)
  • trusted-signing-account-name: Your account name (exact, case-sensitive)
  • certificate-profile-name: Your certificate profile name (exact, case-sensitive)
  • files-folder: Path to your compiled executables
  • files-folder-filter: File types to sign (exe, dll, etc.)
  • files-folder-recurse: Sign files in subfolders

Step 4: Test the Workflow

Now trigger the workflow. You have two options:

Option A: Manual Trigger (Safest for testing)

Since the workflow includes workflow_dispatch:, you can trigger it manually without creating a tag:

# Trigger manually via GitHub CLI
gh workflow run build.yml
# Or go to GitHub web UI:
# Actions tab → "Build and Sign" workflow → "Run workflow" button

This is ideal for testing because:

  • No tag required
  • Won't create a release
  • Can test multiple times
  • Easy to debug issues

Option B: Create a Tag (For actual releases)

# Make sure you're on your main branch with no uncommitted changes
git status
# Create and push a tag
git tag v1.0.0
git push origin v1.0.0

Use this when you're ready to create an actual release with signed binaries. This is what I am doing on my side.

Step 5: Monitor the Build

Watch the progress with GitHub CLI:

# See latest runs
gh run list --limit 5
# Watch a specific run
gh run watch
# View detailed status
gh run view --log

Or visit: https://github.com/YOUR_USERNAME/YOUR_REPO/actions

Look for:

  • Azure Login - Should complete in ~5 seconds
  • Sign executables with Trusted Signing - Should complete in ~10-30 seconds
  • Create Release - Your signed executable is now available in /releases in your GitHub project

Common GitHub Actions Issues

I hit a few of these, natch.

Issue: "403 Forbidden" during signing

  • Cause: Service principal doesn't have permissions
  • Fix:
    1. Go to Azure Portal → Trusted Signing Account → Access control (IAM)
    2. Verify "MyAppGitHubActions" has "Trusted Signing Certificate Profile Signer" role
    3. If not, add it manually

Issue: "No files matched the pattern"

  • Cause: Wrong files-folder path or build artifacts in wrong location
  • Fix:
    1. Add a debug step before signing: - run: Get-ChildItem -Recurse
    2. Find where your EXE is actually located
    3. Update files-folder to match

Issue: Secrets not working

  • Cause: Typo in secret name or value not saved
  • Fix:
    1. Verify secret names EXACTLY match (case-sensitive)
    2. Re-create secrets if unsure
    3. Make sure no extra spaces in values

Issue: "DefaultAzureCredential authentication failed"

  • Cause: Usually wrong tenant ID or client ID
  • Fix: Verify all 4 secrets are correct from service principal output

Part 4: Understanding the Certificate

Certificate Lifecycle

Azure Trusted Signing uses short-lived certificates (typically 3 days). This freaked me out but they say this is actually a security feature:

  • If a certificate is compromised, it expires quickly
  • You never manage certificate files or passwords
  • Automatic renewal - you don't have to do anything

But won't my signature break after 3 days?

No, it seems that's what timestamping is for. When you sign a file:

  1. Azure issues a 3-day certificate
  2. The file is signed with that certificate
  3. A timestamp server records "this file was signed on DATE"
  4. Even after the certificate expires, the signature remains valid because the timestamp proves it was signed when the certificate was valid

That's why both local and GitHub Actions signing include:

timestamp-rfc3161: http://timestamp.acs.microsoft.com

What the Certificate Contains

Your signed executable has a certificate with:

  • Subject: Your name (e.g., "CN=John Doe, O=John Doe, L=Seattle, S=Washington, C=US")
  • Issuer: Microsoft ID Verified CS EOC CA 01
  • Valid Dates: 3-day window
  • Key Size: 3072-bit RSA (very secure)
  • Enhanced Key Usage: Code Signing

Verify Certificate on Any Machine

# Using PowerShell
Get-AuthenticodeSignature "MyApp.exe" | Select-Object -ExpandProperty SignerCertificate | Format-List
# Using Windows UI
# Right-click EXE → Properties → Digital Signatures tab → Details → View Certificate

This whole thing took me about an hour to 75 minutes. It was detailed, but not deeply difficult. Misspellings, case-sensitivity, and a few account issues with Role-Based Access Control did slow me down. Hope this helps!

Used Resources

Written in November 2025 based on real-world implementation for WindowsEdgeLight. Your setup might vary slightly depending on Azure region and account type. Things change, be stoic.



© 2025 Scott Hanselman. All rights reserved.
    

Before Building Your AI Solution: Start with the RIGHT Problem


According to a RAND report, private sector investment in AI grew 18 times more between 2013 and 2024, and over half of mid-sized companies have already deployed at least one AI model. Managers feel pressure to ‘do something with AI’, but most struggle to translate the leader’s ambition into action (AI transformation). The biggest mistake? Organisations focus more on using cool technology…

Source


Anthropic says it solved the long-running AI agent problem with a new multi-session Claude SDK


Agent memory remains a problem that enterprises want to fix, as agents forget some instructions or conversations the longer they run. 

Anthropic believes it has solved this issue for its Claude Agent SDK, developing a two-fold solution that allows an agent to work across different context windows.

“The core challenge of long-running agents is that they must work in discrete sessions, and each new session begins with no memory of what came before,” Anthropic wrote in a blog post. “Because context windows are limited, and because most complex projects cannot be completed within a single window, agents need a way to bridge the gap between coding sessions.”

Anthropic engineers proposed a two-fold approach for its Agent SDK: An initializer agent to set up the environment, and a coding agent to make incremental progress in each session and leave artifacts for the next.  

The agent memory problem

Since agents are built on foundation models, they remain constrained by the limited, although continually growing, context windows. For long-running agents, this could create a larger problem, leading the agent to forget instructions and behave abnormally while performing a task. Enhancing agent memory becomes essential for consistent, business-safe performance. 

Several methods emerged over the past year, all attempting to bridge the gap between context windows and agent memory. LangChain’s LangMem SDK, Memobase and OpenAI’s Swarm are examples of such memory offerings. Research on agentic memory has also exploded recently, with proposed frameworks like Memp and the Nested Learning Paradigm from Google offering new alternatives to enhance memory.

Many of the current memory frameworks are open source and can ideally adapt to different large language models (LLMs) powering agents. Anthropic’s approach improves its Claude Agent SDK. 

How it works

Anthropic identified that even though the Claude Agent SDK had context management capabilities, and it “should be possible for an agent to continue to do useful work for an arbitrarily long time,” that alone was not sufficient. The company said in its blog post that a model like Opus 4.5 running the Claude Agent SDK can “fall short of building a production-quality web app if it’s only given a high-level prompt, such as 'build a clone of claude.ai.'”

The failures manifested in two patterns, Anthropic said. First, the agent tried to do too much, causing the model to run out of context in the middle. The agent then has to guess what happened and cannot pass clear instructions to the next agent. The second failure occurs later on, after some features have already been built. The agent sees progress has been made and just declares the job done. 

Anthropic researchers broke down the solution: Setting up an initial environment to lay the foundation for features and prompting each agent to make incremental progress towards a goal, while still leaving a clean slate at the end. 

This is where the two-part solution of Anthropic's agent comes in. The initializer agent sets up the environment, logging what agents have done and which files have been added. The coding agent then prompts the model to make incremental progress and leave structured updates.

“Inspiration for these practices came from knowing what effective software engineers do every day,” Anthropic said. 

The researchers said they added testing tools to the coding agent, improving its ability to identify and fix bugs that weren’t obvious from the code alone. 

Future research

Anthropic noted that its approach is “one possible set of solutions in a long-running agent harness.” However, this is just the beginning stage of what could become a wider research area for many in the AI space. 

The company said its experiments to boost long-term memory for agents haven’t shown whether a single general-purpose coding agent or a multi-agent structure works best across contexts.

Its demo also focused on full-stack web app development, so other experiments should focus on generalizing the results across different tasks.

“It’s likely that some or all of these lessons can be applied to the types of long-running agentic tasks required in, for example, scientific research or financial modeling,” Anthropic said. 


