Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Dario’s Choice and Anthropic’s Future, Apple’s AI Devices, Netflix Loses WBD

1 Share

M.G. Siegler of Spyglass is back for our monthly tech news discussion. Siegler joins us to discuss the latest on the Pentagon’s clash with Anthropic, why OpenAI stepped in to take the deal, and what comes next for Anthropic and its CEO Dario Amodei. Tune in to hear what the “supply chain risk” label could mean and AI’s growing role in defense work. We also cover Apple’s rumored trio of AI devices, Siri’s latest delays, and the Netflix–Warner Bros. Discovery deal falling apart as Paramount jumps in.

---

Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice.

Want a discount for Big Technology on Substack + Discord? Here’s 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b


Learn more about your ad choices. Visit megaphone.fm/adchoices





Download audio: https://pdst.fm/e/tracking.swap.fm/track/t7yC0rGPUqahTF4et8YD/pscrb.fm/rss/p/traffic.megaphone.fm/AMPP3236570834.mp3?updated=1772488656
Read the whole story
alvinashcraft
14 seconds ago
reply
Pennsylvania, USA
Share this story
Delete

Finale & Friends (Friends)

1 Share

Adam and Jerod get into the news, Jerod officially retires from the pod (and Changelog), plus a bonus for our Changelog++ subs!

Join the discussion

Changelog++ members get a bonus 16 minutes at the end of this episode and zero ads. Join today!

Sponsors:

  • Augment Code – Adam loves “Auggie” – Augment Code’s CLI that brings Augment’s context engine and powerful AI reasoning anywhere your code goes. From building alongside you in the terminal to any part of your development workflow.
  • Squarespace – Turn your expertise into a business with the all-in-one platform for websites, services, and getting paid. Use code CHANGELOG to save 10% on your first website purchase.
  • Notion – Custom Agents that automate the busywork so your team can focus on real work. Try them free at notion.com/changelog

Featuring:

Show Notes:

Something missing or broken? PRs welcome!





Download audio: https://op3.dev/e/https://pscrb.fm/rss/p/https://cdn.changelog.com/uploads/friends/129/changelog--friends-129.mp3

Update to WinApp CLI and some GitHub releases - Developer News 09/2026

1 Share
From: Noraa on Tech
Duration: 1:28
Views: 0

Today we look at WinApp CLI 0.2.0, the release of macOS 26 for GitHub Actions, and more.

00:00 Intro
00:12 GitHub
00:44 Windows

-----

Links

GitHub
• GitHub Copilot CLI is now generally available - https://github.blog/changelog/2026-02-25-github-copilot-cli-is-now-generally-available/
• macos-26 is now generally available for GitHub-hosted runners - https://github.blog/changelog/2026-02-26-macos-26-is-now-generally-available-for-github-hosted-runners/
Windows
• Winapp CLI Release v0.2.0 - https://github.com/microsoft/winappCli/releases/tag/v0.2.0

-----

🐦X: https://x.com/theredcuber
🐙Github: https://github.com/noraa-junker
📃My website: https://noraajunker.ch


Azure Trusted Signing Revisited with Dotnet Sign

1 Share

Trusted Signing Banner

I wrote about getting Azure Trusted Signing to work for my application signing tasks a while back and mentioned how frustrating that process was, both on Azure and on the client for the actual signing. Well, since then I'm happy to say that additional tooling has come out from Microsoft that makes this process quite a bit easier and the signing itself considerably faster than my previous workflow.

This post is specifically about the client side of signing - the original post covers the Azure setup for your signing account, and that hasn't changed much. In this post I'll discuss the new-ish dotnet sign tooling that now works explicitly with Azure Trusted Signing.

The documentation for the client signing part of this is still nearly non-existent or, at minimum, non-discoverable, and LLMs absolutely give wrong advice because of it. The change of terminology in the tools isn't helping either, along with the outdated documentation. Why in the heck would you name something as obscure and non-relevant as artifact-signing instead of the obvious trusted-signing that already existed? But I digress 😄

So, here's the skinny on what works with a heck of a lot less effort than what I described in my last post using SignTool—using dotnet sign instead.

Setting up Trusted Signing

You still need to set up your Azure Trusted Signing, so that part hasn't changed, although by now hopefully the process to set this up might be a little simpler than what I described in my previous post:

Once you have your Trusted Signing account set up and ready to go, what has changed for the better is the process of signing your binaries on the local machine, which is quite a bit easier, as it combines everything you need into a single set of tools - except for the Azure CLI, which is required for authentication.

Local Signing: Enter dotnet sign

dotnet sign is a dotnet tool that you install via the .NET SDK. You need to have the .NET SDK installed to install dotnet sign, as well as the Azure CLI so you can authenticate your Azure account.

Prerequisites

Install .NET SDK

The .NET SDK provides the dotnet tool infrastructure needed to install and run the dotnet sign tool.

You can install the SDK from here if you don't already have it on your dev machine.

Install Azure CLI

The Azure CLI is required so that you can authenticate using Azure's OAuth authentication flow.

dotnet tool and the .NET SDK

Applications deployed as dotnet tools use the locally installed .NET SDK to take advantage of .NET's cross-platform features without requiring additional framework dependencies or separate installers for each platform. A dotnet tool will run on any of .NET's supported platforms, assuming it uses only cross-platform compatible features. It does this by automatically creating a platform-specific executable for the tool on the machine that dotnet tool install is run on. For this reason it's a convenient way to deploy tools across platforms, and if you're in the .NET development eco-system in any way, it's likely the .NET SDK is already installed.

You can install the Azure CLI in a number of ways but the easiest is probably via WinGet.

Logging into Azure CLI

Once the Azure CLI is installed you can then log in and set the subscription that your Certificate runs under:

az config set core.enable_broker_on_windows=false
az login
az account set --subscription "Pay-As-You-Go"

I've had lots of issues with the default Windows OAuth authentication using a Web browser. The first configuration line:

az config set core.enable_broker_on_windows=false

provides more reliable authentication directly in the CLI. For me this was the only way I could get it to work correctly. Your mileage may vary and you may not even need this. One thing to look out for is to make sure you choose the right subscription!

Install Dotnet Sign

Dotnet Sign is installed as a dotnet tool executable, and in order to install it you need to use the dotnet tool command that's part of the .NET SDK.

To install:

dotnet tool install -g --prerelease sign

The tool is currently in pre-release, so the --prerelease switch is required - without it the package won't be found. You can also install locally into your project by omitting the -g switch, which creates a fixed local copy instead of a global instance in the shared tools location.
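If you go the local-install route, the usual dotnet local tool flow applies - a sketch (the manifest file ends up under the project, and local tools are invoked through the dotnet driver):

```shell
# One-time: create a tool manifest in the project root.
dotnet new tool-manifest

# Install sign locally (still needs --prerelease).
dotnet tool install --prerelease sign

# Local tools run through the dotnet driver:
dotnet sign code artifact-signing --help
```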

Using Sign Code Artifact Signing

With all this in place you can now use the Sign command to sign your binaries. Specifically you'll use:

sign code artifact-signing

Note that although there's a trusted-signing option, it's now deprecated and you should use artifact-signing instead.

A full signing command for a binary file looks like this:

sign code artifact-signing `
   --verbosity warning `
   --timestamp-url http://timestamp.digicert.com `
   --artifact-signing-endpoint https://eus.codesigning.azure.net/ `
   --artifact-signing-account MySigningAccount `
   --artifact-signing-certificate-profile MySigningCertificateProfile `
   .\Distribution\MarkdownMonster.exe

You can specify multiple files and they will be batch sent together.

Signing is considerably faster than what I saw with my old SignTool based workflow, with signing times under a second for most files. Based on this speed, it looks like dotnet sign signs based on locally created hashes rather than uploading the entire file to the server for processing.
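To see why hash-based submission would be fast: a digest is fixed-size regardless of input size, so only a few dozen bytes need to travel to the service per file. A generic sketch (plain SHA-256 for illustration, not the actual Authenticode digest the service uses):

```shell
# Create a 10 MB dummy 'binary', then hash it: the digest is always
# 64 hex characters, no matter how large the input file is.
dd if=/dev/zero of=/tmp/demo.bin bs=1M count=10 2>/dev/null
sha256sum /tmp/demo.bin | cut -d' ' -f1
```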

Unfortunately, this workflow does not support a metadata file like SignTool supports, so you have to specify all the Azure parameters explicitly. However, we'll fix that with the script provided below.

Putting it all together into a Signing Script

The basic syntax for signing is now simple enough. However, I need signing in a lot of different projects, so I want to reuse the functionality across them and provide the script as part of the local build infrastructure.

The reasons to create a script for all this are:

  • Using metadata configuration files (like SignTool used to use)
  • Handling login optionally
  • Handling parameter errors and signing errors
  • Documentation for prerequisites and configuration in the script

Here's the PowerShell script:

# File name: Signfile.ps1
# Prerequisites:  
# dotnet tool install -g --prerelease sign
# Azure CLI required for logging in optionally
# Support metadata: SignfileMetaData.json

param(
    [string]$file = "",
    [string]$file1 = "",
    [string]$file2 = "",
    [string]$file3 = "",
    [string]$file4 = "",
    [string]$file5 = "",
    [string]$file6 = "",
    [string]$file7 = "",
    [string]$file8 = "",
    [boolean]$login = $false
)
if (-not $file) {
    Write-Host "Usage: SignFile.ps1 -file <path to file to sign>"
    exit 1
}

if ($login) {
    az config set core.enable_broker_on_windows=false
    az login
    az account set --subscription "Pay-As-You-Go"
}

# SignfileMetadata.json is not checked in. Format:
# {
#   "Endpoint": "https://eus.codesigning.azure.net/",
#   "CodeSigningAccountName": "MySigningAccount",
#   "CertificateProfileName": "MySigningCertificateProfile"
# }

$metadata = Get-Content -Path "SignfileMetadata.json" -Raw | ConvertFrom-Json
$tsEndpoint = $metadata.Endpoint
$tsAccount = $metadata.CodeSigningAccountName
$tsCertProfile = $metadata.CertificateProfileName
$timeServer = "http://timestamp.digicert.com"

$signArgs = @(
    "--verbosity", "warning",
    "--timestamp-url", $timeServer,
    "--artifact-signing-endpoint", $tsEndpoint,
    "--artifact-signing-account", $tsAccount,
    "--artifact-signing-certificate-profile", $tsCertProfile
)

# Add file arguments at the end
foreach ($f in @($file, $file1, $file2, $file3, $file4, $file5, $file6, $file7, $file8)) {
    if (![string]::IsNullOrWhiteSpace($f)) {
        $signArgs += $f
    }
}

# Run dotnet sign and capture the exit code
sign code artifact-signing $signArgs
$exitCode = $LASTEXITCODE

if ($exitCode -eq 0) {
    Write-Host "File(s) signed successfully." -ForegroundColor Green
    exit 0
} else {
    Write-Host "Signing failed with exit code: $exitCode" -ForegroundColor Red
    exit $exitCode
}

This script uses a separate, external configuration file for the Azure values required for code signing. I tend to check the signing script into Git, while the metadata file is created locally and not checked in, which separates the two. Alternately you can use some other approach to keep the private data out of your repo - environment variables would also do the trick - but I prefer this explicit approach because it makes it easy to copy the data and script across multiple projects while still keeping the private data out of Git.

Here's what the metadata file looks like:

// SignfileMetadata.json
{
  "Endpoint": "https://eus.codesigning.azure.net/",
  "CodeSigningAccountName": "MySigningAccount",
  "CertificateProfileName": "MyCodeSignCertificateProfile"
}

So now you can drop SignFile.ps1 and SignfileMetadata.json into a folder and use them for signing.

In my build script I then have something like this to invoke the signing operation after my binaries have been built:

if ($nosign -eq $false) {    
    "Signing binaries..."
    .\signfile-dotnetsign.ps1 -file ".\Distribution\MarkdownMonster.exe" `
                    -file1 ".\Distribution\MarkdownMonsterArm64.exe" `
                    -file2 ".\Distribution\MarkdownMonster.dll" `
                    -file3 ".\Distribution\mm.exe" `
                    -file4 ".\Distribution\mmcli.exe" `
                    -login $false                   

    if ($LASTEXITCODE -ne 0) {
        Write-Host "Signing failed, exiting build script."
        exit $LASTEXITCODE
    }
}

and then again at the very end to sign the final setup distributable:

.\signfile-dotnetsign.ps1 `
	-file ".\Builds\CurrentRelease\MarkdownMonsterSetup.exe" `
    -login $false

The metadata file can be created in the output folder and should not be checked into the Git repo. Alternately you could also change the code to use environment variables if that suits your workflow better.
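For instance, in a bash-based CI job the same values could come from environment variables (the variable names here are my own invention, stored as CI secrets rather than a file):

```shell
# Hypothetical variable names - store these as CI secrets, not in the repo.
export SIGN_ENDPOINT="https://eus.codesigning.azure.net/"
export SIGN_ACCOUNT="MySigningAccount"
export SIGN_CERT_PROFILE="MySigningCertificateProfile"

sign code artifact-signing \
  --timestamp-url http://timestamp.digicert.com \
  --artifact-signing-endpoint "$SIGN_ENDPOINT" \
  --artifact-signing-account "$SIGN_ACCOUNT" \
  --artifact-signing-certificate-profile "$SIGN_CERT_PROFILE" \
  ./Distribution/MarkdownMonster.exe
```

In PowerShell the equivalent would read $env:SIGN_ENDPOINT and friends instead of the metadata JSON.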

Dotnet Sign is Much Better

I've been using this new workflow for a couple of weeks now, and it has made signing a lot faster than SignTool. It appears that binaries are hashed locally and only the hash data is sent to the server, resulting in much faster processing times. Average processing with SignTool was around 5 seconds per file; it's now down to under 1 second per file - a huge improvement in packaging performance.

Microsoft's own documentation now also uses the DigiCert timestamp server instead of the Microsoft one, which had been failing for me on a regular basis. Using the DigiCert timestamp server I haven't had any signing failures in the last two weeks.

Summary

So all of this is good news, as it's taken away some of the early growing pains of using Azure Trusted Signing and makes it much more usable and predictable to run on the client without having to jump through all sorts of hoops.

It still isn't as simple as it could be, especially if developers are not already using the .NET ecosystem, but if you're doing code signing for Authenticode, you're likely a Windows developer.

The last hurdle that I'd like to get going now is NuGet signing. But I'll leave that for another time... Onwards.

Resources

this post created and published with the Markdown Monster Editor

© Rick Strahl, West Wind Technologies, 2005-2026
Posted in Windows  Security  WPF  

Build a Loom Clone with Next.js 15 and Mux

1 Share

We just posted a course on the freeCodeCamp.org YouTube channel that will teach you how to build a professional screen recording platform from scratch. Using Next.js 15 and Mux, you will create a "Loom Clone" that handles everything from recording to AI-generated summaries. I created this course.

The main technologies used are Next.js and Mux. This project will help you understand how professional video sites actually work. The project features:

  • Browser-Based Recording: Capture your screen and microphone directly in the browser using standard media APIs.

  • Smart Uploads: Use "Direct Upload" to send videos straight to the cloud, which saves server bandwidth and memory.

  • AI Integration: Automatically transcribe audio via OpenAI’s Whisper model and generate video titles and tags using Mux AI.

  • Modern Video Tech: Learn how HLS and Adaptive Bitrate Streaming provide smooth playback by switching quality levels based on internet speed.

  • Professional Features: Add custom watermarks to your videos and create a dashboard with animated thumbnails.

Watch the full course on the freeCodeCamp.org YouTube channel (1-hour watch).




🎙️🤖 Real-Time AI Conversations in .NET — Local STT, TTS, VAD and LLM

1 Share

⚠ This blog post was created with the help of AI tools. Yes, I used a bit of magic from language models to organize my thoughts and automate the boring parts, but the geeky fun and the 🤖 in C# are 100% mine.

Hi 👋

What if you could build a real-time voice conversation app in .NET — speech-to-text, text-to-speech, voice activity detection, and LLM responses — all running locally on your machine?

That’s exactly what ElBruno.Realtime does.

🎥 Watch the full video here (coming soon)

Why I Built This

I’ve been building local AI tools for .NET for a while — local embeddings, local TTS with VibeVoice and QwenTTS, and more. But what was missing was the glue: a framework that chains VAD → STT → LLM → TTS into a single, pluggable pipeline.

I wanted something that:

  • Follows Microsoft.Extensions.AI patterns (no proprietary abstractions)
  • Uses Dependency Injection like any modern .NET app
  • Lets you swap any component — Whisper for STT, Kokoro or QwenTTS for TTS, Foundry Local or Ollama for chat
  • Auto-downloads models on first run — no manual setup
  • Supports both one-shot and real-time streaming conversations

So I built it. 🚀

The Architecture

ElBruno.Realtime uses a three-layer architecture:

                Your App
┌─────────────────────────────────────┐
│   RealtimeConversationPipeline      │  ← Orchestration Layer
│   (Chains VAD → STT → LLM → TTS)    │
└─────────────────────────────────────┘
     ↓         ↓        ↓        ↓
  Silero    Whisper   Ollama   Kokoro/Qwen/VibeVoice
   VAD        STT      Chat     TTS

Every component implements a standard interface — ISpeechToTextClient (from M.E.AI), ITextToSpeechClient, IVoiceActivityDetector, IChatClient — so they’re independently replaceable.

Two processing modes:

  • ProcessTurnAsync — One-shot: give it a WAV file, get back transcription + AI response + audio
  • ConverseAsync — Streaming: pipe live microphone audio, get real-time events as IAsyncEnumerable<ConversationEvent>

NuGet Packages

  • ElBruno.Realtime – Core pipeline + abstractions
  • ElBruno.Realtime.Whisper – Whisper.net STT (GGML models)
  • ElBruno.Realtime.SileroVad – Silero VAD via ONNX Runtime
  • ElBruno.KokoroTTS.Realtime – Kokoro-82M TTS (~320 MB, fast)
  • ElBruno.QwenTTS.Realtime – QwenTTS (~5.5 GB, high quality)
  • ElBruno.VibeVoiceTTS.Realtime – VibeVoice TTS (~1.5 GB)

All models auto-download on first use. No manual steps. 📦
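To pull these into a project, the usual dotnet add package flow should work (package IDs are taken from the list above; add or swap providers to match your pipeline):

```shell
# Core pipeline plus one STT and one TTS provider.
dotnet add package ElBruno.Realtime
dotnet add package ElBruno.Realtime.Whisper
dotnet add package ElBruno.KokoroTTS.Realtime
```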

Show Me the Code

Minimal Console App — One-Shot Conversation

This is the simplest possible setup. Record a question, get an AI response with audio:

using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.AI;

var services = new ServiceCollection();

// Wire up the pipeline
services.AddPersonaPlexRealtime(opts =>
{
    opts.DefaultSystemPrompt = "You are a helpful assistant. Keep responses brief.";
    opts.DefaultLanguage = "en-US";
})
.UseWhisperStt("whisper-tiny.en") // 75 MB model, auto-downloaded
.UseSileroVad()                   // ~2 MB model
.UseKokoroTts();                  // ~320 MB model

// Add any IChatClient — here we use Ollama
services.AddChatClient(
    new OllamaChatClient(new Uri("http://localhost:11434"), "phi4-mini"));

var provider = services.BuildServiceProvider();
var conversation = provider.GetRequiredService<IRealtimeConversationClient>();

// Process a WAV file
using var audio = File.OpenRead("question.wav");
var turn = await conversation.ProcessTurnAsync(audio);

Console.WriteLine($"📝 You said: {turn.UserText}");
Console.WriteLine($"🤖 AI replied: {turn.ResponseText}");
Console.WriteLine($"⏱ Processing time: {turn.ProcessingTime.TotalMilliseconds:F0}ms");

That’s it. First run downloads models automatically. After that, everything runs locally.

Real-Time Streaming — Live Microphone

For real-time conversations, ConverseAsync gives you an IAsyncEnumerable<ConversationEvent> that streams events as they happen:

await foreach (var evt in conversation.ConverseAsync(
    microphoneAudioStream,
    new ConversationOptions
    {
        SystemPrompt = "You are a friendly voice assistant.",
        SessionId = "user-123",       // Per-user conversation history
        EnableBargeIn = true,         // Allow interrupting
        MaxConversationHistory = 20,
    }))
{
    switch (evt.Kind)
    {
        case ConversationEventKind.SpeechDetected:
            Console.WriteLine("🎤 Speech detected...");
            break;
        case ConversationEventKind.TranscriptionComplete:
            Console.WriteLine($"📝 You: {evt.TranscribedText}");
            break;
        case ConversationEventKind.ResponseTextChunk:
            Console.Write(evt.ResponseText); // Streams token by token
            break;
        case ConversationEventKind.ResponseAudioChunk:
            // Play audio chunk in real-time
            audioPlayer.EnqueueChunk(evt.ResponseAudio);
            break;
        case ConversationEventKind.ResponseComplete:
            Console.WriteLine("\n✅ Response complete");
            break;
    }
}

The pipeline handles everything:

  1. Silero VAD detects when you start/stop speaking
  2. Whisper transcribes your speech
  3. Ollama generates a response (streamed)
  4. Kokoro/QwenTTS converts the response to audio (streamed)

All async. All streaming. All local.

ASP.NET Core API + SignalR

Want to expose this as a web API? Here’s the setup:

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddPersonaPlexRealtime(opts =>
{
    opts.DefaultSystemPrompt = "You are a helpful assistant.";
})
.UseWhisperStt("whisper-tiny.en")
.UseSileroVad()
.UseKokoroTts();

builder.Services.AddChatClient(
    new OllamaChatClient(new Uri("http://localhost:11434"), "phi4-mini"));

builder.Services.AddSignalR();

var app = builder.Build();

// REST endpoint for one-shot turns
app.MapPost("/api/conversation/turn", async (
    HttpRequest request,
    IRealtimeConversationClient conversation) =>
{
    var form = await request.ReadFormAsync();
    var audioFile = form.Files["audio"];
    using var audioStream = audioFile!.OpenReadStream();

    var turn = await conversation.ProcessTurnAsync(audioStream);

    return Results.Ok(new
    {
        userText = turn.UserText,
        responseText = turn.ResponseText,
        processingTimeMs = turn.ProcessingTime.TotalMilliseconds,
    });
});

app.Run();

And a SignalR hub for real-time streaming:

public class ConversationHub : Hub
{
    private readonly IRealtimeConversationClient _conversation;

    public ConversationHub(IRealtimeConversationClient conversation)
        => _conversation = conversation;

    public async IAsyncEnumerable<ConversationEventDto> StreamConversation(
        IAsyncEnumerable<byte[]> audioChunks,
        string? systemPrompt = null)
    {
        await foreach (var evt in _conversation.ConverseAsync(
            audioChunks,
            new ConversationOptions { SystemPrompt = systemPrompt }))
        {
            yield return new ConversationEventDto
            {
                Kind = evt.Kind.ToString(),
                TranscribedText = evt.TranscribedText,
                ResponseText = evt.ResponseText,
                Timestamp = evt.Timestamp,
            };
        }
    }
}

Swap TTS Engines in One Line

One of the things I love about this design — changing the TTS engine is literally one line:

// Option 1: Kokoro — fast, ~320 MB
.UseKokoroTts(defaultVoice: "af_heart")
// Option 2: QwenTTS — high quality, ~5.5 GB
.UseQwenTts()
// Option 3: VibeVoice — balanced, ~1.5 GB
.UseVibeVoiceTts(defaultVoice: "Carter")

Same goes for STT — switch from tiny to base model for better accuracy:

// Fast (75 MB)
.UseWhisperStt("whisper-tiny.en")
// More accurate (142 MB)
.UseWhisperStt("whisper-base.en")

Models — All Auto-Downloaded

No manual model management. The first run may take a moment to download models; after that, everything is cached locally:

  • Silero VAD v5 – ~2 MB – Detect when you’re speaking
  • Whisper tiny.en – ~75 MB – Fast speech-to-text
  • Whisper base.en – ~142 MB – Accurate speech-to-text
  • Kokoro-82M – ~320 MB – Fast text-to-speech
  • VibeVoice – ~1.5 GB – Balanced text-to-speech
  • QwenTTS – ~5.5 GB – High-quality text-to-speech
  • Phi4-Mini (Ollama) – ~2.7 GB – LLM chat (manual: ollama pull phi4-mini)

Models are cached at %LOCALAPPDATA%/ElBruno/Realtime/.

Per-User Sessions

The framework includes built-in conversation history with per-user session management:

var turn = await conversation.ProcessTurnAsync(
    audioStream,
    new ConversationOptions
    {
        SessionId = "user-456",       // Each user gets their own history
        MaxConversationHistory = 50,  // Sliding window
        SystemPrompt = "You remember context from our previous messages.",
    });

InMemoryConversationSessionStore is the default — or inject your own IConversationSessionStore for Redis, database, etc.

What’s Next

I have a few things on my mind:

  • More STT engines (faster-whisper, Azure Speech)
  • WebRTC transport for browser-to-server streaming
  • .NET Aspire integration sample (scenario-03 is already in progress!)
  • Performance benchmarks across TTS engines
  • Full support for Foundry Local

Resources

Happy coding!

Greetings

El Bruno

More posts in my blog ElBruno.com.

More info in https://beacons.ai/elbruno





