“Classic” .NET Domain Events with Wolverine and EF Core


I was helping a new JasperFx Software client this week to integrate a Domain Events strategy into their new Wolverine codebase. The client wanted to use the common model of having an EF Core DbContext harvest domain events raised by entities and relay them to Wolverine messaging, with proper Wolverine transactional outbox support for system durability. As part of that assistance (and also to have some content for other Wolverine users trying the same thing later), I promised to write a blog post showing how I'd do this kind of integration myself with Wolverine and EF Core, or at least walk through a few options. To head off this usage problem for other users more permanently, I went into mad scientist mode this evening and rolled out a new Wolverine 5.6 with some important improvements that make this Domain Events pattern much easier to use in combination with EF Core.

Let’s start with some context about the general kind of approach I’m referring to with…

Typical .NET Approach with EF Core and MediatR

I’m largely basing all the samples in this post on Camron Frenzel’s Simple Domain Events with EFCore and MediatR. In his example there was a domain entity like this:

    // Base class that establishes the pattern for publishing
    // domain events within an entity
    public abstract class Entity : IEntity
    {     
        [NotMapped]
        private readonly ConcurrentQueue<IDomainEvent> _domainEvents = new ConcurrentQueue<IDomainEvent>();

        [NotMapped]
        public IProducerConsumerCollection<IDomainEvent> DomainEvents => _domainEvents;

        protected void PublishEvent(IDomainEvent @event)
        {
            _domainEvents.Enqueue(@event);
        }

        protected Guid NewIdGuid()
        {
            return MassTransit.NewId.NextGuid();
        }
    }

    public class BacklogItem : Entity
    {
        public Guid Id { get; private set; }

        [MaxLength(255)]
        public string Description { get; private set; }
        public virtual Sprint Sprint { get; private set; }
        public DateTime CreatedAtUtc { get; private set; } = DateTime.UtcNow;

        private BacklogItem() { }

        public BacklogItem(string desc)
        {
            this.Id = NewIdGuid();
            this.Description = desc;
        }
    
        public void CommitTo(Sprint s)
        {
            this.Sprint = s;
            this.PublishEvent(new BacklogItemCommitted(this, s));
        }
    }

Note the CommitTo() method that publishes a BacklogItemCommitted event. In his sample, that event is published via MediatR through some customization of an EF Core DbContext, shown below from the referenced post with some comments that I added:

public override async Task<int> SaveChangesAsync(CancellationToken cancellationToken = default(CancellationToken))
{
    await _preSaveChanges();
    var res = await base.SaveChangesAsync(cancellationToken);
    return res;
}

private async Task _preSaveChanges()
{
    await _dispatchDomainEvents();
}

private async Task _dispatchDomainEvents()
{
    // Find any entity objects that were changed in any way
    // by the current DbContext, and relay them to MediatR
    var domainEventEntities = ChangeTracker.Entries<IEntity>()
        .Select(po => po.Entity)
        .Where(po => po.DomainEvents.Any())
        .ToArray();

    foreach (var entity in domainEventEntities)
    {
        // _dispatcher was an abstraction in his post
        // that was a light wrapper around MediatR
        IDomainEvent dev;
        while (entity.DomainEvents.TryTake(out dev))
            await _dispatcher.Dispatch(dev);
    }
}

The goal of this approach is to make DDD style entity types the entry point and governing “decider” of all business behavior and workflow, and to give these domain model types a way to publish event messages to the rest of the system for side effects outside of the entity’s own state. For example, maybe the backlog system has to publish a message to a Slack room about the backlog item being added to the sprint. You sure as hell don’t want your domain entity to have to know about the infrastructure you use to talk to Slack or web services or whatever.
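
To make that concrete, here’s a minimal sketch of what such a side effect could look like as its own message handler in Wolverine, assuming a BacklogItemCommitted event that carries the item and sprint ids. The ISlackClient abstraction and its PostAsync() method are hypothetical stand-ins for whatever Slack infrastructure you actually use:

// Hypothetical stand-in for a real Slack integration
public interface ISlackClient
{
    Task PostAsync(string channel, string message);
}

public static class BacklogItemCommittedHandler
{
    // A separate handler carries out the side effect, so the
    // domain entity never has to know that Slack exists
    public static Task HandleAsync(BacklogItemCommitted @event, ISlackClient slack)
    {
        return slack.PostAsync(
            "#backlog",
            $"Backlog item {@event.BacklogItemId} was committed to sprint {@event.SprintId}");
    }
}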

Mechanically, I’ve seen this typically done with some kind of Entity base class that either exposes a collection of published domain events like the sample above, or puts some kind of interface like this directly into the Entity objects:

// Just assume that this little abstraction
// eventually relays the event messages to Wolverine
// or whatever messaging tool you're using
public interface IEventPublisher
{
    void Publish<T>(T @event);
}

// Using a Nullo just so you don't have potential
// NullReferenceExceptions
public class NulloEventPublisher : IEventPublisher
{
    public void Publish<T>(T @event)
    {
        // Do nothing.
    }
}

public abstract class Entity
{
    public IEventPublisher Publisher { get; set; } = new NulloEventPublisher();
}

public class BacklogItem : Entity
{
    public Guid Id { get; private set; } = Guid.CreateVersion7();

    public string Description { get; private set; }
    
    // ZOMG, I forgot how annoying ORMs are. Use a document database
    // and stop worrying about making things virtual just for lazy loading
    public virtual Sprint Sprint { get; private set; }

    public void CommitTo(Sprint sprint)
    {
        Sprint = sprint;
        Publisher.Publish(new BacklogItemCommitted(Id, sprint.Id));
    }
}

In the approach of using the abstraction directly inside your entity classes, you incur the extra overhead of connecting the Entity objects loaded out of EF Core with the implementation of your IEventPublisher interface at runtime. I’ll do a few thought experiments later in this post and try out a couple of different alternatives.
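
To give one concrete example of that wiring, here’s a minimal sketch that leans on EF Core’s ChangeTracker.Tracked event to attach the publisher to every entity the DbContext starts tracking. The ItemsDbContext name is just an assumption for this post:

public class ItemsDbContext : DbContext
{
    public ItemsDbContext(DbContextOptions<ItemsDbContext> options, IEventPublisher publisher)
        : base(options)
    {
        // Raised whenever this DbContext begins tracking an entity,
        // whether it was loaded by a query or added explicitly
        ChangeTracker.Tracked += (_, args) =>
        {
            if (args.Entry.Entity is Entity entity)
            {
                entity.Publisher = publisher;
            }
        };
    }

    public DbSet<BacklogItem> BacklogItems => Set<BacklogItem>();
    public DbSet<Sprint> Sprints => Set<Sprint>();
}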

Before going back to EF Core integration ideas, let me deviate into…

Idiomatic Critter Stack Usage

Forget EF Core for a second; let’s examine a possible usage with the full “Critter Stack” and use Marten for Event Sourcing instead. In this case, a command handler to add a backlog item to a sprint could look something like this (folks, I didn’t spend much time thinking about how a backlog system would be built here):

public record BacklogItemCommitted(Guid SprintId);
public record CommitToSprint(Guid BacklogItemId, Guid SprintId);

// This is utilizing Wolverine's "Aggregate Handler Workflow" 
// which is the Critter Stack's flavor of the "Decider" pattern
public static class CommitToSprintHandler
{
    public static Events Handle(
        // The actual command
        CommitToSprint command,

        // Current state of the back log item, 
        // and we may decide to make the commitment here
        [WriteAggregate] BacklogItem item,

        // Assuming that Sprint is event sourced, 
        // this is just a read only view of that stream
        [ReadAggregate] Sprint sprint)
    {
        // Use the item & sprint to "decide" if 
        // the system can proceed with the commitment
        return [new BacklogItemCommitted(command.SprintId)];
    }
}

In the code above, we’re appending the BacklogItemCommitted event returned from the method to Marten. If you need to carry out side effects outside the scope of this handler using that event as a message input, you have a couple of options for having Wolverine relay it through its messaging: event forwarding (faster, but unordered) or event subscriptions (strictly ordered, but necessarily slower).
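
For reference, the faster event forwarding option is opted into at configuration time. The sketch below is based on the Wolverine.Marten integration APIs as I remember them, so treat the exact method names as assumptions to verify against the documentation:

builder.Host.UseWolverine(opts =>
{
    opts.Services.AddMarten(m =>
        {
            m.Connection(connectionString);
        })
        // Enlists Marten in Wolverine's transactional outbox
        .IntegrateWithWolverine()
        // Forwards events appended to Marten on to Wolverine
        // as messages (fast, but unordered)
        .EventForwardingToWolverine();
});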

I should also say that if the events returned from the function above are also being forwarded as messages and not just being appended to the Marten event store, that messaging is completely integrated with Wolverine’s transactional outbox support. That’s a key differentiation all by itself from a similar MediatR based approach that doesn’t come with outbox support.

That’s it, that’s the whole handler, but here are some things I would want you to take away from that code sample above:

  • Yes, the business logic is embedded directly in the handler method instead of being buried in the BacklogItem or Sprint aggregates. We are very purposely going down a Functional Programming (adjacent? curious?) approach where the logic is primarily in pure “Decider” functions
  • I think the code above clearly shows the relationship between the system input (the CommitToSprint command message) and the potential side effects and changes in state of the system. This relative ease of reasoning about the code is of the utmost importance for system maintainability. We can look at the handler code and know that executing that message will potentially lead to events or event messages being published. I’m going to hit this point again from some of the other potential approaches because I think this is a vital point.
  • Testability of the business logic is easy with the pure function approach
  • There are no marker interfaces, Entity base classes, or jumping through layers. There’s no repository or factory
  • Yes, there is absolutely a little bit of “magic” up above, but you can get Wolverine to show you the exact generated code around your handler to explain what it’s doing

So enough of that, let’s start with some possible alternatives for Wolverine integration of domain events from domain entity objects with EF Core.

Relay Events from Your Entity Subclass to Wolverine

Switching back to EF Core integration, let’s look at a possible approach that teaches Wolverine how to scrape domain events for publishing from your own custom Entity layer supertype, like this one that we’ll put behind our BacklogItem type:

// Of course, if you're into DDD, you'll probably 
// use many more marker interfaces than I do here, 
// but you do you and I'll do me in throwaway sample code
public abstract class Entity
{
    public List<object> Events { get; } = new();

    public void Publish(object @event)
    {
        Events.Add(@event);
    }
}

public class BacklogItem : Entity
{
    public Guid Id { get; private set; }

    public string Description { get; private set; }
    public virtual Sprint Sprint { get; private set; }
    public DateTime CreatedAtUtc { get; private set; } = DateTime.UtcNow;
    
    public void CommitTo(Sprint sprint)
    {
        Sprint = sprint;
        Publish(new BacklogItemCommitted(Id, sprint.Id));
    }
}

Let’s utilize this a little bit within a Wolverine handler, first with explicit code:

public static class CommitToSprintHandler
{
    public static async Task HandleAsync(
        CommitToSprint command,
        ItemsDbContext dbContext)
    {
        var item = await dbContext.BacklogItems.FindAsync(command.BacklogItemId);
        var sprint = await dbContext.Sprints.FindAsync(command.SprintId);
        
        // This method would cause an event to be published within
        // the BacklogItem object here that we need to gather up and
        // relay to Wolverine later
        item.CommitTo(sprint);
        
        // Wolverine's transactional middleware handles 
        // everything around SaveChangesAsync() and transactions
    }
}

Or, a little bit cleaner, using Wolverine’s declarative persistence support and a dash of Wolverine “magic” if you’re so inclined:

public static class CommitToSprintHandler
{
    public static IStorageAction<BacklogItem> Handle(
        CommitToSprint command,
        
        // There's a naming convention here about how
        // Wolverine "knows" the id for the BacklogItem
        // from the incoming command
        [Entity] BacklogItem item,
        [Entity] Sprint sprint
        )
    {
        // This method would cause an event to be published within
        // the BacklogItem object here that we need to gather up and
        // relay to Wolverine later
        item.CommitTo(sprint);

        // This is necessary to "tell" Wolverine to put transactional
        // middleware around the handler. Just taking in the right
        // DbContext type as a dependency would work just as well
        // if you don't like the Wolverine magic
        return Storage.Update(item);
    }
}

Now, let’s add some Wolverine configuration to just make this pattern work:

builder.Host.UseWolverine(opts =>
{
    // Setting up Sql Server-backed message storage
    // This requires a reference to Wolverine.SqlServer
    opts.PersistMessagesWithSqlServer(connectionString, "wolverine");

    // Set up Entity Framework Core as the support
    // for Wolverine's transactional middleware
    opts.UseEntityFrameworkCoreTransactions();
    
    // THIS IS A NEW API IN Wolverine 5.6!
    opts.PublishDomainEventsFromEntityFrameworkCore<Entity>(x => x.Events);

    // Enrolling all local queues into the
    // durable inbox/outbox processing
    opts.Policies.UseDurableLocalQueues();
});

In the Wolverine configuration above, the EF Core transactional middleware now “knows” how to scrape out possible domain events from the active DbContext.ChangeTracker and publish them through Wolverine. Moreover, the EF Core transactional middleware is doing all the operation ordering for you so that the events are enqueued as outgoing messages as part of the transaction and potentially persisted to the transactional inbox or outbox (depending on configuration) before the transaction is committed.
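
One piece of configuration not shown above is the registration of the DbContext itself. A minimal sketch, assuming the AddDbContextWithWolverineIntegration() extension from the WolverineFx.EntityFrameworkCore NuGet package and the hypothetical ItemsDbContext from earlier:

// Registers the DbContext so that Wolverine's EF Core
// transactional middleware and outbox can work with it
builder.Services.AddDbContextWithWolverineIntegration<ItemsDbContext>(
    x => x.UseSqlServer(connectionString));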

To make this as clear as possible, this approach is completely reliant on the EF Core transactional middleware.

Oh, and note that this domain event “scraping” is also supported and tested with the IDbContextOutbox<T> service if you want to use it in application code outside of Wolverine message handlers or HTTP endpoints.
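
As a rough sketch of that usage from, say, an ASP.NET Core minimal API endpoint rather than a Wolverine handler (the route and the entity lookups here are just assumptions for illustration):

app.MapPost("/items/commit", async (
    CommitToSprint command,
    IDbContextOutbox<ItemsDbContext> outbox) =>
{
    var dbContext = outbox.DbContext;

    var item = await dbContext.BacklogItems.FindAsync(command.BacklogItemId);
    var sprint = await dbContext.Sprints.FindAsync(command.SprintId);

    // Raises a domain event inside the entity that will be
    // scraped out of the DbContext change tracking
    item!.CommitTo(sprint!);

    // Saves the entity changes, persists the scraped events to
    // the outbox, and flushes them on to Wolverine
    await outbox.SaveChangesAndFlushMessagesAsync();
});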

In the future, this approach could also support the thread-safe event collection that the sample from the first section used, but I’m dubious that that’s actually necessary.

If I were building a system that embeds domain event publishing directly in domain model entity classes, I would prefer this approach. But, let’s talk about another option that will not require any changes to Wolverine…

Relay Events from Entity to Wolverine Cascading Messages

In this approach, which I’ll grant some people won’t like at all, we’ll simply pipe the event messages from the domain entity right to Wolverine and utilize Wolverine’s cascading message feature.

This time I’m going to change the BacklogItem entity class to something like this:

public class BacklogItem 
{
    public Guid Id { get; private set; }

    public string Description { get; private set; }
    public virtual Sprint Sprint { get; private set; }
    public DateTime CreatedAtUtc { get; private set; } = DateTime.UtcNow;
    
    // The exact return type isn't hugely important here
    public object[] CommitTo(Sprint sprint)
    {
        Sprint = sprint;
        return [new BacklogItemCommitted(Id, sprint.Id)];
    }
}

With the handler signature:

public static class CommitToSprintHandler
{
    public static object[] Handle(
        CommitToSprint command,
        
        // There's a naming convention here about how
        // Wolverine "knows" the id for the BacklogItem
        // from the incoming command
        [Entity] BacklogItem item,
        [Entity] Sprint sprint
        )
    {
        return item.CommitTo(sprint);
    }
}

The approach above lets you make the handler a single pure function, which is always great for unit testing; eliminates the need for any customization of the DbContext type; makes it unnecessary to bother with any kind of IEventPublisher interface; and lets you keep the logic for what event messages should be raised completely in your domain model entity types.

I’d also argue that this approach makes it clearer to later developers that “hey, additional messages may be published as part of handling the CommitToSprint command,” and I think that’s invaluable. I’ll harp on this more later, but I think the traditional, MediatR-flavored approach to domain events from the first example at the top makes application code harder to reason about and therefore more buggy over time.
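
As a quick illustration of that testability, a test against this handler needs nothing but plain objects. This is an xUnit-style sketch that glosses over how BacklogItem and Sprint actually get constructed:

public class CommitToSprintHandlerTests
{
    [Fact]
    public void committing_an_item_returns_the_committed_event()
    {
        // Entity construction details are assumptions for this sketch
        var item = new BacklogItem();
        var sprint = new Sprint();

        // Pure function: no mocks, no DbContext, no Wolverine runtime
        var messages = CommitToSprintHandler.Handle(
            new CommitToSprint(item.Id, sprint.Id), item, sprint);

        var @event = Assert.IsType<BacklogItemCommitted>(Assert.Single(messages));
        Assert.Equal(sprint.Id, @event.SprintId);
    }
}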

Embedding IEventPublisher into the Entities

Lastly, let’s move to what I think is my least favorite approach, one that I will from this moment on be recommending against for any JasperFx clients, but which is now completely supported by Wolverine 5.6+! Let’s use an IEventPublisher interface like this:

// Just assume that this little abstraction
// eventually relays the event messages to Wolverine
// or whatever messaging tool you're using
public interface IEventPublisher
{
    void Publish<T>(T @event) where T : IDomainEvent;
}

// Using a Nullo just so you don't have potential
// NullReferenceExceptions
public class NulloEventPublisher : IEventPublisher
{
    public void Publish<T>(T @event) where T : IDomainEvent
    {
        // Do nothing.
    }
}

public abstract class Entity
{
    public IEventPublisher Publisher { get; set; } = new NulloEventPublisher();
}

public class BacklogItem : Entity
{
    public Guid Id { get; private set; } = Guid.CreateVersion7();

    public string Description { get; private set; }
    
    // ZOMG, I forgot how annoying ORMs are. Use a document database
    // and stop worrying about making things virtual just for lazy loading
    public virtual Sprint Sprint { get; private set; }

    public void CommitTo(Sprint sprint)
    {
        Sprint = sprint;
        Publisher.Publish(new BacklogItemCommitted(Id, sprint.Id));
    }
}

Now, on to a Wolverine implementation for this pattern. You’ll need to do just a couple of things. First, add this line of configuration to Wolverine, and note that there are no generic arguments here:

// This will set you up to scrape out domain events in the
// EF Core transactional middleware using a special service
// I'm just about to explain
opts.PublishDomainEventsFromEntityFrameworkCore();

Now, build a real implementation of that IEventPublisher interface above:

public class EventPublisher(OutgoingDomainEvents Events) : IEventPublisher
{
    public void Publish<T>(T e) where T : IDomainEvent
    {
        Events.Add(e);
    }
}

OutgoingDomainEvents is a service from the WolverineFx.EntityFrameworkCore NuGet package that is registered with a Scoped lifetime by the usage of the EF Core transactional middleware. Next, register your custom IEventPublisher with the Scoped lifecycle:

opts.Services.AddScoped<IEventPublisher, EventPublisher>();

How do you wire up IEventPublisher to the domain entities getting loaded out of your EF Core DbContext? Frankly, I don’t want to know. Maybe a repository abstraction around your DbContext types? Dunno. I hate that kind of thing in code, but I perfectly trust *you* to do that and to not make me see that code.

What’s important is that within a message handler or HTTP endpoint, if you resolve the IEventPublisher through DI and use the EF Core transactional middleware, the domain events published to that interface will be piped correctly into Wolverine’s active messaging context.
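
Here’s a minimal sketch of that inside a message handler, with the publisher attached to the entity inline just to make the mechanics visible (however you choose to hide that wiring in your own code):

public static class CommitToSprintHandler
{
    public static async Task HandleAsync(
        CommitToSprint command,
        ItemsDbContext dbContext,

        // Resolved from the same scope as the middleware's
        // OutgoingDomainEvents collection
        IEventPublisher publisher)
    {
        var item = await dbContext.BacklogItems.FindAsync(command.BacklogItemId);
        var sprint = await dbContext.Sprints.FindAsync(command.SprintId);

        // The wiring I said I didn't want to see, done inline here
        item!.Publisher = publisher;

        item.CommitTo(sprint!);

        // Wolverine's EF Core transactional middleware handles
        // SaveChangesAsync() and relays the published events
    }
}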

Likewise, if you are using IDbContextOutbox<T>, the domain events published to IEventPublisher will be correctly piped to Wolverine if you (see the sketch after this list):

  1. Pull both IEventPublisher and IDbContextOutbox<T> from the same scoped service provider (nested container in Lamar / StructureMap parlance)
  2. Call IDbContextOutbox<T>.SaveChangesAndFlushMessagesAsync()
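
Putting those two steps together as a sketch, wrapped in a method just to make the shared scope explicit (in a web request or Wolverine handler that scope already exists for you; the Sprint handling is glossed over):

public static async Task CommitOutsideWolverine(
    IServiceScopeFactory scopeFactory, Guid itemId, Sprint sprint)
{
    await using var scope = scopeFactory.CreateAsyncScope();

    // 1. Both services come from the same scoped container, so
    //    they share the same OutgoingDomainEvents collection
    var publisher = scope.ServiceProvider.GetRequiredService<IEventPublisher>();
    var outbox = scope.ServiceProvider
        .GetRequiredService<IDbContextOutbox<ItemsDbContext>>();

    var item = await outbox.DbContext.BacklogItems.FindAsync(itemId);
    item!.Publisher = publisher;
    item.CommitTo(sprint);

    // 2. Saves changes and flushes the captured domain events
    //    through Wolverine's transactional outbox
    await outbox.SaveChangesAndFlushMessagesAsync();
}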

So, yes, we’re doing a little sleight of hand here to keep your domain entities synchronous: Publish() just collects the events, and they are only flushed to Wolverine when the changes are saved.

One last note: in unit testing, you might use a stand-in “Spy” like this:

public class RecordingEventPublisher : OutgoingMessages, IEventPublisher
{
    public void Publish<T>(T @event) where T : IDomainEvent
    {
        Add(@event);
    }
}
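
Usage in a test then looks something like this, again as an xUnit-style sketch that assumes the Sprint construction details:

[Fact]
public void committing_an_item_publishes_the_event()
{
    var spy = new RecordingEventPublisher();
    var item = new BacklogItem { Publisher = spy };
    var sprint = new Sprint();

    item.CommitTo(sprint);

    // The spy is itself the collection of captured events
    var @event = Assert.IsType<BacklogItemCommitted>(Assert.Single(spy));
    Assert.Equal(sprint.Id, @event.SprintId);
}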

Summary

I have always hated this Domain Events pattern and much prefer the full “Critter Stack” approach with the Decider pattern and event sourcing. But Wolverine is picking up a lot more users who combine it with EF Core (and JasperFx deeply appreciates these customers!), and I know damn well that there will be more and more demand for this pattern as people with more traditional DDD backgrounds, used to more DI-reliant tools, transition to Wolverine. Now was an awfully good time to plug this gap.

If it were me, I would also prefer having an Entity just store published domain events on itself and depend on Wolverine “scraping” these events out of the DbContext change tracking, so you don’t have to do any kind of gymnastics and extra layering to attach some kind of IEventPublisher to your Entity types.

Lastly, if you’re comparing this straight up to the MediatR approach, just keep in mind that this is not an apples-to-apples comparison, because Wolverine also needs to correctly utilize its transactional outbox for resiliency, which is a feature that MediatR does not provide.




.NET SDK 6: Unifying the model and preparing for the future of FHIR


The Firely .NET SDK has grown alongside FHIR for more than a decade. As the standard expands and real-world implementations become more complex, the SDK needs to evolve with it. SDK 6 is the next step in that evolution, introducing the most significant architectural updates we’ve ever made. 

This release introduces a unified model for working with FHIR data, adds far more flexibility for dealing with unknown or incorrect data, and lays the foundation for upcoming versions of FHIR, including R6.

SDK 6 brings together everything we’ve learned over the years, and it finally unifies two worlds that have lived side by side for far too long: POCOs and ITypedElement. 

In this post, I’ll walk through why we made these changes and how they will make your work with FHIR simpler, safer, and more future-proof.

For years, the SDK had two different ways of processing data: 

  • POCOs: statically generated .NET classes that are easy to use but limited when the data is incomplete, incorrect, or contains new elements.
  • ITypedElement: a more flexible, dictionary-based structure with its own metadata and type system used for FhirPath and Validation. 

Both had their strengths, but maintaining both created real challenges: 

  • Two type systems
  • Two parsing and serialization paths
  • Two navigation models
  • Repeated object creation and re-parsing 

And in real implementations, developers often had to convert resources back and forth to evaluate FhirPath, run validation, or troubleshoot issues. Sometimes the behavior even differed depending on which API you used. Those inconsistencies only showed up when your users found them, which is never a fun experience for anyone.

Most developers prefer the simplicity of POCOs. I do too. 

But POCOs alone did not provide the flexibility to work with unknown or incorrect data. SDK 6 finally resolves this by unifying the entire model behind the scenes. 

In SDK 6, POCOs remain the main way developers work with FHIR resources, but now they have the flexibility that used to exist only in ITypedElement. 

We introduced a new internal representation called PocoNode. It bridges the gap between the familiarity of POCOs and the dynamic nature of ITypedElement needed for validation, navigation, and FhirPath evaluation. 

This unification means: 

  • Validation works directly on POCOs
  • FhirPath evaluation works directly on POCOs
  • Error reporting is clearer and more precise
  • Navigation through resource trees behaves predictably
  • Developers no longer switch mental models halfway through their code
  • The SDK no longer maintains two separate architectures

One representation. One behavior. One set of expectations. 

In other words: POCO is still king, but now POCO can do everything the dynamic model could do. 

Anyone who has worked with FHIR in the real world knows that data does not always follow the rules. You encounter: 

  • Unknown or misspelled elements
  • Wrong or unexpected datatypes
  • Lists where a single value was expected
  • Fields from older or newer FHIR versions
  • Experimental or custom extensions

Earlier versions of the SDK often had to drop this content or fail the parse. That caused lost information, confusing errors, or extra cleanup layers that nobody really wanted to build. 

SDK 6 introduces overflow, a structured place to store anything that does not fit into the POCO model. Instead of discarding unknown or unexpected content, the SDK preserves it. 

This gives you: 

  • More forgiving parsing
  • Reliable round-tripping across versions
  • Clearer debugging
  • Safer ingestion of mixed-version data
  • Less data loss overall 

If your systems interact with multiple FHIR versions or legacy implementations, overflow will make a noticeable difference. 

With the data model unified, the parsing infrastructure could finally evolve too. SDK 6 introduces parsers that work with overflow and have a more flexible approach to what is considered “valid data”. 

By using these new capabilities, the parser now comes with several built-in modes that you can extend: 

  • STRICT, report everything we can
  • DEFAULT, safe and validation-aware
  • RECOVERABLE, accept anything as long as no data-loss occurs
  • BACKWARDSCOMPATIBLE, ideal for mixed versions
  • OSTRICH, ignore almost everything and just parse it anyway 

These parsing personalities help developers choose how strict or permissive their workflows should be, without needing extra glue code. And because everything runs through one shared model, the behavior of the SDK is far more predictable. 
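
To make the idea concrete, here’s what selecting one of those personalities might look like in calling code. To be clear, the type and member names below are hypothetical illustrations of the concept, not the actual SDK 6 API; check the Firely documentation for the real surface area:

// Hypothetical sketch only; these names illustrate the
// "parsing personalities" idea and are not the real SDK 6 API
var parser = new OverflowAwareJsonParser(ParsingMode.Recoverable);

// Unknown or unexpected elements land in the resource's
// overflow instead of failing the whole parse
var patient = parser.Parse<Patient>(json);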

FHIR continues to evolve quickly. SDK 6 is designed to evolve with it. FHIR R6 introduces non-core resources that do not have static POCO generators. Older SDK versions could not handle these. SDK 6 can. 

The flexibility of overflow, combined with the metadata-first architecture, allows the SDK to process: 

  • Experimental resources
  • Custom resources
  • R6 modules without generated classes
  • StructureDefinitions that describe concepts not yet modeled in code 

This flexibility helps developers adopt new FHIR features faster and with fewer workarounds. 

Beyond the architectural changes, SDK 6 includes a number of modernization updates: 

  • Full support for .NET 8 and .NET Standard 2.1 (with outdated platforms removed)
  • Nullable annotations across POCOs for better compiler warnings
  • Async-only FhirClient for clearer usage
  • Cleanup of obsolete methods
  • Common datatypes (Address, Duration, HumanName, Ratio) now moved into Base
  • More consistent equality and comparison logic 

This makes the SDK safer, cleaner, and more in line with modern .NET development practices.

With SDK 6, you can expect: 

  • A simpler codebase: No more juggling POCOs and ITypedElement models.
  • Better resilience in real-world scenarios: Messy, cross-version, or partially invalid data no longer causes hard failures or data loss.
  • A more powerful validation experience: POCO-based validation is now richer and more accurate.
  • Future readiness: Support for R6 and custom or experimental resources is built in.
  • Cleaner, modern APIs: A clearer, more predictable developer experience. 

SDK 6 is a major step forward. Not just for Firely’s SDK, but for anyone building on top of the FHIR standard.  

Working on this SDK for more than ten years has taught me a lot about what works and what does not. Our .NET SDK 6 release has been an opportunity to bring those lessons together, simplify long-standing complexity, and prepare the codebase for what comes next. 

My hope is that this release makes your day-to-day work easier, especially when dealing with real-world data and evolving FHIR specifications. 

As always, your feedback is welcome. This SDK grows with the community. 



dotnet-1.0.0-preview.251204.1


What's Changed

Full Changelog: python-1.0.0b251120...dotnet-1.0.0-preview.251204.1


Windows 11 is getting a modern Windows Run (Win+R) built with WinUI 3, and it might get new features

1 Share

Microsoft is testing a new Windows Run, but don’t worry. It won’t replace the existing legacy Windows Run dialog that we all have grown up using. It’s here to stay, but you’ll be able to switch to a new Windows Run, which has Fluent Design and rounded corners, aka “modern” design. This optional Windows Run could also get new features at some point.

Unlike Windows Search, Run is mostly used to directly run something, such as cmd, regedit, or services.msc. It doesn’t search your PC, as Run expects a file name or command. I personally use Windows Run as my text field for CTRL+V, CTRL+A and CTRL+C, but that’s a story for another day.

Run looks legacy and ignores dark mode because it’s based on very old UI code that Microsoft has never fully modernized. It’s a classic Win32 dialog using old system controls, and the code is probably 20+ years old because it first debuted in Windows 95. This could change soon.

Windows Run modern UI

As pointed out by Phantom on X, Microsoft is testing a WinUI 3-based Windows Run in Windows Server preview builds, and it’s also coming to consumer builds. The new version hides the original legacy Run dialog only when you manually toggle on the “UI” upgrade.

Windows Run WinUI 3 design

Microsoft is not redesigning the existing Windows Run. Instead, it has built a modern variant that runs separately and is completely optional, at least for now.

Don’t freak out… new Windows Run is optional

Windows Advanced Settings

If you want to use the new Run, you will need to turn it on from Settings > System > Advanced settings. It’s toggled off by default because Microsoft understands some people are going to hate it, but my understanding is that this Run overhaul is going to be more than just applying Fluent Design principles.

Windows Latest understands that Microsoft is rebuilding Run in WinUI 3 as an optional “advanced” setting because it plans to introduce new features. I don’t think it’s going to be Copilot, but this Run might handle developer-related tasks better.

Windows Run modern design

Now, that’s an assumption on my end, and it’s based on the simple fact that the new Run is part of ‘Advanced Settings.’

For those unaware, Advanced Settings are meant for developers. It has controls for Virtual Workspaces, Windows Sandbox, GitHub integration in File Explorer, the “End task” button for the Windows taskbar, and more. Also, Microsoft does not typically make pure “UI” upgrades an optional change.

Is Windows 11 finally heading in a good direction with ‘design’?

Windows 3.1 UI in Windows 11

If you are wondering how bad the state of design is in Windows 11, remember that we still have dialogs from Windows 3.1. Windows 3.1 was released in April 1992. It was a 16-bit operating system and one of the first with a GUI. Granted, it was one of the greatest products at that point in time, but how do we still have it in Windows 11?

Unlike macOS, Windows is complex, and it’s supposed to maintain compatibility for decades-old software. Hell, you can even run an app built for Windows 98 on Windows 10 or even 11. This shows how versatile the operating system is, but that has its own cons, and one of those is how outdated it might look.

Windows 11 has slowly progressed, and we have a modern Task Manager and right-click menu, but you don’t see the same treatment for dialogs like Run.

Microsoft’s attempt to modernize the Windows Run feature could be just the beginning of a larger plan.

Windows Run with dark mode

Microsoft is not abandoning those who prefer the legacy Run dialog, as the Windows 11 update also includes a dark-themed Run that looks and runs like before.



Viral rant on why ‘everyone in Seattle hates AI’ strikes a nerve, sparks debate over city’s tech vibe

1 Share

Does everyone in Seattle hate AI?

That’s one of the surprising questions to arise this week in response to a viral blog post penned by Jonathon Ready, a former Microsoft engineer who recently left the tech giant to pursue his own startup.

In the post, Ready describes showing off his AI-powered mapping project, Wanderfugl, to engineers around the world. Everywhere from Tokyo to San Francisco, people are curious. In Seattle, “instant hostility the moment they heard ‘AI,'” he said.

“Bring up AI in a Seattle coffee shop now and people react like you’re advocating asbestos,” he wrote.

The culprit, Ready argues, is the Big Tech AI experience — specifically, Microsoft’s. Based on conversations with former colleagues and his own time at the company, he describes a workplace where AI became the only career-safe territory amid widespread layoffs, and everyone was forced to use Copilot tools that were often worse than doing the work manually.

The result, Ready says, is a kind of learned helplessness: smart people coming to believe that AI is both pointless and beyond their reach.

His post drew hundreds of comments on Hacker News and other responses on LinkedIn. Some felt he hit the nail on the head. Trey Causey, former head of AI ethics at Indeed, said he could relate, recalling that he would avoid volunteering the “AI” part of his job title in conversations with Seattle locals. He speculated the city might be the epicenter of anti-AI sentiment among major U.S. tech hubs.

But others said the piece paints with too broad a brush. Seattle tech vet Marcelo Calbucci argues the divide isn’t geographic but cultural — between burned-out Big Tech employees and energized founders. He pointed to layoffs that doubled workloads even as AI demand increased, creating stress levels beyond simple burnout.

“If you hang out with founders and investors in Seattle, the energy is completely different,” Calbucci wrote.

Seattle venture capitalist Chris DeVore was more dismissive, calling Ready’s post “clickbait-y” and criticizing what he saw as a conflation of the experiences of Big Tech individual contributors with Seattle’s startup ecosystem.

That dovetails with GeekWire’s recent story about “a tale of two Seattles in the age of AI”: a corporate city shell-shocked by massive job cuts, and a startup city brimming with excitement about new tools.

Ryan Brush, a director at Salesforce, put forth an intriguing theory: that any anti-AI sentiment in Seattle can be traced to the city’s “undercurrent of anti-authority thinking that goes way back,” from grunge music to the WTO protests.

“Seattle has a long memory for being skeptical of systems that centralize power and extract from individuals,” Brush commented. “And a lot of what we see with AI today (the scale of data collection, how concentrated it is in a few big companies) might land differently here than it does elsewhere.”

Ready ends his post by concluding that Seattle still has world-class talent — but unlike San Francisco, it has lost the conviction that it can change the world.

In our story earlier this year — Can Seattle own the AI era? — we asked investors and founders to weigh the city’s startup ecosystem potential. Many community leaders shared optimism, in part due to the density of engineering talent that’s crucial to building AI-native companies.

But, as we later reported, Seattle lacks the superstar AI startups that are easy to find in the Bay Area, despite being home to hyperscalers such as Microsoft and Amazon, as well as world-class research institutions (University of Washington, Allen Institute for AI) and substantial Silicon Valley outposts.

Is it because Seattle “hates AI”? That seems like a bit of a stretch. But this week’s discussion is certainly another reminder of the evolving interplay between Seattle’s tech corporations, talent, and startup activity in the AI era.

Related: Seattle is poised for massive AI innovation impact — but could use more entrepreneurial vibes


Claude Opus 4.5 Lands in GitHub Copilot for Visual Studio and VS Code

GitHub Copilot users can now select Anthropic's Claude Opus 4.5 model in chat across Visual Studio Code and Visual Studio (plus several other IDEs) during a new public preview.