Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Microsoft begins rolling out GPT 5.1 to Copilot on Windows 11 along with new “Labs” feature


GPT 5.1 shipped on November 12, 2025, and it’s now rolling out to Copilot on Windows, including for those who use Microsoft’s AI without a paid subscription. That makes sense, as GPT 5.1 is also free on ChatGPT, but Copilot goes further by offering GPT 5.1 Thinking without a subscription.

Microsoft told me that GPT 5.1 is rolling out to Copilot gradually, so it won’t show up right away. You don’t have to log out, create a new account or update the Copilot app on Windows (or anywhere else) to see GPT 5.1. It’s a server-side rollout, and so far I’m only seeing it in the Copilot app.

GPT 5.1 in Copilot

I’m also subscribed to Microsoft 365, but I don’t think that affects access, as the subscription only raises a couple of usage limits.

Copilot with GPT 5.1

GPT 5.1 is the default “smart” mode in Copilot, but if you switch to Think Deeper, Copilot uses GPT 5.1 in Thinking mode. On ChatGPT, that mode requires a subscription of at least $20 a month, yet Copilot offers it for free. What we don’t know is whether it’s always GPT 5.1 Thinking and, if it is, at what reasoning (“juice”) level.

Copilot on Windows now has Labs, but some features are still missing

Windows 11’s Copilot is actually one of the rare native apps. It uses WinUI 3 for most of the interface, including the conversation view and compose box, but you’ll still find WebView2 being used for “Pages.” Microsoft hasn’t yet brought all features to Copilot on Windows 11, but we have reasons to believe that could change in the future.

Microsoft is testing Copilot Labs for the Windows desktop app, and one of the early features is ‘Vision.’ Eventually, Microsoft plans to bring Copilot 3D or audio expressions as a native feature to Copilot. Right now, these shortcuts redirect to microsoft.copilot.com in your default browser.

“Copilot Labs is now accessible directly from the Windows desktop app! Vision is already available in-app, and for experiments that require a browser—like 3D, Audio Expressions and Portraits—you’ll be smoothly redirected to their respective sites,” Microsoft officials confirmed.

The next Labs feature in Copilot could be ‘Actions.’ With Copilot Actions in Copilot for Windows, you can allow Copilot to act on files stored on your local drives.

Windows 11 Agent Workspace
Image Courtesy: WindowsLatest.com

Microsoft says it’s using the ‘Agent Workspace’ feature, which allows AI agents like Copilot to access personal files and folders stored locally on the PC. Agent Workspace is somewhat similar to Windows Sandbox, as it creates a special environment for agents such as Copilot Actions.

The post Microsoft begins rolling out GPT 5.1 to Copilot on Windows 11 along with new “Labs” feature appeared first on Windows Latest

Read the whole story
alvinashcraft
4 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Intel could finally return to Apple computers in 2027


Will Apple turn to Intel for production of its M-series chips in 2027? That’s what supply chain analyst Ming-Chi Kuo predicted on X on Friday. Citing his latest industry surveys, Kuo says the likelihood of Intel becoming Apple’s newest “advanced-node supplier… has improved significantly” in recent weeks.

Any deal with Intel would be significant considering the chipmaker famously missed out on supplying its own processors for the original iPhone. Apple now has a deal with Taiwan-based TSMC to supply silicon chips for its iPhone, iPad and Mac products.

Kuo says that Apple has a non-disclosure agreement with Intel to acquire the company’s 18AP PDK 0.9.1 GA (process design kit). At this point, Apple is waiting on Intel to deliver the PDK 1.0/1.1 kit, which is supposed to arrive in the first quarter of 2026. If everything stays on track, Intel could start shipping Apple’s lowest-end M-series processor, built on the 18AP advanced node, sometime in the second or third quarter of 2027, Kuo says. But that timing still depends on how smoothly things go once Apple actually receives the PDK 1.0/1.1 kit.

Kuo theorizes that a deal with Intel could help Apple demonstrate to the Trump administration that it’s committed to “buying American” by rerouting its supply chain to include more US-based companies. For Intel, a deal could signal that the company’s worst days are behind it. “Looking ahead, the 14A node and beyond could capture more orders from Apple and other tier-one customers, turning Intel’s long-term outlook more positive,” Kuo writes.

Could Apple strike a deal with Intel? And what would happen if it decided to use the chipmaker’s 18AP processors for its entry-level M-series?


The 4 DIMM problem (Friends)


Our old friend Lars Wikman returns to the show to discuss Linux distro hopping, Elixir, Nerves, embedded systems, home automation with Home Assistant, karate, and more.

Join the discussion

Changelog++ members save 8 minutes on this episode because they made the ads disappear. Join today!

Sponsors:

  • Tiger Data – Postgres for developers, devices, and agents. The data platform trusted by hundreds of thousands, from IoT to Web3 to AI and more.
  • Augment Code – Developer AI that uses deep understanding of your large codebase and how you build software to deliver personalized code suggestions and insights. Augment provides relevant, contextualized code right in your IDE or Slack. It transforms scattered knowledge into code or answers, eliminating time spent searching docs or interrupting teammates.
  • Depot – 10x faster builds? Yes please. Build faster. Waste less time. Accelerate Docker image builds and GitHub Actions workflows. Easily integrate with your existing CI provider and dev workflows to save hours of build time.
  • Framer – Design and publish in one place. Get started free at framer.com/design, code CHANGELOG for a free month of Pro.

Featuring:

Show Notes:

Something missing or broken? PRs welcome!





Download audio: https://op3.dev/e/https://cdn.changelog.com/uploads/friends/119/changelog--friends-119.mp3

Episode 504 - Feeling Demotivated In Your Career and How to Fix It w/ Emma Bostian


If you want to check out all the things torc.dev has going on, head to linktr.ee/taylordesseyn for more information on how to get plugged in!





Download audio: https://anchor.fm/s/ce6260/podcast/play/111853331/https%3A%2F%2Fd3ctxlq1ktw2nl.cloudfront.net%2Fstaging%2F2025-10-28%2Fc22ae8f4-529b-3be6-447b-cf9856fe93ea.mp3

From Cloud Native To AI Native: Where Are We Going?

At KubeCon + CloudNativeCon North America in Atlanta, a panel of experts (Kate Goldenring of Fermyon Technologies, Idit Levine of Solo.io, Shaun O'Meara of Mirantis, Sean O'Dell of Dynatrace and James Harmison of Red Hat) explored whether the cloud native era has evolved into an AI native era, and what that shift means for infrastructure, security and development practices.

Has the cloud native era now fully morphed into the AI-native era? If so, what does that mean for the future of both cloud native and AI technology? These are the questions a panel of experts took up at KubeCon + CloudNativeCon North America in Atlanta earlier this month.

The occasion was one of The New Stack’s signature pancake breakfasts, sponsored by Dynatrace. TNS Founder and Publisher Alex Williams probed the panelists’ current obsessions in this time of fast-moving change.

For Jonathan Bryce, the new executive director of the Cloud Native Computing Foundation, inference is claiming a lot of his attention these days.

“What are the future AI-native companies going to look like? Because it’s not all going to be chatbots,” Bryce said. “If you just look at the fundamentals and how you build towards every form of AI productivity, you have to have models where you’re taking a large dataset, turning it into intelligence, and then you have to have the inference layer where you’re serving those models to answer questions, make predictions.

“And at some level, we sort of have skipped that layer,” he added, because the attention is now focused on chatbots and agents.

“Personally, I’ve always been a plumber, an infrastructure guy, and inference is my obsession.”

Inference is coming to the fore as organizations depend more on edge computing and on personalizing websites, said Kate Goldenring, senior software engineer at Fermyon Technologies. WebAssembly, the technology Fermyon focuses on, can help users who are finding they now need to make “extra hops,” as she put it, because of the new need for rapid inferencing.

“There [are] interfaces out there where you can basically package up your model with your WebAssembly component and then deploy that to some hardware with the GPU and directly do inferencing and other types of AI compute, and have that all bundled and secure,” Goldenring noted.

“Whenever you get a new technology, the next question is, how do we use it really, really quickly? And then the following question is, how [do] we do it securely? And WebAssembly provides the opportunity to do that by sandboxing those executions as well.”

Observability and Infrastructure

The issue of security brings up observability. The tsunami of data that AI uses and generates has major implications for how we approach observability in the AI-native era, according to panelist Sean O’Dell, principal product marketing manager at Dynatrace.

“If you’ve been training your data in a predictive manner for eight, nine, 10 years now, we have the ability to add a [large language model] and intelligence on top and over inference in that situation,” O’Dell said.

That “value add” carries pros and cons, he said. “It’s very nice to be able to at least say we have this information from an observability perspective. However, on the other side, it’s a lot of data. So now there’s a fundamental shift of: what do I need to get the right information about an end user?”

Among the biggest differences between the cloud native and the AI-native eras is infrastructure, suggested Shaun O’Meara, CTO of Mirantis. “One of the key things that we keep forgetting about all of this: the stuff has to run somewhere,” he said. “We have to orchestrate the infrastructure that all of these components run on top of.”

A big trend he’s noticing, he said, “is we’re moving away from the abstraction that we were beginning to accept as normal in cloud native. You know, we go to a public cloud. We run our workloads. We have no idea what infrastructure is underneath that. With … workloads [running on GPUs], we have to be aware of the deep infrastructure,” including network speed and performance.

“It behooves us, as we start to look at all of these great tools we’re running on top of these platforms, to remember to run them securely, to be efficient and to manage infrastructure efficiently.”

This, O’Meara said, “is going to be one of the key challenges of the next few years. We have a power problem. We’re running out of power to run these data centers, and we’re building them as fast as we can. We have to manage that infrastructure efficiently.”

Check out the full recording to hear how the panel digs into the questions, opportunities and challenges the “AI native” era will bring.

The post From Cloud Native To AI Native: Where Are We Going? appeared first on The New Stack.


Vertical Slice Architecture: Where Does the Shared Logic Live?

1 Share

Have you ever thought, "How far can I really push Postgres with text?". Watch Aiven's Elephant in the Room livestream for live-coding, real-world examples, and practical insights on how developers can harness PostgreSQL's native text-search capabilities to build faster, smarter, and more efficient applications. Access the playback here.

Move faster and reduce risk. Teleport's vault-free PAM cuts access provisioning time by 10x by removing static credentials and manual tickets, using short-lived certificates and zero-trust, just-in-time access. Leave the vault behind — start for free.

Vertical Slice Architecture (VSA) seems like a breath of fresh air when you first encounter it. You stop jumping between seven layers to add a single field. You delete the dozens of projects in your solution. You feel liberated.

But when you start implementing more complex features, the cracks begin to show.

You build a CreateOrder slice. Then UpdateOrder. Then GetOrder. Suddenly, you notice the repetition. The address validation logic is in three places. The pricing algorithm is needed by both Cart and Checkout.

You feel the urge to create a Common project or SharedServices folder. This is the most critical moment in your VSA adoption.

Choose wrong, and you'll reintroduce the coupling you were trying to escape. Choose right, and you maintain the independence that makes VSA worthwhile.

Here's how I approach shared code in Vertical Slice Architecture.

The Guardrails vs. The Open Road

To understand why this is hard, we need to look at what we left behind.

Clean Architecture provides strict guardrails. It tells you exactly where code lives: Entities go in Domain, interfaces go in Application, implementations go in Infrastructure. It's safe. It prevents mistakes, but it also prevents shortcuts when they're appropriate.

Vertical Slice Architecture removes the guardrails. It says, "Organize code by feature, not technical concern". This gives you speed and flexibility, but it shifts the burden of discipline onto you.

So what can you do about it?

The Trap: The "Common" Junk Drawer

The path of least resistance is to create a project (or folder) named Shared, Common, or Utils.

This is almost always a mistake.

Imagine a Common.Services project with an OrderCalculationService class. It has a method for cart totals (used by Cart), another for historical revenue (used by Reporting), and a helper for invoice formatting (used by Invoices). Three unrelated concerns. Three different change frequencies. One class coupling them all together.

A Common project inevitably becomes a junk drawer for anything you can't be bothered to name properly. It creates a tangled web of dependencies where unrelated features are coupled together because they happen to use the same helper method.

You've reintroduced the very coupling you tried to escape.

The Decision Framework

When I hit a potential sharing situation, I ask three questions:

1. Is this infrastructural or domain?

Infrastructure (database contexts, logging, HTTP clients) almost always gets shared. Domain concepts need more scrutiny.

2. How stable is this concept?

If it changes once a year, share it. If it changes with every feature request, keep it local.

3. Am I past the "Rule of Three"?

Duplicating the same code once is fine. However, creating three duplicates should raise an eyebrow. Don't abstract until you hit three.

We solve this by refactoring our code. Let's look at some examples.

The Three Tiers of Sharing

Instead of binary "Shared vs. Not Shared," think in three tiers.

Tier 1: Technical Infrastructure (Share Freely)

Pure plumbing that affects all slices equally: logging adapters, database connection factories, auth middleware, the Result pattern, validation pipelines.

Centralize this in a Shared.Kernel or Infrastructure project. Note that this can also be a folder within your solution. It rarely changes due to business requirements.

// ✅ Good Sharing: Technical Kernel
public readonly record struct Result
{
    public bool IsSuccess { get; }
    public string Error { get; }

    private Result(bool isSuccess, string error)
    {
        IsSuccess = isSuccess;
        Error = error;
    }

    public static Result Success() => new(true, string.Empty);
    public static Result Failure(string error) => new(false, error);
}

Tier 2: Domain Concepts (Share and Push Logic Down)

This is one of the best places to share logic. Instead of scattering business rules across slices, push them into entities and value objects.

Here's an example:

// ✅ Good Sharing: Entity with Business Logic
public class Order
{
    public Guid Id { get; private set; }
    public OrderStatus Status { get; private set; }
    public List<OrderLine> Lines { get; private set; }

    public bool CanBeCancelled() => Status == OrderStatus.Pending;

    public Result Cancel()
    {
        if (!CanBeCancelled())
        {
            return Result.Failure("Only pending orders can be cancelled.");
        }

        Status = OrderStatus.Cancelled;
        return Result.Success();
    }
}

Now CancelOrder, GetOrder, and UpdateOrder all use the same business rules. The logic lives in one place.

This implies an important concept: different vertical slices can share the same domain model.
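To make this concrete, here's a minimal sketch of what a CancelOrder slice might look like when the rule lives on the entity. The handler shape and AppDbContext are assumptions for illustration (an EF Core context is assumed); only the Order.Cancel() rule comes from the example above.

```csharp
// Hypothetical CancelOrder slice: the handler stays thin because the
// "only pending orders can be cancelled" rule lives on the Order entity.
public static class CancelOrder
{
    public record Command(Guid OrderId);

    public class Handler(AppDbContext db) // AppDbContext: an assumed EF Core DbContext
    {
        public async Task<Result> Handle(Command command, CancellationToken ct)
        {
            var order = await db.Orders.FindAsync([command.OrderId], ct);
            if (order is null)
            {
                return Result.Failure("Order not found.");
            }

            var result = order.Cancel(); // shared business rule, defined once
            if (result.IsSuccess)
            {
                await db.SaveChangesAsync(ct);
            }

            return result;
        }
    }
}
```

If the cancellation rule ever changes, only Order.Cancel() changes; every slice that calls it picks up the new behavior.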

Tier 3: Feature-Specific Logic (Keep It Local)

Logic shared between related slices, like CreateOrder and UpdateOrder, doesn't need to go global. Create a Shared folder (there's an exception to every rule) within the feature:

📂 Features
└──📂 Orders
    ├──📂 CreateOrder
    ├──📂 UpdateOrder
    ├──📂 GetOrder
    └──📂 Shared
        ├──📄 OrderValidator.cs
        └──📄 OrderPricingService.cs

This also has a hidden benefit. If you delete the Orders feature, the shared logic goes with it. No zombie code left behind.

Let's explore some advanced scenarios most people overlook.

Cross-Feature Sharing

What about sharing code between unrelated features in Vertical Slice Architecture?

The CreateOrder slice needs to check if a customer exists. GenerateInvoice needs to calculate tax. Orders and Customers both need to format notification messages.

This doesn't fit neatly into a feature's Shared folder. So where does it go?

First, ask: do you actually need to share?

Most cross-feature "sharing" is just data access in disguise.

If CreateOrder needs customer data, it queries the database directly. It doesn't call into the Customers feature. Each slice owns its data access. The Customer entity is shared (it lives in Domain), but there's no shared service between them.
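As a sketch, the existence check inside a hypothetical CreateOrder handler might look like this; db is an assumed EF Core context, and nothing here calls into the Customers feature:

```csharp
// Inside the CreateOrder handler: query the shared Customer entity directly
// instead of calling a service owned by the Customers feature.
var customerExists = await db.Customers
    .AnyAsync(c => c.Id == command.CustomerId, ct);

if (!customerExists)
{
    return Result.Failure($"Customer '{command.CustomerId}' does not exist.");
}
```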

When you genuinely need shared logic, ask what it is:

  • Domain logic (business rules, calculations) → Domain/Services
  • Infrastructure (external APIs, formatting) → Infrastructure/Services
// Domain/Services/TaxCalculator.cs
public class TaxCalculator
{
    public decimal CalculateTax(Address address, decimal subtotal)
    {
        var rate = GetTaxRate(address.State, address.Country);
        return subtotal * rate;
    }
}

Both CreateOrder and GenerateInvoice can use it without coupling to each other.

Before creating any cross-feature service, ask: could this logic live on a domain entity instead? Most "shared business logic" is actually data access, domain logic that belongs on an entity, or premature abstraction.

If you need to trigger a side effect in another feature, I recommend using messaging and events. Alternatively, the feature you want to call into can expose a facade (public API) for that operation.
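As a sketch of the messaging option, a MediatR-style notification could decouple the two features; the event name and the invoice handler here are hypothetical:

```csharp
// The Orders feature publishes an event without knowing who listens.
public record OrderCancelled(Guid OrderId) : INotification;

// In the CancelOrder slice, after the state change is saved:
// await publisher.Publish(new OrderCancelled(order.Id), ct);

// The Invoices feature reacts independently; Orders never references Invoices.
public class VoidInvoiceOnOrderCancelled : INotificationHandler<OrderCancelled>
{
    public Task Handle(OrderCancelled notification, CancellationToken ct)
    {
        // Void the related invoice here.
        return Task.CompletedTask;
    }
}
```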

When Duplication Is the Right Call

Sometimes "shared" code isn't actually shared. It just looks that way.

// Features/Orders/GetOrder
public record GetOrderResponse(Guid Id, decimal Total, string Status);

// Features/Orders/CreateOrder
public record CreateOrderResponse(Guid Id, decimal Total, string Status);

They're identical. The temptation to create a SharedOrderDto is overwhelming. Resist it.

Next week, GetOrder needs a tracking URL. But CreateOrder happens before shipping, so there's no URL yet. If you'd shared the DTO, you'd now have a nullable property that's confusingly empty half the time.
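To see why, here's how the two records might diverge after that change; TrackingUrl is a hypothetical new field:

```csharp
// GetOrder evolves independently...
public record GetOrderResponse(Guid Id, decimal Total, string Status, string TrackingUrl);

// ...while CreateOrder's response stays exactly as it was.
public record CreateOrderResponse(Guid Id, decimal Total, string Status);
```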

Duplication is cheaper than the wrong abstraction.

The Practical Structure

Here's what a mature Vertical Slice Architecture project looks like:

📂 src
├──📂 Features
│   ├──📂 Orders
│   │   ├──📂 CreateOrder
│   │   ├──📂 UpdateOrder
│   │   └──📂 Shared          # Order-specific sharing
│   ├──📂 Customers
│   │   ├──📂 GetCustomer
│   │   └──📂 Shared          # Customer-specific sharing
│   └──📂 Invoices
│       └──📂 GenerateInvoice
├──📂 Domain
│   ├──📂 Entities
│   ├──📂 ValueObjects
│   └──📂 Services            # Cross-feature domain logic
├──📂 Infrastructure
│   ├──📂 Persistence
│   └──📂 Services
└──📂 Shared
    └──📂 Behaviors
  • Features — Self-contained slices. Each owns its request/response models.
  • Features/[Name]/Shared — Local sharing between related slices.
  • Domain — Entities, value objects, and domain services. Shared business logic lives here.
  • Infrastructure — Technical concerns.
  • Shared — Cross-cutting behaviors only.

The Rules

After building several systems this way, here's what I've landed on:

  1. Features own their request/response models. No exceptions.

  2. Push business logic into the domain. Entities and value objects are the best place to share business rules.

  3. Keep feature-family sharing local. If only Order slices need it, keep it in Features/Orders/Shared (feel free to find a better name than Shared).

  4. Infrastructure is shared by default. Database contexts, HTTP clients, logging. These are technical concerns.

  5. Apply the Rule of Three. Don't extract until you have three real usages with identical, stable logic.

Takeaway

Vertical Slice Architecture asks: "What feature does this belong to?"

The shared code question is really asking: "What do I do when the answer is multiple features?"

Acknowledge that some concepts genuinely span features. Give them a home based on their nature (domain, infrastructure, or cross-cutting behavior). Resist the urge to share everything just because you could.

The goal isn't zero duplication. It's code that's easy to change when requirements change.

And requirements always change.

Thanks for reading.

And stay awesome!



