Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

ReactOS Unifies Installation Media, Introduces GUI Installer and New ATA Driver

jeditobe writes: Developers of ReactOS told Phoronix that the project has introduced a unified BootCD, replacing its previously separate installation media and LiveCD images. The new image combines the traditional text-mode installer with a LiveCD mode in a single medium. Within this unified BootCD, the updated LiveCD mode now includes an option to launch a first-stage GUI installer. The graphical interface is intended to make installation more approachable for new users compared to the long-standing text-based setup process. In a separate development, the project has also merged a new ATA storage driver that has been in progress since early 2024. The plug-and-play aware storage stack supports SATA, PATA, ATAPI, AHCI, and even SCSI devices, potentially expanding the range of hardware on which ReactOS can successfully boot. Following recent improvements to graphics driver support, the project continues to make incremental progress across core subsystems, though its long development timeline remains a point of discussion. Will these usability and hardware compatibility improvements be enough to broaden ReactOS adoption beyond its current niche? Note that these new features are not present in version 0.4.15; they are available for testing in the latest nightly test builds.

Read more of this story at Slashdot.

Read the whole story
alvinashcraft
38 seconds ago
reply
Pennsylvania, USA
Share this story
Delete

Kubernetes v1.36: Server-Side Sharded List and Watch


As Kubernetes clusters grow to tens of thousands of nodes, controllers that watch high-cardinality resources like Pods face a scaling wall. Every replica of a horizontally scaled controller receives the full stream of events from the API server, paying the CPU, memory, and network cost to deserialize everything, only to discard the objects it is not responsible for. Scaling out the controller does not reduce per-replica cost; it multiplies it.

Kubernetes v1.36 introduces server-side sharded list and watch as an alpha feature (KEP-5866). With this feature enabled, the API server filters events at the source so that each controller replica receives only the slice of the resource collection it owns.

The problem with client-side sharding

Some controllers, such as kube-state-metrics, already support horizontal sharding. Each replica is assigned a portion of the keyspace and discards objects that do not belong to it. While this works functionally, it does not reduce the volume of data flowing from the API server:

  • N replicas × full event stream: every replica deserializes and processes every event, then throws away what it does not need.
  • Network bandwidth scales with replicas, not with shard size.
  • CPU spent on deserialization is wasted for the discarded fraction.

Server-side sharded list and watch solves this by moving the filtering upstream into the API server. Each replica tells the API server which hash range it owns, and the API server only sends matching events.

How it works

The feature adds a shardSelector field to ListOptions. Clients specify a hash range using the shardRange() function:

shardRange(object.metadata.uid, '0x0000000000000000', '0x8000000000000000')

The API server computes a deterministic 64-bit FNV-1a hash of the specified field and returns only objects whose hash falls within the range [start, end). This applies to both list responses and watch event streams. The hash function produces the same result across all API server instances, so the feature is safe to use with multiple API server replicas.

Currently supported field paths are object.metadata.uid and object.metadata.namespace.
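As a sketch of the membership test described above, the following example computes a 64-bit FNV-1a hash with Go's standard hash/fnv package and checks the half-open range [start, end). Note one assumption: this hashes the raw string bytes of the field value, while the API server may encode the field differently before hashing.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// shardHash returns the 64-bit FNV-1a hash of a field value.
// Hashing the raw string bytes is an assumption of this sketch; the
// API server may encode the field differently before hashing.
func shardHash(value string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(value))
	return h.Sum64()
}

// inShard reports whether value's hash falls in the half-open
// range [start, end), matching the semantics described above.
func inShard(value string, start, end uint64) bool {
	v := shardHash(value)
	return v >= start && v < end
}

func main() {
	uid := "b7e2c1d0-1234-4cde-8f9a-abcdef012345" // hypothetical Pod UID
	lower := inShard(uid, 0x0000000000000000, 0x8000000000000000)
	fmt.Printf("uid in lower half: %v, upper half: %v\n", lower, !lower)
}
```

Because FNV-1a is deterministic and has no configuration, every API server replica (and any client doing fallback filtering) computes the same hash for the same input.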

Using sharded watches in controllers

Controllers typically use informers to list and watch resources. To shard the workload, each replica injects the shardSelector into the ListOptions used by its informers via WithTweakListOptions:

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
)

shardSelector := "shardRange(object.metadata.uid, '0x0000000000000000', '0x8000000000000000')"

factory := informers.NewSharedInformerFactoryWithOptions(client, resyncPeriod,
	informers.WithTweakListOptions(func(opts *metav1.ListOptions) {
		opts.ShardSelector = shardSelector
	}),
)

For a 2-replica deployment, the selectors split the hash space in half:

// Replica 0: lower half of the hash space
"shardRange(object.metadata.uid, '0x0000000000000000', '0x8000000000000000')"

// Replica 1: upper half of the hash space
"shardRange(object.metadata.uid, '0x8000000000000000', '0x10000000000000000')"

A single replica can also cover non-contiguous ranges using ||:

"shardRange(object.metadata.uid, '0x0000000000000000', '0x4000000000000000') || " +
 "shardRange(object.metadata.uid, '0x8000000000000000', '0xc000000000000000')"
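Rather than hard-coding these strings, each replica can derive its selector from its index. The sketch below splits the hash space into contiguous ranges for n replicas; the helper name selectorFor is hypothetical, and it assumes n is a power of two so the ranges divide the space exactly. The last shard's exclusive upper bound is 2^64, which does not fit in a uint64, so it is written out literally as in the example above.

```go
package main

import "fmt"

// selectorFor returns the shardRange selector for replica i of n,
// splitting the 64-bit hash space into contiguous ranges. It assumes
// n is a power of two so the ranges divide the space exactly.
func selectorFor(i, n int) string {
	// Width of each shard: (2^64 - 1)/n + 1 equals 2^64/n for powers of two.
	width := ^uint64(0)/uint64(n) + 1
	start := uint64(i) * width
	// The exclusive upper bound of the last shard is 2^64, which does not
	// fit in a uint64, so it is emitted as a literal string.
	end := "0x10000000000000000"
	if i != n-1 {
		end = fmt.Sprintf("0x%016x", uint64(i+1)*width)
	}
	return fmt.Sprintf("shardRange(object.metadata.uid, '0x%016x', '%s')", start, end)
}

func main() {
	for i := 0; i < 2; i++ {
		fmt.Println(selectorFor(i, 2))
	}
}
```

For a 2-replica deployment this reproduces exactly the two selectors shown above; in a StatefulSet, i could come from the pod's ordinal.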

Verifying server support

When the API server honors a shard selector, the list response includes a shardInfo field in the response metadata that echoes back the applied selector:

{
  "kind": "PodList",
  "apiVersion": "v1",
  "metadata": {
    "resourceVersion": "10245",
    "shardInfo": {
      "selector": "shardRange(object.metadata.uid, '0x0000000000000000', '0x8000000000000000')"
    }
  },
  "items": [...]
}

If shardInfo is absent, the server did not honor the shard selector and the client received the complete, unfiltered collection. In this case, the client should be prepared to handle the full result set, for example by applying client-side filtering to discard objects outside its assigned shard range.
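That fallback can reuse the same hash. The sketch below filters a decoded list down to the replica's shard on the client side; the item type and function name are hypothetical, and as before, hashing the raw UID bytes is an assumption about the server's encoding.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// item stands in for a decoded object; only the UID matters here.
type item struct {
	UID string
}

// fnv1a64 hashes the raw string bytes; the exact encoding the API
// server hashes is an assumption of this sketch.
func fnv1a64(s string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(s))
	return h.Sum64()
}

// filterToShard is a client-side fallback for servers that ignore the
// shard selector (no shardInfo in the response): it keeps only objects
// whose UID hash falls in [start, end).
func filterToShard(items []item, start, end uint64) []item {
	var kept []item
	for _, it := range items {
		if v := fnv1a64(it.UID); v >= start && v < end {
			kept = append(kept, it)
		}
	}
	return kept
}

func main() {
	all := []item{{UID: "uid-1"}, {UID: "uid-2"}, {UID: "uid-3"}}
	lower := filterToShard(all, 0, 0x8000000000000000)
	fmt.Printf("kept %d of %d items in lower shard\n", len(lower), len(all))
}
```

Filtering client-side restores correctness but not the bandwidth savings, so it is strictly a degraded mode for mixed-version clusters.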

Getting involved

This feature is in alpha and requires enabling the ShardedListAndWatch feature gate on the API server. We are looking for feedback from controller authors and operators running large clusters.

If you have questions or feedback, join the #sig-api-machinery channel on Kubernetes Slack.


WW 982: Don't Lick the Manta Rays - Breaking Down Microsoft's Earnings


Microsoft's earnings report went out last week, and the company spent a lot on AI in the quarter. Microsoft updates its customers on what it's done to address Windows 11 problems. And Xbox kills Copilot plans for the console.

Microsoft Earnings

  • Microsoft announced that it earned a net income of $31.8 billion on revenues of $82.9 billion in the previous quarter.
  • Windows: 1.6 billion monthly active devices, a focus on quality after years of enshittification - but revenues from PC makers were down 2 percent YOY.
  • Microsoft Edge "has taken share for 20 consecutive quarters," which isn't supported by the evidence.
  • Bing "monthly active users reached one billion for the first time," raising questions about how Microsoft defines the term "user."
  • Xbox: "The team is recommitting to our core fans and players, and shaping the future of play," new records for monthly active Xbox users and game streaming hours.
  • AI: Capex spending in the quarter was $32 billion, down from previous quarter as previously described, but up 49 percent YOY.

More earnings

  • Apple, Google/Alphabet, and Amazon.
  • AMD - Up because of AI datacenter.
  • Qualcomm - Plus, Intel just hired away a key Qualcomm exec.

Windows

  • Microsoft shares an update about what it's done to address Windows 11 pain points so far.
  • Marcus Ash is one of the good guys.
  • Some of this is happening in Insider, some is rolling out to retail.
  • Windows Insider Program and Windows Update improvements we discussed last week - two primary channels in WIP now.
  • Simplifying AI experiences - fewer Copilot icons (Notepad, etc.).
  • File Explorer improvements - performance, fewer hangs, better polish and consistency.
  • Widgets - Feed will be off by default, fewer interruptions, no hover activate.
  • System performance - Smaller memory footprint, more aggressive RAM restoration, and more.
  • Soon: Taskbar updates, Start updates, and more to share at Build in June.
  • Week D update arrives with a peek at May's Patch Tuesday.
  • Major: Xbox Mode, AI agents on the Taskbar are the first two big features of 2026.
  • Minor: Also adds File Explorer improvements, new haptic feedback effects, touch keyboard improvements, and more.
  • Shocking new report that Microsoft Edge is incredibly insecure should surprise no one.

AI

  • Microsoft Agent 365 Platform is out of preview, supports local AI agents and Copilot Cowork Agent arrives on mobile with plugin support.
  • Microsoft launches a Legal AI Agent in Word.
  • Apple's plan to open up to multiple third-party AIs is a good one.
  • Canonical's plan to add AI to Ubuntu is also good, but you're never going to believe what happened next.

Xbox and Gaming

  • Asha Sharma reorgs Xbox, kills Copilot on the console.
  • Forza Horizon 6, more coming to Game Pass in May.
  • Xbox April Update is out with updates for all platforms.
  • Next Call of Duty will not ship on Xbox One, PS4.
  • Age of Empires II: Definitive Edition is coming to the Mac for some reason.
  • And finally, with the Supreme Court refusing to block the implementation of the ruling in Epic v. Apple, Microsoft's Xbox game store for mobile is one step closer to happening.

Tips and picks

  • Tip of the week: Embrace inconvenience.
  • App pick of the week: Windows Defender.
  • RunAs Radio this week: Securing Active Directory with Spencer Alessi.
  • Brown liquor pick of the week: Stalk & Barrel Whisky.

These show notes have been truncated due to length. For the full show notes, visit https://twit.tv/shows/windows-weekly/episodes/982

Hosts: Leo Laporte, Paul Thurrott, and Richard Campbell

Download audio: https://pdst.fm/e/pscrb.fm/rss/p/mgln.ai/e/294/cdn.twit.tv/megaphone/ww_982/ARML3541053303.mp3

.NET Nanoframework with José Simões

Ready to go nano? Carl and Richard talk to José Simões about the open source .NET nanoFramework - a community-driven project to provide .NET for embedded systems. José talks about the evolution from the .NET Micro Framework to something even smaller, even as microcontrollers have become much more powerful. The conversation looks beyond the hobbyist and educational uses of these systems into commercial IoT applications. The development cycle is one you'll recognize: working in Visual Studio (or Visual Studio Code) and executing against an emulator, or against the actual controller via USB. And yes, you can set breakpoints in the controller!



Download audio: https://dts.podtrac.com/redirect.mp3/api.spreaker.com/download/episode/71899697/dotnetrocks_2001_dot_net_nanoframework.mp3

The most important part of the Microsoft + Anthropic Cowork deal is not the model.


And almost nobody is talking about it.

About 6 months ago, Anthropic launched “Cowork,” an AI agent system designed to work alongside you across apps, devices, and workflows. It shook the SaaS market. Microsoft stock is down 20% since the announcement.

Then Microsoft announced a partnership with Anthropic to license its Cowork stack. (details in comments)

At first, most people assumed this was just another “we licensed a model” deal.

But the deeper you look, the more interesting it gets.

Because Microsoft didn’t just appear to license the LLM.

They appear to have integrated the entire agentic interaction layer, the orchestration, delegation, and multi-step workflow experience.

And now the timelines are getting hard to ignore:

→ Anthropic adds mobile task delegation

→ Weeks later Microsoft announces phone-based Copilot Cowork flows

→ Anthropic pushes persistent agent workflows

→ Microsoft rolls out long-running Copilot tasks

→ Anthropic experiments with “computer use”

→ Microsoft expands Copilot actions, plug-ins and adds a Marketplace.

We may look back at the Microsoft + Anthropic deal as the moment the industry quietly shifted from:

“Who has the smartest agent?”

to

“Who owns the AI operating layer for work?”

#Cowork #Copilot #Anthropic #HealthcareAI


Introducing Skills for Dart and Flutter

Introducing prepackaged Dart and Flutter Skills!

Improving AI with domain expertise

AI agents are generalists, but when it comes to professional Flutter development, “general” isn’t enough. To build production-grade apps, you need an assistant that understands the nuance of localization, the latest Dart language features, and how to add integration tests.

Today, we’re introducing Agent Skills for Flutter and Dart — a new way to give your AI tools domain-specific expertise.

Beyond the knowledge gap

One of the primary challenges in AI development is the “knowledge gap.” Flutter and Dart can launch new features more quickly than LLMs can update their fixed training data. As part of how we are thinking about AI, we are looking for ways not only to address the knowledge gap but also to ensure the agent applies that knowledge accurately and efficiently, following optimal workflows.

A little over a year ago, the Model Context Protocol (MCP) was the primary way to give AI agents domain-specific expertise. While MCP gives an agent access to specialized tools, an Agent Skill teaches the agent how to use those tools for a specific task. Think of it this way: MCP provides the hammer and nails (the tools), while a Skill provides the blueprint and the professional know-how to build the house.

Skills improve context efficiency through “progressive disclosure.” Much like deferred loading in Flutter, where apps load libraries only when needed, coding agents load Skills only when they are relevant to what you’re trying to do.

For Flutter and Dart, these Skills provide tailored instructions for common workflows, and enhance the tools provided in the Dart MCP server to reduce the knowledge gap, which improves accuracy and lowers token usage.

A task-oriented approach

Our early experimentation revealed that Skills that only provide documentation don’t add as much value as we initially assumed. Since Flutter’s comprehensive and well written documentation is open-sourced, modern models are already highly capable of finding relevant information for most questions and tasks.

So, we pivoted to creating Skills that are “task-oriented”. Every skill in our GitHub Flutter Skills or Dart Skills repositories focuses on developer tasks, like building adaptive layouts, by providing instructions for agents to reliably complete the task. We have conducted extensive manual evaluations to define our initial set of launched Skills, and are working on an automated evaluation pipeline that we will share soon.

Using the Skills

To start using these Skills in your workflow, first install the Skill set in your project directory:

npx skills add flutter/skills --skill '*' --agent universal
npx skills add dart-lang/skills --skill '*' --agent universal

You will be asked to select the Skills you want to install. Pick all or select the specific ones you might find most useful.

Then choose the agent that you prefer to develop with.

Now, prompt your AI agent as usual. Here are 5 ways you can use these Skills today:

Skill #1: flutter-add-integration-test

Configures Flutter Driver for app interaction and converts MCP actions into permanent integration tests.

Add an integration test for the checkout flow in my app

Skill #2: flutter-setup-localization

Adds localization support to your Flutter project

Set up localization in my app

Skill #3: flutter-build-responsive-layout

Uses LayoutBuilder, MediaQuery, or Expanded/Flexible to create a layout that adapts to different screen sizes.

Ensure that the checkout screen uses responsive layout

Skill #4: dart-use-pattern-matching

Refactors code to use Dart’s pattern matching language capabilities where appropriate

Refactor my code so that it uses pattern matching where possible

Skill #5: dart-collect-coverage

Uses the coverage package to collect unit test coverage and generate an LCOV report.

Collect test coverage for my project

For more prompt examples, check out the READMEs in the Flutter Skills and Dart Skills repositories on GitHub.

Tell us what you think

These initial core Skills, designed to handle the most common Flutter development hurdles, are just the beginning. We want to build the future of AI-assisted development with you, our community. As you use these Skills and create new ones for your projects, file issues (Dart Skills repo, Flutter Skills repo), and let us know what additional work you’d like to see. We look forward to helping improve your productivity as you use these Skills!


Introducing Skills for Dart and Flutter was originally published in Flutter on Medium, where people are continuing the conversation by highlighting and responding to this story.
