As Kubernetes clusters grow to tens of thousands of nodes, controllers that watch high-cardinality resources like Pods face a scaling wall. Every replica of a horizontally scaled controller receives the full stream of events from the API server, paying the CPU, memory, and network cost to deserialize everything, only to discard the objects it is not responsible for. Scaling out the controller does not reduce per-replica cost; it multiplies it.
Kubernetes v1.36 introduces server-side sharded list and watch as an alpha feature (KEP-5866). With this feature enabled, the API server filters events at the source so that each controller replica receives only the slice of the resource collection it owns.
Some controllers, such as kube-state-metrics, already support horizontal sharding. Each replica is assigned a portion of the keyspace and discards objects that do not belong to it. While this works functionally, it does not reduce the volume of data flowing from the API server.
Server-side sharded list and watch solves this by moving the filtering upstream into the API server. Each replica tells the API server which hash range it owns, and the API server only sends matching events.
The feature adds a shardSelector field to ListOptions. Clients specify a
hash range using the shardRange() function:
shardRange(object.metadata.uid, '0x0000000000000000', '0x8000000000000000')
The API server computes a deterministic 64-bit
FNV-1a
hash of the specified field and returns only objects whose hash falls within the
range [start, end). This applies to both list responses and watch event
streams. The hash function produces the same result across all API server
instances, so the feature is safe to use with multiple API server replicas.
Currently supported field paths are object.metadata.uid and
object.metadata.namespace.
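To make the range semantics concrete, here is a minimal Go sketch of the filtering decision, using the standard library's hash/fnv. Note the exact bytes fed to the hash are defined by the KEP; hashing the field's raw string value, as below, is an assumption for illustration.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// hashField maps a field value to a deterministic 64-bit FNV-1a hash.
// Assumption: the server hashes the raw string value of the field.
func hashField(value string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(value))
	return h.Sum64()
}

// inShard reports whether a hash falls in the half-open range [start, end).
func inShard(hash, start, end uint64) bool {
	return hash >= start && hash < end
}

func main() {
	uid := "9a1d3c5e-0000-4000-8000-123456789abc" // example UID
	h := hashField(uid)
	fmt.Printf("hash=%#x, in lower half: %v\n", h, inShard(h, 0, 0x8000000000000000))
}
```

Because FNV-1a is deterministic, every API server instance and every client replica computes the same hash for the same UID, which is what makes the half-open ranges a consistent partition of the collection.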
Controllers typically use informers to list and watch resources. To shard the
workload, each replica injects the shardSelector into the ListOptions used
by its informers via WithTweakListOptions:
import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/informers"
)

shardSelector := "shardRange(object.metadata.uid, '0x0000000000000000', '0x8000000000000000')"

factory := informers.NewSharedInformerFactoryWithOptions(client, resyncPeriod,
    informers.WithTweakListOptions(func(opts *metav1.ListOptions) {
        opts.ShardSelector = shardSelector
    }),
)
For a 2-replica deployment, the selectors split the hash space in half:
// Replica 0: lower half of the hash space
"shardRange(object.metadata.uid, '0x0000000000000000', '0x8000000000000000')"
// Replica 1: upper half of the hash space
"shardRange(object.metadata.uid, '0x8000000000000000', '0x10000000000000000')"
A single replica can also cover non-contiguous ranges using ||:
"shardRange(object.metadata.uid, '0x0000000000000000', '0x4000000000000000') || " +
"shardRange(object.metadata.uid, '0x8000000000000000', '0xc000000000000000')"
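For deployments with more than two replicas, the range boundaries can be computed rather than hand-written. The following Go helper is illustrative (it is not part of client-go); it splits the hash space evenly into n half-open ranges. math/big is used because the exclusive upper bound of the last range, 2^64, does not fit in a uint64.

```go
package main

import (
	"fmt"
	"math/big"
)

// shardSelectors splits the 64-bit hash space into n even, contiguous
// half-open ranges and renders one selector string per replica, following
// the shardRange() syntax shown above. The last range's exclusive upper
// bound is 2^64, written 0x10000000000000000.
func shardSelectors(field string, n int) []string {
	total := new(big.Int).Lsh(big.NewInt(1), 64) // 2^64
	selectors := make([]string, n)
	for i := 0; i < n; i++ {
		start := new(big.Int).Div(new(big.Int).Mul(total, big.NewInt(int64(i))), big.NewInt(int64(n)))
		end := new(big.Int).Div(new(big.Int).Mul(total, big.NewInt(int64(i+1))), big.NewInt(int64(n)))
		selectors[i] = fmt.Sprintf("shardRange(%s, '0x%016x', '0x%016x')", field, start, end)
	}
	return selectors
}

func main() {
	for i, s := range shardSelectors("object.metadata.uid", 4) {
		fmt.Printf("replica %d: %s\n", i, s)
	}
}
```

For n = 2 this reproduces exactly the two selectors shown above; each replica would pass its own entry to WithTweakListOptions.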
When the API server honors a shard selector, the list response includes a
shardInfo field in the response metadata that echoes back the applied
selector:
{
  "kind": "PodList",
  "apiVersion": "v1",
  "metadata": {
    "resourceVersion": "10245",
    "shardInfo": {
      "selector": "shardRange(object.metadata.uid, '0x0000000000000000', '0x8000000000000000')"
    }
  },
  "items": [...]
}
If shardInfo is absent, the server did not honor the shard selector and the
client received the complete, unfiltered collection. In this case, the client
should be prepared to handle the full result set, for example by applying
client-side filtering to discard objects outside its assigned shard range.
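Such a fallback might look like the following Go sketch. The listMeta and shardInfo types here are simplified stand-ins for the alpha response metadata, and hashing the UID string with FNV-1a mirrors the assumption made earlier about the server's hash input:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// shardInfo and listMeta are simplified stand-ins for the response metadata:
// ShardInfo is non-nil only when the server honored the shard selector.
type shardInfo struct{ Selector string }
type listMeta struct{ ShardInfo *shardInfo }

type item struct{ UID string }

// filterFallback returns items unchanged when the server confirmed sharding,
// and otherwise applies the same [start, end) hash filter client-side.
func filterFallback(meta listMeta, items []item, start, end uint64) []item {
	if meta.ShardInfo != nil {
		return items // server already filtered to our shard
	}
	var kept []item
	for _, it := range items {
		h := fnv.New64a()
		h.Write([]byte(it.UID))
		if v := h.Sum64(); v >= start && v < end {
			kept = append(kept, it)
		}
	}
	return kept
}

func main() {
	items := []item{{UID: "uid-1"}, {UID: "uid-2"}}
	// Unhonored selector: filter locally to the lower half of the hash space.
	kept := filterFallback(listMeta{}, items, 0, 0x8000000000000000)
	fmt.Printf("kept %d of %d items\n", len(kept), len(items))
}
```

The client-side filter costs the same CPU and bandwidth as today's unsharded behavior, but it keeps the replica correct while the feature gate is off or the API server is too old to honor the selector.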
This feature is in alpha and requires enabling the ShardedListAndWatch feature
gate on the API server. We are looking for feedback from controller authors and
operators running large clusters.
If you have questions or feedback, join the #sig-api-machinery channel on
Kubernetes Slack.
Microsoft's earnings report went out last week, and the company spent a lot on AI in the quarter. Microsoft updates its customers on what it's done to address Windows 11 problems. And Xbox kills Copilot plans for the console.
Microsoft Earnings
More earnings
Windows
AI
Xbox and Gaming
Tips and picks
These show notes have been truncated due to length. For the full show notes, visit https://twit.tv/shows/windows-weekly/episodes/982
Hosts: Leo Laporte, Paul Thurrott, and Richard Campbell
And almost nobody is talking about it.
About 6 months ago, Anthropic launched “Cowork,” an AI agent system designed to work alongside you across apps, devices, and workflows. It shook the SaaS market. Microsoft stock is down 20% since the announcement.
Then Microsoft announced a partnership with Anthropic to license its Cowork stack. (details in comments)
At first, most people assumed this was just another “we licensed a model” deal.
But the deeper you look, the more interesting it gets.
Because Microsoft doesn’t appear to have licensed just the LLM.
They appear to have integrated the entire agentic interaction layer: the orchestration, delegation, and multi-step workflow experience.
And now the timelines are getting hard to ignore:
→ Anthropic adds mobile task delegation
→ Weeks later Microsoft announces phone-based Copilot Cowork flows
→ Anthropic pushes persistent agent workflows
→ Microsoft rolls out long-running Copilot tasks
→ Anthropic experiments with “computer use”
→ Microsoft expands Copilot actions and plug-ins, and adds a Marketplace.
We may look back at the Microsoft + Anthropic deal as the moment the industry quietly shifted from:
“Who has the smartest agent?”
to
“Who owns the AI operating layer for work?”
#Cowork #Copilot #Anthropic #HealthcareAI

AI agents are generalists, but when it comes to professional Flutter development, “general” isn’t enough. To build production-grade apps, you need an assistant that understands the nuance of localization, the latest Dart language features, and how to add integration tests.
Today, we’re introducing Agent Skills for Flutter and Dart — a new way to give your AI tools domain-specific expertise.
One of the primary challenges in AI development is the “knowledge gap”: Flutter and Dart can ship new features faster than LLMs can update their fixed training data. As part of how we think about AI, we are looking for ways not only to close that gap but also to ensure the agent applies its knowledge accurately and efficiently, following optimal workflows.
A little over a year ago, the Model Context Protocol (MCP) emerged as the way to provide agents with domain-specific capabilities. While MCP gives an agent access to specialized tools, an Agent Skill teaches the agent how to use those tools for a specific task. Think of it this way: MCP provides the hammer and nails (the tools), while a Skill provides the blueprint and the professional know-how to build the house.
Skills improve context efficiency through “progressive disclosure”. This is similar to how deferred loading works in Flutter: just as apps can load libraries only when needed, coding agents load Skills only when they are relevant to what you’re trying to do.
For Flutter and Dart, these Skills provide tailored instructions for common workflows, and enhance the tools provided in the Dart MCP server to reduce the knowledge gap, which improves accuracy and lowers token usage.
Our early experimentation revealed that Skills that only provide documentation don’t add as much value as we initially assumed. Since Flutter’s comprehensive and well written documentation is open-sourced, modern models are already highly capable of finding relevant information for most questions and tasks.
So, we pivoted to creating Skills that are “task-oriented”. Every skill in our GitHub Flutter Skills or Dart Skills repositories focuses on a concrete developer task, like building adaptive layouts, by providing instructions that help agents reliably complete it. We have conducted extensive manual evaluations to define our initial set of launched skills, and are working on an automated evaluation pipeline that we will share soon.
To start using these Skills in your workflow, first install the Skill set in your project directory:
npx skills add flutter/skills --skill '*' --agent universal
npx skills add dart-lang/skills --skill '*' --agent universal
You will be asked to select the Skills you want to install. Pick all or select the specific ones you might find most useful.
Then choose the agent that you prefer to develop with.
Now, prompt your AI agent as usual. Here are 5 ways you can use these Skills today:
Skill #1: flutter-add-integration-test
Configures Flutter Driver for app interaction and converts MCP actions into permanent integration tests.
Add an integration test for the checkout flow in my app
Skill #2: flutter-setup-localization
Adds localization support to your Flutter project
Set up localization in my app
Skill #3: flutter-build-responsive-layout
Uses LayoutBuilder, MediaQuery, or Expanded/Flexible to create a layout that adapts to different screen sizes.
Ensure that the checkout screen uses responsive layout
Skill #4: dart-use-pattern-matching
Refactors code to use Dart’s pattern matching language capabilities where appropriate
Refactor my code so that it uses pattern matching where possible
Skill #5: dart-collect-coverage
Uses the coverage package to collect unit test coverage and generate an LCOV report.
Collect test coverage for my project
For more prompt examples, check out the READMEs in the Flutter Skills or Dart Skills repositories on GitHub.
These initial core Skills, designed to handle the most common Flutter development hurdles, are just the beginning. We want to build the future of AI-assisted development with you, our community. As you use these Skills and create new ones for your projects, file issues (Dart Skills repo, Flutter Skills repo), and let us know what additional work you’d like to see. We look forward to helping improve your productivity as you use these Skills!
Introducing Skills for Dart and Flutter was originally published in Flutter on Medium, where people are continuing the conversation by highlighting and responding to this story.