Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
152781 stories · 33 followers

Microsoft puts a price on its voluntary retirement program

1 Share
The night sky over Microsoft’s headquarters campus in Redmond, Wash. (GeekWire File Photo / Todd Bishop)

Microsoft will take a $900 million charge in its current quarter for its one-time voluntary retirement program, the company disclosed in its earnings report Wednesday.

Just to put that in context, it’s roughly equal to one day of revenue for the company at its current rate. Microsoft brought in $82.9 billion in its most recent quarter, ended March 31. 

The retirement program is part of a broader reshaping of Microsoft’s workforce, which numbered 228,000 employees globally as of mid-2025, the last publicly released count.

Without giving specific numbers, Microsoft said Wednesday that its headcount declined year-over-year in the most recent quarter and will decline again in fiscal 2027, which starts in July.

On the earnings conference call, CFO Amy Hood said the company is focused on “building high performing teams that operate with pace and agility.”

At the same time, Microsoft plans to spend more than $40 billion on capital expenditures in the current quarter — a new record — primarily on data centers and AI infrastructure. 

The voluntary retirement program, announced last week, is open to U.S. employees at the senior director level and below whose age and years of service add up to 70 or more. 

Eligible employees are expected to get details May 7. They’ll have 30 days to decide.

At last count, Microsoft had about 125,000 U.S. employees, and the company has said about 7% are eligible for the program, so that would translate into about 8,750 eligible employees. 

The program will include a financial payout and extended healthcare, but Microsoft hasn’t disclosed the specific terms of the plan yet.

The $900 million charge, split between $350 million in cost of revenue and $550 million in operating expenses, reflects the company's estimate of the program's cost, including its assumptions about how many employees will accept, a number it also hasn't disclosed.

The program has sparked a range of reactions on LinkedIn and other social networks. Some employees and HR professionals have praised it as a more humane alternative to layoffs, noting that it gives longtime employees a choice rather than a pink slip.

But others have warned that Microsoft risks losing experienced engineers and leaders who built the systems the company still depends on, and some eligible employees have noted that being told they qualify for “retirement” in their late 40s doesn’t feel like a benefit.

Read the whole story
alvinashcraft
48 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Microsoft says it has over 20M paid Copilot users, and they really are using it

1 Share
Despite the lingering perception that no one really uses Copilot, Microsoft said on Wednesday that both its paid user count and engagement are growing.

Kubernetes v1.36: Tiered Memory Protection with Memory QoS

1 Share

On behalf of SIG Node, we are pleased to announce updates to the Memory QoS feature (alpha) in Kubernetes v1.36. Memory QoS uses the cgroup v2 memory controller to give the kernel better guidance on how to treat container memory. It was first introduced in v1.22 and updated in v1.27. In Kubernetes v1.36, we're introducing: opt-in memory reservation, tiered protection by QoS class, observability metrics, and kernel-version warning for memory.high.

What's new in v1.36

Opt-in memory reservation with memoryReservationPolicy

v1.36 separates throttling from reservation. Enabling the feature gate turns on memory.high throttling (the kubelet sets memory.high based on memoryThrottlingFactor, default 0.9), but memory reservation is now controlled by a separate kubelet configuration field:

  • None (default): no memory.min or memory.low is written. Throttling via memory.high still works.
  • TieredReservation: the kubelet writes tiered memory protection based on the Pod's QoS class:

Guaranteed Pods get hard protection via memory.min. For example, a Guaranteed Pod requesting 512 MiB of memory results in:

$ cat /sys/fs/cgroup/kubepods.slice/kubepods-pod6a4f2e3b_1c9d_4a5e_8f7b_2d3e4f5a6b7c.slice/memory.min
536870912

The kernel will not reclaim this memory under any circumstances. If it cannot honor the guarantee, it invokes the OOM killer on other processes to free pages.

Burstable Pods get soft protection via memory.low. For the same 512 MiB request on a Burstable Pod:

$ cat /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b3c7d2e_4f5a_6b7c_9d1e_3f4a5b6c7d8e.slice/memory.low
536870912

The kernel avoids reclaiming this memory under normal pressure, but may reclaim it if the alternative is a system-wide OOM.

BestEffort Pods get neither memory.min nor memory.low. Their memory remains fully reclaimable.
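The per-class mapping above can be sketched in a few lines. This is an illustrative model of the policy, not kubelet source; the function name is invented for the example.

```python
# Illustrative sketch (not kubelet code): which cgroup v2 protection
# field the TieredReservation policy writes for each QoS class.
def tiered_reservation(qos_class: str, memory_request_bytes: int) -> dict:
    """Return the cgroup v2 protection values for a pod's memory request."""
    if qos_class == "Guaranteed":
        # Hard protection: the kernel never reclaims this memory.
        return {"memory.min": memory_request_bytes}
    if qos_class == "Burstable":
        # Soft protection: reclaimed only to avoid a system-wide OOM.
        return {"memory.low": memory_request_bytes}
    # BestEffort: no protection, memory stays fully reclaimable.
    return {}

print(tiered_reservation("Guaranteed", 512 * 1024 * 1024))
# {'memory.min': 536870912} — matches the cgroup file shown above
```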

Comparison with v1.27 behavior

In earlier versions, enabling the MemoryQoS feature gate immediately set memory.min for every container with a memory request. memory.min is a hard reservation that the kernel will not reclaim, regardless of memory pressure.

Consider a node with 8 GiB of RAM where Burstable Pod requests total 7 GiB. In earlier versions, that 7 GiB would be locked as memory.min, leaving little headroom for the kernel, system daemons, or BestEffort workloads and increasing the risk of OOM kills.

With v1.36 tiered reservation, those Burstable requests map to memory.low instead of memory.min. Under normal pressure, the kernel still protects that memory, but under extreme pressure it can reclaim part of it to avoid system-wide OOM. Only Guaranteed Pods use memory.min, which keeps hard reservation lower.

With memoryReservationPolicy in v1.36, you can enable throttling first, observe workload behavior, and opt into reservation when your node has enough headroom.
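The throttling threshold itself is derived from the request, the limit, and memoryThrottlingFactor. A sketch of the calculation, assuming the page-aligned formula from the original Memory QoS design (the exact kubelet arithmetic may differ):

```python
# Sketch of the memory.high calculation, assuming the formula from the
# earlier Memory QoS alpha design; not verbatim kubelet code.
PAGE_SIZE = 4096  # typical x86-64 page size

def memory_high(request: int, limit: int, throttling_factor: float = 0.9) -> int:
    """Throttling threshold placed between the request and the limit,
    rounded down to a page boundary."""
    high = request + throttling_factor * (limit - request)
    return int(high // PAGE_SIZE) * PAGE_SIZE

# A Burstable container requesting 512 MiB with a 1 GiB limit lands
# 90% of the way between request and limit.
print(memory_high(512 * 1024 * 1024, 1024 * 1024 * 1024))
```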

Observability metrics

Two alpha-stability metrics are exposed on the kubelet /metrics endpoint:

  • kubelet_memory_qos_node_memory_min_bytes: total memory.min across Guaranteed Pods
  • kubelet_memory_qos_node_memory_low_bytes: total memory.low across Burstable Pods

These are useful for capacity planning. If kubelet_memory_qos_node_memory_min_bytes is creeping toward your node's physical memory, you know hard reservation is getting tight.

$ curl -sk https://localhost:10250/metrics | grep memory_qos
# HELP kubelet_memory_qos_node_memory_min_bytes [ALPHA] Total memory.min in bytes for Guaranteed pods
kubelet_memory_qos_node_memory_min_bytes 5.36870912e+08
# HELP kubelet_memory_qos_node_memory_low_bytes [ALPHA] Total memory.low in bytes for Burstable pods
kubelet_memory_qos_node_memory_low_bytes 2.147483648e+09

Kernel version check

On kernels older than 5.9, memory.high throttling can trigger the kernel livelock issue. The bug was fixed in kernel 5.9. In v1.36, when the feature gate is enabled, the kubelet checks the kernel version at startup and logs a warning if it is below 5.9. The feature continues to work — this is informational, not a hard block.
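The startup check amounts to a version comparison. The sketch below is simplified and illustrative, not the kubelet's actual implementation:

```python
# Sketch of the startup kernel-version warning described above;
# release-string parsing here is deliberately simplified.
MINIMUM_KERNEL = (5, 9)  # memory.high livelock fixed in kernel 5.9

def warn_if_old_kernel(release: str) -> bool:
    """Return True if a warning should be logged for this kernel release."""
    major, minor = release.split(".")[:2]
    # Strip any non-numeric suffix such as "-rc1" from the minor part.
    minor = "".join(ch for ch in minor if ch.isdigit())
    return (int(major), int(minor)) < MINIMUM_KERNEL

print(warn_if_old_kernel("5.4.0-150-generic"))  # True: below 5.9, warn
print(warn_if_old_kernel("6.1.0"))              # False: no warning
```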

How Kubernetes maps Memory QoS to cgroup v2

Memory QoS uses four cgroup v2 memory controller interfaces:

  • memory.max: hard memory limit — unchanged from previous versions
  • memory.min: hard memory protection — with TieredReservation, set only for Guaranteed Pods
  • memory.low: soft memory protection — set for Burstable Pods with TieredReservation
  • memory.high: memory throttling threshold — unchanged from previous versions

The following summarizes how Kubernetes container resources map to cgroup v2 interfaces when memoryReservationPolicy: TieredReservation is configured. With the default memoryReservationPolicy: None, no memory.min or memory.low values are set.

  • Guaranteed: memory.min set to requests.memory (hard protection); memory.low not set; memory.high not set (requests == limits, so throttling is not useful); memory.max set to limits.memory.
  • Burstable: memory.min not set; memory.low set to requests.memory (soft protection); memory.high calculated from the throttling-factor formula; memory.max set to limits.memory (if specified).
  • BestEffort: memory.min and memory.low not set; memory.high calculated based on node allocatable memory; memory.max not set.

Cgroup hierarchy

cgroup v2 requires that a parent cgroup's memory protection is at least as large as the sum of its children's. The kubelet maintains this by setting memory.min on the kubepods root cgroup to the sum of all Guaranteed and Burstable Pod memory requests, and memory.low on the Burstable QoS cgroup to the sum of all Burstable Pod memory requests. This way the kernel can enforce the per-container and per-pod protection values correctly.
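The aggregation described above can be sketched as follows; the pod representation and output keys are invented for illustration:

```python
# Sketch of the parent-cgroup aggregation: cgroup v2 requires a parent's
# protection to be at least the sum of its children's, so the kubelet
# sums pod requests up the hierarchy. Field names are illustrative.
def kubepods_protection(pods: list[dict]) -> dict:
    """Compute memory.min for the kubepods root cgroup and memory.low
    for the Burstable QoS cgroup from per-pod memory requests."""
    root_min = sum(p["request"] for p in pods
                   if p["qos"] in ("Guaranteed", "Burstable"))
    burstable_low = sum(p["request"] for p in pods
                        if p["qos"] == "Burstable")
    return {"kubepods/memory.min": root_min,
            "kubepods/burstable/memory.low": burstable_low}

pods = [{"qos": "Guaranteed", "request": 512 << 20},
        {"qos": "Burstable", "request": 256 << 20}]
print(kubepods_protection(pods))
```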

The kubelet manages pod-level and QoS-class cgroups directly using the runc libcontainer library, while container-level cgroups are managed by the container runtime (containerd or CRI-O).

How do I use it?

Prerequisites

  1. Kubernetes v1.36 or later
  2. Linux with cgroup v2. Kernel 5.9 or higher is recommended — earlier kernels work but may experience the livelock issue. You can verify cgroup v2 is active by running mount | grep cgroup2.
  3. A container runtime that supports cgroup v2 (containerd 1.6+, CRI-O 1.22+)

Configuration

To enable Memory QoS with tiered protection:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  MemoryQoS: true
memoryReservationPolicy: TieredReservation # Options: None (default), TieredReservation
memoryThrottlingFactor: 0.9 # Optional: default is 0.9

If you want memory.high throttling without memory protection, omit memoryReservationPolicy or set it to None:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  MemoryQoS: true
memoryReservationPolicy: None  # This is the default

How can I learn more?

Getting involved

This feature is driven by SIG Node. If you are interested in contributing or have feedback, you can find us on Slack (#sig-node), the mailing list, or at the regular SIG Node meetings. Please file bugs at kubernetes/kubernetes and enhancement proposals at kubernetes/enhancements.


PPP 507 | Why Smart Teams Still Fail, with Stephen Shapiro

1 Share

Summary

In this episode, Andy talks with Stephen Shapiro, innovation expert and author of You're Not Playing With a Full Deck: Why the People Who Drive You Crazy Are Your Unfair Advantage. Stephen's journey starts with a costly failure: a $30 million innovation project at Accenture that fell apart, not from a lack of talent, but because everyone on the team thought the same way.

Out of that failure came a framework built around a familiar metaphor: a deck of cards. Stephen introduces four distinct personality styles tied to the four suits and explains why teams missing certain suits are setting themselves up to struggle, even when everyone is smart and capable. In this conversation, you'll hear why unanimous agreement is actually a warning sign, how strengths can quietly sabotage performance when overplayed, and why the people who drive you crazy may be exactly who your team needs. Andy and Stephen also explore what the rise of AI means for the uniquely human qualities that only certain suits can provide.

If you're looking for a fresh, practical framework to build stronger teams and unlock better results, this episode is for you!

Sound Bites

  • "We were smart people. We had smart people on the team, and we somehow failed miserably."
  • "I realized I was the problem. And it wasn't just me, it was the way we constructed the team."
  • "Anytime you have everybody agreeing, that's a warning sign."
  • "I actually think the bigger enemy of innovation is, 'Wow, this is a great idea!' because then what ends up happening is we believe it's a great idea."
  • "It's less of a personality test and more of an opportunity to just stimulate some conversation that typically doesn't happen inside of organizations."
  • "Left to their own devices, diverse teams perform terribly."
  • "So it's not just diversity, it's diversity plus appreciation."
  • "I try to make it very clear to AI: don't agree with me!"
  • "Part of this is who are we really versus who did we become?"
  • "There's a difference between a strength and a strong suit. A strength means you're good at it. A strong suit means you're good at it and it energizes you because it's who you are at your core."

Chapters

  • 00:00 Introduction
  • 01:25 Start of Interview
  • 01:37 When Teaming Started Going Wrong
  • 02:52 Recognizing the Real Root Cause
  • 03:38 Choosing Your Team Members
  • 04:45 Similarity vs. Genuine Trust
  • 06:00 A Real-World Team Turnaround
  • 07:51 Overcoming Resistance to Difference
  • 09:04 The Origin of the Card-Based Framework
  • 10:47 When Strengths Become Liabilities
  • 13:10 Warning Signs of Strengths Gone Wild
  • 16:03 Meeting Personalities and How to Balance Them
  • 22:00 How AI Changes the Human Equation on Teams
  • 23:45 Which Personality Suits Are Hardest for AI to Replace
  • 24:53 How Stephen Uses AI in His Own Work
  • 26:18 Applying the Framework Outside of Work
  • 29:42 End of Interview
  • 30:20 Andy Comments After the Interview
  • 33:36 Outtakes

Learn More

You can learn more about Stephen and his work at StephenShapiro.com/fulldeck.

For more learning on this topic, check out:

  • Episode 286 with Ruth Pearce. Ruth wrote a book about the power of character strengths, and she definitely comes at it through the lens of project managers. Check out episode 286 to learn more.
  • Episode 283 with Tom Rath. Tom is the StrengthsFinder guy and it's an engaging discussion that goes beyond personality to what he thinks is the most important question you need to be asking.
  • Episode 489 with Martin Dubin. It's an intriguing discussion about blind spots that, if you haven't already listened to, I highly recommend.

Chat with PMeLa

You can chat directly with PMeLa—the podcast's AI persona—to get episode recommendations and answers to your project management and leadership questions. Visit PeopleAndProjectsPodcast.com/PMeLa to chat with her.

Join Us for LEAD52

I know you want to be a more confident leader. That's why you listen to this podcast. LEAD52 is a global community of people like you who are committed to transforming their ability to lead and deliver. It's 52 weeks of leadership learning, delivered right to your inbox, taking less than 5 minutes a week. And it's all for free. Learn more and sign up at GetLEAD52.com. Thanks!

Thank you for joining me for this episode of The People and Projects Podcast!

Talent Triangle: Power Skills

Topics: Team Building, Leadership, Cognitive Diversity, Collaboration, Innovation, Project Management, Meeting Effectiveness, Personality Frameworks, AI, Human Potential, Self-Awareness, Strengths, Organizational Culture

The following music was used for this episode:

Music: Summer Awakening by Frank Schroeter
License (CC BY 4.0): https://filmmusic.io/standard-license

Music: Synthiemania by Frank Schroeter
License (CC BY 4.0): https://filmmusic.io/standard-license





Download audio: https://traffic.libsyn.com/secure/peopleandprojectspodcast/507-StephenShapiro.mp3?dest-id=107017

Why Tori Westerhoff says we should talk to strangers

1 Share

Tori Westerhoff joins Scott to explore the intersection of AI, human psychology, and personal growth. As people increasingly use LLMs for introspection and decision-making, Tori argues that we're missing the diversity of thought that comes from community, even from random encounters with strangers. She reveals her own practice: a daily noon reminder to talk to strangers. "If you sycophant yourself, you're never going to grow," she explains. The conversation delves into how LLMs can create echo chambers of thought, and why the randomness of human connection, even just someone on the same bus, helps us update our mental frames and break out of programmed decision-making paradigms.





Download audio: https://r.zen.ai/r/cdn.simplecast.com/media/audio/transcoded/75c667ea-2739-4306-96be-e15097ef0853/24832310-78fe-4898-91be-6db33696c4ba/episodes/audio/group/70add26f-aedb-405a-9f44-063761313d97/group-item/9fbc60ae-4690-4697-a7b1-58b79be8f229/128_default_tc.mp3?aid=rss_feed&feed=gvtxUiIf

Realtime PostgreSQL: From Data Connect to SQL Connect

1 Share