Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

European Firms Hit Hiring Brakes Over AI and Slowing Growth

European hiring momentum is cooling as slower growth and accelerating AI adoption make both employers and workers more cautious. DW.com reports: [Angelika Reich, leadership adviser at the executive recruitment firm Spencer Stuart] noted how Europe's labor market has "cooled down" and how "fewer job vacancies and a tougher economic climate naturally make employees more cautious about switching jobs." Despite remaining resilient, the 21-member eurozone's labor market is projected to grow more slowly this year, at 0.6% compared with 0.7% in 2025, according to the European Central Bank (ECB). Although that drop seems tiny, each 0.1 percentage point difference amounts to about 163,000 fewer new jobs being created. Just three years ago, the eurozone created some 2.76 million new jobs while growing at a robust rate of 1.7%. Migration has also played a major role in shaping Europe's labor supply, helping to ease acute worker shortages and support job growth in many countries. However, net migration is now stabilizing or falling. In Germany, more than one in three companies plans to cut jobs this year, according to the Cologne-based IW economic think tank. The Bank of France expects French unemployment to climb to 7.8%, while in the UK, two-thirds of economists questioned by The Times newspaper think unemployment could rise to as high as 5.5% from the current 5.1%. Unemployment in Poland, the European Union's growing economic powerhouse, is edging higher, reaching 5.6% in November compared to 5% a year earlier. Romania and the Czech Republic are also seeing similar upticks in joblessness. The softening of the labor market has prompted new terms like the Great Hesitation, where companies think twice about hiring and workers are cautious about quitting stressful jobs, and Career Cushioning, quietly preparing a backup plan in case of layoffs.

Read more of this story at Slashdot.

Read the whole story
alvinashcraft
2 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Kubernetes ConfigMap Revisions with Pulumi


ConfigMaps in Kubernetes don’t have built-in revision support, which can create challenges when deploying applications with canary strategies. When using Argo Rollouts with AWS Spot instances, ConfigMap deletions during canary deployments can cause older pods to fail when they try to reload configuration. We solved this by implementing a custom ConfigMap revision system using Pulumi’s ConfigMapPatch and Kubernetes owner references.

The Problem

When deploying applications to Kubernetes using canary strategies with Argo Rollouts, we encountered a specific challenge:

  1. Pulumi ConfigMap replacement behavior: By default, when a ConfigMap's data changes, Pulumi may replace it rather than update it in place, which for auto-named ConfigMaps results in a new generated name (suffix).
  2. Canary deployment issues: During canary deployments, the old ConfigMap gets deleted, and older pods (especially on AWS Spot instances, which can be replaced mid-canary) may fail when they try to reload configuration.
  3. No native revision support: Neither Kubernetes nor Pulumi natively supports ConfigMap revisions the way Kubernetes does for Deployments.
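
Point 1 can be illustrated with a small, dependency-free TypeScript sketch. The `autoName` and `simpleHash` helpers are purely illustrative (Pulumi's real auto-naming appends a random suffix on replacement, not a content hash), but the effect is the same: changing the data yields a new name, and anything still referencing the old name breaks once the old object is deleted.

```typescript
// Illustrative simulation of auto-naming: the generated name changes
// whenever the resource's data changes. A content hash is used here only
// to make the example deterministic; Pulumi actually uses a random suffix.
function simpleHash(input: string): string {
  let h = 0;
  for (const ch of input) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return h.toString(16);
}

function autoName(base: string, data: Record<string, string>): string {
  return `${base}-${simpleHash(JSON.stringify(data))}`;
}

const v1 = autoName("app-config", { "app.properties": "key=value" });
const v2 = autoName("app-config", { "app.properties": "key=newValue" });

// The two names differ, so a pod that mounted the first ConfigMap name can
// no longer resolve it once the replacement deletes the old object.
console.log(v1, v2);
```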

The solution: ConfigMap revisions with owner references

Our solution leverages Kubernetes’ garbage collection mechanism by using owner references to tie ConfigMaps to ReplicaSets created during canary deployments.

Key components

  1. Pulumi’s ConfigMapPatch: Patches existing ConfigMaps with owner references
  2. ReplicaSet Owner References: Links ConfigMaps to ReplicaSets for automatic cleanup
  3. Kubernetes Garbage Collection: Automatically cleans up ConfigMaps when ReplicaSets are deleted
  4. Retain on Delete: Protects ConfigMaps from immediate deletion during Pulumi updates
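
The owner reference that ties a ConfigMap to a ReplicaSet is a plain Kubernetes metadata structure. The following self-contained sketch (the interface and sample values are illustrative, not part of the component above) shows how one is built from ReplicaSet metadata, and why `controller` and `blockOwnerDeletion` are both `false`: the ReplicaSet does not manage the ConfigMap, and the ConfigMap must never block the ReplicaSet's own deletion.

```typescript
// Minimal shape of a Kubernetes OwnerReference as used for ConfigMap cleanup.
interface OwnerReference {
  apiVersion: string;
  kind: string;
  name: string;
  uid: string;
  controller: boolean;
  blockOwnerDeletion: boolean;
}

// Build a non-controller owner reference from ReplicaSet metadata. When every
// owner listed on a ConfigMap has been deleted, the Kubernetes garbage
// collector deletes the ConfigMap as well.
function replicaSetOwnerRef(meta: { name: string; uid: string }): OwnerReference {
  return {
    apiVersion: "apps/v1",
    kind: "ReplicaSet",
    name: meta.name,
    uid: meta.uid,
    controller: false,         // the ReplicaSet does not manage this ConfigMap
    blockOwnerDeletion: false, // never block ReplicaSet deletion on the ConfigMap
  };
}

const ref = replicaSetOwnerRef({ name: "my-app-6d4f9c", uid: "0000-1111" });
console.log(ref.kind, ref.name); // "ReplicaSet my-app-6d4f9c"
```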

Implementation

Here’s how we implemented this solution in our rollout component:

import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";
import * as k8sClient from "@kubernetes/client-node";

interface RolloutComponentArgs {
  namespace: string;
  configMapPatch?: boolean;
  kubeconfig: pulumi.Output<any>;
  configMapName: pulumi.Output<string>;
  rolloutSpec: k8s.types.input.apiextensions.CustomResourceArgs["spec"];
}

export class ConfigMapRevisionRollout extends pulumi.ComponentResource {
  public readonly rollout: k8s.apiextensions.CustomResource;

  constructor(
    name: string,
    args: RolloutComponentArgs,
    opts?: pulumi.ComponentResourceOptions
  ) {
    super("pulumi:component:ConfigMapRevisionRollout", name, {}, opts);

    // Create the Argo Rollout using CustomResource
    this.rollout = new k8s.apiextensions.CustomResource(
      `${name}-rollout`,
      {
        apiVersion: "argoproj.io/v1alpha1",
        kind: "Rollout",
        metadata: {
          name: name,
          namespace: args.namespace,
        },
        spec: args.rolloutSpec,
      },
      { parent: this, ...opts }
    );

    // Apply ConfigMap revision patching if enabled
    if (args.configMapPatch) {
      this.setupConfigMapRevisions(name, args);
    }

    this.registerOutputs({
      rollout: this.rollout,
    });
  }

  private setupConfigMapRevisions(name: string, args: RolloutComponentArgs): void {
    pulumi
      .all([args.kubeconfig, args.configMapName])
      .apply(async ([kubeconfig, configMapName]) => {
        try {
          // Create Server-Side Apply enabled provider
          const ssaProvider = new k8s.Provider(`${name}-ssa-provider`, {
            kubeconfig: JSON.stringify(kubeconfig),
            enableServerSideApply: true,
          });

          // Wait for rollout to stabilize and create ReplicaSets
          await this.waitForRolloutStabilization();

          // Get ReplicaSets associated with this rollout
          const replicaSets = await this.getAssociatedReplicaSets(
            args.namespace,
            configMapName,
            kubeconfig
          );

          if (replicaSets.length === 0) {
            pulumi.log.warn("No ReplicaSets found for ConfigMap patching");
            return;
          }

          // Create owner references for the ConfigMap
          const ownerReferences = replicaSets.map(rs => ({
            apiVersion: "apps/v1",
            kind: "ReplicaSet",
            name: rs.metadata?.name!,
            uid: rs.metadata?.uid!,
            controller: false,
            blockOwnerDeletion: false,
          }));

          // Patch the ConfigMap with owner references
          new k8s.core.v1.ConfigMapPatch(
            `${configMapName}-revision-patch`,
            {
              metadata: {
                name: configMapName,
                namespace: args.namespace,
                ownerReferences: ownerReferences,
                annotations: {
                  "pulumi.com/patchForce": "true",
                  "configmap.kubernetes.io/revision-managed": "true",
                },
              },
            },
            {
              provider: ssaProvider,
              retainOnDelete: true,
              parent: this,
            }
          );

          pulumi.log.info(
            `Successfully patched ConfigMap ${configMapName} with ${ownerReferences.length} owner references`
          );
        } catch (error) {
          pulumi.log.error(`Failed to setup ConfigMap revisions: ${error}`);
          throw error;
        }
      });
  }

  private async waitForRolloutStabilization(): Promise<void> {
    // Wait for rollout to create and stabilize ReplicaSets
    // In production, consider using a more sophisticated polling mechanism
    await new Promise(resolve => setTimeout(resolve, 10000));
  }

  private async getAssociatedReplicaSets(
    namespace: string,
    configMapName: string,
    kubeconfig: any
  ): Promise<k8sClient.V1ReplicaSet[]> {
    const kc = new k8sClient.KubeConfig();
    kc.loadFromString(JSON.stringify(kubeconfig));

    const appsV1Api = kc.makeApiClient(k8sClient.AppsV1Api);

    try {
      const response = await appsV1Api.listNamespacedReplicaSet(
        namespace,
        undefined, // pretty
        false, // allowWatchBookmarks
        undefined, // continue
        undefined, // fieldSelector
        `configMap=${configMapName}` // labelSelector
      );

      return response.body.items;
    } catch (error) {
      pulumi.log.error(`Failed to list ReplicaSets: ${error}`);
      return [];
    }
  }
}

How it works

  1. Rollout Creation: When a new rollout is created, Argo Rollouts generates new ReplicaSets for the canary deployment
  2. ConfigMap Patching: Our code waits for the ReplicaSet creation, then patches the ConfigMap with owner references pointing to these ReplicaSets
  3. Garbage Collection: Kubernetes automatically tracks the relationship between ConfigMaps and ReplicaSets
  4. Automatic Cleanup: When ReplicaSets are cleaned up (based on the default 10 revision history), their associated ConfigMaps are also garbage collected
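
The cleanup condition in steps 3 and 4 can be sketched as a pure function: a ConfigMap becomes eligible for garbage collection once every ReplicaSet named in its `ownerReferences` has been deleted. This is an illustrative model of the garbage collector's decision, not the real controller code, and the names below are made up for the example.

```typescript
interface OwnedConfigMap {
  name: string;
  ownerUids: string[]; // uids taken from metadata.ownerReferences
}

// A ConfigMap is collectable once none of the owner uids it references
// correspond to a live ReplicaSet.
function isCollectable(cm: OwnedConfigMap, liveReplicaSetUids: Set<string>): boolean {
  return cm.ownerUids.every(uid => !liveReplicaSetUids.has(uid));
}

const live = new Set(["rs-2", "rs-3"]); // ReplicaSets kept by the revision history limit
const oldCm = { name: "app-config-rev1", ownerUids: ["rs-1"] };
const currentCm = { name: "app-config-rev2", ownerUids: ["rs-2"] };

console.log(isCollectable(oldCm, live));     // true: its ReplicaSet was cleaned up
console.log(isCollectable(currentCm, live)); // false: its ReplicaSet still exists
```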

Benefits

  • Revision Control: ConfigMaps now have revision-like behavior tied to ReplicaSet history
  • Automatic Cleanup: No manual intervention needed for ConfigMap cleanup
  • Canary Safety: Old ConfigMaps remain available during canary deployments until ReplicaSets are cleaned up
  • Spot Instance Resilience: Pods that get replaced during canary deployments can still access their original ConfigMaps

Configuration options

interface RolloutComponentArgs {
  namespace: string;
  configMapPatch?: boolean;
  kubeconfig: pulumi.Output<any>;
  configMapName: pulumi.Output<string>;
  rolloutSpec: k8s.types.input.apiextensions.CustomResourceArgs["spec"];
}

To enable this feature in your rollout:

import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Create a Kubernetes provider from your cluster's kubeconfig
const cluster = new k8s.Provider("k8s-provider", {
  kubeconfig: clusterKubeconfig,
});

// Create ConfigMap
const appConfig = new k8s.core.v1.ConfigMap("app-config", {
  metadata: {
    name: "my-app-config",
    namespace: "default",
    labels: {
      app: "my-app",
      configMap: "my-app-config", // Important for ReplicaSet selection
    },
  },
  data: {
    "app.properties": "key=value\nother=setting",
  },
}, { provider: cluster });

// Create rollout with ConfigMap revision management
const rollout = new ConfigMapRevisionRollout("my-app", {
  namespace: "default",
  configMapPatch: true,
  kubeconfig: clusterKubeconfig,
  configMapName: appConfig.metadata.name,
  rolloutSpec: {
    replicas: 3,
    selector: {
      matchLabels: { app: "my-app" },
    },
    template: {
      metadata: {
        labels: { app: "my-app" },
      },
      spec: {
        containers: [{
          name: "app",
          image: "nginx:latest",
          volumeMounts: [{
            name: "config",
            mountPath: "/etc/config",
          }],
        }],
        volumes: [{
          name: "config",
          configMap: {
            name: appConfig.metadata.name,
          },
        }],
      },
    },
    strategy: {
      canary: {
        maxSurge: 1,
        maxUnavailable: 0,
        steps: [
          { setWeight: 20 },
          { pause: { duration: "1m" } },
          { setWeight: 50 },
          { pause: { duration: "2m" } },
        ],
      },
    },
  },
});

Key dependencies

The solution uses several key packages:

  • @pulumi/kubernetes: For Kubernetes resources and ConfigMapPatch
  • @kubernetes/client-node: For direct Kubernetes API access
  • Argo Rollouts CRDs installed in your cluster

Conclusion

This approach gives us ConfigMap revision functionality that doesn’t exist natively in Kubernetes or Pulumi. By leveraging Kubernetes’ garbage collection mechanism and Pulumi’s patching capabilities, we created a robust solution for managing ConfigMap lifecycles during canary deployments.

The solution is particularly valuable when:

  • Running canary deployments with Argo Rollouts
  • Using AWS Spot instances that can be replaced during deployments
  • Needing automatic cleanup of old ConfigMaps without manual intervention
  • Wanting to maintain configuration availability for older pods during deployment transitions

This pattern can be extended to other scenarios where you need revision control for Kubernetes resources that don’t natively support it.


Even Linus Torvalds Is Vibe Coding Now

Linus Torvalds has started experimenting with vibe coding, using Google's Antigravity AI to generate parts of a small hobby project called AudioNoise. "In doing so, he has become the highest-profile programmer yet to adopt this rapidly spreading, and often mocked, AI-driven programming," writes ZDNet's Steven Vaughan-Nichols. From the report: [I]t's a trivial program called AudioNoise -- a recent side project focused on digital audio effects and signal processing. He started it after building physical guitar pedals, GuitarPedal, to learn about audio circuits. He now gives them as gifts to kernel developers and, recently, to Bill Gates. While Torvalds hand-coded the C components, he turned to Antigravity for a Python-based audio sample visualizer. He openly acknowledges that he leans on online snippets when working in languages he knows less well. Who doesn't? [...] In the project's README file, Torvalds wrote that "the Python visualizer tool has been basically written by vibe-coding," describing how he "cut out the middle-man -- me -- and just used Google Antigravity to do the audio sample visualiser." The remark underlines that the AI-generated code met his expectations well enough that he did not feel the need to manually re-implement it. Further reading: Linus Torvalds Says Vibe Coding is Fine For Getting Started, 'Horrible Idea' For Maintenance

Read more of this story at Slashdot.


Meta plans to lay off hundreds of metaverse employees this week


Meta's Reality Labs team is expected to lose around 10 percent of its staff, with layoffs concentrated on the division's metaverse employees, as reported by The New York Times. The layoffs are apparently a side effect of Meta's AI ambitions, which are pulling focus away from its virtual reality division.

According to the Times, Meta's chief technology officer, Andrew Bosworth, called a meeting for Wednesday that he "urged staff to attend in person," saying it will be the "most important" meeting of the year. Bosworth oversees the Reality Labs division, which employs about 15,000 people. Unfortunately, layoffs to Meta's VR team may not come …

Read the full story at The Verge.


Microsoft Pulls the Plug On Its Free, Two-Decade-Old Windows Deployment Toolkit

Microsoft has abruptly retired the Microsoft Deployment Toolkit, a free platform that IT administrators have relied on to deploy Windows operating systems and applications for more than two decades. The retirement, reports The Register, came with "immediate" notice, meaning no more fixes, support, security patches, or updates, and the download packages may be removed from official distribution channels.

Read more of this story at Slashdot.


Why It Is Difficult To Resize Windows on MacOS 26

The dramatically larger corner radius Apple introduced in macOS 26 Tahoe has pushed the invisible resize hit target for windows mostly outside the window itself -- roughly 75% of the 19x19 pixel clickable area now lies beyond the visible boundary. In previous macOS versions, about 62% of that resize target would fall inside the window corner. Apple removed the visible resize grippy-strip from window corners in Mac OS X 10.7 Lion in July 2011. The visual indicator had served two purposes: showing users where to click and signaling whether a window could be resized at all. Users since then have relied on muscle memory and the reasonable assumption that clicking near the inside corner would initiate a resize. Daring Fireball's John Gruber's advice: don't upgrade to macOS 26, or downgrade if you already have. He wrote Monday: "Why suffer willingly with a user interface that presents you with absurdities like window resizing affordances that are 75 percent outside the window?"

Read more of this story at Slashdot.
