ConfigMaps in Kubernetes don’t have built-in revision support, which can create challenges when deploying applications with canary strategies. When using Argo Rollouts with AWS Spot instances, ConfigMap deletions during canary deployments can cause older pods to fail when they try to reload configuration. We solved this by implementing a custom ConfigMap revision system using Pulumi’s ConfigMapPatch and Kubernetes owner references.
When deploying applications to Kubernetes using canary strategies with Argo Rollouts, we encountered a specific challenge: during a canary rollout, the previous ConfigMap can be deleted while older pods are still running, and those pods fail as soon as they try to reload their configuration. The problem is especially visible on AWS Spot instances, where pods are frequently rescheduled and forced to re-read their config.
Our solution leverages Kubernetes’ garbage collection mechanism by using owner references to tie ConfigMaps to ReplicaSets created during canary deployments.
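Conceptually, the whole mechanism rests on the shape of an ownerReferences entry: once every ReplicaSet listed as an owner of the ConfigMap has been deleted, Kubernetes’ garbage collector deletes the ConfigMap too. Here is the idea in isolation as a minimal sketch; the ReplicaSet name and UID are placeholders rather than values from a real cluster:

// A ConfigMap whose metadata lists ReplicaSets as owners is garbage-collected
// once all of those owners are gone.
const exampleOwnerReference = {
  apiVersion: "apps/v1",
  kind: "ReplicaSet",
  name: "my-app-6d4cf56db6",                   // placeholder ReplicaSet name
  uid: "0a1b2c3d-4e5f-6789-abcd-ef0123456789", // placeholder UID
  controller: false,          // the ReplicaSet does not manage the ConfigMap
  blockOwnerDeletion: false,  // the ConfigMap never blocks ReplicaSet deletion
};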
Here’s how we implemented this solution in our rollout component:
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";
import * as k8sClient from "@kubernetes/client-node";

interface RolloutComponentArgs {
  namespace: string;
  configMapPatch?: boolean;
  kubeconfig: pulumi.Output<any>;
  configMapName: pulumi.Output<string>;
  rolloutSpec: k8s.types.input.apiextensions.CustomResourceArgs["spec"];
}

export class ConfigMapRevisionRollout extends pulumi.ComponentResource {
  public readonly rollout: k8s.apiextensions.CustomResource;

  constructor(
    name: string,
    args: RolloutComponentArgs,
    opts?: pulumi.ComponentResourceOptions
  ) {
    super("pulumi:component:ConfigMapRevisionRollout", name, {}, opts);

    // Create the Argo Rollout using CustomResource
    this.rollout = new k8s.apiextensions.CustomResource(
      `${name}-rollout`,
      {
        apiVersion: "argoproj.io/v1alpha1",
        kind: "Rollout",
        metadata: {
          name: name,
          namespace: args.namespace,
        },
        spec: args.rolloutSpec,
      },
      { parent: this, ...opts }
    );

    // Apply ConfigMap revision patching if enabled
    if (args.configMapPatch) {
      this.setupConfigMapRevisions(name, args);
    }

    this.registerOutputs({
      rollout: this.rollout,
    });
  }

  private setupConfigMapRevisions(name: string, args: RolloutComponentArgs): void {
    pulumi
      .all([args.kubeconfig, args.configMapName])
      .apply(async ([kubeconfig, configMapName]) => {
        try {
          // Create Server-Side Apply enabled provider
          const ssaProvider = new k8s.Provider(`${name}-ssa-provider`, {
            kubeconfig: JSON.stringify(kubeconfig),
            enableServerSideApply: true,
          });

          // Wait for rollout to stabilize and create ReplicaSets
          await this.waitForRolloutStabilization();

          // Get ReplicaSets associated with this rollout
          const replicaSets = await this.getAssociatedReplicaSets(
            args.namespace,
            configMapName,
            kubeconfig
          );

          if (replicaSets.length === 0) {
            pulumi.log.warn("No ReplicaSets found for ConfigMap patching");
            return;
          }

          // Create owner references for the ConfigMap
          const ownerReferences = replicaSets.map(rs => ({
            apiVersion: "apps/v1",
            kind: "ReplicaSet",
            name: rs.metadata?.name!,
            uid: rs.metadata?.uid!,
            controller: false,
            blockOwnerDeletion: false,
          }));

          // Patch the ConfigMap with owner references
          new k8s.core.v1.ConfigMapPatch(
            `${configMapName}-revision-patch`,
            {
              metadata: {
                name: configMapName,
                namespace: args.namespace,
                ownerReferences: ownerReferences,
                annotations: {
                  "pulumi.com/patchForce": "true",
                  "configmap.kubernetes.io/revision-managed": "true",
                },
              },
            },
            {
              provider: ssaProvider,
              retainOnDelete: true,
              parent: this,
            }
          );

          pulumi.log.info(
            `Successfully patched ConfigMap ${configMapName} with ${ownerReferences.length} owner references`
          );
        } catch (error) {
          pulumi.log.error(`Failed to setup ConfigMap revisions: ${error}`);
          throw error;
        }
      });
  }

  private async waitForRolloutStabilization(): Promise<void> {
    // Wait for rollout to create and stabilize ReplicaSets
    // In production, consider using a more sophisticated polling mechanism
    await new Promise(resolve => setTimeout(resolve, 10000));
  }

  private async getAssociatedReplicaSets(
    namespace: string,
    configMapName: string,
    kubeconfig: any
  ): Promise<k8sClient.V1ReplicaSet[]> {
    const kc = new k8sClient.KubeConfig();
    kc.loadFromString(JSON.stringify(kubeconfig));
    const appsV1Api = kc.makeApiClient(k8sClient.AppsV1Api);

    try {
      const response = await appsV1Api.listNamespacedReplicaSet(
        namespace,
        undefined, // pretty
        false,     // allowWatchBookmarks
        undefined, // continue
        undefined, // fieldSelector
        `configMap=${configMapName}` // labelSelector
      );
      return response.body.items;
    } catch (error) {
      pulumi.log.error(`Failed to list ReplicaSets: ${error}`);
      return [];
    }
  }
}
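The fixed 10-second sleep in waitForRolloutStabilization keeps the example short. If you want something sturdier, a polling loop along these lines could stand in for it; this is only a sketch, and it assumes the rollout’s ReplicaSets carry the same configMap=<name> label the component already selects on, with arbitrary timeout and interval values:

import * as k8sClient from "@kubernetes/client-node";

// Poll until at least one ReplicaSet matching the label selector exists,
// or give up after the timeout.
async function waitForReplicaSets(
  kc: k8sClient.KubeConfig,
  namespace: string,
  labelSelector: string,
  timeoutMs = 120000,
  intervalMs = 5000
): Promise<k8sClient.V1ReplicaSet[]> {
  const appsV1Api = kc.makeApiClient(k8sClient.AppsV1Api);
  const deadline = Date.now() + timeoutMs;

  while (Date.now() < deadline) {
    // Same positional call style as getAssociatedReplicaSets above
    const response = await appsV1Api.listNamespacedReplicaSet(
      namespace,
      undefined, // pretty
      undefined, // allowWatchBookmarks
      undefined, // continue
      undefined, // fieldSelector
      labelSelector
    );
    if (response.body.items.length > 0) {
      return response.body.items;
    }
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }

  throw new Error(`Timed out waiting for ReplicaSets matching "${labelSelector}"`);
}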
To enable this feature in your rollout:
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Create a Kubernetes provider for your EKS cluster
// (clusterKubeconfig comes from your cluster resource or stack configuration)
const cluster = new k8s.Provider("k8s-provider", {
  kubeconfig: clusterKubeconfig,
});

// Create ConfigMap
const appConfig = new k8s.core.v1.ConfigMap("app-config", {
  metadata: {
    name: "my-app-config",
    namespace: "default",
    labels: {
      app: "my-app",
      configMap: "my-app-config",
    },
  },
  data: {
    "app.properties": "key=value\nother=setting",
  },
}, { provider: cluster });

// Create rollout with ConfigMap revision management
const rollout = new ConfigMapRevisionRollout("my-app", {
  namespace: "default",
  configMapPatch: true,
  kubeconfig: clusterKubeconfig,
  configMapName: appConfig.metadata.name,
  rolloutSpec: {
    replicas: 3,
    selector: {
      matchLabels: { app: "my-app" },
    },
    template: {
      metadata: {
        labels: {
          app: "my-app",
          // Important for ReplicaSet selection: the rollout's ReplicaSets
          // inherit this label, which the component's labelSelector matches on
          configMap: "my-app-config",
        },
      },
      spec: {
        containers: [{
          name: "app",
          image: "nginx:latest",
          volumeMounts: [{
            name: "config",
            mountPath: "/etc/config",
          }],
        }],
        volumes: [{
          name: "config",
          configMap: {
            name: appConfig.metadata.name,
          },
        }],
      },
    },
    strategy: {
      canary: {
        maxSurge: 1,
        maxUnavailable: 0,
        steps: [
          { setWeight: 20 },
          { pause: { duration: "1m" } },
          { setWeight: 50 },
          { pause: { duration: "2m" } },
        ],
      },
    },
  },
});
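Once the rollout has progressed and the patch has been applied, you can read the ConfigMap back to confirm the owner references landed. A small verification sketch using the same @kubernetes/client-node package; it assumes a kubeconfig in the default location and the names from the example above:

import * as k8sClient from "@kubernetes/client-node";

// Read the patched ConfigMap and print its owning ReplicaSets
async function printConfigMapOwners(): Promise<void> {
  const kc = new k8sClient.KubeConfig();
  kc.loadFromDefault();
  const coreV1Api = kc.makeApiClient(k8sClient.CoreV1Api);

  const response = await coreV1Api.readNamespacedConfigMap("my-app-config", "default");
  const owners = response.body.metadata?.ownerReferences ?? [];
  for (const owner of owners) {
    console.log(`${owner.kind}/${owner.name} (${owner.uid})`);
  }
}

printConfigMapOwners().catch(err => console.error(err));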
The solution uses several key packages:
@pulumi/kubernetes: For Kubernetes resources and ConfigMapPatch
@kubernetes/client-node: For direct Kubernetes API access

This approach gives us ConfigMap revision functionality that doesn’t exist natively in Kubernetes or Pulumi. By leveraging Kubernetes’ garbage collection mechanism and Pulumi’s patching capabilities, we created a robust solution for managing ConfigMap lifecycles during canary deployments.
The solution is particularly valuable when you run canary deployments with Argo Rollouts on AWS Spot instances, where node interruptions frequently reschedule pods and older pods still need the configuration revision they started with.
This pattern can be extended to other scenarios where you need revision control for Kubernetes resources that don’t natively support it.
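For example, the same owner-reference patch works for Secrets. The sketch below is an illustration rather than part of our component: the helper name is hypothetical, and the owner references and Server-Side Apply provider are expected to come from the same places the ConfigMap patch gets them.

import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Same pattern as the ConfigMapPatch above, applied to a Secret
function patchSecretWithOwners(
  secretName: string,
  namespace: string,
  ownerReferences: k8s.types.input.meta.v1.OwnerReference[],
  ssaProvider: k8s.Provider,
  parent?: pulumi.Resource
): k8s.core.v1.SecretPatch {
  return new k8s.core.v1.SecretPatch(`${secretName}-revision-patch`, {
    metadata: {
      name: secretName,
      namespace: namespace,
      ownerReferences: ownerReferences,
      annotations: {
        "pulumi.com/patchForce": "true",
      },
    },
  }, {
    provider: ssaProvider,
    retainOnDelete: true,
    parent: parent,
  });
}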