In Kubernetes v1.36, Declarative Validation for Kubernetes native types has reached General Availability (GA).
For users, this means more reliable, predictable, and better-documented APIs. By moving to a declarative model, the project also unlocks the future ability to publish validation rules via OpenAPI and integrate with ecosystem tools like Kubebuilder. For contributors and ecosystem developers, this replaces thousands of lines of handwritten validation code with a unified, maintainable framework.
This post covers why this migration was necessary, how the declarative validation framework works, and what new capabilities come with this GA release.
For years, the validation of Kubernetes native APIs relied almost entirely on handwritten Go code. If a field needed to be bounded by a minimum value, or if two fields needed to be mutually exclusive, developers had to write explicit Go functions to enforce those constraints.
As the Kubernetes API surface expanded, this approach led to systemic issues: validation rules were buried in thousands of lines of handwritten Go code, invisible to documentation and tooling, and increasingly costly to keep consistent across the API.
The solution proposed by SIG API Machinery was Declarative Validation: using Interface Definition Language (IDL) tags (specifically +k8s: marker tags) directly within types.go files to define validation rules.
validation-gen

At the core of the declarative validation feature is a new code generator called validation-gen. Just as Kubernetes uses generators for deep copies, conversions, and defaulting, validation-gen parses +k8s: tags and automatically generates the corresponding Go validation functions.
These generated functions are then registered seamlessly with the API scheme. The generator is designed as an extensible framework, allowing developers to plug in new "Validators" by describing the tags they parse and the Go logic they should produce.
The declarative validation framework introduces a comprehensive suite of marker tags that provide rich validation capabilities highly optimized for Go types. For a full list of supported tags, check out the official documentation. Here is a catalog of some of the most common tags you will now see in the Kubernetes codebase:
- +k8s:optional, +k8s:required
- +k8s:minimum=0, +k8s:maximum=100, +k8s:maxLength=16, +k8s:format=k8s-short-name
- +k8s:listType=map, +k8s:listMapKey=type
- +k8s:unionMember, +k8s:unionDiscriminator
- +k8s:immutable, +k8s:update=[NoSet, NoModify, NoClear]

Example Usage:
type ReplicationControllerSpec struct {
// +k8s:optional
// +k8s:minimum=0
Replicas *int32 `json:"replicas,omitempty"`
}
By placing these tags directly above the field definitions, the constraints are self-documenting and immediately visible to anyone reading the type definitions.
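To make the connection to validation-gen concrete, here is a minimal, hand-written sketch of the kind of function these two tags conceptually produce for the type above. The function name, signature, and error message are illustrative assumptions, not the actual generated code; the real generator's output is registered with the API scheme automatically:

import "k8s.io/apimachinery/pkg/util/validation/field"

// Sketch only: real generated names, signatures, and messages differ.
func Validate_ReplicationControllerSpec(obj *ReplicationControllerSpec, fldPath *field.Path) field.ErrorList {
    var errs field.ErrorList
    // From +k8s:optional: a nil pointer is allowed, so validate only set values.
    if obj.Replicas != nil {
        // From +k8s:minimum=0: reject negative values.
        if *obj.Replicas < 0 {
            errs = append(errs, field.Invalid(fldPath.Child("replicas"), *obj.Replicas, "must be greater than or equal to 0"))
        }
    }
    return errs
}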
One of the most substantial outcomes of this work is that validation ratcheting is now a standard, ambient part of the API. In the past, if we needed to tighten validation, we had to first add handwritten ratcheting code, wait a release, and then tighten the validation to avoid breaking existing objects.
With declarative validation, this safety mechanism is built-in. If a user updates an existing object, the validation framework compares the incoming object with the oldObject. If a specific field's value is semantically equivalent to its prior state (i.e., the user didn't change it), the new validation rule is bypassed. This "ambient ratcheting" means we can loosen or tighten validation immediately and in the least disruptive way possible.
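The behavior can be pictured with a small hypothetical helper; the real framework performs this old/new comparison generically for every tagged field, so the function below is a sketch of the idea, not framework code:

// Hypothetical helper, using the same field package as the sketch above.
// On update, a value the user did not change bypasses (possibly tightened) rules.
func validateReplicasOnUpdate(newVal, oldVal *int32, fldPath *field.Path) field.ErrorList {
    // Semantically unchanged: skip validation so existing objects keep working.
    if (newVal == nil && oldVal == nil) || (newVal != nil && oldVal != nil && *newVal == *oldVal) {
        return nil
    }
    var errs field.ErrorList
    // The rule (here +k8s:minimum=0) applies only to changed values.
    if newVal != nil && *newVal < 0 {
        errs = append(errs, field.Invalid(fldPath, *newVal, "must be greater than or equal to 0"))
    }
    return errs
}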
kube-api-linter

Reaching GA required absolute confidence in the generated code, but our vision extends beyond just validation. Declarative validation is a key part of a comprehensive approach to making API review easier, more consistent, and highly scalable.
By moving validation rules out of opaque Go functions and into structured markers, we are empowering tools like kube-api-linter. This linter can now statically analyze API types and enforce API conventions automatically, significantly reducing the manual burden on SIG API Machinery reviewers and providing immediate feedback to contributors.
With the release of Kubernetes v1.36, Declarative Validation graduates to General Availability (GA). As a stable feature, the associated DeclarativeValidation feature gate is now enabled by default. It has become the primary mechanism for adding new validation rules to Kubernetes native types.
Looking forward, the project is committed to adopting declarative validation even more extensively. This includes migrating the remaining legacy handwritten validation code for established APIs and requiring its use for all new APIs and new fields. This ongoing transition will continue to shrink the codebase's complexity while enhancing the consistency and reliability of the entire Kubernetes API surface.
Beyond the core migration, declarative validation also unlocks an exciting future for the broader ecosystem. Because validation rules are now defined as structured markers rather than opaque Go code, they can be parsed and reflected in the OpenAPI schemas published by the Kubernetes API server. This paves the way for tools like kubectl, client libraries, and IDEs to perform rich client-side validation before a request is ever sent to the cluster. The same declarative framework can also be consumed by ecosystem tools like Kubebuilder, enabling a more consistent developer experience for authors of Custom Resource Definitions (CRDs).
The migration to declarative validation is an ongoing effort. While the framework itself is GA, there is still work to be done migrating older APIs to the new declarative format.
If you are interested in contributing to the core of Kubernetes API Machinery, this is a fantastic place to start. Check out the validation-gen documentation, look for issues tagged with sig/api-machinery, and join the conversation in the #sig-api-machinery and #sig-api-machinery-dev-tools channels on Kubernetes Slack (for an invitation, visit https://slack.k8s.io/). You can also attend the SIG API Machinery meetings to get involved directly.
A huge thank you to everyone who helped bring this feature to GA, and to the many others across the Kubernetes community who contributed along the way.
Welcome to the declarative future of Kubernetes validation!
When you ask Copilot generic questions (“How should we log errors?” “What’s our API versioning approach?”), the model will often respond with reasonable defaults. But reasonable defaults are not the same as your standards.
Copilot Spaces solve the context problem by allowing you to attach a curated set of sources (files, folders, repos, PRs/issues, uploads, free text) plus explicit instructions—so Copilot answers in the context of your team’s rules and artifacts. Spaces can be shared with your team and stay updated as the underlying GitHub content changes—so your “best practices coach” stays evergreen.
Here’s the mental model:
Engineering Knowledge Base Repo: A dedicated repo containing your standards as Markdown (coding style, architecture decisions, security rules, testing conventions, examples, templates).
Copilot Space: “Engineering Standards Coach”: A Space that attaches the knowledge base repo (or key folders/files within it), optionally your main application repo(s), and a short set of “rules of engagement” (instructions).
In-repo reinforcement (optional but powerful): Custom instruction files (repo-wide + path-specific) and prompt files (slash commands) inside your production repos to standardize behavior and workflows.
Create a repo such as engineering-knowledge-base.
A practical starter structure:
engineering-knowledge-base/
    README.md
    standards/
        coding-style.md
        logging.md
        error-handling.md
        performance.md
    security/
        secure-coding.md
        secrets.md
        threat-modeling.md
    architecture/
        overview.md
        adr/
            0001-service-boundaries.md
            0002-api-versioning.md
    testing/
        unit-testing.md
        integration-testing.md
        contract-testing.md
    templates/
        pr-review-checklist.md
        api-design-checklist.md
        definition-of-done.md
Tip: Keep these docs opinionated, concrete, and example-heavy—Copilot works best when it can point to specific patterns rather than abstract principles.
Create a Space, name it, and choose an owner (personal or organization). Then add two types of context: Instructions (how Copilot should behave) and Sources (your actual code and docs).
Example instructions you can paste:
You are the Engineering Standards Coach for this organization.
Goals:
- Recommend solutions that follow our standards in the attached knowledge base.
- When proposing code, align with our logging, error-handling, security, and testing guidelines.
- When uncertain, ask for the missing repo context or point to the exact standard that applies.
Output format:
- Start with the standard(s) you are applying (with a link or filename reference).
- Then provide the recommended implementation.
- Include a lightweight checklist for reviewers.
Attach the knowledge base repo (or key folders/files within it) as a source and, optionally, your main application repo(s).
Spaces are excellent for curated context and team-wide “ask me anything about our standards.” But you can reinforce consistency directly inside each repo by adding custom instruction files.
Create: your-app-repo/.github/copilot-instructions.md
# Repository Copilot Instructions
## Tech stack
- Language: TypeScript (strict)
- Framework: Node.js + Express
- Testing: Jest
- Lint/format: ESLint + Prettier
## Engineering rules
- Use structured logging as defined in /docs/logging.md
- Never log secrets or raw tokens
- Prefer small, composable functions
- All new endpoints must include: input validation, authz checks, unit tests, and consistent error handling
## Build & test
- Install: npm ci
- Test: npm test
- Lint: npm run lint
Create: your-app-repo/.github/instructions/security.instructions.md
---
applyTo: "**/*.ts"
---
# Security rules (TypeScript)
- Never introduce dynamic SQL construction; use parameterized queries only.
- Any new external HTTP call must enforce timeouts and retry policy.
- Any auth logic must include negative tests.
To standardize repeatable workflows like code review, test scaffolding, or endpoint scaffolding, create prompt files (slash commands) as .prompt.md files—commonly in .github/prompts/. Engineers invoke them manually in chat by typing / followed by the prompt file's name.
Create: your-app-repo/.github/prompts/standards-code-review.prompt.md
---
description: Review code against our org standards (security, perf, style, tests)
---
You are a senior engineer performing a standards-based review.
Use these checks:
1) Security: input validation, authz, secrets, injection risks
2) Reliability: error handling, retries/timeouts, edge cases
3) Observability: structured logs, metrics, tracing where relevant
4) Testing: required coverage, negative tests, naming conventions
5) Style: follow repository rules in .github/copilot-instructions.md
Output:
- Summary (2-3 lines)
- Issues (severity: BLOCKER/REQUIRED/SUGGESTION)
- Suggested patch snippets (only where confident)
- “Ready to merge?” verdict
Now any engineer can type /standards-code-review and get the same structured output every time, without rewriting the prompt.
Ask inside the Space: “Summarize our service architecture and coding conventions for onboarding. Link the key docs.”
Ask in the Space: “We’re adding endpoint X. Generate a plan that follows our API versioning ADR and error-handling standard.”
In the repo, run the prompt file: /standards-code-review.
About GitHub Copilot Spaces: https://docs.github.com/en/copilot/concepts/context/spaces
Creating GitHub Copilot Spaces: https://docs.github.com/en/copilot/how-tos/provide-context/use-copilot-spaces/create-copilot-spaces
Adding custom instructions for GitHub Copilot: https://docs.github.com/en/copilot/how-tos/copilot-cli/customize-copilot/add-custom-instructions
Use custom instructions in VS Code: https://code.visualstudio.com/docs/copilot/customization/custom-instructions
Use prompt files in VS Code: https://code.visualstudio.com/docs/copilot/customization/prompt-files
Once you implement this pattern, you get a virtuous cycle: teams encode standards as Markdown; Copilot Spaces ground answers in those standards; prompt files and instruction files standardize execution; and code reviews shift from style policing to design and correctness.
Raspberry Pi Connect lets you access your Raspberry Pi devices remotely from anywhere, straight from a web browser. Since we last wrote about Connect, we’ve shipped three updates that we think will make it noticeably more useful — particularly for the growing number of teams using Connect for Organisations to manage fleets of devices.

Once you have more than a handful of Raspberry Pi devices in an organisation, finding the right one quickly starts to matter. With these new updates, you can now apply tags to any device — for example, by location (london, cambridge), by environment (production, staging), or by what the device actually does (point-of-sale, kiosk). Tags appear underneath the device name on both the device page and the device list, and any administrator can add or remove them from the device’s Settings page, or when first linking a device to your account.

The search bar at the top of the device list now combines free-text search with structured filters. Type a qualifier followed by a colon — model:5, memory:4gb, os:raspios-13, or tag:production — and Connect will narrow the list as you type. You can even stack filters in a single query: model:5 tag:production dashboard will find every Raspberry Pi 5 tagged with production that has “dashboard” in its name. Selecting any tag in the device list adds it to your search instantly.
Tags are also exposed through the Management API, meaning you can apply them when you create an authentication key during provisioning — handy if you’re scripting the roll-out of a new batch of devices.

Connect for Organisations now lets administrators require all members to use two-factor authentication (2FA) on their Raspberry Pi ID. It’s a single switch in the new Authentication section of the organisation’s Settings tab, and it adds a meaningful layer of protection against compromised member accounts being used to reach your devices.
Turning it on starts a 14-day grace period. During that window, members without 2FA see a banner showing how long they have left and a link to enable it on their Raspberry Pi ID; everyone else carries on as normal. When the grace period ends, any member still without 2FA is blocked from the organisation until they enable it. They won’t be able to access devices or other organisation resources in the meantime.

If you ever want to relax the requirement, you can turn it off in your organisation settings. (Disabling and re-enabling the requirement resets the grace period, so you can give members another two weeks if you need to.)

Connect’s screen-sharing interface works on phones and tablets as well as desktops, but typing on a touch device was previously only possible if you attached a physical keyboard. The screen-sharing toolbar now includes a dedicated Keyboard toggle alongside the existing buttons for Ctrl, Alt, Esc, and Tab, allowing you to use an on-screen keyboard on mobile devices without any extra hardware attached.
Connect is free for personal use. Connect for Organisations comes with a four-week free trial, after which you’re billed monthly in arrears for the peak number of registered devices. You can find full instructions for tagging and filtering devices, enabling 2FA, and everything else in the Raspberry Pi Connect documentation, or sign in at connect.raspberrypi.com to get started.
What happens when you discover that a book that fundamentally changed how you think is built on a shaky foundation? In today's episode, I share my own struggle with the replication crisis surrounding Daniel Kahneman's *Thinking Fast and Slow*, and I use it as a springboard to talk about a much bigger skill: knowing how to update your beliefs when reality shifts underneath you. This isn't about throwing out science or losing trust in your heroes. It's about developing the muscle to replace old explanations with better ones — a skill that has never been more important for software engineers.
No matter what you're building, SerpApi is the web search API for your needs. If you're building an application that needs real-time search data—whether that's an AI agent, an SEO tool, or a price tracker—SerpApi handles it for you.
● Make an API call and get back clean JSON.
● They handle the proxies, CAPTCHAs, parsing, and all the scraping so you don't have to.
● They support dozens of search engines and platforms, and are trusted by companies like NVIDIA, Adobe, and Shopify.
● If you're building with AI, they even have an official MCP to make getting up and running a simple task.
Get started with a free tier to build and test your application before you commit. Go to serpapi.com.
If you enjoyed this episode and would like me to discuss a question that you have on the show, drop it over at: developertea.com.
If you want to be a part of a supportive community of engineers (non-engineers welcome!) working to improve their lives and careers, join us on the Developer Tea Discord community today!
We are developing a brand new newsletter called The Tea Break! You can be the first in line to receive it by entering your email directly over at developertea.com.
As artificial intelligence reshapes how information is created, accessed, and controlled, a quieter crisis is emerging: the potential loss of the web’s historical record.
In this episode, tech writer Mike Masnick, Mark Graham of the Internet Archive, and public interest tech and media lawyer Kendra Albert come together for a timely conversation on what it means to preserve the web in the age of AI.
As publishers move to block AI scraping, they’re also increasingly restricting access to archiving tools like the Wayback Machine, raising urgent questions about who gets to access the past, and whether it will remain accessible at all. If preserving the web is increasingly treated as a threat, what happens to our collective memory? And what will it take to ensure that knowledge remains accessible in an AI-driven world?