Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

OAuth 2.1 Made Simple: The Only Flows You Need


Back in 2019, Dominick Baier, Duende co-founder and security subject-matter expert, wrote a prophetic post called "Two is the Magic Number", a riff on De La Soul's "Three is the Magic Number", arguing that you only needed two OAuth flows to cover every real-world scenario. At the time, it was a bold simplification. OAuth 2.0 had shipped with a sprawl of grant types (Implicit, Resource Owner Password Credentials, Authorization Code without PKCE, Client Credentials), and the ecosystem dutifully built tutorials for all of them. The result was confusion: developers picked the wrong flow, shipped insecure apps, and blamed OAuth itself for being "too complicated."

Dominick was right, and the standards body agreed. OAuth 2.1 formally removed the footguns, mandated PKCE on every authorization code grant, and left us with a protocol that is dramatically simpler to learn and harder to misuse. If you're building on .NET 10 in 2026, this is the only article you need. Two flows cover almost everything. A third handles the edge case. Let's go.
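The PKCE requirement that OAuth 2.1 mandates is easy to see in miniature. Here is a minimal sketch (in Python for brevity, even though the article targets .NET) of the two values defined by RFC 7636: the client generates a random code_verifier, derives a code_challenge by SHA-256 hashing it, sends the challenge with the authorization request, and later proves possession by sending the verifier with the token request.

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate an RFC 7636 code_verifier and its S256 code_challenge."""
    # 32 random bytes -> a 43-character URL-safe verifier,
    # within the 43-128 character range the spec requires.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# The authorization request carries code_challenge (with method S256);
# the token request carries the original code_verifier, which the
# authorization server re-hashes and compares against the stored challenge.
```

Because the challenge is a one-way hash, an attacker who intercepts the authorization code cannot redeem it without the verifier, which never leaves the client until the token request.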

Read the whole story
alvinashcraft
20 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

CO2 Levels In the Atmosphere Hit 'Depressing' New Record

Atmospheric carbon dioxide hit a new record in April, averaging about 431 parts per million at NOAA's Mauna Loa Observatory. That's up from under 320 ppm when the site began measurements in 1958. Scientific American reports:

Greenhouse gases, such as carbon dioxide, are measured as a proportion of the total atmosphere. The numbers are presented as the number of molecules of a particular gas out of a million total molecules, or ppm. Climate scientist Zachary Labe of Climate Central, a nonprofit that researches climate change, says the new record is "depressing" but not unexpected. "It's just another sign that carbon dioxide continues to increase in our atmosphere as our planet continues to warm," he says. "For many climate scientists, this is just 'here it is again, another record in the wrong direction.'"

Labe explains that the amount of CO2 in the atmosphere tends to peak in April each year as decaying plants release greenhouse gases after winter. Some of that CO2 gets reabsorbed by plants as they grow during the warmer months. But NOAA's data show a worrying trend, with the average monthly amount of CO2 steadily increasing. [...]

Although the amount of CO2 in the atmosphere has continued to rise, there was a reduction in U.S. emissions in 2023 and 2024. That trend, however, was reversed in 2025, at least partially because of the increased electricity demand from artificial intelligence data centers. Still, Labe says there are reasons for optimism as the use of renewable energy sources such as solar and wind expands.

Read more of this story at Slashdot.


Kubernetes v1.36: Declarative Validation Graduates to GA


In Kubernetes v1.36, Declarative Validation for Kubernetes native types has reached General Availability (GA).

For users, this means more reliable, predictable, and better-documented APIs. By moving to a declarative model, the project also unlocks the future ability to publish validation rules via OpenAPI and integrate with ecosystem tools like Kubebuilder. For contributors and ecosystem developers, this replaces thousands of lines of handwritten validation code with a unified, maintainable framework.

This post covers why this migration was necessary, how the declarative validation framework works, and what new capabilities come with this GA release.

The Motivation: Escaping the "Handwritten" Technical Debt

For years, the validation of Kubernetes native APIs relied almost entirely on handwritten Go code. If a field needed to be bounded by a minimum value, or if two fields needed to be mutually exclusive, developers had to write explicit Go functions to enforce those constraints.

As the Kubernetes API surface expanded, this approach led to several systemic issues:

  1. Technical Debt: The project accumulated roughly 18,000 lines of boilerplate validation code. This code was difficult to maintain, error-prone, and required intense scrutiny during code reviews.
  2. Inconsistency: Without a centralized framework, validation rules were sometimes applied inconsistently across different resources.
  3. Opaque APIs: Handwritten validation logic was difficult to discover or analyze programmatically. This meant clients and tooling couldn't predictably know validation rules without consulting the source code or encountering errors at runtime.

The solution proposed by SIG API Machinery was Declarative Validation: using Interface Definition Language (IDL) tags (specifically +k8s: marker tags) directly within types.go files to define validation rules.

Enter validation-gen

At the core of the declarative validation feature is a new code generator called validation-gen. Just as Kubernetes uses generators for deep copies, conversions, and defaulting, validation-gen parses +k8s: tags and automatically generates the corresponding Go validation functions.

These generated functions are then registered seamlessly with the API scheme. The generator is designed as an extensible framework, allowing developers to plug in new "Validators" by describing the tags they parse and the Go logic they should produce.

A Comprehensive Suite of +k8s: Tags

The declarative validation framework introduces a comprehensive suite of marker tags that provide rich validation capabilities highly optimized for Go types. For a full list of supported tags, check out the official documentation. Here is a catalog of some of the most common tags you will now see in the Kubernetes codebase:

  • Presence: +k8s:optional, +k8s:required
  • Basic Constraints: +k8s:minimum=0, +k8s:maximum=100, +k8s:maxLength=16, +k8s:format=k8s-short-name
  • Collections: +k8s:listType=map, +k8s:listMapKey=type
  • Unions: +k8s:unionMember, +k8s:unionDiscriminator
  • Immutability: +k8s:immutable, +k8s:update=[NoSet, NoModify, NoClear]

Example Usage:

type ReplicationControllerSpec struct {
    // +k8s:optional
    // +k8s:minimum=0
    Replicas *int32 `json:"replicas,omitempty"`
}

By placing these tags directly above the field definitions, the constraints are self-documenting and immediately visible to anyone reading the type definitions.

Advanced Capabilities: "Ambient Ratcheting"

One of the most substantial outcomes of this work is that validation ratcheting is now a standard, ambient part of the API. In the past, if we needed to tighten validation, we had to first add handwritten ratcheting code, wait a release, and then tighten the validation to avoid breaking existing objects.

With declarative validation, this safety mechanism is built-in. If a user updates an existing object, the validation framework compares the incoming object with the oldObject. If a specific field's value is semantically equivalent to its prior state (i.e., the user didn't change it), the new validation rule is bypassed. This "ambient ratcheting" means we can loosen or tighten validation immediately and in the least disruptive way possible.

Scaling API Reviews with kube-api-linter

Reaching GA required absolute confidence in the generated code, but our vision extends beyond just validation. Declarative validation is a key part of a comprehensive approach to making API review easier, more consistent, and highly scalable.

By moving validation rules out of opaque Go functions and into structured markers, we are empowering tools like kube-api-linter. This linter can now statically analyze API types and enforce API conventions automatically, significantly reducing the manual burden on SIG API Machinery reviewers and providing immediate feedback to contributors.

What's next?

With the release of Kubernetes v1.36, Declarative Validation graduates to General Availability (GA). As a stable feature, the associated DeclarativeValidation feature gate is now enabled by default. It has become the primary mechanism for adding new validation rules to Kubernetes native types.

Looking forward, the project is committed to adopting declarative validation even more extensively. This includes migrating the remaining legacy handwritten validation code for established APIs and requiring its use for all new APIs and new fields. This ongoing transition will continue to shrink the codebase's complexity while enhancing the consistency and reliability of the entire Kubernetes API surface.

Beyond the core migration, declarative validation also unlocks an exciting future for the broader ecosystem. Because validation rules are now defined as structured markers rather than opaque Go code, they can be parsed and reflected in the OpenAPI schemas published by the Kubernetes API server. This paves the way for tools like kubectl, client libraries, and IDEs to perform rich client-side validation before a request is ever sent to the cluster. The same declarative framework can also be consumed by ecosystem tools like Kubebuilder, enabling a more consistent developer experience for authors of Custom Resource Definitions (CRDs).

Getting involved

The migration to declarative validation is an ongoing effort. While the framework itself is GA, there is still work to be done migrating older APIs to the new declarative format.

If you are interested in contributing to the core of Kubernetes API Machinery, this is a fantastic place to start. Check out the validation-gen documentation, look for issues tagged with sig/api-machinery, and join the conversation in the #sig-api-machinery and #sig-api-machinery-dev-tools channels on Kubernetes Slack (for an invitation, visit https://slack.k8s.io/). You can also attend the SIG API Machinery meetings to get involved directly.

Acknowledgments

A huge thank you to everyone who helped bring this feature to GA:

And the many others across the Kubernetes community who contributed along the way.

Welcome to the declarative future of Kubernetes validation!


Turning GitHub Copilot into a “Best Practices Coach” with Copilot Spaces + a Markdown Knowledge Base


Why Copilot Spaces + Markdown repos work so well

When you ask Copilot generic questions (“How should we log errors?” “What’s our API versioning approach?”), the model will often respond with reasonable defaults. But reasonable defaults are not the same as your standards.

Copilot Spaces solve the context problem by allowing you to attach a curated set of sources (files, folders, repos, PRs/issues, uploads, free text) plus explicit instructions—so Copilot answers in the context of your team’s rules and artifacts. Spaces can be shared with your team and stay updated as the underlying GitHub content changes—so your “best practices coach” stays evergreen.

The architecture (high level)

Here’s the mental model:

Engineering Knowledge Base Repo: A dedicated repo containing your standards as Markdown (coding style, architecture decisions, security rules, testing conventions, examples, templates).

Copilot Space: “Engineering Standards Coach”: A Space that attaches the knowledge base repo (or key folders/files within it), optionally your main application repo(s), and a short set of “rules of engagement” (instructions).

In-repo reinforcement (optional but powerful): Custom instruction files (repo-wide + path-specific) and prompt files (slash commands) inside your production repos to standardize behavior and workflows.

Step 1: Create a Knowledge Base repo (Markdown-first)

Create a repo such as:

  • engineering-knowledge-base
  • platform-playbook
  • org-standards

A practical starter structure:

engineering-knowledge-base/
  README.md
  standards/
    coding-style.md
    logging.md
    error-handling.md
    performance.md
  security/
    secure-coding.md
    secrets.md
    threat-modeling.md
  architecture/
    overview.md
    adr/
      0001-service-boundaries.md
      0002-api-versioning.md
  testing/
    unit-testing.md
    integration-testing.md
    contract-testing.md
  templates/
    pr-review-checklist.md
    api-design-checklist.md
    definition-of-done.md

Tip: Keep these docs opinionated, concrete, and example-heavy—Copilot works best when it can point to specific patterns rather than abstract principles.

Step 2: Create a Copilot Space and attach your sources

Create a Space, name it, and choose an owner (personal or organization). Then add two types of context: Instructions (how Copilot should behave) and Sources (your actual code and docs).

2.1 Instructions (how Copilot should behave in this Space)

Example instructions you can paste:

You are the Engineering Standards Coach for this organization.

Goals:
- Recommend solutions that follow our standards in the attached knowledge base.
- When proposing code, align with our logging, error-handling, security, and testing guidelines.
- When uncertain, ask for the missing repo context or point to the exact standard that applies.

Output format:
- Start with the standard(s) you are applying (with a link or filename reference).
- Then provide the recommended implementation.
- Include a lightweight checklist for reviewers.

2.2 Sources (your real “knowledge base”)

Attach:

  • The knowledge base repo (or just the folders that matter)
  • Your main code repo(s) (or select folders)
  • PR checklist and Definition of Done templates
  • Key architecture docs, runbooks, or troubleshooting guides

Step 3 (Optional): Add instruction files to your production repos

Spaces are excellent for curated context and team-wide “ask me anything about our standards.” But you can reinforce consistency directly inside each repo by adding custom instruction files.

3.1 Repo-wide instructions (.github/copilot-instructions.md)

Create: your-app-repo/.github/copilot-instructions.md

# Repository Copilot Instructions

## Tech stack
- Language: TypeScript (strict)
- Framework: Node.js + Express
- Testing: Jest
- Lint/format: ESLint + Prettier

## Engineering rules
- Use structured logging as defined in /docs/logging.md
- Never log secrets or raw tokens
- Prefer small, composable functions
- All new endpoints must include: input validation, authz checks, unit tests, and consistent error handling

## Build & test
- Install: npm ci
- Test: npm test
- Lint: npm run lint

3.2 Path-specific instructions (.github/instructions/*.instructions.md)

Create: your-app-repo/.github/instructions/security.instructions.md

---
applyTo: "**/*.ts"
---
# Security rules (TypeScript)

- Never introduce dynamic SQL construction; use parameterized queries only.
- Any new external HTTP call must enforce timeouts and retry policy.
- Any auth logic must include negative tests.
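The "parameterized queries only" rule is easiest to see side by side. A minimal sketch in Python with the standard-library sqlite3 module (the instructions file above targets TypeScript, but the principle is identical in any driver): user input passed as a bound parameter is treated as a literal value, while interpolated input becomes executable SQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice'), ('bob')")

user_input = "alice' OR '1'='1"  # hostile input

# BAD: dynamic SQL construction. The injected predicate is parsed as SQL
# and the WHERE clause matches every row in the table.
bad = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()

# GOOD: parameterized query. The driver binds the input as a single
# string value, so no row matches the hostile literal.
good = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
# bad returns both rows; good returns none.
```

Linters and the prompt-file review below can flag string interpolation into SQL mechanically, which is exactly the kind of check worth encoding as a repo rule rather than re-litigating in every review.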

Step 4 (Optional): Turn your best practices into "slash commands" with prompt files

To standardize repeatable workflows like code review, test scaffolding, or endpoint scaffolding, create prompt files (slash commands) as .prompt.md files—commonly in .github/prompts/. Engineers invoke them manually in chat by typing / followed by the prompt name.

Create: your-app-repo/.github/prompts/standards-code-review.prompt.md

---
description: Review code against our org standards (security, perf, style, tests)
---

You are a senior engineer performing a standards-based review.

Use these checks:
1) Security: input validation, authz, secrets, injection risks
2) Reliability: error handling, retries/timeouts, edge cases
3) Observability: structured logs, metrics, tracing where relevant
4) Testing: required coverage, negative tests, naming conventions
5) Style: follow repository rules in .github/copilot-instructions.md

Output:
- Summary (2-3 lines)
- Issues (severity: BLOCKER/REQUIRED/SUGGESTION)
- Suggested patch snippets (only where confident)
- “Ready to merge?” verdict

Now any engineer can type /standards-code-review and get the same structured output every time, without rewriting the prompt.

How teams actually use this day-to-day

Recipe A: Onboarding a new engineer

Ask inside the Space: “Summarize our service architecture and coding conventions for onboarding. Link the key docs.”

Recipe B: Writing a feature with best-practice guardrails

Ask in the Space: “We’re adding endpoint X. Generate a plan that follows our API versioning ADR and error-handling standard.”

Recipe C: Enforcing review standards consistently

In the repo, run the prompt file: /standards-code-review.

Governance and best practices (what to do / what to avoid)

  1. Keep Spaces purpose-built. Avoid dumping an entire org into one Space if your goal is consistent, grounded output.
  2. Prefer linking the “golden source.” Keep standards in a single repo and update them via PR—treat it like code.
  3. Make instructions short but strict. Detailed rules belong in your Markdown standards.
  4. Avoid conflicting instruction files. If instructions contradict each other, results can be inconsistent.

References (official docs for further reading)

About GitHub Copilot Spaces: https://docs.github.com/en/copilot/concepts/context/spaces 

Creating GitHub Copilot Spaces: https://docs.github.com/en/copilot/how-tos/provide-context/use-copilot-spaces/create-copilot-spaces 

Adding custom instructions for GitHub Copilot: https://docs.github.com/en/copilot/how-tos/copilot-cli/customize-copilot/add-custom-instructions 

Use custom instructions in VS Code: https://code.visualstudio.com/docs/copilot/customization/custom-instructions 

Use prompt files in VS Code: https://code.visualstudio.com/docs/copilot/customization/prompt-files 

Closing: the “best practices” flywheel

Once you implement this pattern, you get a virtuous cycle: teams encode standards as Markdown; Copilot Spaces ground answers in those standards; prompt files and instruction files standardize execution; and code reviews shift from style policing to design and correctness.


Raspberry Pi Connect: Device tags, required 2FA, and a mobile keyboard


Raspberry Pi Connect lets you access your Raspberry Pi devices remotely from anywhere, straight from a web browser. Since we last wrote about Connect, we’ve shipped three updates that we think will make it noticeably more useful — particularly for the growing number of teams using Connect for Organisations to manage fleets of devices.

Tag and filter your devices

The Edit tags dialog, showing applied tags and the autocomplete dropdown with suggestions.

Once you have more than a handful of Raspberry Pi devices in an organisation, finding the right one quickly starts to matter. With these new updates, you can now apply tags to any device — for example, by location (london, cambridge), by environment (production, staging), or by what the device actually does (point-of-sale, kiosk). Tags appear underneath the device name on both the device page and the device list, and any administrator can add or remove them from the device’s Settings page, or when first linking a device to your account.

The device list with model and tag filters applied in the search bar.

The search bar at the top of the device list now combines free-text search with structured filters. Type a qualifier followed by a colon — model:5, memory:4gb, os:raspios-13, or tag:production — and Connect will narrow the list as you type. You can even stack filters in a single query: model:5 tag:production dashboard will find every Raspberry Pi 5 tagged with production that has “dashboard” in its name. Selecting any tag in the device list adds it to your search instantly.
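A query like model:5 tag:production dashboard mixes structured qualifiers with free text. A minimal sketch of how such a query could be split (Python, purely illustrative — this is not Connect's actual implementation):

```python
def parse_query(query: str) -> tuple[dict[str, list[str]], list[str]]:
    """Split a search query into qualifier filters and free-text terms.

    Tokens of the form key:value become filters (repeated keys accumulate);
    everything else is free text matched against the device name.
    """
    filters: dict[str, list[str]] = {}
    free_text: list[str] = []
    for token in query.split():
        if ":" in token:
            key, _, value = token.partition(":")
            filters.setdefault(key, []).append(value)
        else:
            free_text.append(token)
    return filters, free_text

filters, text = parse_query("model:5 tag:production dashboard")
# filters == {"model": ["5"], "tag": ["production"]}, text == ["dashboard"]
```

Stacked filters then reduce to an AND over the qualifier lists plus a substring match on the free-text terms.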

Tags are also exposed through the Management API, meaning you can apply them when you create an authentication key during provisioning — handy if you’re scripting the roll-out of a new batch of devices.

Require two-factor authentication for your organisation

The Authentication section of the Connect organisation settings, showing the Require two-factor authentication button.

Connect for Organisations now lets administrators require all members to use two-factor authentication (2FA) on their Raspberry Pi ID. It’s a single switch in the new Authentication section of the organisation’s Settings tab, and it adds a meaningful layer of protection against compromised member accounts being used to reach your devices.

Turning it on starts a 14-day grace period. During that window, members without 2FA see a banner showing how long they have left and a link to enable it on their Raspberry Pi ID; everyone else carries on as normal. When the grace period ends, any member still without 2FA is blocked from the organisation until they enable it. They won’t be able to access devices or other organisation resources in the meantime.

The Two-factor authentication required page shown to members without 2FA after the grace period ends.

If you ever want to relax the requirement, you can turn off 2FA in your organisation settings. (Disabling and re-enabling 2FA resets the grace period, so you can give members another two weeks if you need to.)

Use a mobile keyboard while screen sharing

Connect’s screen-sharing interface works on phones and tablets as well as desktops, but typing on a touch device was previously only possible if you attached a physical keyboard. The screen-sharing toolbar now includes a dedicated Keyboard toggle alongside the existing buttons for Ctrl, Alt, Esc, and Tab, allowing you to use an on-screen keyboard on mobile devices without any extra hardware attached.

Try it out

Connect is free for personal use. Connect for Organisations comes with a four-week free trial, after which you’re billed monthly in arrears for the peak number of registered devices. You can find full instructions for tagging and filtering devices, enabling 2FA, and everything else in the Raspberry Pi Connect documentation, or sign in at connect.raspberrypi.com to get started.

The post Raspberry Pi Connect: Device tags, required 2FA, and a mobile keyboard appeared first on Raspberry Pi.


You're Wrong All the Time, But All You Need Are Better Explanations


What happens when you discover that a book that fundamentally changed how you think is built on a shaky foundation? In today's episode, I share my own struggle with the replication crisis surrounding Daniel Kahneman's *Thinking Fast and Slow*, and I use it as a springboard to talk about a much bigger skill: knowing how to update your beliefs when reality shifts underneath you. This isn't about throwing out science or losing trust in your heroes. It's about developing the muscle to replace old explanations with better ones — a skill that has never been more important for software engineers.

  • The Replication Crisis, Briefly Explained: Understand the difference between reproducing a study (re-running the analysis on the original data) and replicating one (recreating the study from the ground up), and why a surprisingly large portion of well-respected psychology research, including studies cited in Thinking Fast and Slow, doesn't hold up under scrutiny.
  • Base Rates Matter: Kahneman didn't pick uniquely bad studies. If you randomly sampled from the broader academic literature, you'd hit the same failure rate. The lesson isn't about one author — it's about how we evaluate any body of knowledge.
  • The Beginning of Infinity Framework: Drawing from David Deutsch's book, explore the idea that all progress is rooted in the assumption that we are fundamentally incorrect, and that improvement comes from continually building better explanations on top of incomplete ones.
  • Beliefs as Calibration, Not Truth: Your beliefs about what makes a good engineer, what makes good code, or what makes a good career move are not eternal truths. They are calibrations to your current reality, and that reality is changing fast.
  • The Ego Trap of Old Beliefs: Notice the very human, very subtle pull to defend things you previously argued for — not because they're still right, but because admitting otherwise creates a discontinuity with your former self. This is one of the biggest blockers to learning.
  • Two Competing Explanations of AI Adoption: Walk through a worked example of holding two predictions about AI in tension and asking honestly which one better explains the reality you're seeing — at both a macro industry level and the micro level of debugging a system.
  • Moving Goalposts Aren't a Conspiracy: A lot of what feels like shifting goalposts in our industry is just goalposts moving on their own. A big part of our job as engineers is figuring out where they are now and predicting where they're heading next.
  • Episode Homework: Pick one belief you hold strongly about your work — about what makes a good engineer, about a tool, about a process. Try to deconstruct it into its parts and ask whether a better explanation exists for what you're actually seeing.

🙏 Today's Episode is Brought To you by: SerpApi

No matter what you're building, SerpApi is the web search API for your needs. If you're building an application that needs real-time search data—whether that's an AI agent, an SEO tool, or a price tracker—SerpApi handles it for you.

  • Make an API call and get back clean JSON.
  • They handle the proxies, CAPTCHAs, parsing, and all the scraping so you don't have to.
  • They support dozens of search engines and platforms, and are trusted by companies like NVIDIA, Adobe, and Shopify.
  • If you're building with AI, they even have an official MCP to make getting up and running a simple task.

Get started with a free tier to build and test your application before you commit. Go to serpapi.com.

📮 Ask a Question

If you enjoyed this episode and would like me to discuss a question that you have on the show, drop it over at: developertea.com.

📮 Join the Discord

If you want to be a part of a supportive community of engineers (non-engineers welcome!) working to improve their lives and careers, join us on the Developer Tea Discord community today!

🗞️ Subscribe to The Tea Break

We are developing a brand new newsletter called The Tea Break! You can be the first in line to receive it by entering your email directly over at developertea.com.

🧡 Leave a Review

If you're enjoying the show and want to support the content head over to iTunes and leave a review!





Download audio: https://dts.podtrac.com/redirect.mp3/cdn.simplecast.com/audio/c44db111-b60d-436e-ab63-38c7c3402406/episodes/2080e0c3-7dc3-41af-85a4-8fd64b29c7f9/audio/eea232ef-b5a3-42f1-ac38-8ae97c6e75cc/default_tc.mp3?aid=rss_feed&feed=dLRotFGk