Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

The UX Designer’s Nightmare: When “Production-Ready” Becomes A Design Deliverable


In early 2026, I noticed that the UX designer’s toolkit seemed to shift overnight. The long-running “Should designers code?” debate was abruptly settled by the market, not through a consensus of our craft, but through the brute force of job requirements. Browse LinkedIn today and you’ll notice a stark change: UX roles increasingly demand AI-augmented development, technical orchestration, and production-ready prototyping.

For many, including myself, this is the ultimate design job nightmare. We are being asked to deliver both the “vibe” and the “code” simultaneously, using AI agents to bridge a technical gap that previously took years of computer science knowledge and coding experience to cross. But as the industry rushes to meet these new expectations, companies are discovering that functional AI-generated code is not always good code.

The LinkedIn Pressure Cooker: Role Creep In 2026

The job market is sending a clear signal. While traditional graphic design roles are expected to grow by only 3% through 2034, UX, UI, and Product Design roles are projected to grow by 16% over the same period.

However, this growth is increasingly tied to the rise of AI product development, where “design skills” have recently become the #1 most in-demand capability, even ahead of coding and cloud infrastructure. Companies building these platforms are no longer just looking for visual designers; they need professionals who can “translate technical capability into human-centered experiences.”

This creates a high-stakes environment for the UX designer. We are no longer just responsible for the interface; we are expected to understand the technical logic well enough to ensure that complex AI capabilities feel intuitive, safe, and useful for the human on the other side of the screen. Designers are being pushed toward a “design engineer” model, where we must bridge the gap between abstract AI logic and user-facing code.

A recent survey found that 73% of designers now view AI as a primary collaborator rather than just a tool. However, this “collaboration” often looks like “role creep.” Recruiters are often not just looking for someone who understands user empathy and information architecture — they want someone who can also prompt a React component into existence and push it to a repository!

This shift has created a competency gap.

As an experienced senior designer who has spent decades mastering the nuances of cognitive load, accessibility standards, and ethnographic research, I am suddenly finding myself being judged on my ability to debug a CSS Flexbox issue or manage a Git branch.

The nightmare isn’t the technology itself. It’s the reallocation of value.

Businesses are beginning to value the speed of output over the quality of the experience, fundamentally changing what it means to be a “successful” designer in 2026.

The Competence Trap: Two Job Skill Sets, One Average Result

A dangerous myth is circulating in boardrooms: that AI makes a designer “equal” to an engineer. This narrative suggests that because an LLM can generate a functional JavaScript event handler, the person prompting it doesn’t need to understand the underlying logic. In reality, attempting to master two disparate, deep fields simultaneously will most likely leave you averagely competent at both.

The “Averagely Competent” Dilemma

Asking a senior UX designer to become a senior-level coder is like asking a master chef to also be a master plumber because “they both work in the kitchen.” You might get the water running, but you won’t know why the pipes are rattling.

  • The “cognitive offloading” risk.
    Research shows that while AI can speed up task completion, it often leads to a significant decrease in conceptual mastery. In a controlled study, participants using AI assistance scored 17% lower on comprehension tests than those who coded by hand.
  • The debugging gap.
    The largest performance gap between AI-reliant users and hand-coders is in debugging. When a designer uses AI to write code they don’t fully understand, they don’t have the ability to identify when and why it fails.

So, if a designer ships an AI-generated component that breaks during a high-traffic event and cannot manually trace the logic, they are no longer an expert. They are now a liability.

The High Cost Of Unoptimised Code

Any experienced software engineer will tell you that generating code with AI without the right prompts leads to a lot of rework. Because most designers lack the technical foundation to audit the code the AI gives them, they are inadvertently shipping massive amounts of “Quality Debt”.

Common Issues In Designer-Generated AI Code
  • The security flaw
    Recent reports indicate that up to 92% of AI-generated codebases contain at least one critical vulnerability. A designer might see a functioning login form, unaware that it has an 86% failure rate in XSS defense (the security measures that prevent attackers from injecting malicious scripts into trusted websites).
  • The accessibility illusion
    AI often generates “functional” applications that lack semantic integrity. A designer might prompt a “beautiful and functional toggle switch,” but the AI may provide a non-semantic <div> that lacks keyboard focus and screen-reader compatibility, creating Accessibility Debt that is expensive to fix later.
  • The performance penalty
    AI-generated code tends to be verbose. AI is linked to 4x more code duplication than human-written code. This verbosity slows down page loads, creates massive CSS files, and negatively impacts SEO. To a business, the task looks “done.” To a user with a slow connection or a screen reader, the site is a nightmare.

Creating More Work, Not Less

The promise of AI was that designers could ship features without bothering the engineers. The reality has been the birth of a “Rework Tax” that is draining engineering resources across the industry.

  • Cleaning up
    Organisations are finding that while velocity increases, incidents per Pull Request are also rising by 23.5%. Some engineering teams now spend a significant portion of their week cleaning up “AI slop” delivered by design teams who skipped a rigorous review process.
  • The communication gap
    Only 69% of designers feel AI improves the quality of their work, compared to 82% of developers. This gap exists because “code that compiles” is not the same as “code that is maintainable.”

When a designer hands off AI-generated code that ignores a company’s internal naming conventions or management patterns, they aren’t helping the engineer; they are creating a puzzle that someone else has to solve later.

The Solution

We need to move away from the nightmare of the “Solo Full-Stack Designer” and toward a model of designer/coder collaboration.

The ideal reality:

  • The Partnership
    Instead of designers trying to be mediocre coders, they should work in a human-AI-human loop. A senior UX designer should work with an engineer to use AI; the designer creates prompts for intent, accessibility, and user flow, while the engineer creates prompts for architecture and performance.
  • Design systems as guardrails
    To prevent accessibility debt from spreading at scale, accessible components must be the default in your design system. AI should be used to feed these tokens into your UI, ensuring that even generated code stays within the “source of truth.”

Beyond The Prompt

The industry is currently in a state of “AI Infatuation,” but the pendulum will eventually swing back toward quality.

The UX designer’s nightmare ends when we stop trying to compete with AI tools at what they do best (generating syntax) and keep our focus on what they cannot do (understanding human complexity).

Businesses that prioritise “designer-shipped code” without engineering oversight will eventually face a reckoning of technical debt, security breaches, and accessibility lawsuits. The designers who thrive in 2026 and beyond will be those who refuse to be “prompt operators” and instead position themselves as the guardians of the user experience. This is the perfect outcome for experienced designers and for the industry.

Our value has always been our ability to advocate for the human on the other side of the screen. We must use AI to augment our design thinking, allowing us to test more ideas and iterate faster, but we must never let it replace the specialised engineering expertise that ensures our designs technically work for everyone.

Summary Checklist for UX Designers

  • Work Together.
    Use AI-generated code as a starting point for conversations with your developers. Don’t use it as a shortcut to avoid working with them. Ask them to help you craft prompts for code generation to get the best outcomes.
  • Understand the “Why”.
    Never submit code you don’t understand. If you can’t explain how the AI-generated logic works, don’t include it in your work.
  • Build for Everyone.
    Good design is more than just looks. Use AI to check if your code works for people using screen readers or keyboards, not just to make things look pretty.



Security considerations when using Passkeys on your website


Passkeys are awesome and that's why we implemented them on Report URI! You can read about our implementation here and get the basics on how Passkeys work and why you want them. In this post, we're going to focus on what security considerations you should have once you start using Passkeys and we've produced a whitepaper for you to take away that contains valuable information.


What passkeys actually protect

Passkeys are built on WebAuthn and use asymmetric cryptography, offering some incredibly strong protections. The user’s device generates a key pair, the public key is registered with a service like Report URI, and the private key remains protected on the device, often inside secure hardware like a TPM. During authentication, the server issues a challenge and the device signs it after 'user verification', typically biometrics or a PIN. This model gives passkeys some very strong security properties!

First, there is no shared secret for an attacker to steal from the server and replay elsewhere because only the public key is stored with the service. This means that Report URI isn't storing anything sensitive related to Passkeys.

Second, the credential is bound to the correct origin, which makes phishing dramatically less effective. The browser or other device that registered the Passkey knows exactly where it was registered, so a user can't be tricked into using it in the wrong place.

Third, each authentication is challenge-based, which prevents replay, so even if an attacker could capture an authentication flow, it couldn't be used again later.

Fourth and finally, the private key is not exposed to JavaScript running in the page! 🎉

All of that is awesome and each point provides valuable protection. If your threat model includes password reuse, credential stuffing, password spraying, or fake login pages, then Passkeys are a direct and effective improvement.


Where the threat model shifts

What passkeys do not do is make the authenticated application trustworthy by default. Once the user has successfully authenticated, most applications establish a session using a cookie or token (probably a cookie). The Passkey is helping to solve the problem of reliably authenticating the user, but once that step is complete, we're still falling back to a traditional cookie! Strong passwords, 2FA, Passkeys, and everything else we do all still end up with a cookie(?!).

The question then remains "Can the attacker abuse the authenticated state?", and this is where traditional attacks like XSS and CSRF remain a real threat. Let's look at a few examples of the kind of things that can go wrong:

The first is "session hijacking" (sometimes called "session riding"). If session tokens are accessible, XSS may steal them. Even if they are protected with HttpOnly, malicious code can still perform actions inside the victim’s authenticated browser without needing to extract the cookie itself!

The second is malicious passkey registration. Let's be crystal clear, XSS cannot extract the victim’s private key or forge WebAuthn responses, but it may still be used to manipulate the user into approving registration of a passkey in an attacker-controlled environment. That creates persistence without breaking WebAuthn itself.

The third is transaction manipulation. This is one of the clearest examples of the gap between strong authentication and trustworthy application behaviour. A user may authenticate securely with a Passkey, but malicious JavaScript can still alter transaction parameters in the page or intercept API requests before submission. The user thinks they approved one action, while the application processes another, and we had probably the best example ever of that with the ByBit hack that cost them $1.4 billion!

To clarify, none of these are Passkey failures, they're application failures, but a good example of the risks that remain.


Defence in depth!

Especially after deploying Passkeys, we should continue to maintain a strong focus on protecting against XSS (Cross-Site Scripting). We saw that yet again XSS was the #1 Top Threat of 2025, so we still have a little way to go here, but nonetheless, there's a lot we can do! Tactics like context-aware output encoding, avoiding dangerous DOM sinks, validating and sanitising input, and using modern frameworks safely should all feature high on your list of protections. Finally, of course, is Content Security Policy. A strict CSP is one of the strongest controls available for reducing the exploitability of XSS and acts as your final line of defence before bad things happen. Blocking inline scripts, restricting script sources, and removing dangerous execution paths like eval(), all materially improve your resilience. CSP will not compensate for insecure code, and it isn't meant to, but it can significantly constrain what an attacker can do.
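As a small illustration of the "context-aware output encoding" tactic above, here is a minimal escaper for the HTML element context. (In a real application you should lean on your framework's auto-escaping rather than hand-rolling this.)

```javascript
// Minimal output encoding for the HTML element context: the five
// characters that can change parsing are replaced with entities,
// so injected markup renders as inert text instead of executing.
function escapeHtml(untrusted) {
  const entities = {
    '&': '&amp;',
    '<': '&lt;',
    '>': '&gt;',
    '"': '&quot;',
    "'": '&#39;',
  };
  return String(untrusted).replace(/[&<>"']/g, (ch) => entities[ch]);
}

console.log(escapeHtml('<script>alert(1)</script>'));
// &lt;script&gt;alert(1)&lt;/script&gt;
```

Note this encoding is only safe for the element context; attribute, URL, and JavaScript contexts each need their own encoding, which is exactly what "context-aware" means.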

Following on from a robust CSP, we have Permissions Policy, which is often overlooked. In Passkeys-enabled applications, restricting access to publickey-credentials-get and publickey-credentials-create allows us to control access to WebAuthn API / Credential Management calls. Permissions Policy does not prevent injection, but it does reduce the capabilities available to injected code and helps enforce least privilege across pages and origins. A simple configuration, delivered as an HTTP response header, might look like this:

Permissions-Policy: publickey-credentials-create=(self), publickey-credentials-get=(self)

Then there is security of the cookie itself. I wrote about this all the way back in 2017 in a blog post called Tough Cookies, but here's a quick summary for you. Session cookies should be HttpOnly, Secure, have an appropriate SameSite policy and use at least the __Secure- prefix (or __Host- prefix where possible).
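Putting those attributes together, a hardened session cookie (the cookie name and token value here are placeholders) might be set like this:

```
Set-Cookie: __Host-session=<token>; Path=/; Secure; HttpOnly; SameSite=Lax
```

The __Host- prefix is only accepted by browsers when the cookie is Secure, has Path=/, and has no Domain attribute, which locks it to the exact host that set it.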

Finally, sensitive actions need stronger guarantees than “the user has an active session”. High-risk operations such as transferring money, changing recovery settings, or managing credentials should require a fresh authentication challenge to ensure that the user is the one at the keyboard initiating the action.
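The "fresh authentication for high-risk actions" idea can be sketched as a small policy helper. This is a hypothetical illustration, not Report URI's actual implementation; the action names and the five-minute window are assumptions.

```javascript
// Hypothetical step-up auth policy: high-risk actions require a recent
// authentication (e.g. a fresh Passkey challenge), not merely an
// active session cookie.
const MAX_AUTH_AGE_MS = 5 * 60 * 1000; // 5 minutes -- a policy choice
const HIGH_RISK = new Set(['transfer-money', 'change-recovery', 'manage-credentials']);

function needsFreshAuth(action, session, now = Date.now()) {
  if (!HIGH_RISK.has(action)) return false; // normal actions: session suffices
  // High-risk: the last authentication must be recent enough.
  return now - session.lastAuthAt > MAX_AUTH_AGE_MS;
}

const session = { lastAuthAt: Date.now() - 60 * 60 * 1000 }; // authed an hour ago
console.log(needsFreshAuth('view-dashboard', session)); // false
console.log(needsFreshAuth('transfer-money', session)); // true
```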


Read our whitepaper

If you want more information to really understand the threats that exist in a Passkeys-enabled environment, you can download a copy of our white paper that contains detailed information on the problem and the solutions. You can find the white paper on our Passkeys solutions page: https://report-uri.com/solutions/passkeys_protection


Gateway API v1.5: Moving features to Stable


Gateway API logo

The Kubernetes SIG Network community presents the release of Gateway API v1.5! Released on March 14, 2026, version 1.5 is our biggest release yet, and concentrates on moving existing Experimental features to Standard (Stable).

The Gateway API v1.5.1 patch release is already available

The Gateway API v1.5 brings six widely-requested feature promotions to the Standard channel (Gateway API's GA release channel):

  • ListenerSet
  • TLSRoute promoted to Stable
  • HTTPRoute CORS Filter
  • Client Certificate Validation
  • Certificate Selection for Gateway TLS Origination
  • ReferenceGrant promoted to Stable

Special thanks to the Gateway API contributors for their efforts on this release.

New release process

As of Gateway API v1.5, the project has moved to a release train model: on the feature freeze date, any features that are ready are shipped in the release.

This applies to both Experimental and Standard, and also applies to documentation -- if the documentation isn't ready to ship, the feature isn't ready to ship.

We are aiming for this to produce a more reliable release cadence (we are basing our process on the excellent work done by SIG Release on Kubernetes itself). As part of this change, we've also introduced Release Manager and Release Shadow roles to our release team. Many thanks to Flynn (Buoyant) and Beka Modebadze (Google) for all the great work coordinating and filing down the rough edges of our release process. They will both continue in these roles for the next release as well.

New standard features

ListenerSet

Leads: Dave Protasowski, David Jumani

GEP-1713

Why ListenerSet?

Prior to ListenerSet, all listeners had to be specified directly on the Gateway object. While this worked well for simple use cases, it created challenges for more complex or multi-tenant environments:

  • Platform teams and application teams often needed to coordinate changes to the same Gateway
  • Safely delegating ownership of individual listeners was difficult
  • Extending existing Gateways required direct modification of the original resource

ListenerSet addresses these limitations by allowing listeners to be defined independently and then merged onto a target Gateway.

ListenerSets also enable attaching more than 64 listeners to a single, shared Gateway. This is critical for large scale deployments and scenarios with multiple hostnames per listener.

Even though the ListenerSet feature significantly enhances scalability, the listeners field in Gateway remains mandatory, and the Gateway must have at least one valid listener.

How it works

A ListenerSet attaches to a Gateway and contributes one or more listeners. The Gateway controller is responsible for merging listeners from the Gateway resource itself and any attached ListenerSet resources.

In this example, a central infrastructure team defines a Gateway with a default HTTP listener, while two different application teams define their own ListenerSet resources in separate namespaces. Both ListenerSets attach to the same Gateway and contribute additional HTTPS listeners.

---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-gateway
  namespace: infra
spec:
  gatewayClassName: example-gateway-class
  allowedListeners:
    namespaces:
      from: All # A selector lets you fine tune this
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: ListenerSet
metadata:
  name: team-a-listeners
  namespace: team-a
spec:
  parentRef:
    name: example-gateway
    namespace: infra
  listeners:
  - name: https-a
    protocol: HTTPS
    port: 443
    hostname: a.example.com
    tls:
      certificateRefs:
      - name: a-cert
---
apiVersion: gateway.networking.k8s.io/v1
kind: ListenerSet
metadata:
  name: team-b-listeners
  namespace: team-b
spec:
  parentRef:
    name: example-gateway
    namespace: infra
  listeners:
  - name: https-b
    protocol: HTTPS
    port: 443
    hostname: b.example.com
    tls:
      certificateRefs:
      - name: b-cert

TLSRoute

Leads: Rostislav Bobrovsky, Ricardo Pchevuzinske Katz

GEP-2643

The TLSRoute resource allows you to route requests by matching the Server Name Indication (SNI) presented by the client during the TLS handshake and directing the stream to the appropriate Kubernetes backends.

When working with TLSRoute, a Gateway's TLS listener can be configured in one of two modes: Passthrough or Terminate.

If you install Gateway API v1.5 Standard over v1.4 or earlier Experimental, your existing Experimental TLSRoutes will not be usable. This is because they will be stored in the v1alpha2 or v1alpha3 version, which is not included in the v1.5 Standard YAMLs. If this applies to you, either continue using Experimental for v1.5.1 and onward, or you'll need to download and migrate your TLSRoutes to v1, which is present in the Standard YAMLs.

Passthrough mode

The Passthrough mode is designed for strict security requirements. It is ideal for scenarios where traffic must remain encrypted end-to-end until it reaches the destination backend, when the external client and backend need to authenticate directly with each other, or when you can’t store certificates on the Gateway. This configuration is also applicable when an encrypted TCP stream is required instead of standard HTTP traffic.

In this mode, the encrypted byte stream is proxied directly to the destination backend. The Gateway has zero access to private keys or unencrypted data.

The following TLSRoute is attached to a listener that is configured in Passthrough mode. It will match only TLS handshakes with the foo.example.com SNI hostname and apply its routing rules to pass the encrypted TCP stream to the configured backend:

---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-gateway
spec:
  gatewayClassName: example-gateway-class
  listeners:
  - name: tls-passthrough
    protocol: TLS
    port: 8443
    tls:
      mode: Passthrough
---
apiVersion: gateway.networking.k8s.io/v1
kind: TLSRoute
metadata:
  name: foo-route
spec:
  parentRefs:
  - name: example-gateway
    sectionName: tls-passthrough
  hostnames:
  - "foo.example.com"
  rules:
  - backendRefs:
    - name: foo-svc
      port: 8443

Terminate mode

The Terminate mode provides the convenience of centralized TLS certificate management directly at the Gateway.

In this mode, the TLS session is fully terminated at the Gateway, which then routes the decrypted payload to the destination backend as a plain text TCP stream.

The following TLSRoute is attached to a listener that is configured in Terminate mode. It will match only TLS handshakes with the bar.example.com SNI hostname and apply its routing rules to pass the decrypted TCP stream to the configured backend:

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-gateway
spec:
  gatewayClassName: example-gateway-class
  listeners:
  - name: tls-terminate
    protocol: TLS
    port: 443
    tls:
      mode: Terminate
      certificateRefs:
      - name: tls-terminate-certificate
---
apiVersion: gateway.networking.k8s.io/v1
kind: TLSRoute
metadata:
  name: bar-route
spec:
  parentRefs:
  - name: example-gateway
    sectionName: tls-terminate
  hostnames:
  - "bar.example.com"
  rules:
  - backendRefs:
    - name: bar-svc
      port: 8080

HTTPRoute CORS filter

Leads: Damian Sawicki, Ricardo Pchevuzinske Katz, Norwin Schnyder, Huabing (Robin) Zhao, LiangLliu

GEP-1767

Cross-origin resource sharing (CORS) is an HTTP-header-based security mechanism that allows (or denies) a web page access to resources on an origin different from the domain that served the page. See our documentation page for more information. The HTTPRoute resource can be used to configure CORS. The following HTTPRoute allows requests from https://app.example:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: cors
spec:
  parentRefs:
  - name: same-namespace
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /cors-behavior-creds-false
    backendRefs:
    - name: infra-backend-v1
      port: 8080
    filters:
    - cors:
        allowOrigins:
        - https://app.example
      type: CORS

Instead of specifying a list of specific origins, you can also specify a single wildcard ("*"), which will allow any origin. You can also use partially wildcarded origins in the list, where the wildcard appears after the scheme, at the beginning of the hostname, e.g. https://*.bar.com:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: cors
spec:
  parentRefs:
  - name: same-namespace
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /cors-behavior-creds-false
    backendRefs:
    - name: infra-backend-v1
      port: 8080
    filters:
    - cors:
        allowOrigins:
        - https://www.baz.com
        - https://*.bar.com
        - https://*.foo.com
      type: CORS
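The wildcard semantics described above (a single "*" allowing any origin, or a wildcard after the scheme at the start of the hostname) might be matched like this hypothetical sketch; the function name and structure are illustrative, not part of any Gateway API implementation.

```javascript
// Hypothetical matcher for an Origin header against allowOrigins entries
// such as "https://*.bar.com" (wildcard only at the start of the hostname).
function originAllowed(origin, allowOrigins) {
  return allowOrigins.some((pattern) => {
    if (pattern === '*') return true; // single wildcard: any origin
    if (pattern.includes('://*.')) {
      const [scheme, host] = pattern.split('://*.');
      const [oScheme, oHost] = origin.split('://');
      // Same scheme, and at least one subdomain label before the suffix.
      return oScheme === scheme && oHost.endsWith('.' + host);
    }
    return pattern === origin; // exact match
  });
}

console.log(originAllowed('https://api.bar.com', ['https://*.bar.com'])); // true
console.log(originAllowed('https://bar.com', ['https://*.bar.com']));     // false
console.log(originAllowed('https://www.baz.com', ['https://www.baz.com'])); // true
```

Note that in this reading the bare apex domain does not match a wildcarded entry; list it explicitly if you need it.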

HTTPRoute filters allow for the configuration of CORS settings. See a list of supported options below:

allowCredentials
Specifies whether the browser is allowed to include credentials (such as cookies and HTTP authentication) in the CORS request.
allowMethods
The HTTP methods that are allowed for CORS requests.
allowHeaders
The HTTP headers that are allowed for CORS requests.
exposeHeaders
The HTTP headers that are exposed to the client.
maxAge
The maximum time in seconds that the browser should cache the preflight response.

A comprehensive example:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: cors-allow-credentials
spec:
  parentRefs:
  - name: same-namespace
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /cors-behavior-creds-true
    backendRefs:
    - name: infra-backend-v1
      port: 8080
    filters:
    - cors:
        allowOrigins:
        - "https://www.foo.example.com"
        - "https://*.bar.example.com"
        allowMethods:
        - GET
        - OPTIONS
        allowHeaders:
        - "*"
        exposeHeaders:
        - "x-header-3"
        - "x-header-4"
        allowCredentials: true
        maxAge: 3600
      type: CORS

Gateway client certificate validation

Leads: Arko Dasgupta, Katarzyna Łach, Norwin Schnyder

GEP-91

Client certificate validation, also known as mutual TLS (mTLS), is a security mechanism where the client provides a certificate to the server to prove its identity. This is in contrast to standard TLS, where only the server presents a certificate to the client. In the context of the Gateway API, frontend mTLS means that the Gateway validates the client's certificate before allowing the connection to proceed to a backend service. This validation is done by checking the client certificate against a set of trusted Certificate Authorities (CAs) configured on the Gateway. The API was shaped this way to address a critical security vulnerability related to connection reuse and still provide some level of flexibility.

Configuration overview

Client validation is defined using the frontendValidation struct, which specifies how the Gateway should verify the client's identity.

  • caCertificateRefs: A list of references to Kubernetes objects (typically ConfigMaps) containing PEM-encoded CA certificate bundles used as trust anchors to validate the client's certificate.
  • mode: Defines the validation behavior.
    • AllowValidOnly (Default): The Gateway accepts connections only if the client presents a valid certificate that passes validation against the specified CA bundle.
    • AllowInsecureFallback: The Gateway accepts connections even if the client certificate is missing or fails verification. This mode typically delegates authorization to the backend and should be used with caution.

Validation can be applied globally to the Gateway or overridden for specific ports:

  1. Default Configuration: This configuration applies to all HTTPS listeners on the Gateway, unless a per-port override is defined.
  2. Per-Port Configuration: This allows for fine-grained control, overriding the default configuration for all listeners handling traffic on a specific port.

Example:

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: client-validation-basic
spec:
  gatewayClassName: acme-lb
  tls:
    frontend:
      default:
        validation:
          caCertificateRefs:
          - kind: ConfigMap
            group: ""
            name: foo-example-com-ca-cert
      perPort:
      - port: 8443
        tls:
          validation:
            caCertificateRefs:
            - kind: ConfigMap
              group: ""
              name: foo-example-com-ca-cert
            mode: "AllowInsecureFallback"
  listeners:
  - name: foo-https
    protocol: HTTPS
    port: 443
    hostname: foo.example.com
    tls:
      certificateRefs:
      - kind: Secret
        group: ""
        name: foo-example-com-cert
  - name: bar-https
    protocol: HTTPS
    port: 8443
    hostname: bar.example.com
    tls:
      certificateRefs:
      - kind: Secret
        group: ""
        name: bar-example-com-cert

Certificate selection for Gateway TLS origination

Leads: Marcin Kosieradzki, Rob Scott, Norwin Schnyder, Lior Lieberman, Katarzyna Lach

GEP-3155

Mutual TLS (mTLS) for upstream connections requires the Gateway to present a client certificate to the backend, in addition to verifying the backend's certificate. This ensures that the backend only accepts connections from authorized Gateways.

Gateway’s client certificate configuration

To configure the client certificate that the Gateway uses when connecting to backends, use the tls.backend.clientCertificateRef field in the Gateway resource. This configuration applies to the Gateway as a client for all upstream connections managed by that Gateway.

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: backend-tls
spec:
  gatewayClassName: acme-lb
  tls:
    backend:
      clientCertificateRef:
        kind: Secret
        group: "" # empty string means core API group
        name: foo-example-cert
  listeners:
  - name: foo-http
    protocol: HTTP
    port: 80
    hostname: foo.example.com

ReferenceGrant promoted to v1

The ReferenceGrant resource has not changed in more than a year, and we do not expect it to change further. Its version has therefore been bumped to v1: it is now officially in the Standard channel and abides by the GA API contract (that is, no breaking changes).

Try it out

Unlike other Kubernetes APIs, you don't need to upgrade to the latest version of Kubernetes to get the latest version of Gateway API. As long as you're running Kubernetes 1.30 or later, you'll be able to get up and running with this version of Gateway API.

To try out the API, follow the Getting Started Guide.

As of this writing, seven implementations are already fully conformant with Gateway API v1.5. In alphabetical order:

Get involved

Wondering when a feature will be added? There are lots of opportunities to get involved and help define the future of Kubernetes routing APIs for both ingress and service mesh.

The maintainers would like to thank everyone who's contributed to Gateway API, whether in the form of commits to the repo, discussion, ideas, or general support. We could never have made this kind of progress without the support of this dedicated and active community.


From Clicks to Commands: A Terminal Beginner's Guide

From: kayla.cinnamon
Duration: 35:43
Views: 20

In this video, James and I walk through the differences between terminals, shells, commands, and CLIs.

Links:
GitHub Copilot CLI: https://github.com/features/copilot/cli/

Intro: (00:00)
What is a terminal: (01:15)
What is a shell: (04:33)
What are commands: (07:50)
Terminal + shells demo: (10:37)
Commands demo: (13:00)
What is a CLI: (16:24)
When to use a CLI: (18:43)
CLI demo: (21:50)
What is a TUI: (24:57)
GitHub Copilot CLI: (27:28)
Copilot CLI demo: (29:11)
Outro: (32:46)

Socials:
👩‍💻 GitHub: https://github.com/cinnamon-msft
🐤 X: https://x.com/cinnamon_msft
📸 Instagram: https://www.instagram.com/kaylacinnamon/
🎥: TikTok: https://www.tiktok.com/@kaylacinnamon
🦋 Bluesky: https://bsky.app/profile/kaylacinnamon.bsky.social
🐘 Mastodon: https://hachyderm.io/@cinnamon

Disclaimer: I've created everything on my channel in my free time. Nothing is officially affiliated or endorsed by Microsoft in any way. Opinions and views are my own! 🩷

#terminal #shell #command #cli #github #copilot #tui #developer #development


Random.Code - Adding Features to IronBefunge, Part 1 (ish)

From: Jason Bock
Duration: 55:25
Views: 11

I'm now focusing specifically on my Befunge interpreter, adding a couple of features to the language. The first one is "jump over" - wheee!

https://github.com/JasonBock/IronBefunge/issues/17

#dotnet #csharp


998: How to Fix Vibe Coding


Wes and Scott talk about making AI coding more reliable using deterministic tools like fallow, knip, ESLint, StyleLint, and Sentry. They cover code quality analysis, linting strategies, headless browsers, task workflows, and how to enforce better patterns so AI stops guessing and starts producing maintainable, predictable code.

Show Notes

Sick Picks

Shameless Plugs

Hit us up on Socials!

Syntax: X Instagram Tiktok LinkedIn Threads

Wes: X Instagram Tiktok LinkedIn Threads

Scott: X Instagram Tiktok LinkedIn Threads

Randy: X Instagram YouTube Threads





Download audio: https://traffic.megaphone.fm/FSI3956472750.mp3