
Security considerations when using Passkeys on your website


Passkeys are awesome and that's why we implemented them on Report URI! You can read about our implementation here and get the basics on how Passkeys work and why you want them. In this post, we're going to focus on the security considerations you should keep in mind once you start using Passkeys, and we've produced a whitepaper for you to take away that contains the key information.


What passkeys actually protect

Passkeys are built on WebAuthn and use asymmetric cryptography, offering some incredibly strong protections. The user’s device generates a key pair, the public key is registered with a service like Report URI, and the private key remains protected on the device, often inside secure hardware like a TPM. During authentication, the server issues a challenge and the device signs it after 'user verification', typically biometrics or a PIN. This model gives passkeys some very strong security properties!

First, there is no shared secret for an attacker to steal from the server and replay elsewhere because only the public key is stored with the service. This means that Report URI isn't storing anything sensitive related to Passkeys.

Second, the credential is bound to the correct origin, which makes phishing dramatically less effective. The browser or other device that registered the Passkey knows exactly where it was registered, so a user can't be tricked into using it in the wrong place.

Third, each authentication is challenge-based, which prevents replay, so even if an attacker could capture an authentication flow, it couldn't be used again later.

Fourth and finally, the private key is not exposed to JavaScript running in the page! 🎉

All of that is awesome and each point provides valuable protection. If your threat model includes password reuse, credential stuffing, password spraying, or fake login pages, then Passkeys are a direct and effective improvement.
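To make the challenge-based property concrete, here's a minimal sketch of the server-side logic in Python. It's deliberately dependency-free, so an HMAC stands in for the device's asymmetric signature (real WebAuthn uses public-key signatures); the single-use challenge handling is the part that maps to preventing replay.

```python
# Sketch of challenge-based authentication with single-use challenges.
# NOTE: the HMAC "signature" is a stand-in for the device's asymmetric
# signature, used only to keep this example dependency-free.
import hashlib
import hmac
import secrets

issued_challenges = set()             # challenges handed out, not yet used
device_key = secrets.token_bytes(32)  # stand-in for the device's private key

def issue_challenge() -> bytes:
    """Server: create a fresh, random, single-use challenge."""
    challenge = secrets.token_bytes(32)
    issued_challenges.add(challenge)
    return challenge

def sign(challenge: bytes) -> bytes:
    """Device: 'sign' the challenge (HMAC standing in for ECDSA)."""
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, signature: bytes) -> bool:
    """Server: accept a signature only for a challenge it issued, exactly once."""
    if challenge not in issued_challenges:
        return False                      # unknown or already-used challenge
    issued_challenges.discard(challenge)  # burn it so a replay fails
    return hmac.compare_digest(signature, sign(challenge))

c = issue_challenge()
s = sign(c)
assert verify(c, s) is True    # first use succeeds
assert verify(c, s) is False   # replaying the captured exchange fails
```

Because every login gets a fresh random challenge and the server burns it on first use, a captured authentication flow is worthless to an attacker the second time around.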


Where the threat model shifts

What passkeys do not do is make the authenticated application trustworthy by default. Once the user has successfully authenticated, most applications establish a session using a cookie or token (probably a cookie). The Passkey is helping to solve the problem of reliably authenticating the user, but once that step is complete, we're still falling back to a traditional cookie! Strong passwords, 2FA, Passkeys, and everything else we do all still end up with a cookie(?!).

The question then remains "Can the attacker abuse the authenticated state?", and this is where traditional attacks like XSS and CSRF remain a real threat. Let's look at a few examples of the kind of things that can go wrong:

The first is "session hijacking". If session tokens are accessible to script, XSS can be used to steal them. Even if they are protected with HttpOnly, malicious code can still perform actions inside the victim's authenticated browser without needing to extract the cookie itself!

The second is malicious passkey registration. Let's be crystal clear, XSS cannot extract the victim’s private key or forge WebAuthn responses, but it may still be used to manipulate the user into approving registration of a passkey in an attacker-controlled environment. That creates persistence without breaking WebAuthn itself.

The third is transaction manipulation. This is one of the clearest examples of the gap between strong authentication and trustworthy application behaviour. A user may authenticate securely with a Passkey, but malicious JavaScript can still alter transaction parameters in the page or intercept API requests before submission. The user thinks they approved one action, while the application processes another, and we had probably the best example ever of that with the Bybit hack that cost them $1.4 billion!

To be clear, none of these are Passkey failures; they're application failures, but they're a good example of the risks that remain.


Defence in depth!

Especially after deploying Passkeys, we should continue to maintain a strong focus on protecting against XSS (Cross-Site Scripting). We saw yet again that XSS was the #1 Top Threat of 2025, so we still have a little way to go here, but nonetheless, there's a lot we can do! Tactics like context-aware output encoding, avoiding dangerous DOM sinks, validating and sanitising input, and using modern frameworks safely should all feature high on your list of protections. Finally, of course, there is Content Security Policy. A strict CSP is one of the strongest controls available for reducing the exploitability of XSS and acts as your final line of defence before bad things happen. Blocking inline scripts, restricting script sources, and removing dangerous execution paths like eval() all materially improve your resilience. CSP will not compensate for insecure code, and it isn't meant to, but it can significantly constrain what an attacker can do.
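As an illustrative starting point only (the exact directives are an assumption and depend entirely on your site), a strict CSP might look something like this:

```http
Content-Security-Policy: default-src 'self'; script-src 'self'; object-src 'none'; base-uri 'none'; frame-ancestors 'none'
```

Stricter still is a nonce- or hash-based script-src, which avoids having to trust an entire origin for script execution.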

Following on from a robust CSP, we have Permissions Policy, which is often overlooked. In Passkeys-enabled applications, restricting access to publickey-credentials-get and publickey-credentials-create allows us to control access to WebAuthn API / Credential Management calls. Permissions Policy does not prevent injection, but it does reduce the capabilities available to injected code and helps enforce least privilege across pages and origins. A simple config might look like this, delivered as an HTTP response header:

Permissions-Policy: publickey-credentials-create=(self), publickey-credentials-get=(self)

Then there is security of the cookie itself. I wrote about this all the way back in 2017 in a blog post called Tough Cookies, but here's a quick summary for you. Session cookies should be HttpOnly, Secure, have an appropriate SameSite policy and use at least the __Secure- prefix (or __Host- prefix where possible).
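Pulling those attributes together, a hardened session cookie might be set like this (the cookie name and value here are placeholders):

```http
Set-Cookie: __Host-session=<session-id>; Path=/; Secure; HttpOnly; SameSite=Lax
```

The __Host- prefix requires Secure and Path=/ and forbids a Domain attribute, which stops subdomains from setting or overriding the cookie.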

Finally, sensitive actions need stronger guarantees than “the user has an active session”. High-risk operations such as transferring money, changing recovery settings, or managing credentials should require a fresh authentication challenge to ensure that the user is the one at the keyboard initiating the action.


Read our whitepaper

If you want more information to really understand the threats that exist in a Passkeys-enabled environment, you can download a copy of our white paper that contains detailed information on the problem and the solutions. You can find the white paper on our Passkeys solutions page: https://report-uri.com/solutions/passkeys_protection


Gateway API v1.5: Moving features to Stable


Gateway API logo

The Kubernetes SIG Network community presents the release of Gateway API v1.5! Released on March 14, 2026, version 1.5 is our biggest release yet, and concentrates on moving existing Experimental features to Standard (Stable).

The Gateway API v1.5.1 patch release is already available

The Gateway API v1.5 brings six widely-requested feature promotions to the Standard channel (Gateway API's GA release channel):

  • ListenerSet
  • TLSRoute promoted to Stable
  • HTTPRoute CORS Filter
  • Client Certificate Validation
  • Certificate Selection for Gateway TLS Origination
  • ReferenceGrant promoted to Stable

Special thanks to all the Gateway API contributors for their efforts on this release.

New release process

As of Gateway API v1.5, the project has moved to a release train model, where on a feature freeze date, any features that are ready are shipped in the release.

This applies to both Experimental and Standard, and also applies to documentation -- if the documentation isn't ready to ship, the feature isn't ready to ship.

We are aiming for this to produce a more reliable release cadence (since we are basing our work on the excellent work done by SIG Release on Kubernetes itself). As part of this change, we've also introduced Release Manager and Release Shadow roles to our release team. Many thanks to Flynn (Buoyant) and Beka Modebadze (Google) for all the great work coordinating and filing down the rough edges of our release process. They are both going to continue in this role for the next release as well.

New standard features

ListenerSet

Leads: Dave Protasowski, David Jumani

GEP-1713

Why ListenerSet?

Prior to ListenerSet, all listeners had to be specified directly on the Gateway object. While this worked well for simple use cases, it created challenges for more complex or multi-tenant environments:

  • Platform teams and application teams often needed to coordinate changes to the same Gateway
  • Safely delegating ownership of individual listeners was difficult
  • Extending existing Gateways required direct modification of the original resource

ListenerSet addresses these limitations by allowing listeners to be defined independently and then merged onto a target Gateway.

ListenerSets also enable attaching more than 64 listeners to a single, shared Gateway. This is critical for large scale deployments and scenarios with multiple hostnames per listener.

Even though the ListenerSet feature significantly enhances scalability, the listeners field in Gateway remains required, and the Gateway must have at least one valid listener.

How it works

A ListenerSet attaches to a Gateway and contributes one or more listeners. The Gateway controller is responsible for merging listeners from the Gateway resource itself and any attached ListenerSet resources.

In this example, a central infrastructure team defines a Gateway with a default HTTP listener, while two different application teams define their own ListenerSet resources in separate namespaces. Both ListenerSets attach to the same Gateway and contribute additional HTTPS listeners.

---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-gateway
  namespace: infra
spec:
  gatewayClassName: example-gateway-class
  allowedListeners:
    namespaces:
      from: All # A selector lets you fine tune this
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: ListenerSet
metadata:
  name: team-a-listeners
  namespace: team-a
spec:
  parentRef:
    name: example-gateway
    namespace: infra
  listeners:
  - name: https-a
    protocol: HTTPS
    port: 443
    hostname: a.example.com
    tls:
      certificateRefs:
      - name: a-cert
---
apiVersion: gateway.networking.k8s.io/v1
kind: ListenerSet
metadata:
  name: team-b-listeners
  namespace: team-b
spec:
  parentRef:
    name: example-gateway
    namespace: infra
  listeners:
  - name: https-b
    protocol: HTTPS
    port: 443
    hostname: b.example.com
    tls:
      certificateRefs:
      - name: b-cert

TLSRoute

Leads: Rostislav Bobrovsky, Ricardo Pchevuzinske Katz

GEP-2643

The TLSRoute resource allows you to route requests by matching the Server Name Indication (SNI) presented by the client during the TLS handshake and directing the stream to the appropriate Kubernetes backends.

When working with TLSRoute, a Gateway's TLS listener can be configured in one of two modes: Passthrough or Terminate.

If you install Gateway API v1.5 Standard over v1.4 or earlier Experimental, your existing Experimental TLSRoutes will not be usable. This is because they will be stored in the v1alpha2 or v1alpha3 version, which is not included in the v1.5 Standard YAMLs. If this applies to you, either continue using Experimental for v1.5.1 and onward, or you'll need to download and migrate your TLSRoutes to v1, which is present in the Standard YAMLs.

Passthrough mode

The Passthrough mode is designed for strict security requirements. It is ideal for scenarios where traffic must remain encrypted end-to-end until it reaches the destination backend, when the external client and backend need to authenticate directly with each other, or when you can’t store certificates on the Gateway. This configuration is also applicable when an encrypted TCP stream is required instead of standard HTTP traffic.

In this mode, the encrypted byte stream is proxied directly to the destination backend. The Gateway has zero access to private keys or unencrypted data.

The following TLSRoute is attached to a listener that is configured in Passthrough mode. It will match only TLS handshakes with the foo.example.com SNI hostname and apply its routing rules to pass the encrypted TCP stream to the configured backend:

---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-gateway
spec:
  gatewayClassName: example-gateway-class
  listeners:
  - name: tls-passthrough
    protocol: TLS
    port: 8443
    tls:
      mode: Passthrough
---
apiVersion: gateway.networking.k8s.io/v1
kind: TLSRoute
metadata:
  name: foo-route
spec:
  parentRefs:
  - name: example-gateway
    sectionName: tls-passthrough
  hostnames:
  - "foo.example.com"
  rules:
  - backendRefs:
    - name: foo-svc
      port: 8443

Terminate mode

The Terminate mode provides the convenience of centralized TLS certificate management directly at the Gateway.

In this mode, the TLS session is fully terminated at the Gateway, which then routes the decrypted payload to the destination backend as a plain text TCP stream.

The following TLSRoute is attached to a listener that is configured in Terminate mode. It will match only TLS handshakes with the bar.example.com SNI hostname and apply its routing rules to pass the decrypted TCP stream to the configured backend:

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-gateway
spec:
  gatewayClassName: example-gateway-class
  listeners:
  - name: tls-terminate
    protocol: TLS
    port: 443
    tls:
      mode: Terminate
      certificateRefs:
      - name: tls-terminate-certificate
---
apiVersion: gateway.networking.k8s.io/v1
kind: TLSRoute
metadata:
  name: bar-route
spec:
  parentRefs:
  - name: example-gateway
    sectionName: tls-terminate
  hostnames:
  - "bar.example.com"
  rules:
  - backendRefs:
    - name: bar-svc
      port: 8080

HTTPRoute CORS filter

Leads: Damian Sawicki, Ricardo Pchevuzinske Katz, Norwin Schnyder, Huabing (Robin) Zhao, LiangLliu

GEP-1767

Cross-origin resource sharing (CORS) is an HTTP header-based security mechanism that allows (or denies) a web page to access resources from a server on an origin different from the domain that served the web page. See our documentation page for more information. The HTTPRoute resource can be used to configure Cross-Origin Resource Sharing (CORS). The following HTTPRoute allows requests from https://app.example:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: cors
spec:
  parentRefs:
  - name: same-namespace
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /cors-behavior-creds-false
    backendRefs:
    - name: infra-backend-v1
      port: 8080
    filters:
    - cors:
        allowOrigins:
        - https://app.example
      type: CORS

Instead of specifying a list of specific origins, you can also specify a single wildcard ("*"), which will allow any origin. You can also use partially wildcarded origins in the list, where the wildcard appears after the scheme, at the beginning of the hostname, e.g. https://*.bar.com:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: cors
spec:
  parentRefs:
  - name: same-namespace
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /cors-behavior-creds-false
    backendRefs:
    - name: infra-backend-v1
      port: 8080
    filters:
    - cors:
        allowOrigins:
        - https://www.baz.com
        - https://*.bar.com
        - https://*.foo.com
      type: CORS

HTTPRoute filters allow for the configuration of CORS settings. See a list of supported options below:

  • allowCredentials: Specifies whether the browser is allowed to include credentials (such as cookies and HTTP authentication) in the CORS request.
  • allowMethods: The HTTP methods that are allowed for CORS requests.
  • allowHeaders: The HTTP headers that are allowed for CORS requests.
  • exposeHeaders: The HTTP headers that are exposed to the client.
  • maxAge: The maximum time in seconds that the browser should cache the preflight response.

A comprehensive example:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: cors-allow-credentials
spec:
  parentRefs:
  - name: same-namespace
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /cors-behavior-creds-true
    backendRefs:
    - name: infra-backend-v1
      port: 8080
    filters:
    - cors:
        allowOrigins:
        - "https://www.foo.example.com"
        - "https://*.bar.example.com"
        allowMethods:
        - GET
        - OPTIONS
        allowHeaders:
        - "*"
        exposeHeaders:
        - "x-header-3"
        - "x-header-4"
        allowCredentials: true
        maxAge: 3600
      type: CORS

Gateway client certificate validation

Leads: Arko Dasgupta, Katarzyna Łach, Norwin Schnyder

GEP-91

Client certificate validation, also known as mutual TLS (mTLS), is a security mechanism where the client provides a certificate to the server to prove its identity. This is in contrast to standard TLS, where only the server presents a certificate to the client. In the context of the Gateway API, frontend mTLS means that the Gateway validates the client's certificate before allowing the connection to proceed to a backend service. This validation is done by checking the client certificate against a set of trusted Certificate Authorities (CAs) configured on the Gateway. The API was shaped this way to address a critical security vulnerability related to connection reuse and still provide some level of flexibility.

Configuration overview

Client validation is defined using the frontendValidation struct, which specifies how the Gateway should verify the client's identity.

  • caCertificateRefs: A list of references to Kubernetes objects (typically ConfigMaps) containing PEM-encoded CA certificate bundles used as trust anchors to validate the client's certificate.
  • mode: Defines the validation behavior.
    • AllowValidOnly (Default): The Gateway accepts connections only if the client presents a valid certificate that passes validation against the specified CA bundle.
    • AllowInsecureFallback: The Gateway accepts connections even if the client certificate is missing or fails verification. This mode typically delegates authorization to the backend and should be used with caution.

Validation can be applied globally to the Gateway or overridden for specific ports:

  1. Default Configuration: This configuration applies to all HTTPS listeners on the Gateway, unless a per-port override is defined.
  2. Per-Port Configuration: This allows for fine-grained control, overriding the default configuration for all listeners handling traffic on a specific port.

Example:

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: client-validation-basic
spec:
  gatewayClassName: acme-lb
  tls:
    frontend:
      default:
        validation:
          caCertificateRefs:
          - kind: ConfigMap
            group: ""
            name: foo-example-com-ca-cert
      perPort:
      - port: 8443
        tls:
          validation:
            caCertificateRefs:
            - kind: ConfigMap
              group: ""
              name: foo-example-com-ca-cert
            mode: "AllowInsecureFallback"
  listeners:
  - name: foo-https
    protocol: HTTPS
    port: 443
    hostname: foo.example.com
    tls:
      certificateRefs:
      - kind: Secret
        group: ""
        name: foo-example-com-cert
  - name: bar-https
    protocol: HTTPS
    port: 8443
    hostname: bar.example.com
    tls:
      certificateRefs:
      - kind: Secret
        group: ""
        name: bar-example-com-cert
Certificate selection for Gateway TLS origination

Leads: Marcin Kosieradzki, Rob Scott, Norwin Schnyder, Lior Lieberman, Katarzyna Lach

GEP-3155

Mutual TLS (mTLS) for upstream connections requires the Gateway to present a client certificate to the backend, in addition to verifying the backend's certificate. This ensures that the backend only accepts connections from authorized Gateways.

Gateway’s client certificate configuration

To configure the client certificate that the Gateway uses when connecting to backends, use the tls.backend.clientCertificateRef field in the Gateway resource. This configuration applies to the Gateway as a client for all upstream connections managed by that Gateway.

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: backend-tls
spec:
  gatewayClassName: acme-lb
  tls:
    backend:
      clientCertificateRef:
        kind: Secret
        group: "" # empty string means core API group
        name: foo-example-cert
  listeners:
  - name: foo-http
    protocol: HTTP
    port: 80
    hostname: foo.example.com
ReferenceGrant promoted to v1

The ReferenceGrant resource has not changed in more than a year, and we do not expect it to change further, so its version has been bumped to v1. It is now officially in the Standard channel and abides by the GA API contract (that is, no breaking changes).
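As a reminder of what the resource looks like, here's a minimal sketch of a ReferenceGrant (the names and namespaces are hypothetical) that allows HTTPRoutes in route-ns to reference Services in backend-ns:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: ReferenceGrant
metadata:
  name: allow-routes-to-backends
  namespace: backend-ns   # grants live in the *target* namespace
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: route-ns
  to:
  - group: ""             # core API group
    kind: Service
```

Because the grant is created in the target namespace, the owner of the referenced resources stays in control of who may reference them.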

Try it out

Unlike other Kubernetes APIs, you don't need to upgrade to the latest version of Kubernetes to get the latest version of Gateway API. As long as you're running Kubernetes 1.30 or later, you'll be able to get up and running with this version of Gateway API.

To try out the API, follow the Getting Started Guide.

As of this writing, seven implementations are already fully conformant with Gateway API v1.5. In alphabetical order:

Get involved

Wondering when a feature will be added? There are lots of opportunities to get involved and help define the future of Kubernetes routing APIs for both ingress and service mesh.

The maintainers would like to thank everyone who's contributed to Gateway API, whether in the form of commits to the repo, discussion, ideas, or general support. We could never have made this kind of progress without the support of this dedicated and active community.


From Clicks to Commands: A Terminal Beginner's Guide

From: kayla.cinnamon
Duration: 35:43
Views: 20

In this video, James and I walk through the differences between terminals, shells, commands, and CLIs.

Links:
GitHub Copilot CLI: https://github.com/features/copilot/cli/

Intro: (00:00)
What is a terminal: (01:15)
What is a shell: (04:33)
What are commands: (07:50)
Terminal + shells demo: (10:37)
Commands demo: (13:00)
What is a CLI: (16:24)
When to use a CLI: (18:43)
CLI demo: (21:50)
What is a TUI: (24:57)
GitHub Copilot CLI: (27:28)
Copilot CLI demo: (29:11)
Outro: (32:46)

Socials:
👩‍💻 GitHub: https://github.com/cinnamon-msft
🐤 X: https://x.com/cinnamon_msft
📸 Instagram: https://www.instagram.com/kaylacinnamon/
🎥: TikTok: https://www.tiktok.com/@kaylacinnamon
🦋 Bluesky: https://bsky.app/profile/kaylacinnamon.bsky.social
🐘 Mastodon: https://hachyderm.io/@cinnamon

Disclaimer: I've created everything on my channel in my free time. Nothing is officially affiliated or endorsed by Microsoft in any way. Opinions and views are my own! 🩷

#terminal #shell #command #cli #github #copilot #tui #developer #development


Random.Code - Adding Features to IronBefunge, Part 1 (ish)

From: Jason Bock
Duration: 55:25
Views: 11

I'm now focusing specifically on my Befunge interpreter, adding a couple of features to the language. The first one is "jump over" - wheee!

https://github.com/JasonBock/IronBefunge/issues/17

#dotnet #csharp


998: How to Fix Vibe Coding


Wes and Scott talk about making AI coding more reliable using deterministic tools like fallow, knip, ESLint, StyleLint, and Sentry. They cover code quality analysis, linting strategies, headless browsers, task workflows, and how to enforce better patterns so AI stops guessing and starts producing maintainable, predictable code.

Show Notes

Sick Picks

Shameless Plugs

Hit us up on Socials!

Syntax: X Instagram Tiktok LinkedIn Threads

Wes: X Instagram Tiktok LinkedIn Threads

Scott: X Instagram Tiktok LinkedIn Threads

Randy: X Instagram YouTube Threads





Download audio: https://traffic.megaphone.fm/FSI3956472750.mp3

From Component Teams to Cross-Functional Teams — How to Navigate the Hardest Agile Transformation | Viktor Glinka


Viktor Glinka: From Component Teams to Cross-Functional Teams — How to Navigate the Hardest Agile Transformation

Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

 

"Our customers do not buy our components. They use the product as a whole. And when it comes to integration, the real problem pops up." - Viktor Glinka

 

Viktor brings a challenge many Scrum Masters face: transitioning from component teams to cross-component, cross-functional teams in a large-scale Scrum setup. Picture 8 to 10 teams, each owning their own part of the system, never touching anything else — and the company stuck in delivery for months.

The premise behind component teams sounds logical: specialization leads to speed. But as Viktor explains, that speed is local — optimized for the component, not the product. When integration time arrives, responsibility gaps appear, rework multiplies, and teams start identifying with their components rather than the product. "We're the billing team — we don't deal with anything else." When they reorganized into cross-functional teams, the complaints were immediate: "I was really productive before, and now I can't finish anything."

Viktor and his fellow Scrum Masters took a two-pronged approach. First, they secured time credit from leadership — a couple of months where learning was prioritized over deadlines. They ran mob programming sessions, coached teams, and removed impediments. Second, they shifted focus from outputs to outcomes, organizing customer interviews that helped developers understand what users actually needed. The development director reinforced this by joining refinement sessions, telling teams: "You might not develop anything if it still satisfies the customer need." The result was a shift from transactional stakeholder relationships to genuine cooperation, and teams that began to see beyond their component boundaries.

 

Self-reflection Question: If your teams are organized around components, what would it take to run one experiment — just one sprint — where a team picks up work outside their usual component? What would you need to make that safe?

 

[The Scrum Master Toolbox Podcast Recommends]

🔥In the ruthless world of fintech, success isn't just about innovation—it's about coaching!🔥

Angela thought she was just there to coach a team. But now, she's caught in the middle of a corporate espionage drama that could make or break the future of digital banking. Can she help the team regain their mojo and outwit their rivals, or will the competition crush their ambitions? As alliances shift and the pressure builds, one thing becomes clear: this isn't just about the product—it's about the people.

 

🚨 Will Angela's coaching be enough? Find out in Shift: From Product to People—the gripping story of high-stakes innovation and corporate intrigue.

 

Buy Now on Amazon

 

[The Scrum Master Toolbox Podcast Recommends]

 

About Viktor Glinka

 

Viktor is an organisational consultant and Professional Scrum Master who helps teams and leaders find simpler ways to deliver value while keeping the human side of work at the center. He's practical, curious, and focused on real outcomes rather than buzzwords. His true passion is adaptability - both in business and in personal life.

 

You can link with Viktor Glinka on LinkedIn.





Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20260422_Viktor_Glinka_W.mp3?dest-id=246429