Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Announcing OpenAPI support for the Pulumi Cloud REST API


We’re thrilled to announce that the Pulumi Cloud REST API is now described by an OpenAPI 3.0 specification, and we’re just getting started.

This is a feature that has been a long time coming. We have heard your requests for OpenAPI support loud and clear, and we’re excited to share that not only do we have a published specification for consumption, but our API code is now built from this specification as well. Moving forward, this single source of truth unlocks better tooling, tighter integration, and a more predictable API experience for everyone.

You can fetch the spec directly from the API at runtime or use it for client generation, validation, and documentation, all from one machine-readable contract.

A single contract for the Pulumi Cloud REST API

The Pulumi Cloud API powers the Pulumi CLI, the Pulumi Console, and third-party integrations. Until now, there was no single, published machine-readable description of that API. We’ve changed that. The API is now defined and served as a standard OpenAPI 3.0.3 document.

  • Runtime discovery: You can retrieve the spec from the API itself, so your tooling always sees the same surface the service implements.
  • Client generation: Use your favorite OpenAPI tooling (e.g. OpenAPI Generator, Swagger Codegen) to generate API clients in the language of your choice.
  • Validation and testing: Validate requests and responses, or build mocks and tests, from the same spec the service uses.
  • Documentation: The spec is the source of truth, not a separate, hand-maintained API doc that can drift from reality. Load the spec into Swagger UI, Redoc, or another viewer to browse the Pulumi Cloud API interactively.
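Because the contract is machine-readable, even a few lines of code can enumerate the API surface. Here is a minimal Python sketch; the inline sample document and its paths are illustrative stand-ins, not the actual Pulumi spec:

```python
import json

# A tiny stand-in document; the real Pulumi spec is far larger,
# and these paths and operationIds are illustrative only.
SAMPLE_SPEC = json.loads("""
{
  "openapi": "3.0.3",
  "paths": {
    "/api/user": {"get": {"operationId": "getCurrentUser"}},
    "/api/user/stacks": {"get": {"operationId": "listStacks"}}
  }
}
""")

HTTP_METHODS = {"get", "put", "post", "delete", "patch", "head", "options"}

def list_operations(spec: dict) -> list:
    """Collect (method, path, operationId) for every operation in the spec."""
    ops = []
    for path, item in spec.get("paths", {}).items():
        for method, op in item.items():
            if method in HTTP_METHODS:
                ops.append((method.upper(), path, op.get("operationId", "")))
    return ops

for method, path, op_id in list_operations(SAMPLE_SPEC):
    print(f"{method} {path} -> {op_id}")
```

A documentation viewer or client generator does essentially this walk over `paths` to build its navigation or method stubs.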

How to get the spec

Send a GET request to:

https://api.pulumi.com/api/openapi/pulumi-spec.json

No authentication is required. The response is the OpenAPI 3.0 document for the Pulumi Cloud API, describing the supported, documented API surface.
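A minimal Python sketch of fetching and sanity-checking the document; the URL is the endpoint above, while the helper names and the version check are my own illustrative choices:

```python
import json
import urllib.request

SPEC_URL = "https://api.pulumi.com/api/openapi/pulumi-spec.json"

def parse_spec(raw: bytes) -> dict:
    """Decode the response body and confirm it is an OpenAPI 3.x document."""
    spec = json.loads(raw)
    if not str(spec.get("openapi", "")).startswith("3."):
        raise ValueError("not an OpenAPI 3.x document")
    return spec

def fetch_spec(url: str = SPEC_URL) -> dict:
    """Download the spec; per the announcement, no authentication is required."""
    with urllib.request.urlopen(url) as resp:
        return parse_spec(resp.read())

# Offline demo; call fetch_spec() instead to hit the live endpoint.
demo = parse_spec(b'{"openapi": "3.0.3", "info": {"title": "demo"}}')
print(demo["openapi"])  # 3.0.3
```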

Source of truth and stability

We do not hand-write the OpenAPI spec. We generate it from the same API definition that drives our backend and console code. When we add or change API routes or models, we regenerate the spec so the published document stays in sync with what the service actually implements. That gives you a clear, stable contract for the Pulumi Cloud API.

What we are building next

We are using this spec as the foundation for our own tooling, and we plan to keep leveraging it across our toolchain long-term.

  • CLI: We plan to drive the Pulumi CLI’s API client from the OpenAPI spec so that CLI and API stay in lockstep.
  • Pulumi Service Provider: We are also building towards day 1 updates to the Pulumi Service Provider so that new and changed API resources are generated from the spec and ship in sync with the service.
  • Docs Enhancements: Although you can already load the spec into Swagger UI for your own browsing, we intend to ship enhancements to our public REST API docs that keep them in sync with the OpenAPI spec.

As we ship those updates, you will get a single source of truth from API to CLI to provider.

If you have questions or feedback about the OpenAPI spec or the Pulumi Cloud API, reach out in our Community Slack or open an issue in the Pulumi repository. We’re excited to see what you build with it.

Read the whole story
alvinashcraft
53 seconds ago
reply
Pennsylvania, USA
Share this story
Delete

Command Line Interface Consumer for Kafka in C#


When I started working with Kafka, I installed it locally on Docker and used a combination of the Confluent Command Line Interface (CLI) and C# programs I wrote.

One of the CLI tools from Confluent let me produce and consume messages on a Kafka topic by specifying the topic name and broker address. It was an easy way to try things out.

But when I used this more recently, I couldn’t get it to work. I had updated the Docker image and the CLI. There seem to be new authentication requirements when using the CLI, requirements that are not needed when using a C# consumer. I also could not find a tutorial on Confluent about local Kafka usage.

I messed around with it for a while, but eventually decided to write my own simple CLI consumer in C# using the new dotnet run app.cs approach.

Here is the code:

#!/usr/bin/dotnet run
#:package Confluent.Kafka@2.5.0
using Confluent.Kafka;

if (args.Length < 2 || args.Length > 3)
{
    Console.WriteLine("Usage: url topic [groupId]");
    return;
}

string url = args[0];
string topic = args[1];
string groupId = args.Length == 3 ? args[2] : Guid.NewGuid().ToString();
Console.WriteLine($"Using groupId: {groupId}");

ConsumerConfig consumerConfig = new ConsumerConfig
{
    BootstrapServers = url,
    GroupId = groupId,
    AutoOffsetReset = AutoOffsetReset.Latest,
    EnableAutoCommit = true,      // this is the default but good to know about
    EnableAutoOffsetStore = true  // this is the default but good to know about
};

using var consumer = new ConsumerBuilder<string, string>(consumerConfig).Build();
consumer.Subscribe(topic);

Console.WriteLine($"Looking for messages on topic: {topic}");
while (true)
{
    var consumeResult = consumer.Consume();
    Console.WriteLine($"{consumeResult.Message.Value}");
}

This can be executed like a script after it is made executable with chmod +x consumer.cs.

I’ll write a follow up with an example of a producer in a day or two.


Daily Reading List – February 4, 2026 (#714)


Today I was impressed by so many thought-provoking articles and blogs. Maybe the most all year so far!

[article] Google parent beats on revenue, projects significant AI spending increase. Wowza, Cloud had a remarkable quarter (and year). We’re doing a few things right over here.

[article] How should AI agents consume external data? It’s a good question and one I haven’t seen discussed much. Locked down data sources? APIs? Secured caches? Make sure you have some gateways in front.

[article] Leaders, gainers and unexpected winners in the Enterprise AI arms race. It’s early, of course, but there are signals about who the real players are. Interesting to see third party apps GROWING in the enterprise.

[lab] Build and Deploy to Google Cloud with Antigravity. I clicked through this new step-by-step tutorial. It’s an excellent way to see how AI can help with more than just generating new code.

[article] Shared memory is the missing layer in AI orchestration. This has been on my mind a lot. What type of shared knowledge/memory should be stored, who should access it, and how is it fed into tools and apps?

[blog] Self-Improving Coding Agents. Speaking of memory, this excellent post from Addy highlights the value of the “right” memories passed between iterations of agentic loops.

[blog] Advancing AI benchmarking with Game Arena. Why not use games to really test the decision-making of AI models?

[article] My View of Software Engineering Has Changed For Good. It’s unavoidable at this point. Fairly soon, if not already, it’s not going to make much sense to sling all the code yourself.

[article] Most People Can’t Vibe Code. Here’s How We Fix That. Even with millions of new “builders” out there, there’s still a lot of assumed knowledge about software that many don’t have. Do we need different surfaces and platforms than what we have now?

[blog] AI Slopageddon and the OSS Maintainers. With all these new builders, however, we’re breaking open source workflows and maintainer morale.

[blog] Researching Topics in the Age of AI — Rock-Solid Webhooks Case Study. Should we share our AI-driven research reports publicly? Does that contribute to the “slop” problem? Or when guided well, do these reports represent useful knowledge worth disseminating? I lean towards the latter.

[article] How Do Workers Develop Good Judgment in the AI Era? Great question. How do you get good judgment without experience? We have to redesign how we develop that judgment.

[blog] MCP Development with COBOL, Cloud Run, and Gemini CLI. The fact that you can even do this is the takeaway. Not that you should start building MCP servers with COBOL.

Want to get this update sent to you every day? Subscribe to my RSS feed or subscribe via email below:




Announcing Windows 11 Insider Preview Build 28020.1546 (Canary Channel)

Hello Windows Insiders, today we are releasing Windows 11 Insider Preview Build 28020.1546 (KB 5074176) to the Canary Channel. Note that the desktop watermark is showing the wrong build number; this will be addressed in a near-term build.

What’s new in Canary Build 28020.1546

Changes and Improvements gradually being rolled out with toggle on*

  • This update includes a small set of general improvements and fixes that improve the overall experience for Insiders running this build on their PCs.
  • We fixed an issue with apps when working with files on OneDrive or Dropbox.

Reminders for Windows Insiders in the Canary Channel

  • The builds we release to the Canary Channel represent the latest platform changes early in the development cycle and should not be seen as matched to any specific release of Windows. Features and experiences included in these builds may never get released as we try out different concepts and get feedback. Features may change over time, be removed, or replaced and never get released beyond Windows Insiders. Some of these features and experiences could show up in future Windows releases when they’re ready.
  • Many features in the Canary Channel are rolled out using Control Feature Rollout technology, starting with a subset of Insiders and ramping up over time as we monitor feedback to see how they land before pushing them out to everyone in this channel.
  • The desktop watermark shown at the lower right corner of the desktop is normal for Windows Insider pre-release builds.
  • Some features may show up in the Dev and Beta Channels first before showing up in the Canary Channel.
  • Some features in active development we preview with Windows Insiders may not be fully localized and localization will happen over time as features are finalized. As you see issues with localization in your language, please report those issues to us via Feedback Hub.
  • To get off the Canary Channel, a clean install of Windows 11 will be required. As a reminder - Insiders can’t switch to a channel that is receiving builds with lower build numbers without doing a clean installation of Windows 11 due to technical setup requirements.
  • Check out Flight Hub for a complete look at what build is in which Insider channel.
Thanks,
Windows Insider Program Team

*Functionality will vary by device and market; text actions will be available across markets in select character sets. See aka.ms/copilotpluspcs.

No Display? No Problem: Cross-Device Passkey Authentication for XR Devices

  • We’re sharing a novel approach to enabling cross-device passkey authentication for devices with inaccessible displays (like XR devices).
  • Our approach bypasses the use of QR codes and enables cross-device authentication without the need for an on-device display, while still complying with all trust and proximity requirements.
  • This approach builds on work done by the FIDO Alliance and we hope it will open the door to bring secure, passwordless authentication to a whole new ecosystem of devices and platforms.

Passkeys are a significant leap forward in authentication, offering a phishing-resistant, cryptographically secure alternative to traditional passwords. In the standard cross-device passkey flow, someone registers or authenticates on a desktop device by approving the action on a nearby mobile device, typically by scanning a QR code with their phone’s camera. But how can we facilitate this flow for XR devices with a head-mounted display or no screen at all, or for other devices with an inaccessible display like smart home hubs and industrial sensors?

We’ve taken a novel approach to adapting the WebAuthn passkey flow and FIDO’s CTAP hybrid protocol for this unique class of devices that either lack a screen entirely or whose screen is not easily accessible to another device’s camera. Our implementation has been developed and is now broadly available on Meta Quest devices powered by Meta Horizon OS. We hope that this approach can also ensure robust security built on the strength of existing passkey frameworks, without sacrificing usability, for users of a variety of other screenless IoT devices, consumer electronics, and industrial hardware.

The Challenge: No Screen, No QR Code

The standard cross-device flow relies on two primary mechanisms:

  1. QR code scanning: The relying party displays a QR code on the desktop/inaccessible device, which the mobile authenticator scans to establish a secure link.
  2. Bluetooth/NFC proximity: The devices use local communication protocols to discover each other and initiate the secure exchange.

For devices with no display, the QR code method is impossible. Proximity-based discovery is feasible, but initiating the user verification step and confirming the intent without any on-device visual feedback can introduce security and usability risks. People need clear assurance that they are approving the correct transaction on the correct device.

Our Solution: Using a Companion App for Secure Message Transport

Scanning a QR code sends the authenticator device a command to initiate a hybrid (cross-device) login flow with a nonce that identifies the unauthenticated device client. But if the user has a companion application, like the Meta Horizon app, that is signed in to the same account as the device, we can use that application to pass the same request to the authenticator OS and execute it using general link/intent execution.

We made the flow easy to navigate by using in-app notifications to show users when a login request has been initiated, take them directly into the application, and immediately execute the login request.

For simplicity, we opted to begin the hybrid flow as soon as the application is opened. The user has already taken a deliberate action (tapping the notification or opening the app) to trigger this, and hybrid implementations on iOS and Android include an additional user verification step.

Here’s how this plays out on a Meta Quest with the Meta Horizon mobile app:

1. The Hybrid Flow Message Is Generated

When a passkey login is initiated on the Meta Quest, the headset’s browser locally constructs the same payload that would have been embedded in a QR Code – including a fresh ECDH public key, a session-specific secret, and routing information used later in the handshake. Instead of rendering this information into an image (QR code), the browser encodes it into a FIDO URL (the standard mechanism defined for hybrid transport) that instructs the mobile device to begin the passkey authentication flow.
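As a rough illustration of this step, the sketch below packs a payload into a FIDO URL in Python. Every field name is a stand-in, and the base64url encoding is chosen purely for readability; the real hybrid transport defined by CTAP uses CBOR and a different encoding.

```python
import base64
import json
import secrets

def build_hybrid_payload() -> dict:
    """Illustrative stand-in for the hybrid-transport payload; the field
    names and sizes are simplifications, not the actual CTAP wire format."""
    return {
        "operation": "ga",                            # "get assertion" hint
        "public_key": secrets.token_bytes(33).hex(),  # stand-in for an ECDH P-256 point
        "qr_secret": secrets.token_bytes(16).hex(),   # session-specific secret
    }

def encode_fido_url(payload: dict) -> str:
    """Pack the payload into a FIDO URL the companion app can open.
    base64url over JSON is used here purely for readability."""
    raw = json.dumps(payload, sort_keys=True).encode()
    return "FIDO:/" + base64.urlsafe_b64encode(raw).decode().rstrip("=")

url = encode_fido_url(build_hybrid_payload())
print(url[:12])  # FIDO:/eyJvcG
```

On a real headset this URL is what would otherwise have been rendered into the QR code image.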

2. The Message Is Sent to the Companion App

After the FIDO URL is generated, the headset requires a secure and deterministic method for transferring it to the user’s phone. Because the device cannot present a QR code, the system leverages the Meta Horizon app’s authenticated push channel to deliver the FIDO URL directly to the mobile device. When the user selects the passkey option in the login dialog, the headset encodes the FIDO URL as structured data within a GraphQL-based push notification. 

The Meta Horizon app, signed in with the same account as the headset, receives this payload and validates the delivery context to ensure it is routed to the correct user. 

3. The Application Sends a Notification of the Login Request

After the FIDO URL is delivered to the mobile device, the platform’s push service surfaces it as a standard iOS or Android notification indicating that a login request is pending. When the user taps the notification, the operating system routes the deep link to the Meta Horizon app. The app then opens the FIDO URL using the system URL launcher and invokes the operating system passkey interface.

For users who have notifications turned off, launching the Meta Horizon app directly will also trigger a query to the backend for any pending passkey requests associated with the user’s account. If a valid request exists (requests expire after five minutes), the app automatically initiates the same passkey flow by opening the FIDO URL.
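The five-minute freshness rule can be sketched as a simple check; the function and field names here are assumptions for illustration, not Meta’s actual implementation:

```python
import time

REQUEST_TTL_SECONDS = 5 * 60  # pending passkey requests expire after five minutes

def pending_request_is_valid(created_at, now=None):
    """Return True if a queued passkey request is still fresh enough to run."""
    if now is None:
        now = time.time()  # fall back to the current clock
    return (now - created_at) < REQUEST_TTL_SECONDS

print(pending_request_is_valid(created_at=0.0, now=299.0))  # True
print(pending_request_is_valid(created_at=0.0, now=301.0))  # False
```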

Once the FIDO URL is opened, the mobile device begins the hybrid transport sequence, including broadcasting the BLE advertisement, establishing the encrypted tunnel, and producing the passkey assertion. In this flow, the system notification and the app launch path both serve as user consent surfaces and entry points into the standard hybrid transport workflow.

4. The App Executes the Hybrid Command

Once the user approves the action on their mobile device, the secure channel is established as per WebAuthn standards. The main difference is the challenge exchange timing:

  1. The inaccessible device generates the standard WebAuthn challenge and waits.
  2. The mobile authenticator initiates the secure BLE/NFC connection.
  3. The challenge is transmitted over this secure channel.
  4. Upon successful user verification, the mobile device uses the relevant key material to generate the AuthenticatorAssertionResponse or AuthenticatorAttestationResponse.
  5. The response is sent back to the inaccessible device.

The inaccessible device then acts as the conduit, forwarding the response to the relying party server to complete the transaction, exactly as a standard display-equipped device would.
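The exchange above can be sketched as a toy message flow in Python. The class and method names are illustrative, and the BLE tunnel, user verification, and passkey signing are all stubbed out:

```python
import secrets

class InaccessibleDevice:
    """The screenless client (e.g., an XR headset) acting as the WebAuthn client."""
    def __init__(self):
        self.challenge = secrets.token_bytes(32)  # standard WebAuthn challenge

    def forward_to_relying_party(self, assertion: dict) -> bool:
        # In reality this is an HTTPS call to the relying party's
        # verification endpoint; here we just echo-check the challenge.
        return assertion["challenge"] == self.challenge

class MobileAuthenticator:
    """The phone holding the passkey, reached over the encrypted tunnel."""
    def produce_assertion(self, challenge: bytes) -> dict:
        # Stand-in for user verification plus signing with the passkey key.
        return {"challenge": challenge, "signature": b"<signed-by-passkey>"}

device = InaccessibleDevice()
phone = MobileAuthenticator()
# The challenge travels device -> phone over the secure channel; the
# assertion travels back and is forwarded unchanged to the relying party.
assertion = phone.produce_assertion(device.challenge)
print("verified:", device.forward_to_relying_party(assertion))  # verified: True
```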

Impact and Future Direction

This novel implementation successfully bypasses the need for an on-device display in the cross-device flow while still satisfying the proximity and other trust requirements that exist today for cross-device passkey login. We hope that our solution paves the way for secure, passwordless authentication across a wider range of platforms and ecosystems, moving passkeys beyond mobile and desktop environments and into the burgeoning world of wearable and IoT devices.

We are proud to build on, and collaborate with, the excellent work already done in this area by our peers in the FIDO Alliance and the mobile operating system teams committed to building a robust and interoperable ecosystem for secure and easy login.

The post No Display? No Problem: Cross-Device Passkey Authentication for XR Devices appeared first on Engineering at Meta.


The Wayback Machine debuts a new plugin designed to fix the internet’s broken links problem

WordPress is helping the non-profit fight the scourge of "link rot."