Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Next.js Deployment Adapters: A bright future for Next.js on Google Cloud

1 Share
Read the whole story
alvinashcraft
just a second ago
reply
Pennsylvania, USA
Share this story
Delete

Generative UI Notes


I’m really interested in this emerging idea that the future of web design is Generative UI Design. We see hints of this already in products, like Figma Sites, that tout being able to create websites on the fly with prompts.

Putting aside the clear downsides of shipping half-baked technology as a production-ready product (which, admittedly, is hard to do), the angle I’m particularly looking at is research aimed at using Generative AI (or GenAI) to output personalized interfaces. It’s wild because it flips the way we think about UI design on its head. Rather than anticipating user needs and designing around them, GenAI sees the user’s needs and produces an interface custom-tailored to them. In a sense, a website becomes a snowflake where no two experiences with it are the same.

Again, it’s wild. I’m not here to speculate, opine, or preach on Generative UI Design (let’s call it GenUI for now). Just loose notes that I’ll update as I continue learning about it.

Defining GenUI

Google Research (PDF):

Generative UI is a new modality where the AI model generates not only content, but the entire user experience. This results in custom interactive experiences, including rich formatting, images, maps, audio and even simulations and games, in response to any prompt (instead of the widely adopted “walls-of-text”).

NN/Group:

generative UI (genUI) is a user interface that is dynamically generated in real time by artificial intelligence to provide an experience customized to fit the user’s needs and context.

UX Collective:

A Generative User Interface (GenUI) is an interface that adapts to, or processes, context such as inputs, instructions, behaviors, and preferences through the use of generative AI models (e.g. LLMs) in order to enhance the user experience.

Put simply, a GenUI interface displays different components, information, layouts, or styles, based on who’s using it and what they need at that moment.

[Image: Tree diagram showing three users, followed by inputs, instructions, behaviors, and preferences, which output different webpage layouts. Credit: UX Collective]
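To make that concrete, here’s a loose sketch of what a GenUI flow might look like in code. Everything here (`UserContext`, `UiSpec`, `generateUi`, the specific fields) is hypothetical, not from any real SDK; a real system would hand the context to a generative model and parse a structured response back.

```typescript
// Hypothetical GenUI sketch: all names and fields are illustrative.
interface UserContext {
  device: "phone" | "desktop";
  prefersReducedMotion: boolean;
  recentActions: string[];
}

interface UiSpec {
  layout: "single-column" | "two-column";
  components: string[];
}

// Stand-in for a call to a generative model. A real implementation
// would send the context to an LLM and parse a structured response.
function generateUi(ctx: UserContext): UiSpec {
  const components = ["header"];
  if (ctx.recentActions.includes("searched-pricing")) {
    components.push("pricing-table"); // surface what this user cares about
  }
  components.push("footer");
  return {
    layout: ctx.device === "phone" ? "single-column" : "two-column",
    components,
  };
}
```

The point is the shape of the thing: the interface is an output computed from user context at request time, not a fixed artifact shipped to everyone.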

Generative vs. Predictive AI

It’s easy to dump “AI” into one big bucket, but it’s often distinguished as two different types: predictive and generative.

| | Predictive AI | Generative AI |
| --- | --- | --- |
| Inputs | Uses smaller, more targeted datasets as input data. (Smashing Magazine) | Trained on large datasets containing millions of content samples. (U.S. Congress, PDF) |
| Outputs | Forecasts future events and outcomes. (IBM) | New content, including audio, code, images, text, simulations, and videos. (McKinsey) |
| Examples | ChatGPT, Claude | Sora, Suno, Cursor |

So, when we’re talking about GenAI, we’re talking about the ability to create new materials trained on existing materials. And when we’re talking specifically about GenUI, it’s about generating a user interface based on what the AI knows about the user.

Accessibility

And I should note that what I’m talking about here is not strictly GenUI as we’ve defined it so far (UI output that adapts to the individual user), but rather the “developing” side: interfaces generated from prompts. These so-called AI website builders do not adapt to the individual user, but it’s easy to see them heading in that direction.

The thing I’m most interested in — concerned with, frankly — is to what extent GenUI can reliably output experiences that cater to all users, regardless of impairment, be it aural, visual, physical, etc. There are a lot of different inputs to consider here, and we’ve seen just how awful the early results have been.

That last link is a big poke at Figma Sites. They’re easy to poke because they made the largest commercial push into GenUI-based web development. To their credit (perhaps?), they took the severe pushback seriously and decided to do something about it, announcing updates and publishing a guide for improving accessibility on Figma-generated sites. But even those have limitations that make the effort and advice seem less useful and more about saving face.

Anyway. Plenty of other players have jumped into the game, notably WordPress, but also others like Vercel, Squarespace, Wix, GoDaddy, Lovable, and Reeady.

Some folks are more optimistic than others that GenUI is not only capable of producing accessible experiences, but will replace accessibility practitioners altogether as the technology evolves. Jakob Nielsen famously made that claim in 2024, which drew fierce criticism from the community. Nielsen walked that back a year later, but not much.

I’m not even remotely qualified to offer best practices, opine on the future of accessibility practice, or speculate on future developments and capabilities. But as I look at Google’s People + AI Guidebook, I see no mention at all of accessibility, despite the guide dripping with “human-centered” design principles.

Accessibility is lagging behind the hype, at least from where I sit. That has to change if GenUI is truly the “future” of web design and development.

Examples & Resources

Google has a repository of examples showing how user input can be used to render a variety of interfaces. Going a step further is Google’s Project Genie that bills itself as creating “interactive worlds” that are “generated in real-time.” I couldn’t get an invite to try it out, but maybe you can.

In addition to that, Google has a GenUI SDK designed to integrate into Flutter apps. So, yeah. Connect to your LLM provider and let it rip to create adaptive interfaces.

Thesys is another one in the adaptive GenUI space. Copilot, too.

Generative UI Notes originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.


Container registry configuration

Learn how to configure container registries for your Aspire applications, including generic registries and Azure Container Registry.

Breaking the "Identity Wall" with Tenancy-as-a-Service

Learn how B2B SaaS companies can use Tenancy-as-a-Service to scale past the "Identity Wall" and meet enterprise security requirements like SAML and SCIM.


Astro 6.1

Astro 6.1 introduces codec-specific Sharp image defaults, advanced SmartyPants configuration, and i18n fallback routes for integrations.

Announcing Babylon.js 9.0

Our mission is to build one of the most powerful, beautiful, simple and open web rendering engines in the world. Today, we are thrilled to announce that mission takes a monumental leap forward with the release of Babylon.js 9.0.

https://www.youtube.com/watch?v=Th9mD_D5DrQ

Babylon.js 9.0 represents our biggest and most feature-rich update yet. This is a celebration of an incredible year of new features, optimizations and performance improvements that push the boundaries of what’s possible on the web. From groundbreaking lighting and particle systems to geospatial rendering, animation retargeting and an all-new inspector … Babylon.js 9.0 empowers web developers everywhere to create richer, more immersive experiences than ever before. Whether you’re just beginning your Babylon journey, or you’re a graphics expert leveraging 327 simultaneous AI agents, Babylon is built for you!

Before we dive in, we want to take a moment to humbly thank the incredible community of developers, contributors and advocates who pour their knowledge, expertise and passion into this platform. Babylon.js would not be here without you.

So, let’s dive in and see what’s new!

Clustered Lighting

When a scene has a lot of lights, per-pixel lighting calculations can get incredibly slow. Every single pixel has to compute the lighting contribution from every single light, even if those lights aren’t actually affecting that pixel. Clustered Lighting changes all of that.

Babylon.js 9.0 introduces a powerful new Clustered Lighting system that dramatically speeds up lighting calculations by intelligently grouping lights into screen-space tiles and depth slices. At render time, each pixel only calculates lighting from the lights that actually affect it. The result? Scenes with hundreds or even thousands of lights running at buttery smooth frame rates! This system works on both WebGPU and WebGL 2, bringing next-generation lighting performance to the broadest possible audience.

Check out a demo: https://aka.ms/babylon9CLDemo
Learn more: https://aka.ms/babylon9CLDoc
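The grouping idea can be sketched in a few lines. This is an illustrative toy, not Babylon.js’s implementation (which also slices by depth and runs on the GPU): bin each light into the screen-space tiles its radius overlaps, so shading a pixel only walks that tile’s light list instead of every light in the scene.

```typescript
// Toy sketch of light clustering: bin lights into screen-space tiles.
// Names and structure are illustrative, not the Babylon.js API.
interface ScreenLight { x: number; y: number; radius: number } // in pixels

function buildLightClusters(
  lights: ScreenLight[],
  width: number, height: number, tileSize: number
): number[][][] {
  const cols = Math.ceil(width / tileSize);
  const rows = Math.ceil(height / tileSize);
  const clusters: number[][][] = Array.from({ length: rows }, () =>
    Array.from({ length: cols }, () => [] as number[])
  );
  lights.forEach((light, i) => {
    // Tile range covered by the light's bounding square.
    const minC = Math.max(0, Math.floor((light.x - light.radius) / tileSize));
    const maxC = Math.min(cols - 1, Math.floor((light.x + light.radius) / tileSize));
    const minR = Math.max(0, Math.floor((light.y - light.radius) / tileSize));
    const maxR = Math.min(rows - 1, Math.floor((light.y + light.radius) / tileSize));
    for (let r = minR; r <= maxR; r++) {
      for (let c = minC; c <= maxC; c++) {
        clusters[r][c].push(i); // this light can affect pixels in this tile
      }
    }
  });
  return clusters;
}
```

A pixel at (px, py) then shades only the lights in `clusters[floor(py / tileSize)][floor(px / tileSize)]`, which is where the speedup comes from.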

Example of clustered lights.

Textured Area Lights

Building on the Area Lights introduced in Babylon.js 8.0, we’re excited to announce that area lights in Babylon.js 9.0 now support emission textures! This means you can use any image as a light source for your rectangular area light, enabling effects like stained glass projections, LED panel displays or cinematic lighting setups, all with physically accurate light emission. An offline texture processing tool is also available for production workflows, and a runtime processing option is provided for quick prototyping and experimentation.

Check out a demo: https://aka.ms/babylon9TALDemo
Learn more: https://aka.ms/babylon9TALDoc

Example of Textured Area Lights.

Node Particle Editor

We are absolutely thrilled to introduce the Node Particle Editor (NPE), a brand-new visual tool that lets you create complex particle systems using a powerful, non-destructive node graph. If you’re familiar with Babylon’s Node Material Editor, you’ll feel right at home! The NPE gives you complete control over every aspect of your particle systems (from emission shapes and sprite sheets to update behaviors and sub-emitters) all through an intuitive drag-and-connect interface. Whether you’re creating simple smoke effects or elaborate procedural fireworks, the Node Particle Editor makes it easy, visual and fun.

Check out a demo: https://aka.ms/babylon9NPEDemo
Learn more: https://aka.ms/babylon9NPEDoc

A glowing planet in space.

Particle Flow Maps and Attractors

Want even more control over how your particles behave? Babylon.js 9.0 introduces Flow Maps, a screen-aligned texture that controls the direction and intensity of forces applied to particles based on their position on the screen. Each pixel in the flow map encodes a 3D direction vector and strength, giving you fine-grained, artistic control over particle movement. Flow maps work with both CPU and GPU particle systems, and integrate seamlessly with the new Node Particle Editor.

Babylon.js 9.0 also adds gravity attractors to the particle system toolkit. An attractor is a simple but powerful concept: define a position and a strength, and watch as particles are pulled (or pushed!) toward that point in space. Set a negative strength to create a repulsor. Attractors can be repositioned and adjusted in real time, making it easy to create dynamic, interactive particle effects like swirling vortexes, magnetic fields or explosion shockwaves.

Check out a demo: https://aka.ms/babylon9PartFMDemo
Learn more about Particle Flow Maps: https://aka.ms/babylon9PartFMDoc
Learn more about Particle Attractors: https://aka.ms/babylon9PartAttDoc

Example of Particle Flow Maps.
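The attractor concept is easy to sketch. This is an illustrative stand-in, not the Babylon.js API: the force points from the particle toward the attractor, scaled by strength, and a negative strength flips it into a repulsor.

```typescript
// Illustrative attractor math, not Babylon.js's implementation.
type Vec3 = { x: number; y: number; z: number };

// Force on a particle: normalized direction toward the attractor,
// scaled by strength. Negative strength pushes instead of pulls.
function attractorForce(particle: Vec3, attractor: Vec3, strength: number): Vec3 {
  const dx = attractor.x - particle.x;
  const dy = attractor.y - particle.y;
  const dz = attractor.z - particle.z;
  const len = Math.hypot(dx, dy, dz) || 1; // avoid divide-by-zero at the center
  const s = strength / len;
  return { x: dx * s, y: dy * s, z: dz * s };
}
```

Per simulation step, each particle’s velocity would accumulate this force (times the timestep) for every active attractor, which is what makes real-time repositioning of attractors feel interactive.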

Volumetric Lighting

Realistic light shafts streaming through fog, dust or haze can transform a scene from flat to cinematic. Babylon.js 9.0 makes this easier than ever with a powerful new Volumetric Lighting system. The result is stunningly realistic light scattering with configurable extinction and phase parameters that give you artistic control over how light interacts with the atmosphere. The system supports directional light sources, and takes full advantage of WebGPU compute shaders for optimal performance. WebGL 2 is also supported with graceful fallbacks. Whether you’re building a moody dungeon crawler, a foggy forest or an atmospheric architectural visualization, Volumetric Lighting brings your scenes to life.

Check out a demo: https://aka.ms/babylon9vlDemo
Learn more: https://aka.ms/babylon9vlDoc

Example of Volumetric Lighting.
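As a concrete example of a “phase parameter”: a standard phase function in volumetric scattering is Henyey–Greenstein (whether Babylon.js uses this exact formula is an assumption on my part). A single parameter g in (-1, 1) controls how strongly light scatters forward (g > 0) or backward (g < 0), with g = 0 giving uniform scattering in all directions.

```typescript
// Henyey–Greenstein phase function, a common model for how much light
// scatters at a given angle in a participating medium like fog.
// cosTheta is the cosine of the angle between the light and view
// directions; g is the anisotropy parameter in (-1, 1).
function henyeyGreenstein(cosTheta: number, g: number): number {
  const g2 = g * g;
  const denom = 1 + g2 - 2 * g * cosTheta;
  return (1 - g2) / (4 * Math.PI * Math.pow(denom, 1.5));
}
```

In a volumetric renderer this is evaluated per sample along each view ray and multiplied with the extinction term to decide how much in-scattered light reaches the camera.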

Frame Graph

One of the most transformative features in Babylon.js 9.0 is the Frame Graph system. Introduced as an alpha feature in 8.0, the Frame Graph is now a fully realized v1 feature that gives you complete, fine-grained control over the entire rendering pipeline. A Frame Graph is a Directed Acyclic Graph (DAG) where each node represents a rendering task, from object culling to post-processing. You declare what resources each task needs and produces, and the system intelligently manages texture allocation, reuse and optimization. This means substantial GPU memory savings (we’ve seen 40% or more in some cases!) and a level of rendering flexibility that was simply not possible before. You can customize and compose your own rendering pipeline visually using the Node Render Graph Editor, or programmatically through the class framework. No more opaque render black boxes!

Check out a demo: https://aka.ms/babylon9FGDemo
Learn more: https://aka.ms/babylon9FGDoc

Example of Frame Graph.
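The declare-what-you-read-and-write idea can be sketched as a tiny dependency scheduler. This toy (not Babylon.js’s code) derives a valid execution order from each task’s declared resources using a depth-first topological sort over the DAG.

```typescript
// Toy frame-graph scheduler: each task declares the resources it reads
// and writes, and a valid order falls out of those dependencies.
// Illustrative only; a real frame graph also handles resource aliasing.
interface RenderTask { name: string; reads: string[]; writes: string[] }

function scheduleTasks(tasks: RenderTask[]): string[] {
  const producer = new Map<string, number>();
  tasks.forEach((t, i) => t.writes.forEach(r => producer.set(r, i)));
  const order: string[] = [];
  const state: ("unvisited" | "visiting" | "done")[] = tasks.map(() => "unvisited");
  const visit = (i: number): void => {
    if (state[i] === "done") return;
    if (state[i] === "visiting") throw new Error("cycle in frame graph");
    state[i] = "visiting";
    // A task runs after every task that produces a resource it reads.
    for (const r of tasks[i].reads) {
      const p = producer.get(r);
      if (p !== undefined) visit(p);
    }
    state[i] = "done";
    order.push(tasks[i].name);
  };
  tasks.forEach((_, i) => visit(i));
  return order;
}
```

Once execution order and resource lifetimes are explicit like this, the system can also see when two transient textures never overlap in time, which is where the GPU memory savings come from.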

Animation Retargeting

Animation retargeting is a game-changer for anyone working with character animations. New in Babylon.js 9.0, the retargeting system allows you to take an animation created for one character and apply it to a completely different character, even if they have different skeleton structures, bone proportions or naming conventions. The system mathematically remaps each animated bone transform from the source skeleton to the target, compensating for differences in reference pose, bone length and hierarchy. This means you can share an entire library of locomotion, combat or facial animations across many characters. An interactive Animation Retargeting Tool is also available for experimentation without writing any code!

Check out a demo: https://aka.ms/babylon9ARDemo
Learn more: https://aka.ms/babylon9ARDoc

Three game characters with their arms held out horizontally.
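One small piece of what “compensating for bone length” means can be sketched in isolation. This is a deliberately simplified illustration (real retargeting also remaps rotations relative to each skeleton’s reference pose and walks the hierarchy): scale the animated translation by the ratio of target to source bone length so the motion keeps its proportions.

```typescript
// Simplified bone-length compensation for retargeting; illustrative only.
type Vec3 = { x: number; y: number; z: number };

// Scale an animated translation so a character with longer (or shorter)
// bones moves proportionally instead of stretching or squashing.
function retargetTranslation(
  t: Vec3,
  sourceBoneLength: number,
  targetBoneLength: number
): Vec3 {
  const k = targetBoneLength / sourceBoneLength;
  return { x: t.x * k, y: t.y * k, z: t.z * k };
}
```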

Advanced Gaussian Splat Support

Babylon.js 7.0 introduced Gaussian Splatting, and Babylon.js 9.0 takes it to the next level. This release brings a host of advanced capabilities including support for multiple file formats (.PLY, .splat, .SPZ, and Self-Organizing Gaussians .SOG/.SOGS), Triangular Splatting for opaque mesh-like rendering, shadow casting support and the ability to combine multiple Gaussian Splat assets into a single scene with global splat sorting. You can now programmatically create, modify and download Gaussian Splat data, and each part of a composite splat scene can be independently transformed and animated. The result? Unprecedented flexibility for working with photorealistic volumetric captures on the web. Huge shout out to Adobe for their wonderful contributions to advancing Gaussian Splat support!

Check out a demo: https://aka.ms/babylon9GSDemo
Learn more: https://aka.ms/babylon9GSDoc

Pink lawn chairs lined up in a circle around a fire pit.

Those are just some of the standout features in Babylon.js 9.0; there’s much more to explore. Stay tuned for the next posts to learn more about our tooling updates and new geospatial features.