
Start building for glasses, new devices for Android XR and more in The Android Show | XR Edition


Posted by Matthew McCullough - VP of Product Management, Android Developer



Today, during The Android Show | XR Edition, we shared a look at the expanding Android XR platform, which is fundamentally evolving to bring a unified developer experience to the entire XR ecosystem. The latest announcements, from Developer Preview 3 to exciting new form factors, are designed to give you the tools and platform you need to create the next generation of XR experiences. Let's dive into the details!

A spectrum of new devices ready for your apps

The Android XR platform is quickly expanding, providing more users and more opportunities for your apps. This growth is anchored by several new form factors that expand the possibilities for XR experiences.


A major focus is on lightweight, all-day wearables. At I/O, we announced that we are working with Samsung and our partners Gentle Monster and Warby Parker to design stylish, lightweight AI glasses and Display AI glasses that you can wear comfortably all day. The integration of Gemini on glasses is set to unlock helpful, intelligent experiences like live translation and searching what you see.

And partners like Uber are already exploring how AI Glasses can streamline the rider experience by providing simple, contextual directions and trip status right in the user’s view.


The ecosystem is simultaneously broadening its scope to include wired XR glasses, exemplified by Project Aura from XREAL. This device blends the immersive experiences typically found in headsets with portability and real-world presence. Project Aura is scheduled for launch next year.

New tools unlock development for all form factors

If you are developing for Android, you are already developing for Android XR. The release of Android XR SDK Developer Preview 3 brings increased stability for headset APIs and, most significantly, opens up development for AI Glasses. 


You can now build augmented experiences for AI glasses using new libraries like Jetpack Compose Glimmer, a UI toolkit for transparent displays, and Jetpack Projected, which lets you extend your Android mobile app directly to glasses. Furthermore, the SDK now includes powerful ARCore for Jetpack XR updates, such as Geospatial capabilities for wayfinding.



For immersive experiences on headsets and wired XR glasses like Project Aura from XREAL, this release also provides new APIs for detecting a device's field-of-view, helping your adaptive apps adjust their UI.

Check out our post on the Android XR Developer Preview 3 to learn more about all the latest updates. 

Expanding your reach with new engine ecosystems

The Android XR platform is built on the OpenXR standard, enabling integration with the tools you already use so you can build with your preferred engine.

Developers can use Unreal Engine's native Android and OpenXR capabilities today to build for Android XR, leveraging the existing VR Template for immersive experiences. To provide additional, optimized extensions for the Android XR platform, a Google vendor plugin, including support for hand tracking, hand mesh, and more, will be released early next year.

Godot now includes Android XR support, leveraging its focus on OpenXR to enable development for devices like Samsung Galaxy XR. The new Godot OpenXR vendor plugin v4.2.2 stable allows developers to port their existing projects to the platform. 



Watch The Android Show | XR Edition

Thank you for tuning into The Android Show | XR Edition. Start building differentiated experiences today using the Developer Preview 3 SDK and test your apps with the XR Emulator in Android Studio. Your feedback is crucial as we continue to build this platform together. Head over to developer.android.com/xr to learn more and share your feedback.



Build for AI Glasses with the Android XR SDK Developer Preview 3 and unlock new features for immersive experiences


Posted by Matthew McCullough – VP of Product Management, Android Developer

In October, Samsung launched Galaxy XR - the first device powered by Android XR. And it’s been amazing seeing what some of you have been building! Here’s what some of our developers have been saying about their journey into Android XR.

Android XR gave us a whole new world to build our app within. Teams should ask themselves: What is the biggest, boldest version of your experience that you could possibly build? This is your opportunity to finally put into action what you’ve always wanted to do, because now, you have the platform that can make it real.

You’ve also seen us share a first look at other upcoming devices that work with Android XR like Project Aura from XREAL and stylish glasses from Gentle Monster and Warby Parker.

To support the expanding selection of XR devices, we are announcing Android XR SDK Developer Preview 3!




With Android XR SDK Developer Preview 3, on top of building immersive experiences for devices such as Galaxy XR, you can also now build augmented experiences for upcoming AI Glasses with Android XR.

New tools and libraries for augmented experiences

With Developer Preview 3, we are unlocking the tools and libraries you need to build intelligent and hands-free augmented experiences for AI Glasses. AI Glasses are lightweight and portable for all-day wear. You can extend your existing mobile app to take advantage of the built-in speakers, camera, and microphone to provide new, thoughtful, and helpful user interactions. With the addition of a small display on display AI Glasses, you can privately present information to users. AI Glasses are perfect for experiences that help enhance a user’s focus and presence in the real world.


To power augmented experiences on AI Glasses, we are introducing two new, purpose-built libraries to the Jetpack XR SDK:

  • Jetpack Projected - built to bridge mobile devices and AI Glasses with features that allow you to access sensors, speakers, and displays on glasses

  • Jetpack Compose Glimmer - new design language and UI components for crafting and styling your augmented experiences on display AI Glasses


Jetpack Compose Glimmer is a demonstration of design best practices for beautiful, optical see-through augmented experiences. With UI components optimized for the input modality and styling requirements of display AI Glasses, Jetpack Compose Glimmer is designed for clarity, legibility, and minimal distraction.

To help visualize and test your Jetpack Compose Glimmer UI, we are introducing the AI Glasses emulator in Android Studio. The new AI Glasses emulator can simulate glasses-specific interactions such as touchpad and voice input.



Beyond the new Jetpack Projected and Jetpack Compose Glimmer libraries, we are also expanding ARCore for Jetpack XR to support AI Glasses. We are starting off with motion tracking and geospatial capabilities for augmented experiences - the exact features that enable you to create helpful navigation experiences perfect for all-day-wear devices like AI Glasses.


Expanding support for immersive experiences

We continue to invest in the libraries and tooling that power immersive experiences for XR Headsets like Samsung Galaxy XR and wired XR Glasses like the upcoming Project Aura from XREAL. We’ve been listening to your feedback and have added several highly-requested features to the Jetpack XR SDK since developer preview 2.


Jetpack SceneCore now features dynamic glTF model loading via URIs and improved materials support for creating new PBR materials at runtime. Additionally, the SurfaceEntity component has been enhanced with full Widevine Digital Rights Management (DRM) support and new shapes, allowing it to render 360-degree and 180-degree videos in spheres and hemispheres.

In Jetpack Compose for XR, you'll find new features like the UserSubspace component for follow behavior, ensuring content remains in the user's view regardless of where they look. Additionally, you can now use spatial animations for smooth transitions like sliding or fading. And to support an expanding ecosystem of immersive devices with diverse display capabilities, you can now specify layout sizes as fractions of the user’s comfortable field of view.

In Material Design for XR, new components automatically adapt spatially via overrides. These include dialogs that elevate spatially, and navigation bars, which pop out into an Orbiter. Additionally, there is a new SpaceToggleButton component for easily transitioning to and from full space.

And in ARCore for Jetpack XR, new perception capabilities have been added, including face tracking with 68 blendshape values, unlocking a world of facial gestures. You can also use eye tracking to power virtual avatars, and depth maps to enable more-realistic interactions with a user’s environment.

For devices like Project Aura from XREAL, we are introducing the XR Glasses emulator in Android Studio. This essential tool is designed to give you accurate content visualization, while matching real device specifications for Field of View (FoV), Resolution, and DPI to accelerate your development.


If you build immersive experiences with Unity, we’re also expanding your perception capabilities in the Android XR SDK for Unity. In addition to lots of bug fixes and other improvements, we are expanding tracking capabilities to include: QR and ArUco codes, planar images, and body tracking (experimental). We are also introducing a much-requested feature: scene meshing. It enables you to have much deeper interactions with your user’s environment - your digital content can now bounce off of walls and climb up couches!

And that’s just the tip of the iceberg! Be sure to check out our immersive experiences page for more information.

Get Started Today!

The Android XR SDK Developer Preview 3 is available today! Download the latest Android Studio Canary (Otter 3, Canary 4 or later), upgrade to the latest emulator version (36.4.3 Canary or later), and then visit developer.android.com/xr to get started with the latest libraries and samples you need to build for the growing selection of Android XR devices. We’re building Android XR together with you! Don’t forget to share your feedback, suggestions, and ideas with our team as you progress on your journey in Android XR.



Aggregates in DDD: Model Rules, Not Relationships


In a recent video I did about Domain-Driven Design Misconceptions, there was a comment that turned into a great thread that I want to highlight. Specifically, somebody left a comment about their problem with Aggregates in DDD.

Their example: if you have a chat, it has millions of messages. If you have a user, it has millions of friends, and so on. It’s impractical to load an aggregate that large into memory just to enforce invariants.

So the example I’m going to use in this post is the rule: a group chat cannot have more than 100,000 members.

The assumption here is that aggregates need to hold all the information. They need to know about all the users. But that’s not what aggregates are for!

I’m going to show four different options for how you can model this. One of them is not using an aggregate at all. And, of course, the trade-offs with each approach.

YouTube

Check out my YouTube channel, where I post all kinds of content on Software Architecture & Design, including this video showing everything in this post.

The Common Starting Point (and the Trap)

View the code on Gist.

So this is how people often start with aggregates in DDD, which is directly what that comment was talking about. Say we have a GroupChat class. This is our aggregate. We’re defining our max number of members as 100,000. And then we have this list, this collection of all the members, all the users associated to this group chat.

Now, this user could itself be pretty heavy in terms of username, email address, a bunch of other information, and maybe some relationships with it.

Then, for our method to add a new member, all we’re doing is checking to make sure we’re not exceeding 100,000, and throwing if we are.

This is where people start. But here’s the problem with it.

It may feel intuitive, but it’s a trap. It’s a trap because you’re querying and pulling all that data from your database into memory to enforce a very simple rule.

The big mistake here is: we’re modeling relationships, not the rules.

We’re building up this object graph rather than modeling behaviors.
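
Since the embedded gists aren't reproduced on this page, here's a rough TypeScript sketch of what that starting point typically looks like (names and details are illustrative, not the original code):

// Illustrative sketch of the "load everything" starting point (not the original gist).
interface User {
  id: string;
  username: string;
  email: string;
  // ...plus whatever other data and relationships hang off a user
}

class GroupChat {
  private static readonly MAX_MEMBERS = 100_000;
  private members: User[] = []; // every member gets loaded into memory

  addMember(user: User): void {
    if (this.members.length >= GroupChat.MAX_MEMBERS) {
      throw new Error("A group chat cannot have more than 100,000 members");
    }
    this.members.push(user);
  }
}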

Option 1: Store Only the Count

View the code on Gist.

An alternative is to just record the number of members of the group chat. That’s actually the rule we’re trying to enforce. We don’t need to know who is associated to the group chat. We don’t need to know which users, just the total number so we can enforce the rule.

The obvious benefit is we solved the problem: we don’t have to load all those users into memory. This is going to be very fast.

The trade-off is if you do need to track which users are part of which group, you’ll have to model that separately.
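
In sketch form (again illustrative, not the gist), the aggregate keeps only what the rule needs:

// The aggregate stores a count instead of the full member list.
class GroupChat {
  private static readonly MAX_MEMBERS = 100_000;

  constructor(private memberCount: number) {}

  addMember(): void {
    if (this.memberCount >= GroupChat.MAX_MEMBERS) {
      throw new Error("A group chat cannot have more than 100,000 members");
    }
    this.memberCount++;
  }
}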

Option 2: Enforce the Rule Above the Aggregate

View the code on Gist.

Another option, if you feel storing a count is too risky because it could get out of sync, and you’re already recording which users are associated to which group, is to push the invariant up a layer, above the aggregate, into some type of application request or application layer.

Here I’m using some kind of read model or projection to get the number of users. Because it’s a projection, it could be stale. That’s the trade-off. Then we enforce the invariant there. If we pass, we add the user to the group chat.
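
Roughly, the shape looks like this (the read model and repository interfaces here are assumptions, just to show where the check lives):

const MAX_MEMBERS = 100_000;

// The application layer checks the invariant against a read model/projection,
// then performs the actual write if the check passes.
async function addMemberToGroupChat(
  groupChatId: string,
  userId: string,
  readModel: { getMemberCount(groupChatId: string): Promise<number> },
  members: { add(groupChatId: string, userId: string): Promise<void> },
): Promise<void> {
  const count = await readModel.getMemberCount(groupChatId); // projection - could be stale
  if (count >= MAX_MEMBERS) {
    throw new Error("A group chat cannot have more than 100,000 members");
  }
  await members.add(groupChatId, userId);
}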

A fair argument here is: “Well, really? We have some aggregates enforcing invariants, some application or service layer enforcing invariants, everything scattered everywhere.” But the reality is: you have to enforce rules where you can do so reliably, not where it always feels clean and tidy in some centralized place.

An aggregate can only enforce a rule if it has all the data it needs. And often your application or service layer isn’t just a pass-through. It shouldn’t be. It’s doing orchestration, gathering information and deciding whether a command should be executed.

Option 3: No Aggregate At All (Transaction Script)

View the code on Gist.

This might sound surprising, but you don’t actually need an aggregate at all. Sometimes I advocate for using transaction scripts when they fit best.

That’s what I’m doing here: start a transaction. Set the right isolation level. Interact with the database. Do a SELECT COUNT(*). That’s going to be very fast with the right index. Lock if needed. Check the invariant. Insert the new record. Commit the transaction.
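
Here's a sketch of that flow in TypeScript against a Postgres-style client (illustrative only, not the gist):

const MAX_MEMBERS = 100_000;

// Minimal client interface so the sketch stands on its own.
interface Db {
  query(sql: string, params?: unknown[]): Promise<{ rows: Array<Record<string, unknown>> }>;
}

async function addMember(db: Db, groupChatId: string, userId: string): Promise<void> {
  // Serializable isolation (or an explicit lock) protects the count-then-insert.
  await db.query("BEGIN ISOLATION LEVEL SERIALIZABLE");
  try {
    const result = await db.query(
      "SELECT COUNT(*) AS count FROM group_chat_members WHERE group_chat_id = $1",
      [groupChatId],
    );
    if (Number(result.rows[0].count) >= MAX_MEMBERS) {
      throw new Error("A group chat cannot have more than 100,000 members");
    }
    await db.query(
      "INSERT INTO group_chat_members (group_chat_id, user_id) VALUES ($1, $2)",
      [groupChatId, userId],
    );
    await db.query("COMMIT");
  } catch (err) {
    await db.query("ROLLBACK");
    throw err;
  }
}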

Simple.

Sometimes a simple problem just needs a simple solution, and a transaction script is very valid.

The trade-off here is if you’re in a domain with a lot of complexity and a lot of rules, this can get out of hand and hard to manage.

Option 4: Model Rules, Not Relationships

View the code on Gist.

Another option I mentioned earlier is: stop focusing on relationships and focus on the actual rule.

What makes us say the group chat is the one that needs to enforce the rule? Maybe there’s actually the concept of group membership, and group chat is about handling messages. These have different responsibilities.

That’s really what I want to emphasize: you don’t need one model to rule them all. You can enforce something in one place and something else somewhere else. You can have a group membership component enforcing whether you can join, and group chat is just about messages.
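
Sketching that split (names are illustrative):

// Membership has its own model and its own rule...
class GroupMembership {
  private static readonly MAX_MEMBERS = 100_000;

  constructor(private readonly groupChatId: string, private memberCount: number) {}

  join(userId: string): void {
    if (this.memberCount >= GroupMembership.MAX_MEMBERS) {
      throw new Error("A group chat cannot have more than 100,000 members");
    }
    this.memberCount++;
    // ...record that userId joined this.groupChatId
  }
}

// ...while GroupChat only cares about messages.
class GroupChat {
  postMessage(senderId: string, text: string): void {
    // messaging rules live here - no membership bookkeeping
  }
}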

There are all kinds of approaches you can take, and they all have different trade-offs. Given the rule and how you’re modeling, pick what fits. It does not need to be an aggregate just because dogma says so.

Maybe it’s a transaction script. Maybe it’s an aggregate. Use what fits best.

When you’re modeling something like the group chat example, start with the rule. Ask yourself: Where can I reliably and efficiently enforce this rule?
Not: “How can I convert this schema into my object model?”

Too long, didn’t read/watch: model rules, not relationships.

Join CodeOpinion!
Developer-level members of my Patreon or YouTube channel get access to a private Discord server to chat with other developers about Software Architecture and Design, and access to source code for any working demo application I post on my blog or YouTube. Check out my Patreon or YouTube Membership for more info.

The post Aggregates in DDD: Model Rules, Not Relationships appeared first on CodeOpinion.


Figma MCP vs Kombai: Cloning the Front End from Figma with AI Tools


Frontend automation is moving fast. Tools like Figma MCP and Kombai can read design context and generate working UI code. I wanted to see what you actually get in practice, so I decided to compare them.

Figma MCP exposes design metadata to AI clients, while Kombai is a frontend-first agent that integrates with editors and existing stacks.

In this article, we’ll feed the same two Figma files into both tools, review how close the output is to the designs, and look at the code structure in a real editor.

Table of Contents

  1. What's the Deal?

  2. Meet the Tools

  3. Frontend Comparison with Figma

  4. Test 1: Simple Portfolio Design

  5. Test 2: Complex Learning Dashboard

  6. What You Should Know Before Using These Tools

  7. Final Verdict and What's Next?

  8. Conclusion

What's the Deal?

Cloning complex Figma designs by hand isn’t fun anymore, nor is writing your CSS line by line with exact precision.

And sure, you can attach a screenshot or whatever to GPT, but it often ends up with something that barely looks like your design. That's where Kombai or the Figma MCP come in.

They actually get your Figma design metadata and give you frontend code that's super close to the real thing.

So now, instead of spending hours rebuilding what's already in your design file, you can focus more on small tweaks and what actually matters.

Meet the Tools

Kombai

Kombai - AI Agent for Frontend

Kombai is an AI agent designed for frontend work. It takes input from Figma (like text, images, or your existing code), understands your stack, and converts it into clean, production-ready UI.

💡 It’s made specifically for frontend work, so you can expect it to be very good at that (unlike more generic tools like ChatGPT or Claude).

Kombai also handles large repositories easily. It doesn't just convert Figma designs into code. It actually understands your entire frontend codebase, even if it's huge.

So, even if you're working on a small side project or a very large production app, it can read, change, and write code that fits perfectly into your existing project.

Note: Kombai isn’t just good at cloning Figma designs and writing clean code. It actually understands your whole repo, too. You can chat with it like GPT, but it already knows your frontend. It can help refactor code, clean things up, or make changes without ever touching your backend logic.

Pretty handy, right?

No backend code is ever touched, which ensures none of your business logic is mistakenly changed.

You can also add Kombai right inside your editor. It works with VSCode, Cursor, Windsurf, and Trae. Just grab it from the extension marketplace, launch it, and you’re ready to go.

With Kombai, you can:

  • Turn Figma designs into code (React, HTML, CSS, and so on) using the component library your project already uses.

  • Work with a frontend-smart engine that understands 30+ libraries including Next.js, MUI, and Chakra UI.

  • Stay in your editor, follow your own conventions, and ship faster with good accuracy.

  • And most importantly, preview the changes in a sandbox so you can approve or reject the change before committing it to the files.

You can be up and running in under a minute. Here are the steps to get started:

  • Install the extension for your editor

  • Sign in and connect your project

  • Paste a Figma link or describe what you want to build

  • Review the output and commit your code

You can find it in the Extension marketplace of your IDE.

Kombai - Cursor marketplace extension

Now, using it is just as simple as accessing it from the left sidebar and having a chat similar to how you would with ChatGPT. (Optionally, you can add your tech stack, but Kombai handles it automatically.)

Kombai open inside the Cursor editor, highlighting the user interface

Head to the docs to get started and find the setup for your editor.

Pricing Note: Kombai is a paid tool but gives you a free plan with 300 credits per month, which is great for personal projects. For more advanced workflows, you can move up to the Pro plan or the Enterprise plan.

If you spend most of your time on the frontend, Kombai may be a good fit.

Figma MCP

Figma MCP (Model Context Protocol) lets AI agents connect directly to your Figma files. It closes the gap between your designs and your AI tools by giving them structured access to real design data instead of relying on screenshots or rough estimates.

It works by exposing your design's node tree, styles, layout rules, and component structure so the model can build the UI with actual design data.

That means tools like Claude Code, Gemini CLI, Cursor, and VSCode can actually read your designs, including layers, components, colors, spacing, and text, and use that context to generate accurate, production-ready code or design updates.

With Figma MCP, you can:

  • Let AI tools pull live data from your Figma files, so your code suggestions always match your latest designs

  • Ask your AI assistant to inspect components, layouts, or styles directly from Figma

  • Generate UI code that reflects real design and structure instead of guessing from an image

  • Keep designers and developers in sync without constantly sending files back and forth.

Setting it up is simple:

  • Run the Figma MCP server locally

  • Authorize your Figma workspace

  • Connect your editor or AI tool (Cursor, Claude Code, Gemini CLI, and so on)

For this test, I'll be using Figma MCP inside Claude Code on Linux, and setting it up is as simple as adding the following JSON to your Claude configuration file ~/.claude.json:

{
  "mcpServers": {
    "Framelink MCP for Figma": {
      "command": "npx",
      "args": ["-y", "figma-developer-mcp", "--figma-api-key=YOUR-KEY", "--stdio"]
    }
  }
}

For Windows users:

{
  "mcpServers": {
    "Framelink MCP for Figma": {
      "command": "cmd",
      "args": ["/c", "npx", "-y", "figma-developer-mcp", "--figma-api-key=YOUR-KEY", "--stdio"]
    }
  }
}

Pricing Note: To use Figma MCP, you need to have a paid Figma plan, either Professional, Organization, or Enterprise. But there's a community-maintained open-source MCP server, Figma-Context-MCP, that you can test out for free – which I'll be using for this test.

Once it’s running, any MCP-supported tool can understand your design files, making frontend development much more accurate.

Check the Figma MCP Guide to get started.

Frontend Comparison with Figma

For this test, we'll be comparing Kombai with Figma MCP using two Figma designs: one is a simple portfolio design, and the other is a more complex learner dashboard.

NOTE: For this test with Figma MCP, I'll be using Sonnet 4, which, in my experience, has been the best model for coding the frontend. I've also tested with the recent GPT-5 and Opus 4, but Sonnet 4 seems to be the best for frontend work. If you want to try other models, feel free to do so and see if you notice much difference in the results.

💁 Prompt: Clone this Figma design from this Figma frame link attached. Write clean, maintainable, and responsive code that matches the design closely. Keep components simple, reusable, and production-ready.

Quick note about the videos in the next section: The demo recordings are pretty long because I kept them raw. The idea is to show how the tools behave in real time. If you only care about the final output, feel free to skip to the end of each video.

Test 1: Simple Portfolio Design

Let's start with a simpler design that doesn't have much going on in the UI.

You can find the Figma design template here: Personal Portfolio Template

Figma MCP

Here's the response from Figma MCP:

This is pretty decent. The overall UI looks good, and the colors and fonts are all accurate. The biggest visual issues are with the hero image and a few icon placements, which are a bit off compared to the original Figma file.

The overall implementation took just about 5 minutes of coding and achieved this entire result in one go, as you see in the video demo. The time it takes isn't really dependent on the MCP itself but mostly on the model, so the timings will vary based on the model you choose to work with. The timing is something you can simply ignore here.

The whole page is split into sensible components (Header, Hero, Projects, ProjectCard, Footer) and composed in a clean page.tsx.

export default function Home() {
  return (
    <div className="min-h-screen bg-bg-gray">
      <Header />
      <main>
        <Hero />
        <Projects />
      </main>
      <Footer />
    </div>
  );
}

That is a nice, readable starting point for a Next app.

You can find the code it generated here.

But here are some issues I noticed right away:

  1. The hero decoration is positioned with pretty brittle absolute values:
<div className="hidden lg:block absolute right-0 top-0 w-[720px] h-[629px] pointer-events-none">
  <div className="relative w-full h-full">
    <div className="absolute left-0 top-0 w-[777px] h-[877px] -translate-y-[248px] bg-brand-yellow" />
    <div className="absolute left-0 top-0 w-full h-full">
      <img
        src="/images/hero-decoration-58b6e4.png"
        alt="Decorative"
        className="w-full h-full object-cover"
      />
    </div>
  </div>
</div>

This achieves the desired look at one screen size, but it can easily become misaligned when you resize. When compared side by side with the Figma frame, the hero image and yellow shape do not align as they should.
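
One rough way to make that less brittle (a sketch under my own assumptions, not code from either tool) is to size the decoration relative to the hero container instead of hard-coding pixel boxes:

<div className="hidden lg:block pointer-events-none absolute inset-y-0 right-0 w-1/2">
  {/* yellow backdrop stretches with the container instead of a fixed 777x877 box */}
  <div className="absolute inset-0 bg-brand-yellow" />
  <img
    src="/images/hero-decoration-58b6e4.png"
    alt=""
    className="absolute inset-0 h-full w-full object-contain object-right"
  />
</div>

It won't match the frame pixel-for-pixel, but it degrades more gracefully as the viewport changes.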

  2. Fixed Header

For a simple portfolio page with a short hero, a fixed header is not always worth the complexity.

The problem here is that since the header is fixed to the top, the rest of the content also starts from the top without being offset for it. On smaller devices, the header might cover parts of the content when scrolling.

return (
  <header className="fixed top-0 left-0 right-0 bg-bg-gray z-50 h-14">
    {/* ... */}
    <button
      onClick={() => scrollToSection("about")}
      className="font-raleway ..."
    >
      About
    </button>
    {/* more buttons */}
  </header>
);

This is still a great head start, though it is not quite at the level where I would add it to a production repo without tidying up some of the layout changes.

Kombai

Here's the response from Kombai:

Visually, this one is extremely close to the Figma template. Apart from the hero image being slightly off from the Figma design, I see no other differences. It actually feels like the design is exactly copy-pasted.

Notice that the font, images, and icons are exactly the same, which to me is insane.

You can find the code it generated here.

Here are the specific things it does better in this simple example.

  1. It mirrors the Figma typography and colors as real tokens

Kombai sets up globals.css with Figma-like tokens and even defines utility classes for the text styles:

:root {
  /* ... */
}

@theme inline {
  /* ... */
}

@utility text-heading-large {
  /* ... */
}

@utility text-subtitle {
  /* ... */
}

That is very similar to how a designer would set up styles in Figma, and it means you can reuse these utilities in new screens instead of retyping Tailwind font sizes everywhere.
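
For example, a new screen could lean on those same utilities instead of raw Tailwind sizes (illustrative usage, not generated code):

<section className="flex flex-col gap-2">
  <h1 className="text-heading-large">Case studies</h1>
  <p className="text-subtitle">A few recent projects</p>
</section>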

  2. Components are cleaner and more reusable

All the other components, like Hero or some smaller button components, use the same styles set up in styles.css.

const baseClasses =
  "text-button px-6 py-3 rounded-sm transition-all hover:opacity-90";

const variantClasses =
  variant === "primary"
    ? "bg-(--primary-yellow) text-(--foreground)"
    : "bg-transparent border-2 border-(--foreground) text-(--foreground) hover:bg-(--foreground) hover:text-white";

The footer pulls each icon into its own component:

import InstagramIcon from "./icons/InstagramIcon";
import LinkedInIcon from "./icons/LinkedInIcon";
import MailIcon from "./icons/MailIcon";

In practice, that means if the designer swaps the mail icon or tweaks the size, there is a single place to update it.

So for this simple test, Kombai’s output is both closer to the visual design and a bit nicer structurally for a real project. I would still tweak naming and some minor details, but I would happily keep most of this as is. How crazy is that?

Test 2: Complex Learner Dashboard

So, for the second one, let's create a slightly more complex design with a lot happening in the UI.

You can find the Figma design template here: Learning Dashboard

Figma MCP

Here's the response from Figma MCP:

This is good, considering the complexity of the design. It’s able to put all the images and assets in place. This is much better than what I expected. But there's a slight inconsistency in the placement of images between the original design and the implementation, as you can see for yourself.

If I compare the time, this got it done super fast, in just about 8 minutes, whereas Kombai took over 15 minutes to get it done (but with a better result).

You can find the code it generated here.

Here's what I like and dislike about a few things it did here:

  1. Great smaller components, but everything is still quite page-centric

It does break things into logical components like Sidebar, Input, Button, StatCard, CourseCard, and Icons. The main page then stitches them together:

export default function Home() {
  const mentors = [
    {
      id: 1,
      name: "John Doe",
      subject: "UI/UX Design",
      color: "bg-purple-500",
    },
    // ...
  ];

  return (
    <div className="flex items-center gap-8 w-full max-w-[1440px] h-[933px] bg-white rounded-[20px] mx-auto overflow-hidden">
      {/* Sidebar */}
      <Sidebar />

      {/* Main content */}
      <main className="flex flex-col items-center gap-6 pt-5 pb-0 flex-1 h-full overflow-hidden">
        {/* Search, hero, cards, mentor table */}
      </main>
    </div>
  );
}

The separation into components is nice, but everything is still wired directly inside one big page component with inline mock data. For a real app, I would want that data in its own module, ideally typed, so it is not mixed with layout logic.

  2. Hard-coded dimensions tied to the original frame

The outer container is pinned to a specific height:

<div className="flex items-center gap-8 w-full max-w-[1440px] h-[933px] bg-white rounded-[20px] mx-auto overflow-hidden">

That’s fine if you are literally recreating a 1440 by 933 frame for a screenshot, but in a live app, it means:

  • You get weird empty space on taller screens.

  • Anything that grows vertically (longer course titles, more mentors) will either overflow or get clipped.

The hero banner has the same kind of pixel-exact positioning:

<div className="relative w-full h-[181px] bg-primary rounded-[20px] overflow-hidden">
  <Image
    src="/images/star1.svg"
    alt="Star"
    width={80}
    height={80}
    className="absolute top-[45px] left/[683px] opacity-25"
  />
  {/* four more star images with fixed top/left */}
</div>

This is great for matching the specific Figma design, but as soon as the width changes, these positions stop lining up perfectly.
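
If I were adapting this for real content, a first pass might swap the fixed 933px frame for a minimum viewport height and let the decorative stars flow with flexbox. This is only a sketch of the idea (component paths are assumptions), not the generated code:

import Image from "next/image";
import Sidebar from "./components/Sidebar"; // path is an assumption

export default function DashboardShell() {
  return (
    <div className="mx-auto flex w-full max-w-[1440px] min-h-screen rounded-[20px] bg-white">
      <Sidebar />
      <main className="flex flex-1 flex-col gap-6 overflow-y-auto pt-5">
        {/* hero banner: min-height instead of a fixed pixel height */}
        <div className="relative w-full min-h-[181px] overflow-hidden rounded-[20px] bg-primary">
          <div className="absolute inset-0 flex items-center justify-around opacity-25">
            {/* decorative stars spread with flex rather than pinned to pixel offsets */}
            <Image src="/images/star1.svg" alt="" width={80} height={80} />
            <Image src="/images/star1.svg" alt="" width={80} height={80} />
          </div>
        </div>
        {/* search, cards, mentor table... */}
      </main>
    </div>
  );
}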

So overall, I would call this result surprisingly good for a single prompt, but a bit rigid and template-like once you start thinking about real data and using it in production.

Kombai

Here's the response from Kombai:

You will see in the video that I had to fix a small error with an extra prompt, but after that, it produced a fully working dashboard. The visual match is very strong, given how complex the layout is.

You can find the code it generated here.

Here is what stands out compared to the MCP output.

  1. It treats the Figma file like a real product, not just a static screen.

Instead of wiring everything in a single page with inline arrays, Kombai creates proper domain types and a mock-data.ts:

import { UserProfile, Friend, Course, ProgressCard, Mentor } from "./types";

export const courses: Course[] = [
  {
    id: "1",
    title: "Beginner's Guide to becoming a professional frontend developer",
    category: "Frontend",
    thumbnail: "/images/course-coding.jpg",
    instructor: {
      name: "Prashant Kumar singh",
      role: "software Developer",
      avatar: "/images/avatar-prashant.jpg",
    },
  },
  // ...
];

That looks much closer to what you would expect in a production codebase: clear types, data separated from layout, and a page component that just composes everything.

  2. Better mapping of the smaller UI pieces

The course card is similar to the MCP one, but now it is fully driven by a Course object:

export function CourseCard({ course }: { course: Course }) {
  return (
    <div className="flex flex-col gap-2.5 rounded-[20px] bg-white shadow-[0px_14px_42px_rgba(8,15,52,0.06)] overflow-hidden min-w-[268px]">
      <div className="relative">
        <Image
          src={course.thumbnail}
          alt={course.title}
          width={244}
          height={113}
          className="w-full h-28 object-cover rounded-t-xl"
        />
        <button className="absolute top-3 right-3 w-2 h-2 bg-white rounded-full" />
      </div>
      <div className="px-3 pb-4 flex flex-col gap-2.5">
        <span className="text-[8px] font-normal uppercase text-primary px-3 py-1 bg-purple-50 rounded w-fit">
          {course.category}
        </span>
        <p className="text-[14px] font-medium text-text-primary leading-tight">
          {course.title}
        </p>
        <div className="w-full h-1.5 bg-gray-100 rounded-full overflow-hidden">
          <div
            className="h-full bg-primary rounded-full"
            style={{ width: "60%" }}
          />
        </div>
        {/* instructor avatar and name */}
      </div>
    </div>
  );
}

The structure and text styles are very close to the original design, and because the card is fully data-driven, you can plug in real data without touching the JSX.

  3. Design tokens and typography utilities again

Just like in the portfolio example, Kombai sets up a proper token layer for the dashboard:

:root {
  /* ... */
}

@utility heading-section {
  /* ... */
}

@utility text-caption {
  /* ... */
}

The components then reuse these utilities, which keeps the code close to the design system instead of scattering font sizes and colors everywhere.

  4. Things I would still tweak

It is not perfect:

  • The Next layout.tsx is still using the default Geist fonts and “Create Next App” metadata, so you would want to align that with the Inter font and real app title.

  • Some of the mock data has inconsistent casing in names and roles, which you would clean up in a real project.

  • The play button on the course card is just a white dot button for now, so you would still plug in the real icon.

But even with those issues, it is very close to something I would actually keep in a production repo after a quick pass.

Now, this is not as perfect as the previous Kombai implementation, which did not run into errors. But considering how complex this design is, with multiple different cards with images and all, it's still really impressive to me.

For this one, it took a bit longer to code, but in my opinion, the extra time was worth it.

Imagine you're building something similar and get a response this good already. Then it's not that big of a deal to iterate a little bit, right? You don't have to start from scratch. Just make a few changes if required, and you're done.

What You Should Know Before Using These Tools

As good as these tools are, they’re not something you can just trust blindly. They’ll get you off to a solid start, but you’ll still need to tweak a few things before calling it production-ready.

Kombai does a great job cloning Figma designs and writing clean, modular code. It breaks components into smaller files and generally follows good structure.

The only issue I noticed is that it sometimes slips on naming conventions. Since it scans your entire codebase to stay consistent with your setup, it can be a bit slower to generate code, but that’s also what makes it smarter. You’re not just getting a Figma cloner, you’re getting an assistant that actually understands your frontend.

Figma MCP is fast and does a decent job matching the UI, although the results depend a lot on the model you use for generation. If your main goal is to clone Figma designs quickly and you don’t mind refining the output, it’s a good option.

In short, both tools can save you a ton of time, but they’re not plug-and-play replacements for a frontend workflow. Treat them as part of your toolkit, and you’ll get the best results.

Final Verdict, and What's Next?

Now that you’ve got the gist of what these tools can do, go ahead and try them out. You can turn your Figma designs into working frontends in just a few minutes without all the endless play with CSS.

To sum up, here’s the quick rundown:

  • If you want production-ready code that actually looks like your Figma design and you mostly live in VS Code, Cursor, or any GUI IDE, go with Kombai. It nails the details and even understands your codebase, which is completely missing in Figma MCP.

  • If you just want to clone a Figma design quickly and don’t mind if things are slightly off, Figma MCP is totally fine. It gets the job done pretty well.

Basically, choose Kombai if you care about precision and code quality with codebase understanding.

Choose Figma MCP if you want something quick that works and looks decent enough. 🤷‍♂️

Conclusion

So, what do you think? Pretty cool, right? This was a fun little experiment to see how close tools like Figma MCP and Kombai can get to cloning real frontends straight from Figma.

If you’re into building frontends and want to save yourself a few hours of CSS pain, definitely give them a try. Just don’t expect them to be perfect in one try – their output still needs review and likely a little refining.

That’s all for this one. Thank you for reading! ✌️




Contribute to the Open Source Vonage MCP Tooling Server

The Vonage MCP Tooling Server is open source and beginner-friendly. Add real SDK features through straightforward PRs and clear MCP guidelines.

Microsoft Learn MCP Server Elevates Development


Have you tried asking GitHub Copilot about Aspire 13 or the new Agent Framework and found that it either hallucinated an answer or told you those things didn’t exist? This happens because the model was trained before those things existed, so it doesn’t know how to answer or help you. As you continue to innovate and move at the speed of AI, you need a development assistant that can keep up with the latest information.

Introducing the MS Learn Model Context Protocol (MCP) server tools. In this post, we’ll explore how the Learn MCP server enhances the developer experience with Copilot, showcase practical examples, and provide straightforward integration instructions for Visual Studio, Visual Studio Code, the Copilot Command Line Interface, and the Copilot Coding Agent.

What Is the MS Learn MCP Server?

The MS Learn MCP server is a managed content provider that gives Copilot high-quality, up-to-date, context-aware Microsoft product documentation, code samples, and learning resources, ensuring it has the latest information to deliver the best results for developers. Whether you’re building a new AI agent or optimizing an existing WinForms application, the Learn MCP server ensures Copilot has the information it needs.

Enhancing the Developer Experience

By integrating the Learn MCP server with Copilot, .NET developers benefit from a more intelligent and responsive coding environment. Here’s how it makes a difference:

  • Improved Code Suggestions: Copilot delivers code suggestions and explanations backed by trusted Microsoft Learn content, reducing the risk of outdated or incorrect guidance.
  • Context Awareness: The MCP server returns documentation and code samples specific to your scenario—whether you’re working with .NET 10, experimenting with Aspire, or building APIs in C#.
  • Faster Problem Solving: Instead of leaving your editor to search for documentation, you get instant, in-place answers and code references, accelerating your workflow.
  • Learning While Coding: Accessing MS Learn modules and tutorials helps you upskill in real time as you work on projects.

Key Use Cases: MCP Server in Action with Copilot

  • On-Demand API References: While implementing authentication in ASP.NET Core, Copilot—powered by the Learn MCP server—provides inline references to the latest Microsoft Identity documentation and code samples specific to your framework version.

    Screenshot of Copilot Chat using MS Learn MCP Server to get API references
  • Best Practice Recommendations: As you write a new MCP Server, Copilot surfaces best practices from MS Learn, ensuring your implementation follows current guidelines.

    Screenshot of Copilot Chat using MS Learn MCP Server to get best practice information

  • Learning New Frameworks or Libraries: When experimenting with technologies like gRPC or SignalR, Copilot can recommend relevant MS Learn modules and code samples, accelerating onboarding and knowledge acquisition.

    Screenshot of Copilot Chat using MS Learn MCP Server to get information about a framework or library

Integration Instructions

Ready to harness the power of the Learn MCP server with Copilot? Below are step-by-step guides for integrating the MCP server into your favorite tools.

Visual Studio

  1. Make sure you are on Visual Studio 2026 or Visual Studio 2022 version 17.14.
  2. The MS Learn MCP Server is built in and available for you to use; just make sure its tools are turned on when you submit your chat.

Screenshot of Visual Studio Copilot Chat Tools

Visual Studio Code

  1. Open VS Code and go to the Extensions view.
  2. Ensure you have the GitHub Copilot extension installed.
  3. Go to the MCP Server section and select the search icon.
  4. Search for Microsoft Docs and select Install.
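
If you prefer configuring it by hand instead of the marketplace flow, an entry along these lines in your workspace's .vscode/mcp.json should point at the same endpoint (treat the exact file shape as an assumption; the steps above are the documented path):

{
  "servers": {
    "microsoft-docs": {
      "type": "http",
      "url": "https://learn.microsoft.com/api/mcp"
    }
  }
}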

Copilot CLI

  1. In the Copilot CLI, type /mcp add
  2. Give it the name “microsoft-docs”
  3. Select “2” for HTTP
  4. Provide the remote URL: https://learn.microsoft.com/api/mcp
  5. Ctrl+S to save the server.

Copilot Coding Agent (CCA)

  1. In your repo, go to your settings and select Copilot > Coding Agent.
  2. Scroll down to the Model Context Protocol section.
  3. Add the following to the text box:
{
    "mcpServers": {
        "microsoft-docs": {
            "type": "http",
            "url": "https://learn.microsoft.com/api/mcp",
            "tools": ["*"]
        }
    }
}

Conclusion

Integrating the Microsoft Learn MCP server with Copilot supercharges your development workflow, providing trusted, up-to-date, context-aware content exactly when and where you need it. Whether you’re new to .NET or a seasoned developer, this enhanced experience means faster solutions, better code quality, and continuous learning without leaving your preferred tools. Try integrating the Learn MCP server today and experience a smarter, more connected way to develop with .NET!

Learn more at these helpful resources.

The post Microsoft Learn MCP Server Elevates Development appeared first on .NET Blog.
