Posted by Matthew McCullough - VP of Product Management, Android Developer
Today, during The Android Show | XR Edition, we shared a look at the expanding Android XR platform, which is fundamentally evolving to bring a unified developer experience to the entire XR ecosystem. The latest announcements, from Developer Preview 3 to exciting new form factors, are designed to give you the tools and platform you need to create the next generation of XR experiences. Let's dive into the details!
A spectrum of new devices ready for your apps
The Android XR platform is quickly expanding, providing more users and more opportunities for your apps. This growth is anchored by several new form factors that expand the possibilities for XR experiences.
A major focus is on lightweight, all-day wearables. At I/O, we announced we are working with Samsung and our partners Gentle Monster and Warby Parker to design stylish, lightweight AI glasses and Display AI glasses that you can wear comfortably all day. The integration of Gemini on glasses is set to unlock helpful, intelligent experiences like live translation and searching what you see.
And partners like Uber are already exploring how AI Glasses can streamline the rider experience by providing simple, contextual directions and trip status right in the user’s view.
The ecosystem is simultaneously broadening its scope to include wired XR glasses, exemplified by Project Aura from XREAL. This device blends the immersive experiences typically found in headsets with portability and real-world presence. Project Aura is scheduled for launch next year.
New tools unlock development for all form factors
If you are developing for Android, you are already developing for Android XR. The release of Android XR SDK Developer Preview 3 brings increased stability for headset APIs and, most significantly, opens up development for AI Glasses.
You can now build augmented experiences for AI glasses using new libraries like Jetpack Compose Glimmer, a UI toolkit for transparent displays, and Jetpack Projected, which lets you extend your Android mobile app directly to glasses. Furthermore, the SDK now includes powerful ARCore for Jetpack XR updates, such as Geospatial capabilities for wayfinding.
For immersive experiences on headsets and wired XR glasses like Project Aura from XREAL, this release also provides new APIs for detecting a device's field-of-view, helping your adaptive apps adjust their UI.
The Android XR platform is built on the OpenXR standard, enabling integration with the tools you already use so you can build with your preferred engine.
Developers can use Unreal Engine's native Android and OpenXR capabilities today to build for Android XR, leveraging the existing VR Template for immersive experiences. To provide additional, optimized extensions for the Android XR platform, a Google vendor plugin, including support for hand tracking, hand mesh, and more, will be released early next year.
Godot now includes Android XR support, leveraging its focus on OpenXR to enable development for devices like Samsung Galaxy XR. The new Godot OpenXR vendor plugin v4.2.2 stable allows developers to port their existing projects to the platform.
Watch The Android Show | XR Edition
Thank you for tuning into The Android Show | XR Edition. Start building differentiated experiences today using the Developer Preview 3 SDK and test your apps with the XR Emulator in Android Studio. Your feedback is crucial as we continue to build this platform together. Head over to developer.android.com/xr to learn more and share your feedback.
Posted by Matthew McCullough – VP of Product Management, Android Developer
In October, Samsung launched Galaxy XR - the first device powered by Android XR. And it’s been amazing seeing what some of you have been building! Here’s what some of our developers have been saying about their journey into Android XR.
Android XR gave us a whole new world to build our app within. Teams should ask themselves: What is the biggest, boldest version of your experience that you could possibly build? This is your opportunity to finally put into action what you’ve always wanted to do, because now, you have the platform that can make it real.
You’ve also seen us share a first look at other upcoming devices that work with Android XR like Project Aura from XREAL and stylish glasses from Gentle Monster and Warby Parker.
To support the expanding selection of XR devices, we are announcing Android XR SDK Developer Preview 3!
With Android XR SDK Developer Preview 3, on top of building immersive experiences for devices such as Galaxy XR, you can also now build augmented experiences for upcoming AI Glasses with Android XR.
New tools and libraries for augmented experiences
With Developer Preview 3, we are unlocking the tools and libraries you need to build intelligent, hands-free augmented experiences for AI Glasses. AI Glasses are lightweight and portable enough for all-day wear. You can extend your existing mobile app to take advantage of the built-in speakers, camera, and microphone to provide new, thoughtful, and helpful user interactions. With the addition of a small display on display AI Glasses, you can privately present information to users. AI Glasses are perfect for experiences that enhance a user’s focus and presence in the real world.
To power augmented experiences on AI Glasses, we are introducing two new, purpose-built libraries to the Jetpack XR SDK:
Jetpack Compose Glimmer - new design language and UI components for crafting and styling your augmented experiences on display AI Glasses
Jetpack Compose Glimmer is a demonstration of design best practices for beautiful, optical see-through augmented experiences. With UI components optimized for the input modality and styling requirements of display AI Glasses, Jetpack Compose Glimmer is designed for clarity, legibility, and minimal distraction.
To help visualize and test your Jetpack Compose Glimmer UI we are introducing the AI Glasses emulator in Android Studio. The new AI Glasses emulator can simulate glasses-specific interactions such as touchpad and voice input.
Beyond Jetpack Compose Glimmer, the second new library, Jetpack Projected, lets you extend your existing Android mobile app directly to glasses. We are also expanding ARCore for Jetpack XR to support AI Glasses, starting off with motion tracking and geospatial capabilities for augmented experiences - the exact features that enable you to create helpful navigation experiences, perfect for all-day-wear devices like AI Glasses.
Expanding support for immersive experiences
We continue to invest in the libraries and tooling that power immersive experiences for XR Headsets like Samsung Galaxy XR and wired XR Glasses like the upcoming Project Aura from XREAL. We’ve been listening to your feedback and have added several highly-requested features to the Jetpack XR SDK since developer preview 2.
In Jetpack Compose for XR, you'll find new features like the UserSubspace component for follow behavior, ensuring content remains in the user's view regardless of where they look. Additionally, you can now use spatial animations for smooth transitions like sliding or fading. And to support an expanding ecosystem of immersive devices with diverse display capabilities, you can now specify layout sizes as fractions of the user’s comfortable field of view.
In Material Design for XR, new components automatically adapt spatially via overrides. These include dialogs that elevate spatially, and navigation bars, which pop out into an Orbiter. Additionally, there is a new SpaceToggleButton component for easily transitioning to and from full space.
And in ARCore for Jetpack XR, new perception capabilities have been added, including face tracking with 68 blendshape values unlocking a world of facial gestures. You can also use eye tracking to power virtual avatars, and depth maps to enable more-realistic interactions with a user’s environment.
For devices like Project Aura from XREAL, we are introducing the XR Glasses emulator in Android Studio. This essential tool is designed to give you accurate content visualization, while matching real device specifications for Field of View (FoV), Resolution, and DPI to accelerate your development.
If you build immersive experiences with Unity, we’re also expanding your perception capabilities in the Android XR SDK for Unity. In addition to lots of bug fixes and other improvements, we are expanding tracking capabilities to include: QR and ArUco codes, planar images, and body tracking (experimental). We are also introducing a much-requested feature: scene meshing. It enables you to have much deeper interactions with your user’s environment - your digital content can now bounce off of walls and climb up couches!
And that’s just the tip of the iceberg! Be sure to check out our immersive experiences page for more information.
Get Started Today!
The Android XR SDK Developer Preview 3 is available today! Download the latest Android Studio Canary (Otter 3, Canary 4 or later) and upgrade to the latest emulator version (36.4.3 Canary or later) and then visit developer.android.com/xr to get started with the latest libraries and samples you need to build for the growing selection of Android XR devices. We’re building Android XR together with you! Don’t forget to share your feedback, suggestions, and ideas with our team as you progress on your journey in Android XR.
In a recent video I did about Domain-Driven Design Misconceptions, there was a comment that turned into a great thread that I want to highlight. Specifically, somebody left a comment about their problem with Aggregates in DDD.
Their example: if you have a chat, it has millions of messages. If you have a user, it has millions of friends, etc. It’s impossible to make an aggregate big enough to load into memory and enforce invariants.
So the example I’m going to use in this post is the rule: a group chat cannot have more than 100,000 members.
The assumption here is that aggregates need to hold all the information. They need to know about all the users. But that’s not what aggregates are for!
I’m going to show four different options for how you can model this. One of them is not using an aggregate at all. And, of course, the trade-offs with each approach.
YouTube
Check out my YouTube channel, where I post all kinds of content on Software Architecture & Design, including this video showing everything in this post.
So this is how people often start with aggregates in DDD, which is exactly what that comment was talking about. Say we have a GroupChat class. This is our aggregate. We’re defining our max number of members as 100,000. And then we have this list, this collection of all the members, all the users associated with this group chat.
Now, each of those users could itself be pretty heavy: username, email address, a bunch of other information, and maybe some relationships of its own.
Then, in our method to add a new member, all we’re doing is checking that we’re not exceeding 100,000, and if we are, we throw.
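Roughly, the naive version looks something like this (a TypeScript sketch; the class and field names are just for illustration):

```typescript
// A naive aggregate that models the relationship: it holds every member.
interface User {
  id: string;
  username: string;
  email: string;
  // ...plus whatever other data and relationships come along for the ride
}

class GroupChat {
  private static readonly MAX_MEMBERS = 100_000;
  private readonly members: User[] = [];

  addMember(user: User): void {
    // Enforcing this check means every existing member was loaded into memory first.
    if (this.members.length >= GroupChat.MAX_MEMBERS) {
      throw new Error("A group chat cannot have more than 100,000 members.");
    }
    this.members.push(user);
  }
}
```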
This is where people start. But here’s the problem with it.
It may feel intuitive, but it’s a trap. It’s a trap because you’re querying and pulling all that data from your database into memory to enforce a very simple rule.
The big mistake here is: we’re modeling relationships, not the rules.
We’re building up this object graph rather than modeling behaviors.
An alternative is to just record the number of members of the group chat. That’s actually the rule we’re trying to enforce. We don’t need to know who is associated with the group chat. We don’t need to know which users, just the total number, so we can enforce the rule.
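A minimal sketch of that version (same hypothetical names as before, only the count is kept):

```typescript
// The aggregate keeps only what the rule needs: the current member count.
class GroupChat {
  private static readonly MAX_MEMBERS = 100_000;

  constructor(private memberCount: number = 0) {}

  addMember(): void {
    if (this.memberCount >= GroupChat.MAX_MEMBERS) {
      throw new Error("A group chat cannot have more than 100,000 members.");
    }
    this.memberCount++;
    // Which user joined is recorded elsewhere, if you need to track it.
  }
}
```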
The obvious benefit is we solved the problem: we don’t have to load all those users into memory. This is going to be very fast.
The trade-off is if you do need to track which users are part of which group, you’ll have to model that separately.
Another option, if you feel storing a count is too risky because it could get out of sync, and you’re already recording which users are associated to which group, is to push the invariant up a layer, above the aggregate, into some type of application request or application layer.
Here I’m using some kind of read model or projection to get the number of users. Because it’s a projection, it could be stale. That’s the trade-off. Then we enforce the invariant there. If we pass, we add the user to the group chat.
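Sketched out, that application-layer check could look like this (the read model and repository interfaces are hypothetical stand-ins for whatever you actually use):

```typescript
// Hypothetical ports; your real projection and persistence will differ.
interface GroupChatReadModel {
  getMemberCount(groupChatId: string): Promise<number>;
}

interface GroupChatRepository {
  addMember(groupChatId: string, userId: string): Promise<void>;
}

const MAX_MEMBERS = 100_000;

async function joinGroupChat(
  readModel: GroupChatReadModel,
  repository: GroupChatRepository,
  groupChatId: string,
  userId: string
): Promise<void> {
  // The projection may be slightly stale; that's the trade-off we accept here.
  const memberCount = await readModel.getMemberCount(groupChatId);
  if (memberCount >= MAX_MEMBERS) {
    throw new Error("A group chat cannot have more than 100,000 members.");
  }
  await repository.addMember(groupChatId, userId);
}
```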
A fair argument here is: “Well, really? We have some aggregates enforcing invariants, some application or service layer enforcing invariants, everything scattered everywhere.” But the reality is: you have to enforce rules where you can do so reliably, not where it always feels clean and tidy in some centralized place. That’s not reality.
An aggregate can only enforce a rule if it has all the data it needs. And often your application or service layer isn’t just a pass-through. It shouldn’t be. It’s doing orchestration, gathering information and deciding whether a command should be executed.
Option 3: No Aggregate At All (Transaction Script)
This might sound surprising, but you don’t actually need an aggregate at all. Sometimes I advocate for using transaction scripts when they fit best.
That’s what I’m doing here: start a transaction. Set the right isolation level. Interact with the database. Do a SELECT COUNT(*). That’s going to be very fast with the right index. Lock if needed. Check the invariant. Insert the new record. Commit the transaction.
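As a rough sketch using node-postgres and a hypothetical group_chat_members table (adjust the isolation level or locking strategy to your database):

```typescript
import { Pool } from "pg";

const MAX_MEMBERS = 100_000;
const pool = new Pool(); // connection settings come from the environment

// A transaction script: no aggregate, just the rule enforced against the database.
// The table and column names here are illustrative.
async function addMemberToGroupChat(groupChatId: string, userId: string): Promise<void> {
  const client = await pool.connect();
  try {
    // SERIALIZABLE (or an explicit lock) prevents two concurrent inserts
    // from both seeing 99,999 members and both succeeding.
    await client.query("BEGIN ISOLATION LEVEL SERIALIZABLE");

    const result = await client.query(
      "SELECT COUNT(*) AS member_count FROM group_chat_members WHERE group_chat_id = $1",
      [groupChatId]
    );
    if (Number(result.rows[0].member_count) >= MAX_MEMBERS) {
      throw new Error("A group chat cannot have more than 100,000 members.");
    }

    await client.query(
      "INSERT INTO group_chat_members (group_chat_id, user_id) VALUES ($1, $2)",
      [groupChatId, userId]
    );
    await client.query("COMMIT");
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  } finally {
    client.release();
  }
}
```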
Simple.
Sometimes a simple problem just needs a simple solution, and a transaction script is very valid.
The trade-off here is if you’re in a domain with a lot of complexity and a lot of rules, this can get out of hand and hard to manage.
Another option I mentioned earlier is: stop focusing on relationships and focus on the actual rule.
What makes us say the group chat is the one that needs to enforce the rule? Maybe there’s actually the concept of group membership, and group chat is about handling messages. These have different responsibilities.
That’s really what I want to emphasize: you don’t need one model to rule them all. You can enforce something in one place and something else somewhere else. You can have a group membership component enforcing whether you can join, and group chat is just about messages.
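To make the split concrete, here’s a small sketch (the GroupMembership and GroupChat names are just illustrative):

```typescript
// Two focused models instead of one big one.
class GroupMembership {
  private static readonly MAX_MEMBERS = 100_000;

  constructor(private readonly groupChatId: string, private memberCount = 0) {}

  join(userId: string): void {
    if (this.memberCount >= GroupMembership.MAX_MEMBERS) {
      throw new Error("A group chat cannot have more than 100,000 members.");
    }
    this.memberCount++;
    // Record that userId joined this.groupChatId, e.g. by raising an event.
  }
}

class GroupChat {
  // Membership is not this model's concern; it only deals with messages.
  postMessage(authorId: string, text: string): void {
    // append the message, publish it to members, etc.
  }
}
```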
There are all kinds of approaches you can take, and they all have different trade-offs. Given the rule and how you’re modeling, pick what fits. It does not need to be an aggregate just because dogma says so.
Maybe it’s a transaction script. Maybe it’s an aggregate. Use what fits best.
When you’re modeling something like the group chat example, start with the rule. Ask yourself: Where can I reliably and efficiently enforce this rule? Not: “How can I convert this schema into my object model?”
Too long; didn’t read/watch: model rules, not relationships.
Join CodeOpinion! Developer-level members of my Patreon or YouTube channel get access to a private Discord server to chat with other developers about Software Architecture and Design, and access to the source code for any working demo application I post on my blog or YouTube. Check out my Patreon or YouTube Membership for more info.
Frontend automation is moving fast. Tools like Figma MCP and Kombai can read design context and generate working UI code. I wanted to see what you actually get in practice, so I decided to compare them.
Figma MCP exposes design metadata to AI clients, while Kombai is a frontend-first agent that integrates with editors and existing stacks.
In this article, we’ll feed the same two Figma files into both tools, review how close the output is to the designs, and look at the code structure in a real editor.
Cloning complex Figma designs by hand isn’t fun anymore, nor is writing your CSS line by line with exact precision.
And sure, you can attach a screenshot or whatever to GPT, but it often ends up with something that barely looks like your design. That's where Kombai or the Figma MCP come in.
They actually get your Figma design metadata and give you frontend code that's super close to the real thing.
So now, instead of spending hours rebuilding what's already in your design file, you can focus more on small tweaks and what actually matters.
Kombai is an AI agent designed for frontend work. It takes input from Figma (like text, images, or your existing code), understands your stack, and converts it into clean, production-ready UI.
💡 It’s made specifically for frontend work, so you can expect it to be very good at that (unlike more generic tools like ChatGPT or Claude).
Kombai also handles large repositories easily. It doesn't just convert Figma designs into code. It actually understands your entire frontend codebase, even if it's huge.
So, even if you're working on a small side project or a very large production app, it can read, change, and write code that fits perfectly into your existing project.
Note: Kombai isn’t just good at cloning Figma designs and writing clean code. It actually understands your whole repo, too. You can chat with it like GPT, but it already knows your frontend. It can help refactor code, clean things up, or make changes without ever touching your backend logic.
Pretty handy, right?
No backend code is ever touched, which ensures none of your business logic is mistakenly changed.
You can also add Kombai right inside your editor. It works with VSCode, Cursor, Windsurf, and Trae. Just grab it from the extension marketplace, launch it, and you’re ready to go.
With Kombai, you can:
Turn Figma designs into code (React, HTML, CSS, and so on) using the component library your project already uses.
Work with a frontend-smart engine that understands 30+ libraries including Next.js, MUI, and Chakra UI.
Stay in your editor, follow your own conventions, and ship faster with good accuracy.
And most importantly, preview the changes in a sandbox so you can approve or reject the change before committing it to the files.
You can be up and running in under a minute. Here are the steps to get started:
Install the extension for your editor
Sign in and connect your project
Paste a Figma link or describe what you want to build
Review the output and commit your code
You can find it in the Extension marketplace of your IDE.
Now, using it is just as simple as accessing it from the left sidebar and having a chat similar to how you would with ChatGPT. (Optionally, you can add your tech stack, but Kombai handles it automatically.)
Head to the docs to get started and find the setup for your editor.
Pricing Note: Kombai is a paid tool but gives you a free plan with 300 credits per month, which is great for personal projects. For more advanced workflows, you can move up to the Pro plan or the Enterprise plan.
If you spend most of your time on the frontend, Kombai may be a good fit.
Figma MCP (Model Context Protocol) lets AI agents connect directly to your Figma files. It closes the gap between your designs and your AI tools by giving them structured access to real design data instead of relying on screenshots or rough estimates.
It works by exposing your design's node tree, styles, layout rules, and component structure so the model can build the UI with actual design data.
That means tools like Claude Code, Gemini CLI, Cursor, and VSCode can actually read your designs, including layers, components, colors, spacing, and text, and use that context to generate accurate, production-ready code or design updates.
With Figma MCP, you can:
Let AI tools pull live data from your Figma files, so your code suggestions always match your latest designs
Ask your AI assistant to inspect components, layouts, or styles directly from Figma
Generate UI code that reflects real design and structure instead of guessing from an image
Keep designers and developers in sync without constantly sending files back and forth.
Setting it up is simple:
Run the Figma MCP server locally
Authorize your Figma workspace
Connect your editor or AI tool (Cursor, Claude Code, Gemini CLI, and so on)
For this test, I'll be using Figma MCP inside Claude Code on Linux. Setting it up is as simple as adding the following JSON to your Claude configuration file, ~/.claude.json:
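This is the shape of the config for the community Figma-Context-MCP server mentioned in the note below; the package name and flags follow that project's README at the time of writing, so treat it as a sketch, check the project's docs, and drop in your own Figma API key:

```json
{
  "mcpServers": {
    "figma": {
      "command": "npx",
      "args": ["-y", "figma-developer-mcp", "--figma-api-key=YOUR_FIGMA_API_KEY", "--stdio"]
    }
  }
}
```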
Pricing Note: To use Figma MCP, you need to have a paid Figma plan, either Professional, Organization, or Enterprise. But there's a community-maintained open-source MCP server, Figma-Context-MCP, that you can test out for free – which I'll be using for this test.
Once it’s running, any MCP-supported tool can understand your design files, making frontend development much more accurate.
For this test, we'll be comparing Kombai with Figma MCP using two Figma designs: one is a simple portfolio design, and the other is a more complex learner dashboard.
NOTE: For this test with Figma MCP, I'll be using Sonnet 4, which, in my experience, has been the best model for coding the frontend. I've also tested with the recent GPT-5 and Opus 4, but Sonnet 4 seems to be the best for frontend work. If you want to try other models, feel free to do so and see if you notice much difference in the results.
💁 Prompt: Clone this Figma design from this Figma frame link attached. Write clean, maintainable, and responsive code that matches the design closely. Keep components simple, reusable, and production-ready.
Quick note about the videos in the next section: The demo recordings are pretty long because I kept them raw. The idea is to show how the tools behave in real time. If you only care about the final output, feel free to skip to the end of each video.
Test 1: Simple Portfolio Design
Let's start with a simpler design that doesn't have much going on in the UI.
This is pretty decent. The overall UI looks good, and the colors and fonts are all accurate. The biggest visual issues are with the hero image and a few icon placements, which are a bit off compared to the original Figma file.
The overall implementation took just about 5 minutes of coding and achieved this entire result in one go, as you can see in the video demo. The time it takes isn't really dependent on the MCP itself but mostly on the model you choose to work with, so the timings will vary and you can safely ignore them here.
The whole page is split into sensible components (Header, Hero, Projects, ProjectCard, Footer) and composed in a clean page.tsx.
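To give a sense of the shape, the composition looks roughly like this (an illustrative reconstruction, not the exact generated file; the import paths are my own guess):

```tsx
// app/page.tsx -- illustrative only
import Header from "@/components/Header";
import Hero from "@/components/Hero";
import Projects from "@/components/Projects"; // renders a ProjectCard per project
import Footer from "@/components/Footer";

export default function Home() {
  return (
    <main>
      <Header />
      <Hero />
      <Projects />
      <Footer />
    </main>
  );
}
```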
This achieves the desired look at one screen size, but it can easily become misaligned when you resize. When compared side by side with the Figma frame, the hero image and yellow shape do not align as they should.
Fixed Header
For a simple portfolio page with a short hero, a fixed header is not always worth the complexity.
The problem here is that since the header is fixed to the top, the rest of the content also starts at the very top of the page, so on smaller devices the header can cover parts of the content when scrolling.
This is still a great head start, though it is not quite at the level where I would add it to a production repo without tidying up some of the layout changes.
Kombai
Here's the response from Kombai:
Visually, this one is extremely close to the Figma template. Apart from the hero image being slightly off from the Figma design, I see no other differences. It actually feels like the design is exactly copy-pasted.
Notice that the font, images, and icons are exactly the same, which to me is insane.
Kombai also sets up shared typography utilities in styles.css, very similar to how a designer would set up text styles in Figma, which means you can reuse these utilities in new screens instead of retyping Tailwind font sizes everywhere.
Components are cleaner and more reusable
All the other components, like Hero or some smaller button components, use the same styles set up in styles.css.
The footer pulls each icon into its own component:
```tsx
import InstagramIcon from "./icons/InstagramIcon";
import LinkedInIcon from "./icons/LinkedInIcon";
import MailIcon from "./icons/MailIcon";
```
In practice, that means if the designer swaps the mail icon or tweaks the size, there is a single place to update it.
So for this simple test, Kombai’s output is both closer to the visual design and a bit nicer structurally for a real project. I would still tweak naming and some minor details, but I would happily keep most of this as is. How crazy is that?
Test 2: Complex Learner Dashboard
So, for the second one, let's create a slightly more complex design with a lot happening in the UI.
This is good, considering the complexity of the design. It’s able to put all the images and assets in place. This is much better than what I expected. But there's a slight inconsistency in the placement of images between the original design and the implementation, as you can see for yourself.
If I compare the time, this got it done super fast, in just about 8 minutes, whereas Kombai took over 15 minutes to get it done (but with a better result).
The separation into components is nice, but everything is still wired directly inside one big page component with inline mock data. For a real app, I would want that data in its own module, ideally typed, so it is not mixed with layout logic.
Hard-coded dimensions tied to the original frame
The outer container is pinned to a specific height:
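Something along these lines (illustrative; the exact class list in the generated code differs, but the fixed 1440-by-933 sizing is the point):

```tsx
<div className="w-[1440px] h-[933px] mx-auto overflow-hidden">
  {/* the entire dashboard is laid out inside this fixed-size frame */}
</div>
```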
That’s fine if you are literally recreating a 1440 by 933 frame for a screenshot, but in a live app, it means:
You get weird empty space on taller screens.
Anything that grows vertically (longer course titles, more mentors) will either overflow or get clipped.
The hero banner has the same kind of pixel-exact positioning:
```tsx
<div className="relative w-full h-[181px] bg-primary rounded-[20px] overflow-hidden">
  <Image
    src="/images/star1.svg"
    alt="Star"
    width={80}
    height={80}
    className="absolute top-[45px] left-[683px] opacity-25"
  />
  {/* four more star images with fixed top/left */}
</div>
```
This is great for matching the specific Figma design, but as soon as the width changes, these positions stop lining up perfectly.
So overall, I would call this result surprisingly good for a single prompt, but a bit rigid and template-like once you start thinking about real data and using it in production.
Kombai
Here's the response from Kombai:
You will see in the video that I had to fix a small error with an extra prompt, but after that, it produced a fully working dashboard. The visual match is very strong, given how complex the layout is.
That looks much closer to what you would expect in a production codebase: clear types, data separated from layout, and a page component that just composes everything.
Better mapping of the smaller UI pieces
The course card is similar to the MCP one, but now it is fully driven by a Course object:
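Roughly, the idea is something like this (an illustrative sketch, not the generated file; the Course fields are my own guess based on what the card shows):

```tsx
// Illustrative only: a typed, data-driven card rather than hard-coded content.
interface Course {
  id: string;
  title: string;
  category: string;
  progress: number; // 0-100
  imageUrl: string;
}

function CourseCard({ course }: { course: Course }) {
  return (
    <div className="rounded-[20px] bg-white p-4">
      <img src={course.imageUrl} alt={course.title} className="rounded-[12px]" />
      <p className="text-sm text-gray-500">{course.category}</p>
      <h3 className="font-semibold">{course.title}</h3>
      <span>{course.progress}% complete</span>
    </div>
  );
}
```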
The structure and text styles are very close to the original design, and because the card is fully data-driven, you can plug in real data without touching the JSX.
Design tokens and typography utilities again
Just like in the portfolio example, Kombai sets up a proper token layer for the dashboard:
The components then reuse these utilities, which keeps the code close to the design system instead of scattering font sizes and colors everywhere.
Things I would still tweak
It is not perfect:
The Next.js layout.tsx is still using the default Geist fonts and “Create Next App” metadata, so you would want to align that with the Inter font and the real app title.
Some of the mock data has inconsistent casing in names and roles, which you would clean up in a real project.
The play button on the course card is just a white dot button for now, so you would still plug in the real icon.
But even with those issues, it is very close to something I would actually keep in a production repo after a quick pass.
Now, this is not as perfect as the previous Kombai implementation, which did not run into errors. But considering how complex this design is, with multiple different cards, images, and all, it's still really impressive to me.
For this one, it took a bit longer to code, but in my opinion, the extra time was worth it.
Imagine you're building something similar and get a response this good already. Then it's not that big of a deal to iterate a little bit, right? You don't have to start from scratch. Just make a few changes if required, and you're done.
What You Should Know Before Using These Tools
As good as these tools are, they’re not something you can just trust blindly. They’ll get you off to a solid start, but you’ll still need to tweak a few things before calling it production-ready.
Kombai does a great job cloning Figma designs and writing clean, modular code. It breaks components into smaller files and generally follows good structure.
The only issue I noticed is that it sometimes slips on naming conventions. Since it scans your entire codebase to stay consistent with your setup, it can be a bit slower to generate code, but that’s also what makes it smarter. You’re not just getting a Figma cloner, you’re getting an assistant that actually understands your frontend.
Figma MCP is fast and does a decent job matching the UI, although the results depend a lot on the model you use for generation. If your main goal is to clone Figma designs quickly and you don’t mind refining the output, it’s a good option.
In short, both tools can save you a ton of time, but they’re not plug-and-play replacements for a frontend workflow. Treat them as part of your toolkit, and you’ll get the best results.
Final Verdict, and What's Next?
Now that you’ve got the gist of what these tools can do, go ahead and try them out. You can turn your Figma designs into working frontends in just a few minutes without all the endless fiddling with CSS.
To sum up, here’s the quick rundown:
If you want production-ready code that actually looks like your Figma design and you mostly live in VS Code, Cursor, or any GUI IDE, go with Kombai. It nails the details and even understands your codebase, which is completely missing in Figma MCP.
If you just want to clone a Figma design quickly and don’t mind if things are slightly off, Figma MCP is totally fine. It gets the job done pretty well.
Basically, choose Kombai if you care about precision and code quality with codebase understanding.
Choose Figma MCP if you want something quick that works and looks decent enough. 🤷‍♂️
Conclusion
So, what do you think? Pretty cool, right? This was a fun little experiment to see how close tools like Figma MCP and Kombai can get to cloning real frontends straight from Figma.
If you’re into building frontends and want to save yourself a few hours of CSS pain, definitely give them a try. Just don’t expect them to be perfect in one try – their output still needs review and likely a little refining.
That’s all for this one. Thank you for reading! ✌️
Have you tried asking GitHub Copilot about Aspire 13 or the new Agent Framework and found it either hallucinated an answer or told you those things didn’t exist? This happens because the model was trained before those things existed, so it doesn’t know how to answer or help you. As you continue to innovate and move at the speed of AI, you need a development assistant that can keep up with the latest information.
Introducing the MS Learn Model Context Protocol (MCP) server tools. In this post, we’ll explore how the Learn MCP server enhances the developer experience with Copilot, showcase practical examples, and provide straightforward integration instructions for Visual Studio, Visual Studio Code, the Copilot Command Line Interface, and the Copilot Coding Agent.
What Is the MS Learn MCP Server?
The MS Learn MCP server is a managed content provider designed to seamlessly supply Copilot with high-quality, up-to-date, context-aware Microsoft product documentation, code samples, and learning resources, ensuring it has the latest information to give developers the best results. Whether you’re building a new AI agent or optimizing an existing WinForms application, the Learn MCP server ensures Copilot has the information it needs.
Enhancing the Developer Experience
By integrating the Learn MCP server with Copilot, .NET developers benefit from a more intelligent and responsive coding environment. Here’s how it makes a difference:
Improved Code Suggestions: Copilot delivers code suggestions and explanations backed by trusted Microsoft Learn content, reducing the risk of outdated or incorrect guidance.
Context Awareness: The MCP server returns documentation and code samples specific to your scenario—whether you’re working with .NET 10, experimenting with Aspire, or building APIs in C#.
Faster Problem Solving: Instead of leaving your editor to search for documentation, you get instant, in-place answers and code references, accelerating your workflow.
Learning While Coding: Accessing MS Learn modules and tutorials helps you upskill in real time as you work on projects.
Key Use Cases: MCP Server in Action with Copilot
On-Demand API References: While implementing authentication in ASP.NET Core, Copilot—powered by the Learn MCP server—provides inline references to the latest Microsoft Identity documentation and code samples specific to your framework version.
Best Practice Recommendations: As you write a new MCP Server, Copilot surfaces best practices from MS Learn, ensuring your implementation follows current guidelines.
Learning New Frameworks or Libraries: When experimenting with technologies like gRPC or SignalR, Copilot can recommend relevant MS Learn modules and code samples, accelerating onboarding and knowledge acquisition.
Integration Instructions
Ready to harness the power of the Learn MCP server with Copilot? Below are step-by-step guides for integrating the MCP server into your favorite tools.
Visual Studio
Make sure you are on Visual Studio 2026 or Visual Studio 2022 version 17.14.
The MS Learn MCP server is built in and available for you to use; just make sure its tools are turned on when you submit your chat.
Visual Studio Code
Open VS Code and go to the Extensions view.
Ensure you have the GitHub Copilot extension installed.
Go to the MCP Server section and select the search icon.
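If you prefer to wire it up by hand instead, a minimal .vscode/mcp.json along these lines should also work; this assumes the publicly documented Learn MCP endpoint (https://learn.microsoft.com/api/mcp), so check the official docs if it has moved:

```json
{
  "servers": {
    "microsoft-learn": {
      "type": "http",
      "url": "https://learn.microsoft.com/api/mcp"
    }
  }
}
```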
Integrating the Microsoft Learn MCP server with Copilot supercharges your development workflow, providing trusted, up-to-date, context-aware content exactly when and where you need it. Whether you’re new to .NET or a seasoned developer, this enhanced experience means faster solutions, better code quality, and continuous learning without leaving your preferred tools. Try integrating the Learn MCP server today and experience a smarter, more connected way to develop with .NET!