Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

18 new features coming to Windows 11 in 2026, confirmed by Microsoft


As it turns out, Microsoft’s “Windows quality” push is bigger than expected. The company’s blog post read like an admission that Windows 11 had gone off track, and, more importantly, it laid out a clear plan to fix the OS.

Windows chief Pavan Davuluri laid out the roadmap, but it didn’t stop there. Engineers, designers, and product leads started responding to users directly on X, confirming features, explaining decisions, and in some cases openly agreeing with criticism. It’s a system-wide reset for Windows 11.

Windows 11 connectivity

Microsoft is targeting almost every part of the OS at once. UI consistency, performance under load, reliability across hardware, Windows Update behavior, developer tooling, and even how first-party apps are built (which is what excites me the most). A few changes are already rolling out in Insider builds, some are coming in April, and the rest are planned throughout 2026.

It’s also the first time in years that multiple teams inside Microsoft seem aligned on a single goal: making Windows faster, calmer, and more predictable to use.

This is easily the most comprehensive set of changes planned for Windows 11 so far. So we did the hard research, scoured the X posts and replies, and made a full list of all confirmed features coming to Windows this year.

1. Taskbar finally becomes customizable again

A movable taskbar is one of those features that should never have been removed in the first place. Fortunately, Windows 11 is finally bringing back the ability to move the taskbar to the top, left, or right side of the screen.

Windows 11 taskbar top
Image Source: Microsoft | Screenshot captured by Windows Latest

This has been one of the most requested features since Windows 11’s launch, especially from users with vertical monitors and multi-display setups. Soon, you’ll be able to reposition it directly from the right-click menu.

On top of that, Microsoft is also working on proper taskbar resizing. Not just smaller icons, but a compact taskbar mode similar to Windows 10. Early builds also suggest multiple size options, which should make the UI more usable on smaller screens.

2. Start menu is getting speed, control, and native code

The Start menu is finally going back to basics. Microsoft is moving core parts of the Start menu away from React-based components to native WinUI. This is a big change. The current Start menu uses a mix of web-based layers, which is one of the reasons it sometimes feels slower than it should.

The Start menu’s Recommended section was also built on React; it’s now getting options to disable it entirely or control what appears there.

Recommendations in Start menu

By moving to WinUI 3, Microsoft can reduce interaction latency at the platform level, making it potentially feel as snappy as earlier versions of Windows.

Windows Search results will prioritize installed apps and system components instead of mixing in irrelevant web suggestions. Microsoft is also tweaking the ranking system, so frequently used apps actually show up where you expect them to.

Microsoft Store in Windows Search

3. Copilot is being scaled back and made optional

Microsoft is finally dialing things down with Copilot. Over the past year, AI features were pushed into almost every part of Windows, including apps like Notepad, Photos, Snipping Tool, and File Explorer.

Notepad with Copilot icon

Now, Microsoft has confirmed it’s removing unnecessary Copilot entry points across these apps and focusing only on scenarios where it adds value. The goal is to make AI feel intentional.

Note that Copilot can already be uninstalled like any other app.

But this doesn’t mean that Microsoft is ditching Copilot. Upcoming features like Narrator working with Copilot across devices show that Microsoft still sees AI as important, just not everywhere.

4. Windows Update is being completely rethought

Windows Update might finally stop being a meme. Microsoft is introducing long-requested changes that give users control over updates. You’ll be able to pause updates for as long as you want, without the system forcing a restart in the background.

Pause Windows Updates Settings

The company is also moving toward a single monthly reboot model aligned with the Patch Tuesday update, which arrives on the second Tuesday of each month.
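The “second Tuesday of each month” cadence is simple to compute; here’s a minimal, dates-only sketch (not tied to any Windows API):

```python
from datetime import date, timedelta

def patch_tuesday(year: int, month: int) -> date:
    """Return the second Tuesday (Patch Tuesday) of the given month."""
    d = date(year, month, 1)
    # Advance to the first Tuesday (Monday=0, so Tuesday=1).
    d += timedelta(days=(1 - d.weekday()) % 7)
    # The second Tuesday is exactly one week later.
    return d + timedelta(days=7)

print(patch_tuesday(2026, 1))  # 2026-01-13
```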

5. Windows setup (OOBE) is becoming faster and less restrictive

Setting up a new Windows laptop or PC has long felt drawn out to the point of real annoyance. Windows Latest recently did a full breakdown of everything that happens during Windows 11 Setup, and we found the process took more than an hour. Microsoft has now confirmed that this is finally changing.

The company is streamlining the entire out-of-box experience. Fewer steps, no reboots, and less clutter during setup. Instead of pushing services, apps, and sign-ins at every screen, the new OOBE will get you to the desktop faster.

Microsoft ad for Microsoft 365 Personal during Windows 11 setup

There’s also internal pushback against forcing a Microsoft account during setup. Senior engineers have openly said they’re working on an MSA-free setup option, addressing what may well be the most criticized part of Windows 11.

There is no way to skip sign in during Windows 11 setup

This is clearly a response to two things: setup taking too long, and users feeling pushed into an ecosystem before they even start using the PC.

Also, Windows updates during OOBE alone can account for 45 minutes of that time. Soon, you’ll be able to skip updates entirely. All of this may finally make Windows 11 setup feel as fast as setting up a new MacBook.

Windows update is downloading during Setup

6. File Explorer is getting real performance fixes

Microsoft has already improved File Explorer’s launch speed by preloading parts of the app in the background. Now, more improvements are coming soon. UI flicker is being reduced, navigation is becoming smoother, and basic interactions are being optimized.

File Explorer is getting faster in 2026

Search inside Explorer is also getting faster and more reliable, which has been a long-standing issue. Large file transfers, another weak point, are being made more stable to avoid slowdowns and random failures.

Galaxy Book 6 Ultra crashes during extreme file transfer. Source Max Tech via YouTube

On top of that, smaller usability updates are being added. Voice typing for renaming files is already rolling out, and more incremental improvements are expected over the next few months.

Renaming files with Voice Typing

7. Windows is finally fixing dark mode inconsistencies

Windows 11 looks modern until you open the wrong dialog box. Microsoft is now going through legacy and system UI elements and bringing proper dark mode support across the board. This includes the Run dialog, account dialogs, file property windows, Registry Editor, and various operation pop-ups that still default to light mode.

Microsoft Account dialogue box is in light mode, despite the system preference being set to dark mode

Design inconsistency has been one of the most visible issues in Windows 11, and fixing it requires touching dozens of small components across the OS.

Windows Design and Research lead talks about extending dark mode to more parts of the OS

Microsoft’s Partner Director of Design, March Rogers, said that the company is focusing on fixing the designs of various elements, pages, and settings in Windows 11. It’s great to know that Windows is getting a much-needed design treatment.

8. Settings app redesign while Control Panel migration continues

As first noted by Windows Latest, Microsoft is still working toward replacing the Control Panel, but now it’s clear why it’s taking so long.

More Settings pages are being redesigned for clarity, including cleaner layouts and better grouping of options. Network and printer settings, which still depend heavily on the Control Panel, are gradually being moved into the modern Settings app.

Some network settings are still tied to the Control Panel

But the transition isn’t simple.

As Microsoft’s Partner Director of Design has explained, a lot of these legacy controls are tied to drivers, hardware behavior, and enterprise workflows. Moving them too quickly risks breaking devices that still count on older systems. So the migration is slow by design.

Control Panel in Windows 11

Windows can’t just drop legacy systems the way macOS does. It has to carry them forward without breaking compatibility, and that makes every UI change more complicated.

9. System performance and responsiveness improvements

A lot of Windows 11’s problems come down to how it feels to use, not just what it looks like. Microsoft is reducing baseline RAM usage across the OS, which should free up memory for apps and improve multitasking, especially on lower-end devices. At a time when RAM prices are soaring, Windows using too much memory even when idle isn’t ideal.

Microsoft is fixing high RAM usage in Windows 11

At the same time, the company is reducing interaction latency by moving more components to native WinUI 3. Right now, many parts of Windows use WebView2 and other layered UI systems, which introduce delays between input and response.

With a native UI, Microsoft can cut down this overhead. The result should be faster clicks, smoother animations, and a more consistent feel across the system.

10. Hardware reliability fixes across the system

Performance doesn’t matter much if basic hardware doesn’t work reliably. Microsoft is focusing heavily on stability across drivers and connected devices: reducing system crashes, improving driver quality, and making hardware interactions more dependable.

Bluetooth is getting fixes for random disconnects and pairing issues. USB reliability is being improved to reduce connection failures. Camera and microphone behavior is also being stabilized, particularly for work and video calls.

Bluetooth & devices page in Settings

For an OS that runs on a massive range of hardware, this is a very important part of the entire list of fixes coming to Windows 11.

11. New Bluetooth and audio capabilities

A new shared audio feature will let you play sound through two Bluetooth devices at the same time. That means you can connect two headphones or speakers and mirror audio without third-party tools.

Shared audio settings in Windows 11 for Bluetooth devices

At the same time, Bluetooth in Quick Actions is being fixed. The toggle randomly disappearing, pairing failures, and inconsistent device switching are all being addressed. Microsoft has already confirmed this is part of a wider push to make connections faster and more reliable.

12. Windows Hello is getting more reliable

Microsoft is improving Windows Hello biometric authentication, targeting both facial recognition and fingerprint sensors. Enhancements include more reliable facial recognition so users can trust sign-in to work consistently when needed, faster and more dependable fingerprint sign-in with fewer retries and failed attempts, and better support for different hardware setups.

Windows Hello Settings

Microsoft is also making secure sign-in easier on gaming handhelds (such as the ROG Xbox Ally X) by adding full gamepad support for PIN creation during initial setup and in Settings.

13. Better haptic feedback and touchpad features

Windows is getting haptic feedback for window actions like snapping, resizing, and closing. At a time when more Windows 11 laptops are getting haptic touchpads, this is a very welcome change.

Existing gesture controls in Windows 11

Speaking of touchpads, a new update will also add an option to choose the size of the touchpad’s right-click zone, with default, small, medium, and large options; a single-finger click in that zone triggers a right-click.

14. WSL is getting major developer-focused upgrades

Microsoft isn’t ignoring developers while fixing Windows. In fact, WSL is getting some of the most meaningful upgrades in this entire roadmap.

File access between Linux and Windows, especially through /mnt/c, is being optimized for faster read and write speeds. This has been one of the biggest pain points for developers working across environments.
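The /mnt/c mapping mentioned above exposes Windows drives inside the Linux tree; here’s a rough sketch of the path translation (a simplified approximation of what the `wslpath` utility does, assuming the default /mnt automount root):

```python
def to_wsl_path(win_path: str) -> str:
    """Roughly translate a Windows drive-letter path (e.g. C:\\Users\\me)
    to its default WSL mount point (/mnt/c/Users/me). Simplified sketch:
    no UNC paths, no custom automount roots."""
    drive, _, rest = win_path.partition(":")
    return f"/mnt/{drive.lower()}/" + rest.strip("\\").replace("\\", "/")

print(to_wsl_path(r"C:\Users\me\projects"))  # /mnt/c/Users/me/projects
```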

Ubuntu running via Windows Subsystem for Linux. Source: Ubuntu

Network performance is also improving, with better throughput and more reliable localhost communication in WSL2, which matters for anyone running dev servers, containers, or backend services locally.

Network issue in WSL
Source: Ask Ubuntu forum

Onboarding is being simplified as well. Fewer steps to get started, less friction when installing distributions, and better defaults.

For enterprise use, Microsoft is adding stronger policy controls and security layers, making WSL more viable in managed environments. Microsoft wants to keep developers on Windows instead of losing them to macOS or Linux.

15. Microsoft is making 100% native first-party apps for Windows 11

Microsoft is putting together a dedicated team to build fully native Windows apps, reducing reliance on WebView2 and web-based wrappers. This was confirmed by Partner Architect Rudy Huyn, who is actively hiring for this initiative.

Microsoft Clipchamp open in Windows desktop

After years of leaning on web technologies, Microsoft is moving back toward native performance and tighter OS integration.

Windows Latest’s analysis made it clear why Windows 11 keeps getting web apps instead of native apps. If Microsoft wants developers to take native frameworks seriously again, it has to lead by example. Building first-party native apps without web layers is a step in the right direction.

16. New Feature Flags system can replace ViVeTool

For years, power users used third-party tools like ViVeTool to enable hidden Windows features. Microsoft is now bringing that capability into the OS itself.

A new Feature Flags page in Settings will let Insider users toggle experimental features directly, without external tools.

Microsoft is adding a Feature Flags page under Windows Insider Program settings. Source: phantomofearth via X

This is clearly aimed at testers and enthusiasts (the page carries a warning label), but it also shows a change in how Microsoft handles experimentation. Instead of hiding everything behind unofficial tools, it’s making the process more transparent and accessible.

17. Feedback Hub and Insider experience improvements

The Feedback Hub is getting a redesign with faster submission, a cleaner interface, and better interaction with other users. The goal is to make reporting issues feel less like a chore.

Upcoming Feedback Hub update in Windows 11 showing the redesigned home screen and new feedback form. Source: Microsoft

The Windows Insider Program is also being updated. Clearer channel selection, better explanations of what each build offers, and more visibility into how feedback is used.

18. A quieter Windows with fewer ads and interruptions

Let’s just say I saved the best for last. Windows is becoming a quieter OS.

That includes reducing upsells for Edge, Bing, and Microsoft 365, cutting down intrusive prompts, and making the overall experience less aggressive. This has been directly acknowledged by Microsoft leadership, including Scott Hanselman.

It also ties into changes in Widgets, notifications, and even setup. The OS is being redesigned to interrupt less and stay out of your way.

Present day Discover feed in Widgets in Windows 11

Widgets are being dialed back. Instead of pushing content aggressively, Microsoft is introducing quieter defaults, better personalization, and more control over what shows up in the feed. The Discover section, in particular, is being cleaned up to feel less like a content dump.

When is Windows 11 getting all these updates?

Microsoft isn’t shipping this as one big update, and that’s probably a good thing.

The first wave is already rolling out to Windows Insiders, with more features landing through April. From there, everything moves into monthly updates throughout 2026, first as optional preview updates, and then into standard Patch Tuesday releases.

Some changes, like File Explorer fixes, Start menu improvements, and reduced Copilot clutter, are arriving early. Others, like deeper performance optimizations, native app transitions, and system-wide consistency fixes, will take longer and roll out gradually over the year.

Microsoft is settling in for a year-long grind to rebuild Windows 11, and for the first time, this feels like a genuine effort.

The post 18 new features coming to Windows 11 in 2026, confirmed by Microsoft appeared first on Windows Latest


Microsoft’s executive shake-up continues as developer division chief resigns

Vector illustration of the Microsoft logo.

Microsoft is losing another veteran executive. Julia Liuson, head of Microsoft's developer division (DevDiv), is resigning from the software giant after 34 years. Liuson spent the past 12 years leading Microsoft's developer business, during a period when Microsoft focused more on open source projects and acquired GitHub for $7.5 billion.

Liuson will continue as head of DevDiv until the end of June, and then move to an "advisory role" reporting to Microsoft CoreAI chief Jay Parikh, according to an internal memo seen by The Verge. It's not immediately clear who will replace Liuson, or whether the DevDiv team will simply report up to Parikh in the …

Read the full story at The Verge.


Reclaim Developer Hours through Smarter Vulnerability Prioritization with Docker and Mend.io


We recently announced an integration between Mend.io and Docker Hardened Images (DHI) that provides a seamless framework for managing container security. By automatically distinguishing between base-image vulnerabilities and application-layer risks, it uses VEX statements to separate exploitable from non-exploitable vulnerabilities, allowing your team to prioritize what really matters.

TL;DR: The Developer Value Proposition

The hallmark of this integration is its zero-configuration setup.

  • Automatic Detection: Mend.io identifies DHI base images automatically upon scanning. No manual tagging or configuration is required by the developer.
  • Visual Indicators: Within the Mend UI, DHI-protected packages are marked with a dedicated Docker icon and informative tooltips, providing immediate transparency into which components are managed by Docker’s hardened foundation.

  • Transparent Layers: Users can inspect findings by package, layer, and risk factor, ensuring a clear audit trail from the base OS to the custom application binaries.

Dynamic Risk Triage: VEX + Reachability

Standard scanners flag thousands of vulnerabilities that are present in the file system but never executed. This integration uses two layers of intelligence to filter the noise:

  • Risk Factor Integration: Mend.io incorporates Docker’s VEX (Vulnerability Exploitability eXchange) data as a primary source of “Risk Factor” identification.
  • The “Not Affected” Filter: If a CVE is marked as not_affected by Docker’s VEX data or determined to be Unreachable by Mend’s analysis, it is deprioritized.

  • Bulk Suppression: Developers can suppress non-functional risks in bulk—potentially clearing thousands of non-exploitable vulnerabilities with a single click—allowing teams to focus on the 1% of reachable, exploitable risks in their custom layers.
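The triage logic described above can be sketched in a few lines; note that the field names here are illustrative, not Mend's or Docker's actual schema:

```python
def triage(findings):
    """Split CVE findings into 'actionable' and 'deprioritized' piles.
    A finding is deprioritized when Docker's VEX data marks it
    not_affected, or when reachability analysis marks it unreachable.
    (Field names are illustrative, not the real Mend/Docker schema.)"""
    actionable, deprioritized = [], []
    for f in findings:
        if f.get("vex_status") == "not_affected" or not f.get("reachable", True):
            deprioritized.append(f)
        else:
            actionable.append(f)
    return actionable, deprioritized

findings = [
    {"cve": "CVE-2026-0001", "vex_status": "not_affected"},
    {"cve": "CVE-2026-0002", "reachable": False},
    {"cve": "CVE-2026-0003", "reachable": True},
]
act, dep = triage(findings)
print([f["cve"] for f in act])  # ['CVE-2026-0003']
```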

Operationalizing Security with Workflows

Mend.io allows organizations to move beyond simple scanning into automated governance:

  • SLA & Violation Management: Automatically trigger violations and set remediation deadlines (SLAs) based on vulnerability severity.
  • Custom Alerts: Configure workflows to receive instant notifications (via email or Jira) whenever a new DHI is added to the environment.

  • Pipeline Gating: Use Mend’s workflow engine to fail builds only when high-risk, reachable vulnerabilities are introduced in custom code, keeping the CI/CD pipeline moving.

Continuous Patching & AI-Assisted Migration

  • Automated Synchronization: For Enterprise DHI users, patched base images are automatically mirrored to Docker Hub private repositories. Mend.io verifies these updates, confirming that base-level risks have been mitigated without requiring a manual Pull Request.
  • Ask Gordon: Leverage Docker’s AI agent to analyze existing Dockerfiles and recommend the most suitable DHI foundation, reducing the friction of migrating legacy applications to a secure environment.

The Mend.io and Docker integration operationalizes this governance by providing an auditable trail of security declarations, ensuring compliance is a byproduct of the standard development workflow rather than a separate, manual task.

Learn more

Learn more about the integration and Docker’s VEX statements in the following links:

Read Mend’s point of view on the benefits of VEX: https://www.mend.io/blog/benefits-of-vex-for-sboms/


Agentic RAG with Ed Charbeneau

How do you make your agents more knowledgeable about your company data? Carl and Richard talk to Ed Charbeneau about Progress Agentic RAG-as-a-Service, using NucliaDB as a vector data store to organize your company information into a form an agent can work with efficiently. Ed talks about the various approaches available today for providing timely company data to agents and the power of a dedicated data store and service model so that you spend less time on plumbing and more time building a great agentic app. The products are open source and have great .NET SDKs - check them out!



Download audio: https://dts.podtrac.com/redirect.mp3/api.spreaker.com/download/episode/71125111/dotnetrocks_1997_agentic_rag.mp3

What's new in Foundry Labs - April 2026


AI innovation is accelerating — and Foundry Labs is where you can stay up to date.

The pace of AI innovation isn't just fast — it's fundamentally different from anything we've seen before. New architectures, new modalities, new benchmarks being broken week after week. For developers, keeping up isn't a nice-to-have; it's a competitive necessity.

But staying at the cutting edge is hard when the cutting edge keeps moving.

That's exactly why we created Microsoft Foundry Labs. It's the place where Microsoft's earliest AI experiments and research prototypes become accessible to builders — a sandbox where you get a first-hand look to explore, evaluate, and experiment with what's next.

Today, we're sharing a roundup of recent additions to Foundry Labs — from speech and vision to multimodal AI that's redefining what's possible at the edge.

MAI-Transcribe-1, MAI-Voice-1 & MAI-Image-2: Microsoft's First-Party AI Stack, Now in Foundry

Recently, we released 3 models from Microsoft AI (MAI) that are exclusively available to builders in Foundry in public preview:

  • MAI-Transcribe-1 is Microsoft's first-generation speech recognition model, delivering enterprise-grade accuracy across 25 languages at approximately 50% lower GPU cost than leading alternatives. It achieves an industry-leading 3.9% average Word Error Rate on the FLEURS benchmark — outperforming GPT-Transcribe, Gemini 3.1 Flash, and Whisper-large-v3 — while running at 2.5x the batch transcription speed of Microsoft's existing Azure Fast offering.
  • MAI-Voice-1 is a high-fidelity speech generation model capable of producing 60 seconds of expressive, natural-sounding audio in under one second on a single GPU. It preserves speaker identity and emotional nuance across long-form content — and now supports custom voice creation from just a few seconds of audio.
  • MAI-Image-2 is Microsoft's highest-capability text-to-image model, debuting at #3 on the Arena.ai leaderboard for image model families. It delivers at least 2x faster image generation on Foundry and Copilot compared to its predecessor, with improvements in natural lighting, skin tone accuracy, and in-image text clarity. Enterprise partners like WPP are already building with it at scale.

Together, these models give developers a complete end-to-end audio and visual AI stack — all under one platform, with the reliability and pricing transparency that enterprises need.
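For context, the Word Error Rate cited for MAI-Transcribe-1 is word-level edit distance (substitutions + insertions + deletions) divided by the number of reference words; here is the standard dynamic-programming implementation of that metric:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: Levenshtein distance over words,
    normalized by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

print(wer("the cat sat", "the bat sat"))  # one substitution in three words
```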

Harrier-oss-v1: State-of-the-Art Multilingual Text Embeddings

Search, retrieval, and semantic understanding are at the core of virtually every AI-powered application — and the quality of your text embeddings determines how well those experiences work across languages and domains. That's why we're excited to introduce harrier-oss-v1, a new family of open-source multilingual text embedding models, on Microsoft Foundry.

Harrier uses a decoder-only architecture with last-token pooling and L2 normalization to produce dense text embeddings — a design that enables it to excel across a wide range of downstream tasks including retrieval, clustering, semantic similarity, classification, bitext mining, and reranking.
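The last-token pooling plus L2 normalization described above can be sketched without any ML framework; the numbers below are toy stand-ins for real model activations, not Harrier's actual outputs:

```python
import math

def embed(hidden_states):
    """hidden_states: per-token vectors from a decoder-only model.
    Last-token pooling keeps only the final token's hidden state
    (the position that has attended to the whole sequence), then
    L2 normalization scales it to unit length so a dot product
    between two embeddings equals their cosine similarity."""
    last = hidden_states[-1]
    norm = math.sqrt(sum(x * x for x in last))
    return [x / norm for x in last]

# Toy 3-token, 4-dim "sequence" standing in for real model output.
states = [[0.1, 0.2, 0.0, 0.3],
          [0.4, 0.1, 0.2, 0.0],
          [3.0, 4.0, 0.0, 0.0]]
vec = embed(states)
print(vec)  # [0.6, 0.8, 0.0, 0.0] -- unit length
```

Because the model is instruction-tuned, steering it is just string manipulation: prepend a one-sentence instruction to the query text before embedding it.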

The family comes in three sizes to fit different latency and accuracy requirements:

Model                 Parameters   Embedding Dimension   Max Tokens   MTEB v2 Score
harrier-oss-v1-270m   270M         640                   32,768       66.5
harrier-oss-v1-0.6b   0.6B         1,024                 32,768       69.0
harrier-oss-v1-27b    27B          5,376                 32,768       74.3

All three variants achieve state-of-the-art results on the Multilingual MTEB v2 benchmark as of their release date. The 270M and 0.6B variants are further enhanced through knowledge distillation from the larger 27B model — meaning you get competitive performance even at smaller scale. 

With support for 94 languages — including Arabic, Chinese, Japanese, Korean, Hindi, Indonesian, and dozens of European languages — Harrier is purpose-built for global applications. And because it's instruction-tuned, you can customize embedding behavior for different scenarios simply by prepending a one-sentence natural language instruction to your query — no fine-tuning required.

Whether you're building multilingual RAG pipelines, cross-lingual document search, or semantic similarity features, Harrier gives you a production-ready embedding model that scales from edge to enterprise.

Phi-4-Reasoning-Vision-15B: Small Model, Big Reasoning

Vision models have historically been great at perception — identifying objects, reading text, describing scenes. But perception alone isn't enough for the next generation of agentic applications. What developers need is a model that can reason over what it sees.

That's exactly what Phi-4-Reasoning-Vision-15B delivers.

This new addition to the Phi-4 family combines high-resolution visual perception with selective, task-aware reasoning — giving developers the ability to toggle reasoning on or off at runtime, balancing latency and accuracy based on their use case.

Key use cases include:

  • Diagram-based math and document understanding — parse charts, tables, and visual problem sets with structured inference
  • GUI interpretation and grounding — ideal for computer-use agent (CUA) scenarios where the model needs to interpret screens and drive actions
  • Scientific and analytical reasoning — process complex visual inputs and produce multi-step, grounded conclusions
  • Education — build tutoring apps where students upload worksheets or diagrams and receive guided, step-by-step explanations

Despite being a compact 15B-parameter model, Phi-4-Reasoning-Vision-15B holds its own against significantly larger models — achieving 88.2% on ScreenSpot_v2 and 83.3% on ChartQA in internal benchmarks.

It's the right model when you need vision reasoning that's fast, efficient, and production-ready.

VibeVoice ASR: Longform, Structured Speech Recognition at Scale

Real-world audio is messy. Hour-long meetings, multi-speaker conversations, domain-specific jargon, and seamless code-switching between languages — these are the scenarios where most speech recognition systems fall apart. VibeVoice ASR was built specifically to solve that. 

Developed by Microsoft Research, VibeVoice ASR is a unified speech-to-text model that transcribes up to 60 minutes of continuous audio in a single pass — no manual chunking, no stitching, no context loss.

What makes it different is the richness of its output. Rather than returning a wall of text, VibeVoice ASR jointly performs:

  • Transcription — what was said
  • Speaker diarization — who said it
  • Timestamping — when they said it

All in one unified inference pass, without requiring any post-processing pipeline.

Additional capabilities include:

  • Customized hotwords — inject domain-specific vocabulary, names, or technical terms to improve accuracy in specialized contexts
  • 50+ language support — with native code-switching, no explicit language configuration required

VibeVoice ASR is also fully integrated with the Hugging Face Transformers ecosystem and discoverable in the Foundry Model Catalog, making it easy to evaluate and deploy using familiar tooling.
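The joint output described above — text, speaker, and timestamp from one pass — lends itself to straightforward post-processing. The segment structure below is my own illustration, not VibeVoice ASR's documented output schema:

```python
def format_transcript(segments):
    """Render diarized, timestamped ASR segments as a readable
    transcript. Segment fields (start, speaker, text) are
    illustrative, not the actual VibeVoice ASR schema."""
    lines = []
    for seg in segments:
        # mm:ss.s timestamp from a seconds-based start offset.
        start = f"{int(seg['start'] // 60):02d}:{seg['start'] % 60:04.1f}"
        lines.append(f"[{start}] {seg['speaker']}: {seg['text']}")
    return "\n".join(lines)

segments = [
    {"start": 0.0, "speaker": "SPK1", "text": "Welcome, everyone."},
    {"start": 63.5, "speaker": "SPK2", "text": "Thanks for joining."},
]
print(format_transcript(segments))
# [00:00.0] SPK1: Welcome, everyone.
# [01:03.5] SPK2: Thanks for joining.
```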

GigaTIME: Population-Scale Tumor Immune Microenvironment Modeling

Understanding how tumors interact with the immune system is one of the most complex — and consequential — challenges in precision oncology. Multiplex immunofluorescence (mIF) imaging can illuminate that relationship at the cellular level, but at thousands of dollars per sample, it's rarely feasible at scale.

GigaTIME changes that.

Developed by Microsoft Research in collaboration with Providence and the University of Washington, GigaTIME is a multimodal AI model that translates routine, low-cost hematoxylin and eosin (H&E) pathology slides — already a standard part of cancer care at just $5–$10 per sample — into high-resolution virtual multiplex immunofluorescence (mIF) images across 21 protein channels.

Trained on 40 million cells with paired H&E and mIF data, GigaTIME was applied to 14,256 cancer patients across 51 hospitals, generating a virtual population of ~300,000 mIF images spanning 24 cancer types and 306 cancer subtypes. The result: 1,234 statistically significant associations between tumor immune cell states and clinical attributes like biomarkers, staging, and survival — independently validated on 10,200 patients from The Cancer Genome Atlas (TCGA).

This was the first population-scale study of the tumor immune microenvironment based on spatial proteomics — a class of study previously out of reach due to mIF data scarcity.

GigaTIME is now publicly available on Foundry Labs and Hugging Face, open for researchers and developers to explore and build on.

What's Next

Foundry Labs is where Microsoft's most ambitious AI research becomes accessible to builders. Whether you're building voice agents, multimodal pipelines, or intelligent document processors — the tools are here, and they're only getting better.

Stay tuned — there's more coming soon.

Read the whole story
alvinashcraft
28 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Build and Host MCP Apps on Azure App Service


MCP Apps are here, and they're a game-changer for building AI tools with interactive UIs. If you've been following the Model Context Protocol (MCP) ecosystem, you've probably heard about the MCP Apps spec — the first official MCP extension that lets your tools return rich, interactive UIs that render directly inside AI chat clients like Claude Desktop, ChatGPT, VS Code Copilot, Goose, and Postman.

And here's the best part: you can host them on Azure App Service. In this post, I'll walk you through building a weather widget MCP App and deploying it to App Service. You'll have a production-ready MCP server serving interactive UIs in under 10 minutes.

What Are MCP Apps?

MCP Apps extend the Model Context Protocol by combining tools (the functions your AI client can call) with UI resources (the interactive interfaces that display the results). The pattern is simple:

  1. A tool declares a _meta.ui.resourceUri in its metadata
  2. When the tool is invoked, the MCP host fetches that UI resource
  3. The UI renders in a sandboxed iframe inside the chat client

The key insight? MCP Apps are just web apps — HTML, JavaScript, and CSS served through MCP. And that's exactly what App Service does best.

The MCP Apps spec supports cross-client rendering, so the same UI works in Claude Desktop, VS Code Copilot, ChatGPT, and other MCP-enabled clients. Your weather widget, map viewer, or data dashboard becomes a universal component in the AI ecosystem.
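Concretely, the declared metadata is just an extra `_meta` block on an otherwise ordinary tool definition. Here's a sketch of that shape in TypeScript — the interface below is a simplified assumption for illustration; the load-bearing piece is the `_meta.ui.resourceUri` field:

```typescript
// Illustrative shape of a tool definition that opts into an MCP App UI.
// The interface is a simplified assumption; the part the MCP Apps pattern
// relies on is _meta.ui.resourceUri, which tells the host where the UI lives.
interface ToolDefinition {
  name: string;
  description: string;
  inputSchema: object;
  _meta?: { ui?: { resourceUri: string } };
}

const getWeather: ToolDefinition = {
  name: "get_weather",
  description: "Get current weather for a location.",
  inputSchema: {
    type: "object",
    properties: { location: { type: "string" } },
  },
  _meta: { ui: { resourceUri: "ui://weather/index.html" } },
};
```

A host that sees `_meta.ui.resourceUri` on a tool knows to fetch that resource and render it alongside the tool result.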

Why App Service for MCP Apps?

Azure App Service is a natural fit for hosting MCP Apps. Here's why:

  • Always On — No cold starts. Your UI resources are served instantly, every time.
  • Easy Auth — Secure your MCP endpoint with Entra ID authentication out of the box, no code required.
  • Custom domains + TLS — Professional MCP server endpoints with your own domain and managed certificates.
  • Deployment slots — Canary and staged rollouts for MCP App updates without downtime.
  • Sidecars — Run backend services (Redis, message queues, monitoring agents) alongside your MCP server.
  • App Insights — Built-in telemetry to see which tools and UIs are being invoked, response times, and error rates.

Now, these are all capabilities you can add to a production MCP App, but the sample we're building today keeps things simple. We're focusing on the core pattern: serving MCP tools with interactive UIs from App Service. The production features are there when you need them.

When to Use Functions vs App Service for MCP Apps

Before we dive into the code, let's talk about Azure Functions. The Functions team has done great work with their MCP Apps quickstart, and if serverless is your preferred model, that's a fantastic option. Functions and App Service both host MCP Apps beautifully — they just serve different needs.

|                         | Azure Functions                                              | Azure App Service                                                                       |
|-------------------------|--------------------------------------------------------------|-----------------------------------------------------------------------------------------|
| Best for                | New, purpose-built MCP Apps that benefit from serverless scaling | MCP Apps that need always-on hosting, persistent state, or are part of larger web apps |
| Scaling                 | Scale to zero, pay per invocation                            | Dedicated plans, always running                                                         |
| Cold start              | Possible (mitigated by premium plan)                         | None (Always On)                                                                        |
| Deployment              | azd up with Functions template                               | azd up with App Service template                                                        |
| MCP Apps quickstart     | Available                                                    | This blog post!                                                                         |
| Additional capabilities | Event-driven triggers, durable functions                     | Easy Auth, custom domains, deployment slots, sidecars                                   |

Think of it this way: if you're building a new MCP App from scratch and want serverless economics, go with Functions. If you're adding MCP capabilities to an existing web app, need zero cold starts, or want production features like Easy Auth and deployment slots, App Service is your friend.

Build the Weather Widget MCP App

Let's build a simple MCP App that fetches weather data from the Open-Meteo API and displays it in an interactive widget. The sample uses ASP.NET Core for the MCP server and Vite for the frontend UI.

Here's the structure:

app-service-mcp-app-sample/
├── src/
│   ├── Program.cs              # MCP server setup
│   ├── WeatherTool.cs          # Weather tool with UI metadata
│   ├── WeatherUIResource.cs    # MCP resource serving the UI
│   ├── WeatherService.cs       # Open-Meteo API integration
│   └── app/                    # Vite frontend (weather widget)
│       └── src/
│           └── weather-app.ts  # MCP Apps SDK integration
├── .vscode/
│   └── mcp.json                # VS Code MCP server config
├── azure.yaml                  # Azure Developer CLI config
└── infra/                      # Bicep infrastructure

Program.cs — MCP Server Setup

The MCP server is an ASP.NET Core app that registers tools and UI resources:

using ModelContextProtocol;

var builder = WebApplication.CreateBuilder(args);

// Register WeatherService
builder.Services.AddSingleton<WeatherService>(sp =>
    new WeatherService(WeatherService.CreateDefaultClient()));

// Add MCP Server with HTTP transport, tools, and resources
builder.Services.AddMcpServer()
    .WithHttpTransport(t => t.Stateless = true)
    .WithTools<WeatherTool>()
    .WithResources<WeatherUIResource>();

var app = builder.Build();

// Map MCP endpoints (no auth required for this sample)
app.MapMcp("/mcp").AllowAnonymous();

app.Run();

AddMcpServer() configures the MCP protocol handler. WithHttpTransport() enables Streamable HTTP with stateless mode (no session management needed). WithTools<WeatherTool>() registers our weather tool, and WithResources<WeatherUIResource>() registers the UI resource that the MCP host will fetch and render. MapMcp("/mcp") maps the MCP endpoint at /mcp.

WeatherTool.cs — Tool with UI Metadata

The WeatherTool class defines the tool and uses the [McpMeta] attribute to declare a ui metadata block containing the resourceUri. This tells the MCP host where to fetch the interactive UI:

using System.ComponentModel;
using ModelContextProtocol.Server;

[McpServerToolType]
public class WeatherTool
{
    private readonly WeatherService _weatherService;

    public WeatherTool(WeatherService weatherService)
    {
        _weatherService = weatherService;
    }

    [McpServerTool]
    [Description("Get current weather for a location via Open-Meteo. Returns weather data that displays in an interactive widget.")]
    [McpMeta("ui", JsonValue = """{"resourceUri": "ui://weather/index.html"}""")]
    public async Task<object> GetWeather(
        [Description("City name to check weather for (e.g., Seattle, New York, Miami)")]
        string location)
    {
        var result = await _weatherService.GetCurrentWeatherAsync(location);
        return result;
    }
}

The key line is the [McpMeta("ui", ...)] attribute. This adds _meta.ui.resourceUri to the tool definition, pointing to the ui://weather/index.html resource. When the AI client calls this tool, the host fetches that resource and renders it in a sandboxed iframe alongside the tool result.

WeatherUIResource.cs — UI Resource

The UI resource class serves the bundled HTML as an MCP resource with the ui:// scheme and text/html;profile=mcp-app MIME type required by the MCP Apps spec:

using ModelContextProtocol.Protocol;
using ModelContextProtocol.Server;

[McpServerResourceType]
public class WeatherUIResource
{
    [McpServerResource(
        UriTemplate = "ui://weather/index.html",
        Name = "weather_ui",
        MimeType = "text/html;profile=mcp-app")]
    public static ResourceContents GetWeatherUI()
    {
        var filePath = Path.Combine(
            AppContext.BaseDirectory, "app", "dist", "index.html");
        var html = File.ReadAllText(filePath);

        return new TextResourceContents
        {
            Uri = "ui://weather/index.html",
            MimeType = "text/html;profile=mcp-app",
            Text = html
        };
    }
}

The [McpServerResource] attribute registers this method as the handler for the ui://weather/index.html resource. When the host fetches it, the bundled single-file HTML (built by Vite) is returned with the correct MIME type.
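For intuition, here's a TypeScript sketch of the payload the host gets back when it reads that resource — my assumption of the shape, mirroring the TextResourceContents the C# server builds above:

```typescript
// Sketch of the resource payload the host receives for ui://weather/index.html.
// Field names mirror the TextResourceContents built by the C# server above;
// the HTML body here is a placeholder, not the sample's real bundle.
interface TextResourceContents {
  uri: string;
  mimeType: string;
  text: string;
}

const resource: TextResourceContents = {
  uri: "ui://weather/index.html",
  mimeType: "text/html;profile=mcp-app", // the profile marks this as an MCP App UI
  text: "<!doctype html><html><!-- bundled weather widget --></html>",
};
```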

WeatherService.cs — Open-Meteo API Integration

The WeatherService class handles geocoding and weather data from the Open-Meteo API. Nothing MCP-specific here — it's just a standard HTTP client that geocodes a city name and fetches current weather observations.
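As a rough sketch of that two-step flow (the helper names below are illustrative, not the sample's actual code; the endpoints are the public Open-Meteo geocoding and forecast APIs):

```typescript
// Illustrative two-step weather lookup: geocode the city name, then fetch
// current conditions. Helper names are assumptions, not the sample's code.
function buildGeocodeUrl(city: string): string {
  return `https://geocoding-api.open-meteo.com/v1/search?name=${encodeURIComponent(city)}&count=1`;
}

function buildForecastUrl(lat: number, lon: number): string {
  return `https://api.open-meteo.com/v1/forecast?latitude=${lat}&longitude=${lon}&current_weather=true`;
}

async function getCurrentWeather(city: string): Promise<unknown> {
  const geo = await (await fetch(buildGeocodeUrl(city))).json();
  const { latitude, longitude } = geo.results[0]; // take the top geocoding hit
  const wx = await (await fetch(buildForecastUrl(latitude, longitude))).json();
  return wx.current_weather; // temperature, windspeed, weathercode, ...
}
```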

The UI Resource (Vite Frontend)

The app/ directory contains a TypeScript app built with Vite that renders the weather widget. It uses the @modelcontextprotocol/ext-apps SDK to communicate with the host:

import { App } from "@modelcontextprotocol/ext-apps";

const app = new App({ name: "Weather Widget", version: "1.0.0" });

// Handle tool results from the server
app.ontoolresult = (params) => {
  const data = parseToolResultContent(params.content);
  if (data) render(data);
};

// Adapt to host theme (light/dark)
app.onhostcontextchanged = (ctx) => {
  if (ctx.theme) applyTheme(ctx.theme);
};

await app.connect();

The SDK's App class handles the postMessage communication with the host. When the tool returns weather data, ontoolresult fires and the widget renders the temperature, conditions, humidity, and wind. The app also adapts to the host's theme so it looks native in both light and dark mode.

The frontend is bundled into a single index.html file using Vite and the vite-plugin-singlefile plugin, which inlines all JavaScript and CSS. This makes it easy to serve as a single MCP resource.
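The `parseToolResultContent` helper referenced above isn't shown in the excerpt; a plausible sketch (an assumption, not the sample's actual code) pulls the first text item out of the MCP content array and parses it as JSON:

```typescript
// Hypothetical sketch of parseToolResultContent: MCP tool results carry a
// content array; find the first text item and parse its JSON payload.
type ContentItem = { type: string; text?: string };

function parseToolResultContent(content: ContentItem[]): unknown {
  const textItem = content.find(
    (c) => c.type === "text" && typeof c.text === "string"
  );
  if (!textItem?.text) return null;
  try {
    return JSON.parse(textItem.text);
  } catch {
    return null; // non-JSON text: nothing for the widget to render
  }
}
```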

Run Locally

To run the sample locally, you'll need the .NET 9 SDK and Node.js 18+ installed. Clone the repo and run:

# Clone the repo
git clone https://github.com/seligj95/app-service-mcp-app-sample.git
cd app-service-mcp-app-sample

# Build the frontend
cd src/app
npm install
npm run build

# Run the MCP server
cd ..
dotnet run

The server starts on http://localhost:5000. Now connect from VS Code Copilot:

  1. Open your workspace in VS Code
  2. The sample includes a .vscode/mcp.json that configures the local MCP server:
    {
      "servers": {
        "local-mcp-appservice": {
          "type": "http",
          "url": "http://localhost:5000/mcp"
        }
      }
    }
  3. Open the GitHub Copilot Chat panel
  4. Ask: "What's the weather in Seattle?"

Copilot will invoke the GetWeather tool, and the interactive weather widget will render inline in the chat:

Weather widget MCP App rendering inline in VS Code Copilot Chat

Deploy to Azure

Deploying to Azure is even easier. The sample includes an azure.yaml file and Bicep templates for App Service, so you can deploy with a single command:

cd app-service-mcp-app-sample
azd auth login
azd up

azd up will:

  1. Provision an App Service plan and web app in your subscription
  2. Build the .NET app and Vite frontend
  3. Deploy the app to App Service
  4. Output the public MCP endpoint URL

After deployment, azd will output a URL like https://app-abc123.azurewebsites.net. Update your .vscode/mcp.json to point to the remote server:

{
  "servers": {
    "remote-weather-app": {
      "type": "http",
      "url": "https://app-abc123.azurewebsites.net/mcp"
    }
  }
}

From that point forward, your MCP App is live. Any AI client that supports MCP Apps can invoke your weather tool and render the interactive widget — no local server required.

What's Next?

You've now built and deployed an MCP App to Azure App Service. There's plenty more to explore from here.

And remember: App Service gives you a full production hosting platform for your MCP Apps. You can add Easy Auth to secure your endpoints with Entra ID, wire up App Insights for telemetry, configure custom domains and TLS certificates, and set up deployment slots for blue/green rollouts. These features make App Service a great choice when you're ready to take your MCP App to production.

If you build something cool with MCP Apps and App Service, let me know — I'd love to see what you create!
