Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

What's new in Foundry Labs - April 2026


AI innovation is accelerating — and Foundry Labs is where you can stay up to date.

The pace of AI innovation isn't just fast — it's fundamentally different from anything we've seen before. New architectures, new modalities, new benchmarks being broken week after week. For developers, keeping up isn't a nice-to-have; it's a competitive necessity.

But staying at the cutting edge is hard when the cutting edge keeps moving.

That's exactly why we created Microsoft Foundry Labs. It's the place where Microsoft's earliest AI experiments and research prototypes become accessible to builders — a sandbox where you can explore, evaluate, and experiment first-hand with what's next.

Today, we're sharing a roundup of recent additions to Foundry Labs — from speech and vision to multimodal AI that's redefining what's possible at the edge.

MAI-Transcribe-1, MAI-Voice-1 & MAI-Image-2: Microsoft's First-Party AI Stack, Now in Foundry

Recently, we released three models from Microsoft AI (MAI) that are available exclusively to builders in Foundry, in public preview:

  • MAI-Transcribe-1 is Microsoft's first-generation speech recognition model, delivering enterprise-grade accuracy across 25 languages at approximately 50% lower GPU cost than leading alternatives. It achieves an industry-leading 3.9% average Word Error Rate on the FLEURS benchmark — outperforming GPT-Transcribe, Gemini 3.1 Flash, and Whisper-large-v3 — while running at 2.5x the batch transcription speed of Microsoft's existing Azure Fast offering.
  • MAI-Voice-1 is a high-fidelity speech generation model capable of producing 60 seconds of expressive, natural-sounding audio in under one second on a single GPU. It preserves speaker identity and emotional nuance across long-form content — and now supports custom voice creation from just a few seconds of audio.
  • MAI-Image-2 is Microsoft's highest-capability text-to-image model, debuting at #3 on the Arena.ai leaderboard for image model families. It delivers at least 2x faster image generation on Foundry and Copilot compared to its predecessor, with improvements in natural lighting, skin tone accuracy, and in-image text clarity. Enterprise partners like WPP are already building with it at scale.

Together, these models give developers a complete end-to-end audio and visual AI stack — all under one platform, with the reliability and pricing transparency that enterprises need.
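The headline Word Error Rate figure is worth grounding: WER is the word-level edit distance between a reference transcript and the model's hypothesis, divided by the reference length. Here is a rough sketch of the standard Levenshtein formulation (an illustration of the metric, not Microsoft's exact FLEURS scoring pipeline):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("the quick brown fox", "the quick brown fox"))   # 0.0
print(word_error_rate("the quick brown fox", "the quick browns fox"))  # 0.25
```

A 3.9% average WER means roughly one word-level error per 25 reference words.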

Harrier-oss-v1: State-of-the-Art Multilingual Text Embeddings

Search, retrieval, and semantic understanding are at the core of virtually every AI-powered application — and the quality of your text embeddings determines how well those experiences work across languages and domains. That's why we're excited to introduce harrier-oss-v1, a new family of open-source multilingual text embedding models, on Microsoft Foundry.

Harrier uses a decoder-only architecture with last-token pooling and L2 normalization to produce dense text embeddings — a design that enables it to excel across a wide range of downstream tasks including retrieval, clustering, semantic similarity, classification, bitext mining, and reranking.

The family comes in three sizes to fit different latency and accuracy requirements:

| Model | Parameters | Embedding Dimension | Max Tokens | MTEB v2 Score |
| --- | --- | --- | --- | --- |
| harrier-oss-v1-270m | 270M | 640 | 32,768 | 66.5 |
| harrier-oss-v1-0.6b | 0.6B | 1,024 | 32,768 | 69.0 |
| harrier-oss-v1-27b | 27B | 5,376 | 32,768 | 74.3 |

All three variants achieve state-of-the-art results on the Multilingual MTEB v2 benchmark as of their release date. The 270M and 0.6B variants are further enhanced through knowledge distillation from the larger 27B model — meaning you get competitive performance even at smaller scale. 

With support for 94 languages — including Arabic, Chinese, Japanese, Korean, Hindi, Indonesian, and dozens of European languages — Harrier is purpose-built for global applications. And because it's instruction-tuned, you can customize embedding behavior for different scenarios simply by prepending a one-sentence natural language instruction to your query — no fine-tuning required.

Whether you're building multilingual RAG pipelines, cross-lingual document search, or semantic similarity features, Harrier gives you a production-ready embedding model that scales from edge to enterprise.
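To make the pooling design concrete, here is a toy Python sketch of last-token pooling with L2 normalization, plus the instruction-prefix pattern described above. The hidden states are stand-ins for a real forward pass; this is not Harrier's actual inference code:

```python
import math

def embed_last_token(hidden_states: list[list[float]]) -> list[float]:
    """hidden_states: one vector per token from a decoder-only model.
    Last-token pooling + L2 normalization -> unit-length embedding."""
    vec = hidden_states[-1]                        # last-token pooling
    norm = math.sqrt(sum(x * x for x in vec))      # L2 norm
    return [x / norm for x in vec]

def build_query(instruction: str, query: str) -> str:
    """Instruction-tuned usage: steer embedding behavior by
    prepending a one-sentence natural-language instruction."""
    return f"{instruction}\n{query}"

# Toy hidden states (3 tokens, 3 dimensions) stand in for a model's output.
states = [[0.1, 0.2, 0.3], [0.5, 0.5, 0.0], [3.0, 4.0, 0.0]]
e = embed_last_token(states)
print(e)  # [0.6, 0.8, 0.0] — unit length
print(build_query("Retrieve passages that answer the question.",
                  "Where is the nearest station?"))
```

Because the output is unit-length, a plain dot product between two embeddings gives their cosine similarity directly.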

Phi-4-Reasoning-Vision-15B: Small Model, Big Reasoning

Vision models have historically been great at perception — identifying objects, reading text, describing scenes. But perception alone isn't enough for the next generation of agentic applications. What developers need is a model that can reason over what it sees.

That's exactly what Phi-4-Reasoning-Vision-15B delivers.

This new addition to the Phi-4 family combines high-resolution visual perception with selective, task-aware reasoning — giving developers the ability to toggle reasoning on or off at runtime, balancing latency and accuracy based on their use case.

Key use cases include:

  • Diagram-based math and document understanding — parse charts, tables, and visual problem sets with structured inference
  • GUI interpretation and grounding — ideal for computer-use agent (CUA) scenarios where the model needs to interpret screens and drive actions
  • Scientific and analytical reasoning — process complex visual inputs and produce multi-step, grounded conclusions
  • Education — build tutoring apps where students upload worksheets or diagrams and receive guided, step-by-step explanations

Despite being a compact 15B-parameter model, Phi-4-Reasoning-Vision-15B holds its own against significantly larger models — achieving 88.2% on ScreenSpot_v2 and 83.3% on ChartQA in internal benchmarks.

It's the right model when you need vision reasoning that's fast, efficient, and production-ready.

VibeVoice ASR: Longform, Structured Speech Recognition at Scale

Real-world audio is messy. Hour-long meetings, multi-speaker conversations, domain-specific jargon, and seamless code-switching between languages — these are the scenarios where most speech recognition systems fall apart. VibeVoice ASR was built specifically to solve that. 

Developed by Microsoft Research, VibeVoice ASR is a unified speech-to-text model that transcribes up to 60 minutes of continuous audio in a single pass — no manual chunking, no stitching, no context loss.

What makes it different is the richness of its output. Rather than returning a wall of text, VibeVoice ASR jointly performs:

  • Transcription — what was said
  • Speaker diarization — who said it
  • Timestamping — when they said it

All in one unified inference pass, without requiring any post-processing pipeline.
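To illustrate what that joint output can look like downstream, here is a hypothetical Python shape for a diarized, timestamped transcript. The field names are illustrative, not VibeVoice ASR's actual output schema:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """One diarized, timestamped span of a transcript
    (illustrative shape, not the model's real schema)."""
    speaker: str
    start: float  # seconds
    end: float    # seconds
    text: str

def render(segments: list[Segment]) -> str:
    """Format segments as a readable meeting transcript."""
    return "\n".join(
        f"[{s.start:07.2f}-{s.end:07.2f}] {s.speaker}: {s.text}"
        for s in segments
    )

segments = [
    Segment("Speaker 1", 0.0, 4.2, "Welcome everyone, let's get started."),
    Segment("Speaker 2", 4.5, 9.1, "Thanks. First item is the Q3 roadmap."),
]
print(render(segments))
```

Because the who/when/what come back together, a consumer can build speaker-attributed minutes or searchable timelines without a separate diarization pass.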

Additional capabilities include:

  • Customized hotwords — inject domain-specific vocabulary, names, or technical terms to improve accuracy in specialized contexts
  • 50+ language support — with native code-switching, no explicit language configuration required

VibeVoice ASR is also fully integrated with the Hugging Face Transformers ecosystem and discoverable in the Foundry Model Catalog, making it easy to evaluate and deploy using familiar tooling.

GigaTIME: Population-Scale Tumor Immune Microenvironment Modeling

Understanding how tumors interact with the immune system is one of the most complex — and consequential — challenges in precision oncology. Multiplex immunofluorescence (mIF) imaging can illuminate that relationship at the cellular level, but at thousands of dollars per sample, it's rarely feasible at scale.

GigaTIME changes that.

Developed by Microsoft Research in collaboration with Providence and the University of Washington, GigaTIME is a multimodal AI model that translates routine, low-cost hematoxylin and eosin (H&E) pathology slides — already a standard part of cancer care at just $5–$10 per sample — into high-resolution virtual multiplex immunofluorescence (mIF) images across 21 protein channels.

Trained on 40 million cells with paired H&E and mIF data, GigaTIME was applied to 14,256 cancer patients across 51 hospitals, generating a virtual population of ~300,000 mIF images spanning 24 cancer types and 306 cancer subtypes. The result: 1,234 statistically significant associations between tumor immune cell states and clinical attributes like biomarkers, staging, and survival — independently validated on 10,200 patients from The Cancer Genome Atlas (TCGA).

This was the first population-scale study of the tumor immune microenvironment based on spatial proteomics — a class of study previously out of reach due to mIF data scarcity.

GigaTIME is now publicly available on Foundry Labs and Hugging Face, open for researchers and developers to explore and build on.

What's Next

Foundry Labs is where Microsoft's most ambitious AI research becomes accessible to builders. Whether you're building voice agents, multimodal pipelines, or intelligent document processors — the tools are here, and they're only getting better.

Stay tuned — there's more coming soon.

Read the whole story
alvinashcraft
11 seconds ago
reply
Pennsylvania, USA
Share this story
Delete

Build and Host MCP Apps on Azure App Service


MCP Apps are here, and they're a game-changer for building AI tools with interactive UIs. If you've been following the Model Context Protocol (MCP) ecosystem, you've probably heard about the MCP Apps spec — the first official MCP extension that lets your tools return rich, interactive UIs that render directly inside AI chat clients like Claude Desktop, ChatGPT, VS Code Copilot, Goose, and Postman.

And here's the best part: you can host them on Azure App Service. In this post, I'll walk you through building a weather widget MCP App and deploying it to App Service. You'll have a production-ready MCP server serving interactive UIs in under 10 minutes.

What Are MCP Apps?

MCP Apps extend the Model Context Protocol by combining tools (the functions your AI client can call) with UI resources (the interactive interfaces that display the results). The pattern is simple:

  1. A tool declares a _meta.ui.resourceUri in its metadata
  2. When the tool is invoked, the MCP host fetches that UI resource
  3. The UI renders in a sandboxed iframe inside the chat client

The key insight? MCP Apps are just web apps — HTML, JavaScript, and CSS served through MCP. And that's exactly what App Service does best.
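As a sketch, a tool definition carrying the UI pointer might look like this. The `name`, `description`, and `inputSchema` values here are illustrative; the `_meta.ui.resourceUri` key is the spec's mechanism described above:

```json
{
  "name": "get_weather",
  "description": "Get current weather for a location.",
  "inputSchema": {
    "type": "object",
    "properties": { "location": { "type": "string" } }
  },
  "_meta": {
    "ui": { "resourceUri": "ui://weather/index.html" }
  }
}
```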

The MCP Apps spec supports cross-client rendering, so the same UI works in Claude Desktop, VS Code Copilot, ChatGPT, and other MCP-enabled clients. Your weather widget, map viewer, or data dashboard becomes a universal component in the AI ecosystem.

Why App Service for MCP Apps?

Azure App Service is a natural fit for hosting MCP Apps. Here's why:

  • Always On — No cold starts. Your UI resources are served instantly, every time.
  • Easy Auth — Secure your MCP endpoint with Entra ID authentication out of the box, no code required.
  • Custom domains + TLS — Professional MCP server endpoints with your own domain and managed certificates.
  • Deployment slots — Canary and staged rollouts for MCP App updates without downtime.
  • Sidecars — Run backend services (Redis, message queues, monitoring agents) alongside your MCP server.
  • App Insights — Built-in telemetry to see which tools and UIs are being invoked, response times, and error rates.

Now, these are all capabilities you can add to a production MCP App, but the sample we're building today keeps things simple. We're focusing on the core pattern: serving MCP tools with interactive UIs from App Service. The production features are there when you need them.

When to Use Functions vs App Service for MCP Apps

Before we dive into the code, let's talk about Azure Functions. The Functions team has done great work with their MCP Apps quickstart, and if serverless is your preferred model, that's a fantastic option. Functions and App Service both host MCP Apps beautifully — they just serve different needs.

| | Azure Functions | Azure App Service |
| --- | --- | --- |
| Best for | New, purpose-built MCP Apps that benefit from serverless scaling | MCP Apps that need always-on hosting, persistent state, or are part of larger web apps |
| Scaling | Scale to zero, pay per invocation | Dedicated plans, always running |
| Cold start | Possible (mitigated by premium plan) | None (Always On) |
| Deployment | azd up with Functions template | azd up with App Service template |
| MCP Apps quickstart | Available | This blog post! |
| Additional capabilities | Event-driven triggers, durable functions | Easy Auth, custom domains, deployment slots, sidecars |

Think of it this way: if you're building a new MCP App from scratch and want serverless economics, go with Functions. If you're adding MCP capabilities to an existing web app, need zero cold starts, or want production features like Easy Auth and deployment slots, App Service is your friend.

Build the Weather Widget MCP App

Let's build a simple MCP App that fetches weather data from the Open-Meteo API and displays it in an interactive widget. The sample uses ASP.NET Core for the MCP server and Vite for the frontend UI.

Here's the structure:

app-service-mcp-app-sample/
├── src/
│   ├── Program.cs              # MCP server setup
│   ├── WeatherTool.cs          # Weather tool with UI metadata
│   ├── WeatherUIResource.cs    # MCP resource serving the UI
│   ├── WeatherService.cs       # Open-Meteo API integration
│   └── app/                    # Vite frontend (weather widget)
│       └── src/
│           └── weather-app.ts  # MCP Apps SDK integration
├── .vscode/
│   └── mcp.json                # VS Code MCP server config
├── azure.yaml                  # Azure Developer CLI config
└── infra/                      # Bicep infrastructure

Program.cs — MCP Server Setup

The MCP server is an ASP.NET Core app that registers tools and UI resources:

using ModelContextProtocol;

var builder = WebApplication.CreateBuilder(args);

// Register WeatherService
builder.Services.AddSingleton<WeatherService>(sp =>
    new WeatherService(WeatherService.CreateDefaultClient()));

// Add MCP Server with HTTP transport, tools, and resources
builder.Services.AddMcpServer()
    .WithHttpTransport(t => t.Stateless = true)
    .WithTools<WeatherTool>()
    .WithResources<WeatherUIResource>();

var app = builder.Build();

// Map MCP endpoints (no auth required for this sample)
app.MapMcp("/mcp").AllowAnonymous();

app.Run();

AddMcpServer() configures the MCP protocol handler. WithHttpTransport() enables Streamable HTTP with stateless mode (no session management needed). WithTools<WeatherTool>() registers our weather tool, and WithResources<WeatherUIResource>() registers the UI resource that the MCP host will fetch and render. MapMcp("/mcp") maps the MCP endpoint at /mcp.

WeatherTool.cs — Tool with UI Metadata

The WeatherTool class defines the tool and uses the [McpMeta] attribute to declare a ui metadata block containing the resourceUri. This tells the MCP host where to fetch the interactive UI:

using System.ComponentModel;
using ModelContextProtocol.Server;

[McpServerToolType]
public class WeatherTool
{
    private readonly WeatherService _weatherService;

    public WeatherTool(WeatherService weatherService)
    {
        _weatherService = weatherService;
    }

    [McpServerTool]
    [Description("Get current weather for a location via Open-Meteo. Returns weather data that displays in an interactive widget.")]
    [McpMeta("ui", JsonValue = """{"resourceUri": "ui://weather/index.html"}""")]
    public async Task<object> GetWeather(
        [Description("City name to check weather for (e.g., Seattle, New York, Miami)")]
        string location)
    {
        var result = await _weatherService.GetCurrentWeatherAsync(location);
        return result;
    }
}

The key line is the [McpMeta("ui", ...)] attribute. This adds _meta.ui.resourceUri to the tool definition, pointing to the ui://weather/index.html resource. When the AI client calls this tool, the host fetches that resource and renders it in a sandboxed iframe alongside the tool result.

WeatherUIResource.cs — UI Resource

The UI resource class serves the bundled HTML as an MCP resource with the ui:// scheme and text/html;profile=mcp-app MIME type required by the MCP Apps spec:

using ModelContextProtocol.Protocol;
using ModelContextProtocol.Server;

[McpServerResourceType]
public class WeatherUIResource
{
    [McpServerResource(
        UriTemplate = "ui://weather/index.html",
        Name = "weather_ui",
        MimeType = "text/html;profile=mcp-app")]
    public static ResourceContents GetWeatherUI()
    {
        var filePath = Path.Combine(
            AppContext.BaseDirectory, "app", "dist", "index.html");
        var html = File.ReadAllText(filePath);

        return new TextResourceContents
        {
            Uri = "ui://weather/index.html",
            MimeType = "text/html;profile=mcp-app",
            Text = html
        };
    }
}

The [McpServerResource] attribute registers this method as the handler for the ui://weather/index.html resource. When the host fetches it, the bundled single-file HTML (built by Vite) is returned with the correct MIME type.

WeatherService.cs — Open-Meteo API Integration

The WeatherService class handles geocoding and weather data from the Open-Meteo API. Nothing MCP-specific here — it's just a standard HTTP client that geocodes a city name and fetches current weather observations.

The UI Resource (Vite Frontend)

The app/ directory contains a TypeScript app built with Vite that renders the weather widget. It uses the @modelcontextprotocol/ext-apps SDK to communicate with the host:

import { App } from "@modelcontextprotocol/ext-apps";

const app = new App({ name: "Weather Widget", version: "1.0.0" });

// Handle tool results from the server
app.ontoolresult = (params) => {
  const data = parseToolResultContent(params.content);
  if (data) render(data);
};

// Adapt to host theme (light/dark)
app.onhostcontextchanged = (ctx) => {
  if (ctx.theme) applyTheme(ctx.theme);
};

await app.connect();

The SDK's App class handles the postMessage communication with the host. When the tool returns weather data, ontoolresult fires and the widget renders the temperature, conditions, humidity, and wind. The app also adapts to the host's theme so it looks native in both light and dark mode.

The frontend is bundled into a single index.html file using Vite and the vite-plugin-singlefile plugin, which inlines all JavaScript and CSS. This makes it easy to serve as a single MCP resource.

Run Locally

To run the sample locally, you'll need the .NET 9 SDK and Node.js 18+ installed. Clone the repo and run:

# Clone the repo
git clone https://github.com/seligj95/app-service-mcp-app-sample.git
cd app-service-mcp-app-sample

# Build the frontend
cd src/app
npm install
npm run build

# Run the MCP server
cd ..
dotnet run

The server starts on http://localhost:5000. Now connect from VS Code Copilot:

  1. Open your workspace in VS Code
  2. The sample includes a .vscode/mcp.json that configures the local MCP server:
    {
      "servers": {
        "local-mcp-appservice": {
          "type": "http",
          "url": "http://localhost:5000/mcp"
        }
      }
    }
  3. Open the GitHub Copilot Chat panel
  4. Ask: "What's the weather in Seattle?"

Copilot will invoke the GetWeather tool, and the interactive weather widget will render inline in the chat:

Weather widget MCP App rendering inline in VS Code Copilot Chat

Deploy to Azure

Deploying to Azure is even easier. The sample includes an azure.yaml file and Bicep templates for App Service, so you can deploy with a single command:

cd app-service-mcp-app-sample
azd auth login
azd up

azd up will:

  1. Provision an App Service plan and web app in your subscription
  2. Build the .NET app and Vite frontend
  3. Deploy the app to App Service
  4. Output the public MCP endpoint URL

After deployment, azd will output a URL like https://app-abc123.azurewebsites.net. Update your .vscode/mcp.json to point to the remote server:

{
  "servers": {
    "remote-weather-app": {
      "type": "http",
      "url": "https://app-abc123.azurewebsites.net/mcp"
    }
  }
}

From that point forward, your MCP App is live. Any AI client that supports MCP Apps can invoke your weather tool and render the interactive widget — no local server required.

What's Next?

You've now built and deployed an MCP App to Azure App Service. Here's what you can explore next:

And remember: App Service gives you a full production hosting platform for your MCP Apps. You can add Easy Auth to secure your endpoints with Entra ID, wire up App Insights for telemetry, configure custom domains and TLS certificates, and set up deployment slots for blue/green rollouts. These features make App Service a great choice when you're ready to take your MCP App to production.

If you build something cool with MCP Apps and App Service, let me know — I'd love to see what you create!


Hello, World - Welcome to the Copilot Studio Blog!


We’re so excited you’re here.

Today marks the launch of the Copilot Studio Tech Community Blog, a space for the builders and admins shaping the agent era in the real world.

Agents are moving from demos to production, so we’ll focus on practical patterns for building, shipping, and governing at scale, beyond what docs and product announcements cover. Makers will find templates and build tactics; IT and security will get governance guidance; developers will get deeper dives on extensibility and production operations.

Hit Follow at the top of the page, and introduce yourself in the Copilot Studio discussion space (https://aka.ms/MCSdiscussions) by sharing what you’re building.

What Is Microsoft Copilot Studio?

Microsoft Copilot Studio is Microsoft’s platform for building and governing AI agents across the enterprise, from prototyping to production. For the full product overview and getting-started guidance, visit the Copilot Studio website.

What’s New in Copilot Studio

We’re not starting this blog quietly. Here’s a look at three of the biggest updates that have shipped recently.

1. Agent Evaluation — Now Generally Available

Testing agents manually, one conversation at a time, doesn’t scale. Agent Evaluation gives makers a built-in, no-code way to test and monitor agent quality, safety, and reliability at scale. Create evaluation sets using AI-generated queries, past test sessions, or your own QA pairs — then run them automatically to catch regressions before they reach users.

2. Computer-using agents — more secure UI automation at scale

Computer-using agents (CUA) can now automate tasks through user interfaces—clicking, typing, and navigating apps when an API isn’t available—while delivering a more secure approach for UI automation at scale (with stronger controls for admin governance and credential handling).

3. Multi-agent orchestration, connected experiences, and faster prompt iteration

One of the biggest recent updates is improved multi-agent orchestration, alongside new connected experiences and faster prompt iteration—so you can coordinate specialized agents more effectively and refine behavior faster as you move from prototype to production.

Resources to Bookmark

| Resource | What It's For |
| --- | --- |
| Copilot Studio Documentation | Official product docs, tutorials, and references |
| 2026 Release Wave 1 Plan | What's shipping April–September 2026 |
| Copilot Studio Discussion Space | Ask questions, share ideas, connect with peers |

Next steps

1. Hit Follow at the top of the page and introduce yourself in the Copilot Studio discussion space (https://aka.ms/MCSdiscussions) by sharing what you’re building.

2. New to Copilot Studio? Sign up for the free trial and bookmark the resources above for docs, release plans, training, and governance guidance.

We can’t wait to see what you create.


Microsoft ODBC Driver 18.6.2 for SQL


What Is the Microsoft ODBC Driver for SQL?

The Microsoft ODBC Driver for SQL provides native connectivity from Windows, Linux, and macOS applications to SQL Server, Azure SQL Database, Azure SQL Managed Instance, and Microsoft Fabric. It is the recommended driver for new application development using the ODBC API, and it supports Always Encrypted, distributed transactions, and modern authentication methods including Microsoft Entra ID (formerly Azure Active Directory).

Whether you're building high-throughput data pipelines, managing enterprise databases, or developing cloud-native applications on Microsoft Fabric, the ODBC driver is a foundational component of the SQL Server connectivity stack.

What's New in 18.6.2

Improved Vector Parameter Handling for Prepared Statements

Version 18.6.2 improves the handling of output and input/output vector parameters when using prepared statements. This enhancement benefits applications that rely on parameterized queries with array bindings — a common pattern in batch processing and high-performance data access layers.

Microsoft Fabric Redirection Support (Up to 10 Redirections)

The driver now allows up to 10 server redirections per connection attempt, up from previous limits. This change directly supports Microsoft Fabric redirection scenarios, where connections may be transparently routed through multiple endpoints before reaching the target workspace. If your applications connect to Fabric SQL endpoints, this update ensures more reliable connectivity in complex routing topologies.

Alpine Linux Packaging Improvements

Architecture detection and packaging have been improved for Alpine Linux environments, making it easier to deploy the driver in lightweight, container-based workloads that use Alpine as a base image.

Bug Fixes

This release addresses several important issues reported by the community and identified through internal testing:

Parameter Array Processing

  • SQL_ATTR_PARAMS_PROCESSED_PTR accuracy — Fixed an issue where the number of processed parameter sets was not reported correctly when executing parameter arrays. Applications that inspect SQL_ATTR_PARAMS_PROCESSED_PTR after batch execution will now see the correct count.
  • SQL_PARAM_IGNORE handling — Fixed SQL_ATTR_PARAMS_PROCESSED_PTR and row counting when SQL_PARAM_IGNORE is used within parameter arrays, ensuring that ignored parameters are accounted for properly.

Crash Fixes

  • SQLNumResultCols segmentation fault — Resolved a segfault that occurred when calling SQLNumResultCols in describe-only scenarios where no parameter bindings are present.
  • Table-valued parameter (TVP) NULL handling — Fixed a segmentation fault triggered by NULL values in TVP arguments. Applications passing TVPs with nullable columns should no longer experience unexpected crashes.

bcp_bind Consecutive Field Terminators (Known Issue from 18.6.1)

  • bcp_bind fix — Corrected bcp_bind to properly handle consecutive field terminators without misinterpreting them as empty fields. This resolves a known issue introduced in version 18.6.1, where consecutive terminators were incorrectly interpreted as NULL values instead of empty strings. If you deferred upgrading to 18.6.1 because of this issue, 18.6.2 is the recommended target version.
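The distinction at the heart of this fix is easy to illustrate: with a field terminator of `|`, two consecutive terminators delimit an empty string, not a NULL. A tiny Python sketch of the semantics (an illustration only, not the driver's implementation):

```python
def split_fields(row: str, terminator: str) -> list[str]:
    """Consecutive terminators yield empty-string fields, not NULLs.
    Illustrates the corrected 18.6.2 bcp_bind behavior."""
    return row.split(terminator)

row = "id1||2026-04-01"          # middle field is empty, not NULL
fields = split_fields(row, "|")
print(fields)                    # ['id1', '', '2026-04-01']
assert fields[1] == ""           # empty string — 18.6.1 misread this as NULL
```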

Linux Packaging

  • Debian EULA acceptance — Fixed Debian package installation to correctly honor EULA acceptance and complete successfully, eliminating a friction point for automated deployments.
  • RPM side-by-side installation — Fixed RPM packaging rules to allow installing multiple driver versions side by side, which is important for environments that need to maintain backward compatibility or perform staged rollouts.

Distributed Transactions

  • XA recovery — Fixed XA recovery to compute transaction IDs correctly, avoiding scenarios where recoverable transactions could be missed during the recovery process. This is a critical fix for applications using distributed transactions with XA transaction managers.

Upgrading from Older Versions

If you are upgrading from a version prior to 18.6.1, you will also benefit from the features introduced in that release:

  • Vector data type support — Native support for the vector data type (float32), enabling AI and machine learning scenarios directly through ODBC.
  • ConcatNullYieldsNull property — Connection-level control over null concatenation behavior.
  • New platform support — Azure Linux 3.0 ARM, Debian 13, Red Hat 10, and Ubuntu 25.10.

Version 18.6.2 builds on these additions with the stability and correctness fixes described above.

Download & Installation

Windows

| Platform | Download Link |
| --- | --- |
| x64 | Download |
| x86 | Download |
| ARM64 | Download |

Linux & macOS

Installation packages for supported Linux distributions and macOS are available on Microsoft Learn:

Documentation & Release Notes

For the full list of changes, platform support details, and known issues, see the official release notes:

Get Started

We encourage all users to upgrade to version 18.6.2 to take advantage of the fixes and improvements in this release — particularly if you are using parameter arrays, table-valued parameters, bcp operations, or connecting to Microsoft Fabric endpoints.

As always, we welcome your feedback. If you encounter issues, please report them through the SQL Server feedback channel or open an issue on the Microsoft ODBC Driver GitHub repository.

Happy coding!


Expanding Swift's IDE Support


You can now write Swift in a broader range of popular IDEs, including Cursor, VSCodium, AWS’s Kiro, and Google’s Antigravity. By leveraging VS Code extension compatibility, these editors tap directly into the Open VSX Registry, where the official Swift extension is now live.

Swift has long supported development using multiple IDEs including VS Code, Xcode, Neovim, and Emacs. Swift is also compatible with editors that implement the Language Server Protocol (LSP). This growing ecosystem of editor support is particularly significant as Swift continues to show its versatility across platforms and development environments, including agentic IDEs.

Swift on Open VSX

The Swift extension for VS Code is now officially available on the Open VSX Registry, the vendor-neutral, open source extension registry hosted by the Eclipse Foundation. The extension adds first-class language support for projects built with Swift Package Manager, enabling seamless cross-platform development on macOS, Linux, and Windows. This milestone brings Swift support, including code completion, refactoring, full debugging, a test explorer, and DocC support, to a broader ecosystem of compatible editors, and it allows agentic IDEs like Cursor and Antigravity to install Swift automatically, with no manual download required.


Swift in Cursor, powered by the Swift extension on Open VSX.


Get Started

To start using the Swift extension in any Open VSX-compatible editor, simply open the Extensions panel, search for ‘Swift’, and install the extension.

If you’re using Cursor, getting started is easier than ever. Check out our new dedicated guide: Setting up Cursor for Swift Development. It walks you through setup and features, and shows how to configure custom Swift skills for your AI workflows.

Swift now has support for a wider range of modern editors and IDEs to meet developers where they are. Download the extension, try it out in your editor of choice, and don’t forget to share your feedback!


Swift 6.3 officially brings Android support



Swift 6.3 has been released, and it includes the first official Swift SDK for Android. This release moves Swift on Android from nightly preview builds into the official Swift release line.

With Swift 6.3, developers can target Android with Swift, update Swift packages to build for Android, and integrate Swift code into existing Android apps written in Kotlin or Java by using Swift Java and Swift Java JNI Core. Swift.org describes this as a significant milestone for cross-platform development in the language.

This release follows the Android SDK preview announced in October 2025. At that stage, Swift introduced nightly preview releases of the Swift SDK for Android and positioned the Android Workgroup as the group driving the platform forward.

Swift has also published official setup documentation for the Android SDK. The documented setup requires three components: the Swift toolchain, the Swift SDK for Android, and the Android NDK. The same guide explains that Android builds are cross-compiled from a desktop host such as macOS or Linux to an Android target.

Here is the guide:
https://www.swift.org/documentation/articles/swift-sdk-for-android-getting-started.html

Swift’s platform support page currently lists Android as a deployment-only platform, with a minimum deployment target of Android 9 (API 28). That means Swift can officially target Android, but Android is not listed as a platform that runs the Swift development tools themselves.

From a runtime perspective, Swift compiles directly to native machine code on Android and bundles the native runtime needed for the standard library and core libraries such as Dispatch and Foundation. Swift also relies on Java interoperability tooling to work with Android APIs exposed through Java and Kotlin.
