TypeScript 7 Beta Now Enabled by Default in Visual Studio 2026 18.6 Insiders 3

In Visual Studio 2026 18.6 Insiders 3 we have updated the built-in TypeScript SDK to TypeScript 7 Beta (native preview). The TypeScript SDK provides the compiler and language service used for TypeScript and JavaScript support in Visual Studio. This update impacts any project that uses the built-in SDK, including TypeScript projects, ASP.NET Core projects with npm packages, and any TypeScript or JavaScript files you are editing. If your project doesn’t have a specific TypeScript version installed, Visual Studio will use the new native compiler by default. In this post we will go over what this change means for you, how to use a different version of TypeScript if needed, and the known issues we are currently working on. You can download the latest Insiders release with the link below.

What is the TypeScript 7 native preview?

TypeScript 7 is a native port of the TypeScript compiler and tools. This is a significant change that brings native execution speed and shared-memory parallelism to the TypeScript compiler and language service. We have seen compile time improvements of up to 10x for large code bases, along with substantially reduced memory usage. If you are working with large TypeScript or JavaScript projects, you should see a noticeable improvement across your entire development experience.

In addition to faster compile times, the TypeScript language service has significant performance improvements as well. We have seen project load times decrease by roughly 8x. The improvements are not limited to load times; you should see a general speed improvement in any feature that interacts with the TypeScript language service. Some of the Visual Studio features that benefit from these improvements include:

  • IntelliSense and completions. Code completions and parameter info should appear faster, especially in large projects where you may have previously noticed a delay.
  • Find All References. Searching for references across your solution is significantly faster.
  • Go to Definition. Navigating to definitions is more responsive.
  • Error diagnostics. Squiggles and error lists update more quickly as you type.
  • Project load times. Opening TypeScript and JavaScript projects in Visual Studio should be noticeably faster, with load times decreasing by roughly 8x.

You will spend less time waiting for the IDE to respond and more time being productive working on your applications.

For more details on TypeScript 7 and the performance improvements, see the Announcing TypeScript 7.0 Beta blog post.

Using a different TypeScript version

Visual Studio ships with a built-in version of the TypeScript compiler and language service for cases where the project doesn’t specify a specific version to be used. Starting with this release, that built-in version is TypeScript 7 Beta. If you prefer to use a different version, you can install it in your project and Visual Studio will always use the project-local version over the built-in one.

Disabling TypeScript 7 native preview

If you want to go back to using the previous TypeScript language service, you can disable the native preview in Visual Studio. Go to Tools > Options > Preview Features and search for “native preview”. Uncheck the Enable JavaScript/TypeScript Native Language Service Preview option and restart Visual Studio.

Using TypeScript 6.x (GA)

To use the current stable release, install the typescript package in your project.

npm install -D typescript@^6.0.0

Using a specific TypeScript 7 native preview version

If you want to pin to a specific version of the native preview, install the @typescript/native-preview package.

npm install -D @typescript/native-preview@beta

In both cases, Visual Studio will detect the version in your node_modules and use that instead of the built-in SDK.

Known issues

TypeScript 7 brings significant performance improvements to Visual Studio, and we are continuing to refine the experience. Below are the known issues that we are actively working on. This is not an exhaustive list.

  • IntelliSense. You may notice completions not appearing in some cases. In .cshtml files, the TypeScript completion list may not appear inside a <script> tag. When accepting a completion for the last argument of a function, the closing parenthesis may be removed. Pressing Ctrl+Space to retrigger IntelliSense can work around missing completions.
  • Code Actions & Refactoring. Quick fixes (Ctrl+.) are not available yet. Only Copilot AI-based suggestions may appear. The Organize Imports command (Ctrl+R, Ctrl+G) is also not available.
  • Navigation & Search. The navigation bar dropdowns at the top of the editor do not show document symbols. Find All References (Shift+F12) shows a flat list without semantic grouping (read/write/declaration), and cross-file references may be incomplete. Code search results may show mismatched titles and descriptions.
  • CodeLens. Reference counts (e.g., “19 references”) do not appear above interface and class declarations.
  • Hover tooltips. Hover tooltips are missing the symbol icon and have different text coloring compared to the previous language service.
  • Snippets. Insert Snippet (Ctrl+K, Ctrl+X) does not work in JavaScript files.
  • JSDoc. Typing /** above a function with parameters does not auto-generate the JSDoc template with @param entries.
  • Formatting. Unchecking “Format on open block {” in Tools > Options > Text Editor > JavaScript/TypeScript > Formatting does not take effect.
  • Task List. If a TypeScript file contains both a TODO comment and a variable named “TODO”, the Task List may incorrectly show duplicate tasks.
  • File and folder rename. Renaming a file or folder in a TypeScript project does not consistently update import paths in other files.
  • File watching. When files are modified outside of Visual Studio, changes are not detected until the file is opened and modified inside the IDE. Errors from external edits will not appear in the Error List.

We appreciate your feedback as we work toward full parity.

Reporting feedback

If you have feedback on the TypeScript compiler or language service, the best place to file it is the typescript-go GitHub repo.

If you are running into an issue that is specific to Visual Studio, you can share feedback with us via Developer Community: report bugs via Report a Problem, and share suggestions for new features or improvements to existing ones.

We would love for you to try out the new experience and let us know how it’s working for you.

The post TypeScript 7 Beta Now Enabled by Default in Visual Studio 2026 18.6 Insiders 3 appeared first on Visual Studio Blog.

Building an AI-Powered Conference App with .NET’s Composable AI Stack

Building AI features into .NET applications often means stitching together models, vector databases, ingestion pipelines, and agent frameworks from different ecosystems. Each one has its own patterns, its own client libraries, and its own breaking changes when the next version ships. We’ve been working on a set of composable, extensible building blocks that give you stable abstractions across all of these concerns.

We’re excited to walk you through how we used them together. For a session at MVP Summit, we built an interactive conference assistant called ConferencePulse. It runs live polls, answers audience questions in real time, generates insights from engagement data, and summarizes the session when it wraps up. We built the app using the exact technologies we were there to present: Microsoft.Extensions.AI, Microsoft.Extensions.DataIngestion, Microsoft.Extensions.VectorData, Model Context Protocol (MCP), and Microsoft Agent Framework.

This post walks through the app and shows how each building block fits.

What we built

ConferencePulse is a Blazor Server app for live conference sessions. Attendees scan a QR code, join the session, and interact with the presenter through polls and Q&A. On the backend, AI powers several features:

  • Live polls that the AI generates based on session content. Attendees vote and results appear in real time.
  • Audience Q&A where AI answers questions using a RAG pipeline that pulls from the session knowledge base, Microsoft Learn docs, and GitHub wiki content.
  • Auto-generated insights that surface patterns in poll results and audience questions as they come in.
  • Session summary that runs when the presenter ends the session. Multiple AI agents analyze polls, questions, and insights concurrently, then merge their findings.

ConferencePulse presenter dashboard showing real-time poll results, audience questions, and AI-generated insights

We wanted an interactive session, not a slide deck. We wanted polls and audience insights. And we wanted to automate the preparation: point the app at a GitHub repo, and it downloads the markdown, processes it through a pipeline, and builds a searchable knowledge base. Polls, talking points, and Q&A answers are all grounded in that content.

The app runs on .NET 10, Blazor Server, and Aspire. Six projects cover the stack:

src/
├── ConferenceAssistant.Web/          ← Blazor Server (UI + orchestration)
├── ConferenceAssistant.Core/         ← Models, interfaces, session state
├── ConferenceAssistant.Ingestion/    ← Data ingestion pipeline + vector search
├── ConferenceAssistant.Agents/       ← AI agents, workflows, tools
├── ConferenceAssistant.Mcp/          ← MCP server tools + MCP client
└── ConferenceAssistant.AppHost/      ← .NET Aspire (Qdrant, PostgreSQL, Azure OpenAI)

Now let’s walk through the building blocks.

Microsoft.Extensions.AI: one interface, any provider

Microsoft.Extensions.AI gives you IChatClient, a unified abstraction that works with OpenAI, Azure OpenAI, Ollama, Foundry Local, and other providers. Every AI call in ConferencePulse goes through a single middleware pipeline.

var openaiBuilder = builder.AddAzureOpenAIClient("openai");

openaiBuilder.AddChatClient("chat")
    .UseFunctionInvocation()
    .UseOpenTelemetry()
    .UseLogging();

openaiBuilder.AddEmbeddingGenerator("embedding");

That’s it. Six lines. If you’ve worked with ASP.NET Core middleware, this pattern will feel familiar. Each .Use*() call wraps the inner client with additional behavior. UseFunctionInvocation() handles tool-call loops. UseOpenTelemetry() traces every call. UseLogging() captures request/response pairs.

Want to swap Azure OpenAI for Ollama? Change the inner client. The middleware stays the same.
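
As a rough sketch of that swap (the OllamaChatClient type, endpoint, and model name here are assumptions based on an Ollama provider package for Microsoft.Extensions.AI, not code from ConferencePulse):

using Microsoft.Extensions.AI;

// Hypothetical inner client pointed at a local Ollama instance.
IChatClient inner = new OllamaChatClient(new Uri("http://localhost:11434"), "llama3");

// The same middleware pipeline wraps the new inner client unchanged.
IChatClient chatClient = inner
    .AsBuilder()
    .UseFunctionInvocation()
    .UseOpenTelemetry()
    .UseLogging()
    .Build();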

This matters because IChatClient shows up everywhere in the app. Poll generation, Q&A, insights, ingestion enrichment, and multi-agent workflows all share this pipeline. You register it once and use it throughout.

DataIngestion + VectorData: the knowledge layer

AI models need context to give useful answers. Microsoft.Extensions.DataIngestion provides a pipeline for processing documents into searchable chunks. Microsoft.Extensions.VectorData provides a provider-agnostic abstraction over vector stores.

When ConferencePulse imports content from a GitHub repo, it runs the files through an ingestion pipeline:

IngestionDocumentReader reader = new MarkdownReader();

var tokenizer = TiktokenTokenizer.CreateForModel("gpt-4o");
var chunkerOptions = new IngestionChunkerOptions(tokenizer)
{
    MaxTokensPerChunk = 500,
    OverlapTokens = 50
};
IngestionChunker<string> chunker = new HeaderChunker(chunkerOptions);

var enricherOptions = new EnricherOptions(_chatClient) { LoggerFactory = _loggerFactory };

using var writer = new VectorStoreWriter<string>(
    _searchService.VectorStore,
    dimensionCount: 1536,
    new VectorStoreWriterOptions
    {
        CollectionName = "conference_knowledge",
        IncrementalIngestion = true
    });

using IngestionPipeline<string> pipeline = new(
    reader, chunker, writer, new IngestionPipelineOptions(), _loggerFactory)
{
    ChunkProcessors =
    {
        new SummaryEnricher(enricherOptions),
        new KeywordEnricher(enricherOptions, ReadOnlySpan<string>.Empty),
        frontMatterProcessor
    }
};

The pipeline reads markdown, chunks it by headers, enriches each chunk with AI-generated summaries and keywords, then embeds and stores the results in Qdrant. Each step is a pluggable component. You can swap MarkdownReader for a PDF reader, HeaderChunker for a fixed-size chunker, or Qdrant for Azure AI Search. The pipeline composition stays the same.

Notice that SummaryEnricher and KeywordEnricher both take EnricherOptions(_chatClient). They use the same IChatClient from the previous section. AI enriching its own context. The summary enricher generates a concise description of each chunk, and the keyword enricher extracts searchable terms. Both improve retrieval quality later.

On the query side, Microsoft.Extensions.VectorData gives you VectorStoreCollection for semantic search over any backend:

var results = collection.SearchAsync(query, topK);

await foreach (var result in results)
{
    var content = result.Record["content"] as string;
    // Use the content...
}

Similar to how you can swap database providers in EF Core, you can swap vector store providers here. Qdrant today, Azure AI Search tomorrow. Same API.

ConferencePulse also ingests data in real time as the session progresses. Poll responses, audience questions, Q&A pairs, and AI-generated insights all go into the knowledge base:

public async Task<int> IngestResponseAsync(
    string pollId, string topicId, string question,
    Dictionary<string, int> results, List<string>? otherResponses = null)
{
    var sb = new StringBuilder();
    sb.AppendLine($"Poll: {question}");
    sb.AppendLine("Results:");
    var total = results.Values.Sum();
    foreach (var (option, count) in results)
    {
        var percentage = total > 0 ? (count * 100.0 / total).ToString("F1") : "0";
        sb.AppendLine($"  - {option}: {count} votes ({percentage}%)");
    }

    await _searchService.UpsertAsync(sb.ToString(), source: "response", documentId: $"response-{pollId}");
    return 1;
}

By the end of a session, the knowledge base contains the original imported content, every poll result, every audience question, and every AI-generated insight.

AI Assistant generating real-time insights from poll results and audience questions

IChatClient with tools: choosing the right level of complexity

One of the design principles we followed: use the simplest approach that gets the job done. IChatClient with tools handles a lot of scenarios before you need a dedicated agent framework. At the same time, when orchestration gets complex, a framework earns its place. The key is choosing the right tool.

ConferencePulse has three AI-powered features at different levels of complexity. All three use the same IChatClient.

Insight generation: a single call

When a poll closes, ConferencePulse generates an insight. The implementation is a single GetResponseAsync call:

var response = await chatClient.GetResponseAsync(
[
    new(ChatRole.System,
        "You are a conference analytics assistant generating real-time insights from audience data."),
    new(ChatRole.User, prompt)  // prompt contains the poll results
]);

var content = response.Text?.Trim();
if (!string.IsNullOrWhiteSpace(content))
{
    ctx.AddInsight(new Insight
    {
        TopicId = poll.TopicId,
        PollId = pollId,
        Content = content,
        Type = InsightType.PollAnalysis
    });
}

No tools, no framework. A prompt with poll results as context, and the middleware pipeline handles telemetry and logging.

Poll generation: IChatClient with tools

Generating a poll needs more context. The AI checks the current topic, looks at what’s been covered, and creates something relevant. That means tools:

public class PollGenerationWorkflow(IChatClient chatClient, AgentTools tools)
{
    public async Task<string> ExecuteAsync(string topicId)
    {
        var options = new ChatOptions
        {
            Tools = [tools.GetCurrentTopic, tools.SearchKnowledge,
                     tools.GetAudienceQuestions, tools.GetAllPollResults,
                     tools.GetAllInsights, tools.CreatePoll]
        };

        var messages = new List<ChatMessage>
        {
            new(ChatRole.System, AgentDefinitions.SurveyArchitectInstructions),
            new(ChatRole.User, $"Generate an engaging poll for topic: {topicId}...")
        };

        var response = await chatClient.GetResponseAsync(messages, options);
        return response.Text ?? "Unable to generate poll.";
    }
}

Each tool is a strongly-typed AITool property created from a C# method:

public class AgentTools
{
    public AITool SearchKnowledge { get; }
    public AITool GetCurrentTopic { get; }
    public AITool CreatePoll { get; }
    // ...

    public AgentTools(IPollService pollService, ISemanticSearchService searchService, ...)
    {
        SearchKnowledge = AIFunctionFactory.Create(SearchKnowledgeCore,
            new AIFunctionFactoryOptions
            {
                Name = nameof(SearchKnowledge),
                Description = "Search the session knowledge base for content related to the query"
            });
        // ...
    }
}

The model decides it needs context, calls GetCurrentTopic and SearchKnowledge, then generates a poll and calls CreatePoll to save it. The UseFunctionInvocation() middleware handles the tool loop automatically.
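
For illustration, the method behind that factory call can be a plain async method; its [Description] parameter attributes flow into the tool’s schema. The body below is a hypothetical sketch (the post doesn’t show SearchKnowledgeCore) that reuses the ISemanticSearchService shape appearing elsewhere in the app:

private async Task<string> SearchKnowledgeCore(
    [Description("The search query.")] string query,
    [Description("Max results. Defaults to 5.")] int maxResults = 5)
{
    // _searchService is the same ISemanticSearchService injected into AgentTools.
    var results = await _searchService.SearchAsync(query, maxResults);
    return string.Join("\n\n", results.Select(r => r.Content));
}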

AI assistant conducting a poll in the conference room view

Q&A answering: RAG across multiple sources

The Q&A service brings multiple building blocks together. When an audience member asks a question, the app searches the local knowledge base, queries Microsoft Learn docs via MCP, and asks DeepWiki about relevant GitHub repos via MCP. Then it synthesizes an answer:

// 1. Search local knowledge base
var searchResults = await searchService.SearchAsync(questionText, topK: 5);
var localContext = string.Join("\n\n---\n\n",
    searchResults.Select(r => r.Content).Where(c => !string.IsNullOrWhiteSpace(c)));

// 2. Search Microsoft Learn for documentation context (via MCP)
var docsContext = await mcpClient.SearchDocsAsync(questionText);

// 3. Ask DeepWiki about relevant .NET repos (via MCP)
var deepWikiContext = await mcpClient.AskDeepWikiAsync("dotnet/extensions", questionText);

VectorData for local search, MCP for external context, IChatClient for generation.

AI Assistant QA Interface

Now let’s look at how MCP works.

MCP: consuming and providing context

Model Context Protocol is a standard for AI applications to discover and use external tools and context. Similar to how HTTP lets any client talk to any server, MCP lets any AI app connect to any context provider using the same protocol.

ConferencePulse uses MCP in both directions.

As a consumer

The McpContentClient connects to two MCP servers at startup: Microsoft Learn and DeepWiki.

public async Task InitializeAsync(CancellationToken ct = default)
{
    var learnTransport = new HttpClientTransport(new HttpClientTransportOptions
    {
        Endpoint = new Uri("https://learn.microsoft.com/api/mcp"),
        TransportMode = HttpTransportMode.StreamableHttp
    }, loggerFactory);
    _learnClient = await McpClient.CreateAsync(learnTransport, null, loggerFactory, ct);

    var deepWikiTransport = new HttpClientTransport(new HttpClientTransportOptions
    {
        Endpoint = new Uri("https://mcp.deepwiki.com/mcp"),
        TransportMode = HttpTransportMode.StreamableHttp
    }, loggerFactory);
    _deepWikiClient = await McpClient.CreateAsync(deepWikiTransport, null, loggerFactory, ct);
}

Once connected, calling a tool on any MCP server uses the same pattern:

var result = await _learnClient.CallToolAsync(
    "microsoft_docs_search",
    new Dictionary<string, object?> { ["query"] = query },
    cancellationToken: ct);

Any server that speaks MCP works with this client code.

As a provider

ConferencePulse is also an MCP server. Any MCP-compatible client (GitHub Copilot, Claude, a custom tool) can connect and query session data.

[McpServerToolType]
public class ConferenceTools
{
    [McpServerTool(Name = "get_session_status", ReadOnly = true),
     Description("Returns the current conference session status.")]
    public static string GetSessionStatus(ISessionService sessionService)
    {
        var session = sessionService.CurrentSession;
        if (session is null) return "No active conference session.";
        // ... build status string
    }

    [McpServerTool(Name = "search_session_knowledge", ReadOnly = true),
     Description("Searches the session knowledge base for relevant content.")]
    public static async Task<string> SearchSessionKnowledge(
        ISemanticSearchService searchService,
        [Description("The search query.")] string query,
        [Description("Max results. Defaults to 5.")] int maxResults = 5)
    {
        var results = await searchService.SearchAsync(query, maxResults);
        // ... format results
    }
}

Registration takes a few lines in Program.cs:

builder.Services
    .AddMcpServer(options => { options.ServerInfo = new() { Name = "ConferencePulse", Version = "1.0.0" }; })
    .WithToolsFromAssembly(typeof(ConferenceTools).Assembly)
    .WithHttpTransport();

app.MapMcp("/mcp");

The app consumes external knowledge to answer questions and provides its own data for external tools. Same protocol in both directions.
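
As a quick way to exercise the provider side, the same client APIs shown earlier can point at the app’s own endpoint. A minimal sketch (the localhost URL is an assumption for local development):

// Connect to ConferencePulse's own MCP endpoint.
var transport = new HttpClientTransport(new HttpClientTransportOptions
{
    Endpoint = new Uri("http://localhost:5000/mcp"),
    TransportMode = HttpTransportMode.StreamableHttp
}, loggerFactory);
var client = await McpClient.CreateAsync(transport, null, loggerFactory, ct);

// Call one of the tools registered above via [McpServerTool].
var status = await client.CallToolAsync(
    "get_session_status",
    new Dictionary<string, object?>(),
    cancellationToken: ct);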

Microsoft Agent Framework: multi-agent orchestration

For most of ConferencePulse’s features, IChatClient with tools was the right choice. But the session summary needed something more: three specialized agents running concurrently, each with scoped tools, feeding their results into a synthesis step. That’s where Microsoft Agent Framework comes in.

public class SessionSummaryWorkflow(IChatClient chatClient, AgentTools tools)
{
    public async Task<string> ExecuteAsync()
    {
        ChatClientAgent pollAnalyst = new(chatClient,
            name: "PollAnalyst",
            description: "Analyzes poll results and trends",
            instructions: "You are a poll analyst. Use GetAllPollResults to retrieve every poll...",
            tools: [tools.GetAllPollResults]);

        ChatClientAgent questionAnalyst = new(chatClient,
            name: "QuestionAnalyst",
            description: "Analyzes audience questions and themes",
            instructions: "You are an audience question analyst...",
            tools: [tools.GetAudienceQuestions]);

        ChatClientAgent insightAnalyst = new(chatClient,
            name: "InsightAnalyst",
            description: "Analyzes generated insights and knowledge patterns",
            instructions: "You are an insight analyst...",
            tools: [tools.GetAllInsights, tools.SearchKnowledge]);

Each ChatClientAgent wraps the same IChatClient. The agents get scoped tools (PollAnalyst only sees poll data, QuestionAnalyst only sees questions) and specialized instructions.

The orchestration uses AgentWorkflowBuilder.BuildConcurrent for the fan-out, then WorkflowBuilder to compose the full pipeline:

        // Fan-out: three analysts run concurrently
        var analysisWorkflow = AgentWorkflowBuilder.BuildConcurrent(
            [pollAnalyst, questionAnalyst, insightAnalyst],
            MergeAgentOutputs);

        // Fan-in: synthesizer merges all findings
        ChatClientAgent synthesizer = new(chatClient,
            name: "Synthesizer",
            instructions: "Synthesize the analyses into one cohesive session summary...");

        // Compose: concurrent analysis → sequential synthesis
        var analysisExec = new SubworkflowBinding(analysisWorkflow, "Analysis");
        ExecutorBinding synthExec = synthesizer;

        var composedWorkflow = new WorkflowBuilder(analysisExec)
            .WithName("SessionSummaryPipeline")
            .BindExecutor(synthExec)
            .AddEdge(analysisExec, synthExec)
            .WithOutputFrom([synthExec])
            .Build();

        var run = await InProcessExecution.Default.RunAsync(
            composedWorkflow,
            "Analyze the conference session data and provide your specialized findings.");

Compare this with the poll generation workflow from earlier, which is about 10 lines using IChatClient and tools. The session summary is about 40 lines because it genuinely needs concurrent agents with scoped tools and a synthesis step.

In ConferencePulse, the Agent Framework was the right choice for exactly one workflow. Everything else worked well with IChatClient directly. Both approaches use the same underlying abstraction.

How the building blocks fit together

Aspire Dashboard showing ConferencePulse services: web app, Qdrant, PostgreSQL, and Azure OpenAI

During the MVP Summit session, attendees interacted with features powered by different layers of the stack:

  • Polls: IChatClient + tools (MEAI)
  • Knowledge grounding: IngestionPipeline + VectorStoreWriter
  • Q&A answers: VectorData + IChatClient + MCP
  • Auto-generated insights: IChatClient (single call)
  • Session summary: Microsoft Agent Framework (fan-out/fan-in)
  • Observability: UseOpenTelemetry() + Aspire Dashboard
  • Infrastructure: Aspire with Qdrant, PostgreSQL, and Azure OpenAI

Each building block handles one concern and composes with the others. IChatClient shows up inside the ingestion enrichers, inside the agent tools, inside the MCP-augmented Q&A, and inside the Agent Framework’s ChatClientAgent. You learn it once and use it everywhere.
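
In practice, consuming it is plain constructor injection. A minimal sketch of a hypothetical feature service (TalkingPointService is illustrative, not from the ConferencePulse code; the GetResponseAsync call mirrors the ones shown above):

public class TalkingPointService(IChatClient chatClient)
{
    public async Task<string> SuggestAsync(string topic)
    {
        // The shared pipeline applies here too: tool calls, telemetry, logging.
        var response = await chatClient.GetResponseAsync(
        [
            new(ChatRole.System, "You suggest one concise talking point for a conference session."),
            new(ChatRole.User, topic)
        ]);
        return response.Text ?? "No suggestion available.";
    }
}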

Providers will change and models will evolve. The building blocks give you a stable layer to build on, and you swap implementations underneath without rewriting application code.

Get started

We’re excited to see what you build with these building blocks. Now that you’ve seen how they compose, give them a try and let us know what you think.

The post Building an AI-Powered Conference App with .NET’s Composable AI Stack appeared first on .NET Blog.

AI Crash Course: MCP Servers, Agents, AI Assistants and Skills

Let’s sort out the differences between MCP servers, AI agents or assistants, and AI skills.

If you’ve been working with AI tools lately, you’ve probably noticed that there are a lot of ways we can expand the capabilities of our models to get better, more focused results in a specific domain. Many of the popular developer tools and libraries now offer their own MCP (or Model Context Protocol) servers, AI agents or assistants, or AI skills—but what’s the difference between these, and where do they fit into your AI workflow? Let’s break it down.

Agents

AI Agents are capable of advanced reasoning and actions beyond those of a standard model. Originally, tool usage was the defining characteristic of something “agentic,” but the lines are beginning to blur as the features of user-facing conversational models become more and more advanced. These days, when someone is discussing an “agent” or using a model in “agentic mode,” what they usually mean is that the system is capable of proactive autonomous function—the ability to perform multiple steps and work without human instruction.

What does “advanced reasoning” mean when we’re talking about generative AI? Most often, this is referring to an agent’s ability to use step-by-step “thinking” to break down a task into a series of smaller actions, create a plan and execute on it.

For example: it may seem like a fairly straightforward task (for a human) to “Add a one-hour meeting called ‘Q3 KPI Review’ to my calendar for 3 p.m. this Tuesday afternoon and invite Mandy.” But in reality, it involves a great many sub-steps. What day is it today and what is the date of “this” Tuesday? Do I have any tools that allow me to update calendars? What are the login credentials to access that calendar? Is there already anything booked in that time slot? Do I have any tools that allow me to read the user’s address book? Is there a contact named Mandy saved there? What is their email address?

Assistants

AI assistants are agents that have been specialized to help a user complete tasks (usually within a specific domain). For example, a coding assistant can help users generate, review and implement code, while a scheduling assistant might help users manage their daily appointments and calendar updates. Assistants also tend to be less autonomous; while they are capable of tool use and chain-of-thought processing, they’re generally designed to work with a user in a more collaborative way: responding to requests, handling inputs and making suggestions.

It would be fair to characterize agents as proactive and assistants as reactive. An agent might notice you have a calendar conflict and automatically rearrange your schedule, whereas an assistant could notice the conflict and send an alert about it, but not autonomously take action.

However, like many AI technologies right now, the line between agents and assistants is thin and ever-changing. Most tools you encounter will sit somewhere on the agent/assistant spectrum, depending on how they’re designed and integrated.

MCP Servers

MCP Servers are a way for products or applications to share information or capabilities with AI models via a standardized protocol. If you’re building something that you want AI models to be able to make use of or interface with, then an MCP Server is (at least for now) the best way to facilitate that. The MCP documentation identifies three core “building blocks” of functionality that MCP servers can provide: tools, resources and prompts.

  • Tools are functions that an AI agent can call in order to extend their native capabilities. For example, an AI agent out-of-the-box may not be able to add an appointment to a calendar in your application—but an MCP Server for your application might include a tool that “instructs” the model on how to do so and gives it the required access/permissions. This would allow your users to prompt their AI tool and (via the MCP Server) add or remove meetings from their calendar in your app (see the sketch after this list).
  • Resources are data sources that the agent can access via the MCP Server that they didn’t have access to before. Imagine the same hypothetical application as before that includes a user calendar. Even if you don’t want to give AI the tools to update calendar appointments, maybe you do want it to be able to read them and leverage the content. This would allow your users to generate a summary overview of their week, or ask questions like “Do I have availability at 3 p.m. this Tuesday?” (P.S. No, you don’t—you’re stuck in that KPI review meeting we added to the calendar earlier.)
  • Prompts are the ways that MCP Servers help “translate” user requests into specific model actions—they’re pre-written instructions that tell the model when to call specific tools or access specific resources. These reusable prompt templates are made available for clients to invoke as needed.
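
To make the “tools” building block concrete, here is a rough C# sketch of the hypothetical calendar tool described above, using the attribute style from the official MCP C# SDK (ICalendarService and its methods are invented for illustration):

[McpServerToolType]
public class CalendarTools
{
    [McpServerTool(Name = "add_appointment"),
     Description("Adds an appointment to the user's calendar.")]
    public static string AddAppointment(
        ICalendarService calendar,  // hypothetical app service resolved by the server
        [Description("Title of the meeting.")] string title,
        [Description("Start time, ISO 8601.")] DateTimeOffset start,
        [Description("Duration in minutes.")] int durationMinutes)
    {
        calendar.Add(title, start, TimeSpan.FromMinutes(durationMinutes));
        return $"Added '{title}' at {start:t} for {durationMinutes} minutes.";
    }
}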

Agent Skills

Agent skills are a lightweight way to provide helpful context and workflows to AI agents without the need to create a full MCP Server. Let’s say we ask our AI agent to perform the same task each week: we give it a list of our to-dos for the week (including deadlines) and ask it to estimate how long it will take to complete each one and then organize the list by priority and time requirement. We could walk the agent through this task every week, answering its follow-up questions and reminding it of standard information (like how many hours we work in total, or how long it’s taken us in the past to complete similar work) … or we could write a skill that contains all this information and the step-by-step workflow it needs to complete the task.

Skills are markdown files that agents can reference like tools in order to complete tasks. Similarly to tools, a skill includes a name and brief description that will be used by the agent to assess when it’s relevant to leverage. Unlike tools, skills do not add new capabilities to an agent—they only provide the context and detailed instructions that can help an agent use the capabilities they already have more effectively.

You can think of it like the assembly instructions for flatpack furniture: the instructions tell you when and how to use a screwdriver to put together this desk … but they won’t include the actual screwdriver. To truly extend this metaphor to the breaking point—you can either use a screwdriver you already own (leveraging the agent’s innate capabilities) or you can go get one (making use of an MCP Server), but the skill will only tell you when to use it in this particular project. But, you can follow those detailed instructions multiple times to complete the same task again and again!

Automatic Super Resolution Preview Comes to the ROG Xbox Ally X for Docked Play

We previously introduced Automatic Super Resolution (Auto SR) on select Windows 11 Copilot+ PCs, to make games look sharper and play smoother. Today, we’re excited to give Xbox Insiders the opportunity to help us test and refine this feature on the ROG Xbox Ally X for docked play, where balancing framerate (FPS) and image quality can be especially challenging.

Imagine this… You’ve just had a great gaming session on your ROG Xbox Ally X on the go. Everything looks incredibly sharp and plays smoothly on your 7-inch screen. You dock it, lean back, and look up at your TV to keep playing, but now, stretching across a much larger screen, the image looks softer. You push the resolution and graphics settings higher to bring back detail, and FPS drops. You dial it back to keep things smooth, but you lose detail again. You’re left choosing between visuals and FPS, but you want both!

That’s where Auto SR comes in. By upscaling frames rendered at lower resolutions, Auto SR can help deliver smooth gameplay without losing sharpness. On the ROG Xbox Ally X, that means 1440p-like visuals and higher FPS on larger screens, where this balance matters most.

Take a look at Auto SR in action. Below are two frames from Forza Horizon 5: one rendered natively at 1440p, and the other enhanced with Auto SR. Look closely — can you tell the difference?

Forza Horizon 5 at 1440p: native rendering vs. Auto SR, side by side

Not easy, right? Auto SR is the frame on the right, delivering 1440p-like visuals with more than a 30% FPS boost. What matters most: with Auto SR you get both.

Key Highlights

  • Auto SR is now available in preview to Xbox Insiders on the ROG Xbox Ally X for docked play, where players will see the most value from super resolution.
  • Auto SR brings high-quality super resolution to games that don’t have it, and for some games that already do, it can deliver even higher image quality and FPS.
  • Players can find the Auto SR status and controls in Game Bar, where you can see if Auto SR is supported, find guidance for enabling, and turn on the feature.
  • Guidance on which super resolution technology to use and when, to get the most out of your ROG Xbox Ally X.
  • Instructions on how Xbox Insiders can get started with the Auto SR Preview and recommended games, to help us shape what comes next.

Why We’re Starting with Docked Play

Docked play means larger screens and higher resolutions, where drops in image quality are more noticeable or where some games struggle to maintain smooth FPS. That’s exactly the problem Auto SR was designed to solve, so we’re starting the preview with docked mode where we expect players will see the most value.

Super Resolution Today

Super resolution works by rendering at a lower resolution to boost FPS, then upscaling the frames to restore detail. It is often a core part of how many modern games render, and players expect it.

Previously, super resolution came in two forms: integrated directly into the game (such as AMD FSR Upscaling) or applied at the driver level (such as AMD Radeon Super Resolution, RSR).

Why Auto SR Matters on the ROG Xbox Ally X

For super resolution to deliver the most value, it needs to provide both high image quality and FPS gains at once. Game-integrated super resolution does an excellent job delivering that balance, but gaps remain, most notably on gaming handhelds like the ROG Xbox Ally X.

Windows integration expands high-quality super resolution coverage: Not every game ships with game-integrated super resolution. Because Auto SR is built into Windows, it can broadly apply high-quality super resolution to existing games, especially those without game-integrated super resolution.

Larger super resolution models avoid memory bandwidth bottlenecks that can limit FPS: Super resolution means fewer pixels for the GPU to render, which traditionally means higher FPS. But game-integrated super resolution still relies on the game to produce more detailed surface textures than a lower resolution render would normally need. Otherwise, the upscaled image looks soft, like zooming in on a low-quality photo. All that texture data must move through memory every frame. On handheld PCs, where memory bandwidth is constrained, this directly limits the FPS gains super resolution is designed to deliver. Until now, the only option has been reducing texture quality at the cost of visual quality. Auto SR takes a different approach to super resolution, and uses larger models that can reconstruct texture detail rather than relying on the game to provide it, avoiding the bandwidth demands that limit FPS on these devices.

NPU enables higher quality and higher FPS super resolution: The longer it takes to render a frame, the lower your FPS. When super resolution runs on the GPU, it counts towards frame time. To avoid impacting FPS, models are limited to a minuscule 1–2ms, constraining their size and quality. Game-integrated super resolution fits in this window and still delivers quality by relying on the game to provide more detailed texture data. Auto SR sidesteps this limit by running larger models on the NPU in parallel with the GPU. This gives Auto SR an entire extra frame of time to run the model — critical for devices like the ROG Xbox Ally X, that couldn’t otherwise run these models without significantly impacting FPS. This also lets the GPU move straight to the next frame, so there is essentially no frame time overhead, giving Auto SR the potential to deliver high-quality super resolution at the theoretical maximum FPS in exchange for a frame of latency. GPU-based super resolution can’t do this.

What does this add up to? Texture-heavy games at higher resolutions and graphics settings are where super resolution is needed the most, but on gaming handhelds, that’s also where super resolution is hardest to deliver. Game-integrated super resolution remains the preferred choice. Auto SR steps in where game-integrated super resolution isn’t available or when hardware constraints prevent it from simultaneously delivering quality and FPS.

Choosing the Right Super Resolution Option on Your ROG Xbox Ally X

Alongside Auto SR, the AMD Ryzen AI Z2 Extreme processor also supports AMD FSR Upscaling, RSR, AMD FSR Frame Generation, and AMD Fluid Motion Frames (AFMF). Here’s a quick guide developed in collaboration with AMD to help you choose the best option based on your play goals.

  • Games run below 60 FPS: Enable super resolution. Both Auto SR and AMD FSR Upscaling deliver substantial gains across a wide range of games. Choose the upscaler that best fits your image quality and FPS needs. If neither is available, use RSR.
  • Games run below 60 FPS even with super resolution enabled: Enable Auto SR + AFMF. Disable other super resolution and frame generation options when using this combination.

Examples

Now, let’s see some more examples! Forza Horizon 5 runs smoothly on the ROG Xbox Ally X’s internal screen, hitting 60 FPS at 1080p using “High” settings. Docked to a larger screen, Auto SR helps deliver higher visual detail by enabling the game’s “Ultra” settings, with a 30% FPS boost over native 1440p at similar visual quality.

When compared to 720p, the visual improvement is striking, as shown below. Auto SR brings back much of the texture and detail you’d expect from higher resolutions, turning what would be a soft 720p image into something far sharper and more detailed. Auto SR delivers 1440p-level image quality at framerates roughly equivalent to rendering natively at 720p, though framerates may run slightly lower under heavy power loads.

Getting Started

Xbox Game Bar Auto SR widget showing Current Status: On for Forza Horizon 5

  1. Enroll in the Xbox Insider Program on PC to get started with Auto SR on your ROG Xbox Ally X.
  2. Confirm Auto SR is available:
    • Open Xbox Game Bar (press the Xbox button)
    • Navigate to the Display Widget and look for the Auto SR tab
  3. Make sure your device is up to date. If the Auto SR tab isn’t showing, the rollout may still be reaching your device.
    • Game Bar: Exit Xbox mode (Game Bar > Settings > Exit Xbox mode), then check Microsoft Store > Downloads for updates.
    • Auto SR package: Install the latest from the Microsoft Store.

Need Step-by-Step Instructions to Enable Auto SR?

Visit the Auto SR support page

Preview Notes and Feedback

As a preview feature, Auto SR is still evolving. Every PC game behaves a little differently, and there’s no one-size-fits-all setup. You may need to follow different steps depending on the game, and you might notice minor quirks along the way. Keep an eye on Game Bar status for guidance, and refer back to the support page if needed.

Help shape Auto SR

How is it working in your games? How does the setup and control feel?

Tell us at autosr@microsoft.com

Games to Try Auto SR On

Auto SR is most useful for titles running below 60 FPS. If your game is already running smoothly, Auto SR lets you turn up the resolution or graphics settings to get even better visuals while keeping FPS smooth.

Suggested Games

Once you are set up, try Auto SR with your favorite DirectX games (DX10 or later) or one of these: Assassin’s Creed: Mirage, Assassin’s Creed: Valhalla, Assetto Corsa, Avowed, Control, Dead Island: Definitive Edition, DOOM: The Dark Ages, Far Cry 6, Frostpunk 2, Grounded 2, Psychonauts 2, Rise of the Tomb Raider, The Outer Worlds 2, Tom Clancy’s Rainbow Six Siege, Tom Clancy’s The Division 2 and War Thunder.

What’s Next

Auto SR improves visual quality and framerates across supported games today. With this preview, Auto SR becomes a tool that puts players in control, letting them enable it based on their preference. Your feedback will help us make that control easier to discover and use. We’re also exploring expanding the scenarios Auto SR supports and continuing to improve quality and performance. Stay tuned for more.

The post Automatic Super Resolution Preview Comes to the ROG Xbox Ally X for Docked Play appeared first on DirectX Developer Blog.

Fast .NET CLI ISO Downloader with Integrity Validation

This post is a follow-up to my previous fast .NET CLI downloader with the additional feature of ISO integrity validation. Just point it at the ISO you want to download, and it will look in the usual spot for the SHA-256 checksum file.

I often download versions of Linux distributions to try out. These are very large and usually come with a checksum file that you have to manually check after downloading. The checksum verification is important because a corrupted ISO can cause all sorts of weird issues when you try to use it.

It happened to me once, back when I used to burn ISOs to DVDs. The install kept failing, and I kept blaming the DVD until I finally realized the ISO file was corrupted; I hadn’t run the checksum verification.

Using it is as simple as:

./downloader.cs https://releases.ubuntu.com/resolute/ubuntu-26.04-desktop-amd64.iso

But the command supports some additional options as well:

Usage: ./downloader.cs <url> [output-file] [chunks]
 url - URL to download
 output-file - Output filename (default: derived from URL)
 chunks - Number of parallel streams (default: 8)

Downloading Ubuntu 26.04
#!/usr/bin/dotnet run

using System.Diagnostics;
using System.Net.Http.Headers;
using System.Security.Cryptography;

const int DefaultChunks = 8;
const int MaxRetries = 5;
const int RetryDelayMs = 1000;
const int ProgressUpdateMs = 250;

if (args.Length < 1 || args[0] is "-h" or "--help")
{
    PrintUsage();
    return args.Length < 1 ? 1 : 0;
}

var url = args[0];
var outputFile = args.Length > 1 ? args[1] : Path.GetFileName(new Uri(url).LocalPath);
var chunks = args.Length > 2 ? int.Parse(args[2]) : DefaultChunks;

if (string.IsNullOrWhiteSpace(outputFile) || outputFile == "/")
    outputFile = "download";

Console.WriteLine($"URL: {url}");
Console.WriteLine($"Output: {outputFile}");
Console.WriteLine($"Streams: {chunks}");
Console.WriteLine();

var isIso = string.Equals(Path.GetExtension(outputFile), ".iso", StringComparison.OrdinalIgnoreCase);

using var client = new HttpClient { Timeout = TimeSpan.FromMinutes(30) };
client.DefaultRequestHeaders.UserAgent.ParseAdd("DownloaderCLI/1.0");

string? checksumFilePath = null;
if (isIso)
{
    checksumFilePath = await TryDownloadChecksumFileForIso(client, url, outputFile);
    Console.WriteLine();
}

// Probe the server for content-length and range support
using var headReq = new HttpRequestMessage(HttpMethod.Head, url);
using var headResp = await client.SendAsync(headReq);
headResp.EnsureSuccessStatusCode();

var totalSize = headResp.Content.Headers.ContentLength ?? -1;
var acceptRanges = headResp.Headers.Contains("Accept-Ranges")
    && headResp.Headers.GetValues("Accept-Ranges").Any(v => v.Contains("bytes", StringComparison.OrdinalIgnoreCase));

if (totalSize <= 0 || !acceptRanges)
{
    Console.WriteLine(totalSize <= 0
        ? "Server did not report content length - falling back to single-stream download."
        : "Server does not support range requests - falling back to single-stream download.");
    Console.WriteLine();
    await SingleStreamDownload(client, url, outputFile);
    if (isIso)
    {
        var valid = await ValidateIsoAsync(outputFile, checksumFilePath);
        return valid ? 0 : 2;
    }

    return 0;
}

Console.WriteLine($"Size: {FormatBytes(totalSize)}");
Console.WriteLine("Ranges: supported");
Console.WriteLine();

var sw = Stopwatch.StartNew();
var chunkInfos = BuildChunks(totalSize, chunks);
var progress = new long[chunkInfos.Count];

// Progress reporter
using var cts = new CancellationTokenSource();
var progressTask = Task.Run(async () =>
{
    while (!cts.Token.IsCancellationRequested)
    {
        PrintProgress(progress, chunkInfos, totalSize, sw.Elapsed);
        try { await Task.Delay(ProgressUpdateMs, cts.Token); } catch (TaskCanceledException) { break; }
    }
});

// Download all chunks in parallel
var tempFiles = new string[chunkInfos.Count];
var downloadTasks = new Task[chunkInfos.Count];

for (int i = 0; i < chunkInfos.Count; i++)
{
    var idx = i;
    var (start, end) = chunkInfos[idx];
    tempFiles[idx] = $"{outputFile}.part{idx}";
    downloadTasks[idx] = DownloadChunk(client, url, start, end, tempFiles[idx], progress, idx);
}

await Task.WhenAll(downloadTasks);

cts.Cancel();
await progressTask;
PrintProgress(progress, chunkInfos, totalSize, sw.Elapsed);
Console.WriteLine();
Console.WriteLine();

// Reassemble
Console.Write("Reassembling... ");
await Reassemble(tempFiles, outputFile);
Console.WriteLine("done.");

// Cleanup temp files
foreach (var f in tempFiles)
    if (File.Exists(f)) File.Delete(f);

sw.Stop();
var info = new FileInfo(outputFile);
Console.WriteLine($"Completed in {sw.Elapsed.TotalSeconds:F1}s - {FormatBytes(info.Length)} @ {FormatBytes((long)(info.Length / sw.Elapsed.TotalSeconds))}/s");

if (isIso)
{
    var valid = await ValidateIsoAsync(outputFile, checksumFilePath);
    return valid ? 0 : 2;
}

return 0;

// ---- helper methods ----

static List<(long Start, long End)> BuildChunks(long totalSize, int count)
{
    var chunkSize = totalSize / count;
    var result = new List<(long, long)>(count);
    for (int i = 0; i < count; i++)
    {
        var start = i * chunkSize;
        var end = (i == count - 1) ? totalSize - 1 : start + chunkSize - 1;
        result.Add((start, end));
    }
    return result;
}

static async Task DownloadChunk(HttpClient client, string url, long start, long end,
    string tempFile, long[] progress, int index)
{
    for (int attempt = 1; attempt <= MaxRetries; attempt++)
    {
        try
        {
            // Resume from where we left off if retrying
            long existingBytes = 0;
            if (File.Exists(tempFile))
            {
                existingBytes = new FileInfo(tempFile).Length;
                if (existingBytes >= end - start + 1)
                {
                    progress[index] = end - start + 1;
                    return; // already complete
                }
            }

            using var req = new HttpRequestMessage(HttpMethod.Get, url);
            req.Headers.Range = new RangeHeaderValue(start + existingBytes, end);

            using var resp = await client.SendAsync(req, HttpCompletionOption.ResponseHeadersRead);
            resp.EnsureSuccessStatusCode();

            await using var stream = await resp.Content.ReadAsStreamAsync();
            await using var fs = new FileStream(tempFile, existingBytes > 0 ? FileMode.Append : FileMode.Create,
                FileAccess.Write, FileShare.None, 81920, useAsync: true);

            var buffer = new byte[81920];
            long downloaded = existingBytes;
            int bytesRead;

            while ((bytesRead = await stream.ReadAsync(buffer)) > 0)
            {
                await fs.WriteAsync(buffer.AsMemory(0, bytesRead));
                downloaded += bytesRead;
                Interlocked.Exchange(ref progress[index], downloaded);
            }

            return; // success
        }
        catch (Exception ex) when (attempt < MaxRetries)
        {
            Console.Error.WriteLine($"\n [chunk {index}] attempt {attempt} failed: {ex.Message} - retrying...");
            await Task.Delay(RetryDelayMs * attempt);
        }
    }
}

static async Task SingleStreamDownload(HttpClient client, string url, string outputFile)
{
    var sw = Stopwatch.StartNew();
    using var resp = await client.GetAsync(url, HttpCompletionOption.ResponseHeadersRead);
    resp.EnsureSuccessStatusCode();

    var total = resp.Content.Headers.ContentLength ?? -1;
    await using var stream = await resp.Content.ReadAsStreamAsync();
    await using var fs = new FileStream(outputFile, FileMode.Create, FileAccess.Write, FileShare.None, 81920, true);

    var buffer = new byte[81920];
    long downloaded = 0;
    int bytesRead;
    var lastUpdate = DateTimeOffset.UtcNow;

    while ((bytesRead = await stream.ReadAsync(buffer)) > 0)
    {
        await fs.WriteAsync(buffer.AsMemory(0, bytesRead));
        downloaded += bytesRead;

        if ((DateTimeOffset.UtcNow - lastUpdate).TotalMilliseconds >= ProgressUpdateMs)
        {
            lastUpdate = DateTimeOffset.UtcNow;
            var pct = total > 0 ? (double)downloaded / total * 100 : 0;
            var speed = downloaded / sw.Elapsed.TotalSeconds;
            Console.Write($"\r [{pct,5:F1}%] {FormatBytes(downloaded)}{(total > 0 ? $" / {FormatBytes(total)}" : "")} {FormatBytes((long)speed)}/s ");
        }
    }

    sw.Stop();
    Console.WriteLine($"\r [100.0%] {FormatBytes(downloaded)} {FormatBytes((long)(downloaded / sw.Elapsed.TotalSeconds))}/s - done. ");
}

static async Task Reassemble(string[] parts, string outputFile)
{
    await using var outFs = new FileStream(outputFile, FileMode.Create, FileAccess.Write, FileShare.None, 81920, true);
    foreach (var part in parts)
    {
        await using var inFs = new FileStream(part, FileMode.Open, FileAccess.Read, FileShare.Read, 81920, true);
        await inFs.CopyToAsync(outFs);
    }
}

static async Task<string?> TryDownloadChecksumFileForIso(HttpClient client, string isoUrl, string outputFile)
{
    var outputDir = Path.GetDirectoryName(outputFile);
    if (string.IsNullOrWhiteSpace(outputDir))
        outputDir = ".";

    Directory.CreateDirectory(outputDir);

    var isoUri = new Uri(isoUrl);
    var baseUri = new Uri(isoUri, ".");
    string[] candidateNames = ["SHA256SUMS", "SHA256SUMS.txt", "sha256sum.txt", "SHA256SUM", "sha256sums"];

    Console.WriteLine("ISO detected: looking for checksum file in source directory...");
    foreach (var candidate in candidateNames)
    {
        try
        {
            var checksumUri = new Uri(baseUri, candidate);
            using var resp = await client.GetAsync(checksumUri);
            if (!resp.IsSuccessStatusCode)
                continue;

            var content = await resp.Content.ReadAsStringAsync();
            if (string.IsNullOrWhiteSpace(content))
                continue;

            var localPath = Path.Combine(outputDir, candidate);
            await File.WriteAllTextAsync(localPath, content);
            Console.WriteLine($"Checksum file downloaded: {candidate}");
            return localPath;
        }
        catch
        {
            // Try next known checksum filename.
        }
    }

    Console.WriteLine("No checksum file found; ISO integrity check will use structure validation.");
    return null;
}

static async Task<bool> ValidateIsoAsync(string isoPath, string? checksumFilePath)
{
    Console.WriteLine();
    Console.WriteLine("Validating ISO image...");

    if (!File.Exists(isoPath))
    {
        Console.Error.WriteLine("Validation failed: ISO file not found.");
        return false;
    }

    var actualSha256 = await ComputeSha256Async(isoPath);

    if (!string.IsNullOrWhiteSpace(checksumFilePath) && File.Exists(checksumFilePath))
    {
        var expectedSha256 = await GetExpectedSha256ForFileAsync(checksumFilePath, Path.GetFileName(isoPath));
        if (!string.IsNullOrWhiteSpace(expectedSha256))
        {
            var valid = string.Equals(actualSha256, expectedSha256, StringComparison.OrdinalIgnoreCase);
            Console.WriteLine($"Expected SHA256: {expectedSha256}");
            Console.WriteLine($"Actual SHA256: {actualSha256}");
            Console.WriteLine(valid ? "ISO checksum validation passed." : "ISO checksum validation failed.");
            return valid;
        }

        Console.WriteLine("Checksum file was downloaded but no matching hash entry was found for this ISO.");
    }

    var structureValid = await HasIso9660SignatureAsync(isoPath);
    Console.WriteLine($"Computed SHA256: {actualSha256}");
    Console.WriteLine(structureValid
        ? "ISO structure validation passed (ISO9660 signature found)."
        : "ISO structure validation failed (ISO9660 signature not found).");
    return structureValid;
}

static async Task<string> ComputeSha256Async(string path)
{
    await using var fs = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read, 81920, useAsync: true);
    var hash = await SHA256.HashDataAsync(fs);
    return Convert.ToHexString(hash).ToLowerInvariant();
}

static async Task<string?> GetExpectedSha256ForFileAsync(string checksumFilePath, string fileName)
{
    var lines = await File.ReadAllLinesAsync(checksumFilePath);
    foreach (var rawLine in lines)
    {
        var line = rawLine.Trim();
        if (string.IsNullOrWhiteSpace(line) || line.StartsWith('#'))
            continue;

        var parts = line.Split(' ', StringSplitOptions.RemoveEmptyEntries);
        if (parts.Length < 2)
            continue;

        var hash = parts[0];
        if (hash.Length != 64 || !hash.All(Uri.IsHexDigit))
            continue;

        var listedName = parts[^1].TrimStart('*');
        listedName = Path.GetFileName(listedName);
        if (string.Equals(listedName, fileName, StringComparison.OrdinalIgnoreCase))
            return hash.ToLowerInvariant();
    }

    return null;
}

static async Task<bool> HasIso9660SignatureAsync(string path)
{
    const int signatureOffset = 0x8001;
    const int signatureLength = 5;

    await using var fs = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read, 81920, useAsync: true);
    if (fs.Length < signatureOffset + signatureLength)
        return false;

    fs.Seek(signatureOffset, SeekOrigin.Begin);
    var buffer = new byte[signatureLength];
    var bytesRead = await fs.ReadAsync(buffer);
    if (bytesRead < signatureLength)
        return false;

    return buffer[0] == (byte)'C'
        && buffer[1] == (byte)'D'
        && buffer[2] == (byte)'0'
        && buffer[3] == (byte)'0'
        && buffer[4] == (byte)'1';
}

static void PrintProgress(long[] progress, List<(long Start, long End)> chunks, long totalSize, TimeSpan elapsed)
{
    long totalDownloaded = progress.Sum();
    var pct = (double)totalDownloaded / totalSize * 100;
    var speed = elapsed.TotalSeconds > 0 ? totalDownloaded / elapsed.TotalSeconds : 0;
    var eta = speed > 0 ? TimeSpan.FromSeconds((totalSize - totalDownloaded) / speed) : TimeSpan.Zero;

    // Per-chunk mini bars
    var bars = new List<string>();
    for (int i = 0; i < chunks.Count; i++)
    {
        var chunkSize = chunks[i].End - chunks[i].Start + 1;
        var chunkPct = (double)progress[i] / chunkSize;
        const int miniBarWidth = 8;
        var filled = chunkPct <= 0
            ? 0
            : Math.Clamp((int)Math.Ceiling(chunkPct * miniBarWidth), 1, miniBarWidth);
        bars.Add(new string('█', filled) + new string('░', miniBarWidth - filled));
    }

    Console.Write($"\r [{pct,5:F1}%] {FormatBytes(totalDownloaded)} / {FormatBytes(totalSize)} " +
        $"{FormatBytes((long)speed)}/s ETA {eta:mm\\:ss} " +
        $"[{string.Join('|', bars)}] ");
}

static string FormatBytes(long bytes)
{
    string[] units = ["B", "KB", "MB", "GB", "TB"];
    double val = bytes;
    int unit = 0;
    while (val >= 1024 && unit < units.Length - 1) { val /= 1024; unit++; }
    return $"{val:F1} {units[unit]}";
}

static void PrintUsage()
{
    Console.Error.WriteLine("Usage: ./downloader.cs <url> [output-file] [chunks]");
    Console.Error.WriteLine("  url         - URL to download");
    Console.Error.WriteLine("  output-file - Output filename (default: derived from URL)");
    Console.Error.WriteLine($"  chunks      - Number of parallel streams (default: {DefaultChunks})");
    Console.Error.WriteLine();
    Console.Error.WriteLine("Examples:");
    Console.Error.WriteLine("  ./downloader.cs 'https://example.com/file.iso'");
    Console.Error.WriteLine("  ./downloader.cs 'https://example.com/file.iso?x=1&y=2'");
    Console.Error.WriteLine();
    Console.Error.WriteLine("PowerShell note:");
    Console.Error.WriteLine("  URLs containing '&' must be quoted, escaped as '`&', or passed after '--%'.");
    Console.Error.WriteLine("  Example:");
    Console.Error.WriteLine("  ./downloader.cs --% https://example.com/file.iso?x=1&y=2");
}

What Microsoft’s 10-Q Says About OpenAI


Buried on page nine of Microsoft’s 10-Q for the quarter ended March 31, 2026 is a paragraph worthy of attention. Why? What does it reveal? A lot.

For starters, Microsoft now holds approximately 27 percent of OpenAI on an as-converted basis, accounted for under the equity method. The total funding commitment is $13 billion, of which $11.8 billion has been funded as of March 31, 2026. The October 2025 OpenAI recapitalization produced a dilution gain. Microsoft recorded $5.9 billion of net gains from OpenAI investments over the nine months, primarily from that dilution gain. The prior nine-month period reflected $2.7 billion of net losses on the same investment.

In plain English, even though Microsoft owns less of OpenAI, that smaller stake is worth more, and it produced a gain. Why? Because the implied valuation of OpenAI rose faster than Microsoft’s ownership percentage fell. Microsoft booked the markup. Money for nothing, and chips for free.
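
To make the mechanic concrete, here is a minimal sketch of an equity-method dilution gain with made-up numbers; every input below is invented for illustration, and only the shape of the calculation matches what the 10-Q describes. When the investee issues new shares at a price above the investor's carrying value per share, the investor's smaller slice of a bigger equity base can exceed its old carrying amount, and the difference is booked as a gain:

    // All inputs are hypothetical, chosen only to illustrate the mechanic.
    double bookEquityBefore = 40e9;  // investee equity before the raise (assumed)
    double stakeBefore      = 0.325; // investor ownership before the recap (assumed)
    double cashRaised       = 40e9;  // new capital from other investors (assumed)
    double stakeAfter       = 0.27;  // investor ownership after dilution

    double carryingBefore = stakeBefore * bookEquityBefore;               // $13.0B claim
    double claimAfter     = stakeAfter * (bookEquityBefore + cashRaised); // $21.6B claim
    double dilutionGain   = claimAfter - carryingBefore;                  // ≈ $8.6B gain

    Console.WriteLine($"Dilution gain: ${dilutionGain / 1e9:F1}B");

A smaller percentage of a much larger equity base nets out to a bigger claim, so the investor records income without selling anything.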


Poring over the 10-Q and its footnotes makes one thing obvious: investing in foundational AI labs is a crazy profitable business, at least on Microsoft’s financial statements. A single set of transactions can bring three separate benefits.

Cash leaves Microsoft as an investment. It returns as cloud revenue. And then some.

First, OpenAI burns Microsoft’s cash on Azure compute. The Azure consumption shows up as revenue inside the AI business line. As a result, the AI business line is now at a $37 billion run rate. It helps justify the $190 billion 2026 capex commitment.

Second, thanks to the magic money of AI private valuations, Microsoft’s equity stake in OpenAI gets marked up to reflect the latest funding valuation. The markup is “other income.” And third, as stated above, the dilution gains flow to “other income” as well.

Somehow a series of interrelated transactions does the trick. This is so gangsta. Nothing is improper, even though you know something isn’t right. Still, the accountants have done their job.


Microsoft’s “AI business” annual run rate is the headline number Satya Nadella repeated on the earnings call. It is up 123 percent year over year. Microsoft does not give you the breakdown. Here is the back-of-the-envelope math.

It has been reported that Microsoft has roughly 20 million paid Copilot enterprise seats. The standard M365 Copilot price is $30 per user per month. That is approximately $7 billion in annualized Copilot revenue. Add roughly $1.5 to $2 billion for GitHub Copilot and adjacent tooling. Total commercial Copilot revenue is somewhere between $8 and $10 billion. Being generous, I would say Copilot is at most one-quarter of the $37 billion AI business.
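
The arithmetic, as a quick sanity check; the seat count and the GitHub figure are the estimates stated above, not Microsoft disclosures:

    // Back-of-the-envelope Copilot revenue, using the figures from the text.
    long paidSeats   = 20_000_000; // reported paid M365 Copilot seats (assumption)
    double seatPrice = 30.0;       // standard M365 Copilot list price, $/user/month

    double m365Annualized = paidSeats * seatPrice * 12; // ≈ $7.2B per year
    double gitHubEstimate = 1.75e9;                     // midpoint of the $1.5–2B estimate

    Console.WriteLine($"M365 Copilot: ${m365Annualized / 1e9:F1}B");
    Console.WriteLine($"Total Copilot: ${(m365Annualized + gitHubEstimate) / 1e9:F1}B");

That lands at roughly $9 billion, squarely inside the $8 to $10 billion range.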

The remaining $27 to $30 billion is Azure consumption. The composition, working from public disclosures and reasonable inference (no pun intended): OpenAI’s Azure spend is the largest single line. OpenAI runs on Azure. All the money it got from Microsoft has been burned on Microsoft compute. The rest is third-party enterprise customers using Azure OpenAI Service, plus other AI lab and AI startup compute, much of which Microsoft has at least partially funded through M12, the OpenAI Startup Fund, or various co-investment vehicles.

The customer concentration in the Azure AI revenue line is not disclosed. It does not have to be. But the structure of the disclosure tells you the answer. If 80 percent of the $37 billion came from a broad base of independent enterprises, Microsoft would say so on the earnings call. The silence is the answer.

I know it is not the same, but I am feeling nostalgic for vendor financing. Those crazy days of Nortel and Lucent. Lucent perfected the practice during the late 1990s telecom boom, extending credit to competitive local exchange carriers so they could buy Lucent equipment. The financing showed up as an asset on Lucent’s balance sheet. The equipment sales showed up as revenue. Lucent’s reported earnings looked excellent.

The structure worked until the customers ran out of money. By 2001, Lucent had taken billions in writedowns on its customer financing book. The CLECs went bankrupt in waves. Lucent’s stock fell from $84 to under $1.

I know it is not the same.

The current AI version is different. The instrument is convertible preferred stock and dilution gains, not vendor finance receivables. The asset on the hyperscaler balance sheet is an equity investment, not a loan. The customer is an AI lab, not a CLEC. The product being financed is GPU time, not switching equipment.

Still, as an old hand, I am feeling the nostalgia of the old mechanism. The funder, the customer, and the source of the markup are all part of the same closed system.

The same shape exists at Alphabet with Anthropic on Google Cloud. The same shape exists at Amazon with Anthropic on Trainium. Three of the four hyperscalers booked enormous non-cash gains this quarter from their stakes in AI labs. Alphabet booked $36.8 billion of equity gains. Amazon booked $16.8 billion in pre-tax gains on Anthropic. Combined, more than $50 billion of non-cash income flowed through Q1 2026 income statements from AI lab marks and dilution gains.

What the OpenAI restructuring really did

The October 2025 OpenAI recapitalization, which produced Microsoft’s dilution gain, was framed publicly as a governance reform and a step toward a more conventional corporate structure. As I wrote at the time, the fix was in. OpenAI formed a public benefit corporation. Microsoft’s licensing arrangement shifted from exclusive to non-exclusive. The new agreement extended the partnership.

OpenAI will continue to pay Microsoft a 20 percent revenue share through 2030, but only up to a fixed cap, after which the obligation extinguishes. The IP license Microsoft holds on OpenAI’s models, previously exclusive and tied to the elastic concept of “AGI achievement,” is now non-exclusive with a hard 2032 expiration. Under the revised agreement, OpenAI can sell API access to its models through any cloud provider.

PitchBook reads the restructuring as a precondition for OpenAI’s IPO push. The Wall Street Journal reported in January that the company is laying groundwork for a Q4 2026 listing.

When I read the news, I was left asking the same questions: Why did Microsoft do this, and what does it get out of it?

The restructuring, mechanically, gave Microsoft a clean accounting event. The structure of the recap let Microsoft book the gain, reduce its proportional ownership to a still-substantial 27 percent, and free OpenAI to raise more capital from other investors at higher valuations.

Microsoft is no longer the controlling investor. It is a large minority equity holder of a public benefit corporation, with a non-exclusive licensing arrangement and a $13 billion total funding commitment that is nearly fully funded. And come the IPO, it can sell as little or as much of its OpenAI equity as it wants, without any sense of moral obligation.

Microsoft has been quietly converting its OpenAI exposure from an operating dependency into a simple financial position. The accounting now flatters Microsoft as long as OpenAI’s valuation rises.

OpenAI needs Azure for a while, which is great for Microsoft as it builds up its AI business. All the while, OpenAI as an entity becomes a headache for Amazon or whoever else wants to do business with it.

Microsoft’s move cannot be viewed in isolation. On April 27, the Wall Street Journal reported that OpenAI missed multiple monthly revenue targets earlier this year, losing ground to Anthropic in coding and enterprise. ChatGPT fell short of its internal target to reach one billion weekly active users by the end of 2025, with growth flattening around 900 million. CFO Sarah Friar reportedly told colleagues she is worried OpenAI may not be able to fund future computing contracts if revenue does not accelerate.

Altman and Friar issued a joint statement calling the report “ridiculous” and saying they are “totally aligned on buying as much compute as we can.” The denial said they agree on wanting compute; the Journal’s report asked whether OpenAI can afford it, and whether it can still IPO this year.

Altman’s statement and Friar’s comment did nothing to reassure anyone. SoftBank fell almost 10 percent in Tokyo. Oracle dropped more than 5 percent. CoreWeave fell 7 percent. AMD and Broadcom each took roughly 4 percent. The whole AI infrastructure stock ecosystem convulsed.

What if the company at the center of the structure cannot pay its bills?

PitchBook’s Harrison Rolfes calculated that OpenAI’s infrastructure obligations now exceed $1.15 trillion across Oracle, Microsoft, and Amazon. Current annualized revenue is roughly $25 billion, which puts the ratio at roughly forty-six to one. “If revenue growth doesn’t reaccelerate,” Rolfes said, “those contracts become the most expensive fixed-cost bet in technology history.”

In a sense, Friar is not wrong when she tells the board it is going to be hard to go public with those numbers. A CFO comment to board members does not leak to the Wall Street Journal unless someone wants it to leak, either to slow down the IPO or to shank it entirely.

The leak is the story.

The Q4 2026 IPO timeline matters because everything in the financial structure assumes the valuation machine keeps churning at max speed. A successful OpenAI IPO means not only new money but also actual liquidity for Microsoft and other early investors.

PitchBook’s most recent analyst note suggests the realistic IPO window has shifted from Q4 2026 to mid-to-late 2027, citing the same revenue miss and the $1.15 trillion in infrastructure obligations that will need to convert into free cash flow before public market investors get comfortable. If that delay holds, every hyperscaler holding equity gains based on private OpenAI marks is sitting on paper that has to keep being remarked upward to keep working.

This is the announcement economy at the financial-engineering level. Promises about future revenue support current accounting. Current accounting supports the next round. The next round supports the marks. The marks support the parent company income statement. And then the cycle repeats, faster.

There is no doubt in my mind that if the IPO is delayed, there will be a new funding round. If it prices above the implied valuation from the October recap, the cycle continues. If it prices flat or down, the dilution-gain mechanic reverses.

Over the next few quarters I will be watching what Microsoft has to say about its AI run rate, and whether it provides more detail. Given the sheer scale of the money, I am surprised sell-side analysts aren’t pushing for further disclosure. Or maybe they have and I missed it.

I would also be keeping an eye on Azure gross margins. If OpenAI’s compute consumption is priced at preferential rates, as has been widely reported, the gross margin on the largest single piece of the AI business is structurally lower than the rest of Azure. As OpenAI scales further, the blended Azure margin will move with it.

The platform shift Satya Nadella described is real. Workloads are moving from end-user-driven to agent-driven. Token consumption may well grow at machine scale rather than human scale. The capex commitment may be the right call.

But the financial structure underneath the AI revenue line is a delicate balance.


