Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

The great computer science exodus (and where students are going instead)

Students are losing some interest in computer science broadly but gaining interest in AI-specific majors and courses.

Vim 9.2 Released

"More than two years after the last major 9.1 release, the Vim project has announced Vim 9.2," reports the blog Linuxiac: A big part of this update focuses on improving Vim9 Script as Vim 9.2 adds support for enums, generic functions, and tuple types. On top of that, you can now use built-in functions as methods, and class handling includes features like protected constructors with _new(). The :defcompile command has also been improved to fully compile methods, which boosts performance and consistency in Vim9 scripts. Insert mode completion now includes fuzzy matching, so you get more flexible suggestions without extra plugins. You can also complete words from registers using CTRL-X CTRL-R. New completeopt flags like nosort and nearest give you more control over how matches are shown. Vim 9.2 also makes diff mode better by improving how differences are lined up and shown, especially in complex cases. Plus on Linux and Unix-like systems, Vim "now adheres to the XDG Base Directory Specification, using $HOME/.config/vim for user configuration," according to the release notes. And Phoronix Mcites more new features: Vim 9.2 features "full support" for Wayland with its UI and clipboard handling. The Wayland support is considered experimental in this release but it should be in good shape overall... Vim 9.2 also brings a new vertical tab panel alternative to the horizontal tab line. The Microsoft Windows GUI for Vim now also has native dark mode support. You can find the new release on Vim's "Download" page.



Microsoft pushes back on “switch to MacBook” for a compact adapter, says Surface has slim chargers too in a viral X post


Microsoft’s social media pages aren’t ones to shy away from bold claims on platforms like X, but unfortunately the replies are usually negative toward the company, largely due to people’s distaste for AI in Windows 11, and the issues the OS had in 2025 didn’t help either.

This time, however, the official Microsoft Surface account made a lighthearted jab that was far less controversial than the reaction it received. The original post by an X user, who is a developer, showed a side-by-side image of a new MacBook charger and an old Dell Windows laptop charger, with the caption “First reason to switch to MacBook”.

Microsoft Surface social team's reply to a post showing the difference between a new MacBook Charger and an old Dell Windows laptop charger

The Surface social media team replied with “Surface has entered the chat”, which hints at the compact chargers that all portable Surface PCs come with. And in all fairness, most Windows laptops ship with compact chargers these days, except the dirt-cheap ones and the powerful gaming laptops, both of which are understandable.

Microsoft’s reply has gone on to garner almost a million views, but sadly, despite being accurate, the comments section is filled with hostility toward Microsoft and Windows, which isn’t new, to be honest. The bigger issue is that the original post by the developer has reached a staggering 3.8 million views despite making a false claim.

Surface Laptop 7 with its Compact Power Adapter. Source: Crimson Tech on YouTube

Of course, we are not denying the frustration many users feel toward Microsoft, and a lot of it stems from the rough patches Windows 11 went through over the past year. The company has already signaled that 2026 will focus heavily on performance tuning, reliability, and security changes to rebuild confidence in the platform.

But when criticism spills over into situations where the claim itself is accurate, it risks reinforcing outdated perceptions.

Someone casually coming across that viral post could scroll away thinking all Windows laptops still ship with bulky adapters and treat that as a deciding factor, just like the original poster said, without realizing that charger size is dictated by hardware class and power demand.

Most modern Windows laptops already ship with compact USB-C chargers

Over the past several years, most thin-and-light and mid-range Windows laptops have already moved to USB-C Power Delivery adapters that are very similar in size and output to what ships with a MacBook. In fact, Windows laptops in the same price range as MacBooks often ship with chargers that are more compact than Apple’s while delivering even more power.

For example, the following image shows the charger that comes with a Dell XPS 14 from 2025. Yes, the developer in his X post also shows a Dell charger, but we’re not sure which decade it is from.

Dell XPS 14 2025 100W charger in box. Source: Andrew Mark David on YouTube

Note that this charger is already smaller than the MacBook charger shown in the X post, while also delivering 100W of power output. It’s neatly designed with a detachable power cord that connects to the mains, and the adapter can be safely placed on a table or wherever convenient.

In comparison, the following image shows Apple’s 96W USB-C Power Adapter. It isn’t particularly compact, and it plugs directly into the mains, so it wouldn’t be ideal if there is furniture (a desk, table, or sofa) right in front of the power outlet.

96W Apple USB-C Power Adapter

If you think that this is compact, then look at the absolute behemoth shown below.

140W Apple USB-C Adapter

One look at the tiny USB-C port is enough to understand that this is a proper brick that Apple expects us to plug directly into the power outlet. Honestly, not very practical. Apple is known for siding with form over function, and this is one such example. Such a large adapter would have been better with a separate cable that connects to the power outlet, like what Dell did here:

Dell 130W USB-C GaN Slim Adapter

This is Dell’s 130W power adapter, which is much more compact and practical to use. But Dell isn’t done here. The following image shows an incredible 280W power adapter from Dell. It delivers double the power of Apple’s charging brick while still being genuinely compact.

Dell 280W USB-C GaN AC Adapter

Yes, these are what Dell offers for its laptops priced similarly to the MacBook Pro. The company has even smaller ones, but you’ve seen enough of Dell already, so here is one from Samsung:

65W Compact charger that comes as standard with Samsung Galaxy Book 5 Pro. Source: Shane Symonds on YouTube

No further evidence is required to demonstrate how profoundly mistaken the original poster was. So a short explanation of how the Windows laptop industry beat Apple at its own game of minimalist charging adapters should suffice.

How Windows laptop power adapters became compact

While the change was gradual, Windows laptops have been shipping compact chargers for a few years now. So, it isn’t anything new.

  1. The first reason is that the industry standardized around USB Power Delivery, which allows devices and chargers to negotiate voltage and current dynamically, so the laptop asks for exactly the amount of power it needs and the charger supplies it.
  2. The second is the adoption of GaN (gallium nitride) semiconductors. GaN components switch faster and waste less energy as heat than traditional silicon, which allows manufacturers to build smaller, lighter adapters without reducing output capacity.

Apple still doesn’t ship GaN power adapters with MacBooks, unlike Dell, Samsung, or pretty much all mainstream PC brands that sell laptops around the same price as MacBooks.

Of course, Apple certainly helped popularize the idea of minimalist laptop charging, but it would be wildly inaccurate to suggest Windows laptops still use bulky adapters, as the original poster mistakenly claims.

Note that charger size is directly tied to how much power a laptop needs. A thin ultrabook designed for Office work and light development might draw 45W to 65W, which is easy to deliver through a very small USB-C adapter.

A high-performance laptop or gaming laptop can demand well over 140W sustained power to run high-performance CPUs and GPUs without throttling. That extra power has to come from somewhere, and physics still applies, which means larger components, more thermal headroom, and therefore a bigger adapter.

This is why gaming laptops from every brand, whether it is ASUS ROG, Alienware, Lenovo Legion, or MSI, still include sizeable power bricks. They are not designed to be minimalist travel machines. They are effectively portable desktops, so a comparison with MacBooks would be futile.

The official Surface account’s reply was spot on, but the conversation missed the context

The Surface account’s response pointed to the category it actually competes in. Surface Laptop, Surface Pro, and similar devices are thin productivity machines that already come with compact USB-C-based chargers (or the Surface Connect), very much in line with other premium ultraportables on the market.

Microsoft Surface Pro. Source: Microsoft

Microsoft’s socials team was highlighting that modern premium Windows hardware, including Surface devices, has already moved to the same class of compact, travel-friendly charging adapters people associate with MacBooks. The viral comparison used an older, high-wattage barrel charger that does not represent what most ultrabooks ship with today.

What followed online, however, showed a broader sentiment about Windows rather than the specific claim being made.

This lingering negativity toward Microsoft and Windows also means some genuinely important developments risk getting buried in the noise. The PC ecosystem is in the middle of a major hardware transition, with Intel Panther Lake processors and continued momentum behind ARM-based Windows PCs, all targeting better performance per watt and longer battery life. Those advances matter far more than the size of a charger, yet they rarely get the same attention when discourse is dominated by past frustrations.

At the same time, Microsoft has already said that 2026 will focus on stabilizing Windows, with work underway to address reliability problems, reduce AI interventions, improve File Explorer, and rein in background behavior during gaming. The company has even begun reconsidering long-requested changes like a movable and resizable taskbar.

Visual representation of the taskbar at the top

Microsoft undeniably dropped the ball in 2025. But the current roadmap suggests a course correction for Windows 11. If those promised improvements land as expected, 2026 may end up being remembered less for social media debates and more for Windows getting back to fundamentals.



Boost Your .NET Projects with Spargine: Centralized Time Handling with the Clock Type

The Clock type in the DotNetTips.Spargine.Core assembly and NuGet package centralizes time-related functions to enhance application consistency and reduce fragmentation. It provides a comprehensive and reliable abstraction for developers, ensuring efficient time handling while minimizing bugs. By streamlining time operations, Clock fosters cleaner, more predictable code across applications.
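The summary above doesn't include any code, and I haven't checked Spargine's exact Clock surface, so the following is only a minimal sketch of the general pattern a centralized clock type enables: a single, injectable time source. The IClock and SystemClock names below are hypothetical, not Spargine's API.

using System;

public interface IClock
{
    // Single source of truth for the current time
    DateTimeOffset UtcNow { get; }
}

public sealed class SystemClock : IClock
{
    public DateTimeOffset UtcNow => DateTimeOffset.UtcNow;
}

// Consumers depend on the abstraction instead of calling DateTime.UtcNow directly,
// so tests can substitute a fixed clock and time handling stays consistent app-wide.
public sealed class OrderService(IClock clock)
{
    public bool IsExpired(DateTimeOffset expiresAt) => clock.UtcNow >= expiresAt;
}

.NET 8's built-in TimeProvider follows the same idea if you prefer a framework type; the point of a type like Clock is to keep all time reads flowing through one place.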




🧠 Building RAG in .NET with Local Embeddings — 3 Approaches, Zero Cloud Calls


⚠ This blog post was created with the help of AI tools. Yes, I used a bit of magic from language models to organize my thoughts and automate the boring parts, but the geeky fun and the 🤖 in C# are 100% mine.

Hi! 👋

One of the questions I get most often is: “Bruno, can I build a RAG (Retrieval-Augmented Generation) app in .NET without sending my data to the cloud?”

The answer is a resounding YES. 🚀

In this post, I’ll walk you through three different ways to build RAG applications using ElBruno.LocalEmbeddings — a .NET library that generates text embeddings locally using ONNX Runtime. No external API calls for embeddings. Everything runs on your machine.

Each approach uses a different level of abstraction:

#  | Sample          | Pattern                 | LLM                        | Complexity
1  | RagChat         | Retrieval-only (no LLM) | None                       | VectorData + DI
2  | RagOllama       | Turnkey RAG             | Ollama (phi4-mini)         | Kernel Memory orchestrates everything
3  | RagFoundryLocal | Manual RAG pipeline     | Foundry Local (phi-4-mini) | Full control, core library only

📦 The Library: ElBruno.LocalEmbeddings

Before we start, here’s the quick setup. The core NuGet package:

dotnet add package ElBruno.LocalEmbeddings

And the companion packages we’ll use across the samples:

# For Microsoft.Extensions.VectorData integration (Sample 1)
dotnet add package ElBruno.LocalEmbeddings.VectorData
# For Microsoft Kernel Memory integration (Sample 2)
dotnet add package ElBruno.LocalEmbeddings.KernelMemory

The library implements IEmbeddingGenerator<string, Embedding<float>> from Microsoft.Extensions.AI, so it plugs into any .NET AI pipeline that uses that abstraction. It downloads and caches HuggingFace sentence-transformer models automatically — no manual model management needed.

💡 Default model: sentence-transformers/all-MiniLM-L6-v2 — 384-dimensional embeddings, ~90 MB download, cached locally after first run.
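If you want to see the library in action before diving into the RAG samples, here is a minimal "hello world" sketch based on the core API used later in this post (a top-level program; the output line is just mine for illustration):

using ElBruno.LocalEmbeddings;
using ElBruno.LocalEmbeddings.Extensions;

// First run downloads and caches the default MiniLM model locally
using var generator = new LocalEmbeddingGenerator();

// Generate a single 384-dimensional embedding, entirely on-device
var embedding = await generator.GenerateEmbeddingAsync("Local embeddings in .NET");
Console.WriteLine($"Dimensions: {embedding.Vector.Length}");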


🔍 Sample 1: RagChat — Semantic Search with VectorData (No LLM!)

The idea: Embed a set of FAQ documents, store them in an in-memory vector store, and let the user search by typing natural language queries. The system returns the most relevant documents ranked by cosine similarity. No LLM is involved — this is pure embedding-based retrieval.

This sample uses the ElBruno.LocalEmbeddings.VectorData companion package, which integrates with Microsoft.Extensions.VectorData abstractions and includes a built-in InMemoryVectorStore.

Step 1: Define the Document Model

First, we define a Document class using VectorData attributes:

using Microsoft.Extensions.VectorData;

public sealed class Document
{
    [VectorStoreKey]
    public required string Id { get; init; }

    [VectorStoreData]
    public required string Title { get; init; }

    [VectorStoreData]
    public required string Content { get; init; }

    [VectorStoreVector(384, DistanceFunction = DistanceFunction.CosineSimilarity)]
    public ReadOnlyMemory<float> Vector { get; set; }

    [VectorStoreData]
    public string? Category { get; init; }
}

Notice the [VectorStoreVector(384)] attribute — that matches the 384 dimensions of the default MiniLM model. The DistanceFunction.CosineSimilarity tells the vector store how to rank results.
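For context, cosine similarity is just the dot product of the two vectors divided by the product of their magnitudes; two texts about the same topic score close to 1, unrelated texts closer to 0. The vector store computes this ranking for you, so the helper below is purely illustrative and not part of the library:

// Illustrative only; the in-memory vector store performs this ranking internally.
static float CosineSimilarity(ReadOnlySpan<float> a, ReadOnlySpan<float> b)
{
    float dot = 0f, magA = 0f, magB = 0f;
    for (var i = 0; i < a.Length; i++)
    {
        dot += a[i] * b[i];
        magA += a[i] * a[i];
        magB += b[i] * b[i];
    }
    return dot / (MathF.Sqrt(magA) * MathF.Sqrt(magB));
}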

Step 2: Wire Up DI and Load Documents

using ElBruno.LocalEmbeddings.VectorData.Extensions;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.VectorData;

// Step 1: Configure DI
var services = new ServiceCollection();
services.AddLocalEmbeddingsWithInMemoryVectorStore(options =>
{
    options.ModelName = "sentence-transformers/all-MiniLM-L6-v2";
    options.MaxSequenceLength = 256;
    options.EnsureModelDownloaded = true;
})
.AddVectorStoreCollection<string, Document>("faq");

using var serviceProvider = services.BuildServiceProvider();

// Step 2: Resolve embedding generator + vector collection
var embeddingGenerator = serviceProvider
    .GetRequiredService<IEmbeddingGenerator<string, Embedding<float>>>();
var faqCollection = serviceProvider
    .GetRequiredService<VectorStoreCollection<string, Document>>();

One line — AddLocalEmbeddingsWithInMemoryVectorStore() — registers both the local embedding generator and the in-memory vector store. Then we add a typed collection called "faq" for our Document model.

Step 3: Batch Embed and Upsert

// Step 3: Load FAQ documents, batch-embed, upsert into vector store
var documents = SampleData.GetFaqDocuments(); // 20 FAQ docs
var embeddings = await embeddingGenerator
    .GenerateAsync(documents.Select(d => d.Content).ToList());

for (var i = 0; i < documents.Count; i++)
    documents[i].Vector = embeddings[i].Vector;

await faqCollection.UpsertAsync(documents);

We batch-embed all 20 documents at once (efficient!), assign vectors, and upsert them into the vector store.

Step 4: Search Loop

while (true)
{
    var input = Console.ReadLine();

    // Embed the user query
    var queryEmbedding = (await embeddingGenerator.GenerateAsync([input]))[0];

    // Search the vector store
    var results = await faqCollection
        .SearchAsync(queryEmbedding, top: 3)
        .ToListAsync();

    // Filter by minimum similarity score
    results = results
        .Where(r => (r.Score ?? 0d) >= 0.2d)
        .OrderByDescending(r => r.Score ?? 0d)
        .ToList();

    foreach (var result in results)
        Console.WriteLine($" [{result.Score:P0}] {result.Record.Title}");
}

That’s it! The user types a question, we embed it, search the vector collection with SearchAsync, and display matches with their similarity scores. No LLM, no cloud calls, no API keys.


🦙 Sample 2: RagOllama — Full RAG with Kernel Memory + Ollama

The idea: Use Microsoft Kernel Memory to orchestrate the entire RAG pipeline — chunking, embedding, storage, retrieval, prompt building, and LLM response — with a single .WithLocalEmbeddings() call for the embedding part and Ollama running phi4-mini locally for text generation.

This is the “turnkey” approach — Kernel Memory handles everything. You just import text and ask questions.

The Before/After Pattern

This sample first asks the question without any memory (baseline), then asks the same question with RAG to show the difference:

using ElBruno.LocalEmbeddings.KernelMemory.Extensions;
using Microsoft.KernelMemory;
using Microsoft.KernelMemory.AI.Ollama;
using Microsoft.KernelMemory.Configuration;
using OllamaSharp;

var ollamaEndpoint = "http://localhost:11434";
var modelIdChat = "phi4-mini";
var question = "What is Bruno's favourite super hero?";

// ❌ Ask WITHOUT memory — the model doesn't know the answer
var ollama = new OllamaApiClient(ollamaEndpoint)
{
    SelectedModel = modelIdChat
};

Console.WriteLine("Answer WITHOUT memory:");
await foreach (var token in ollama.GenerateAsync(question))
    Console.Write(token.Response);

Without context, the LLM just guesses. Now let’s build the RAG pipeline:

Build Kernel Memory with Local Embeddings

// Configure Ollama for text generation
var config = new OllamaConfig
{
    Endpoint = ollamaEndpoint,
    TextModel = new OllamaModelConfig(modelIdChat)
};

// Build Kernel Memory: Ollama for chat + local embeddings for vectors
var memory = new KernelMemoryBuilder()
    .WithOllamaTextGeneration(config)
    .WithLocalEmbeddings() // 👈 This is the magic line!
    .WithCustomTextPartitioningOptions(new TextPartitioningOptions
    {
        MaxTokensPerParagraph = 256,
        OverlappingTokens = 50
    })
    .Build();

.WithLocalEmbeddings() is an extension method from the ElBruno.LocalEmbeddings.KernelMemory companion package. Under the hood, it creates a LocalEmbeddingGenerator with default options and wraps it in a LocalEmbeddingTextGenerator adapter that implements Kernel Memory's ITextEmbeddingGenerator interface. One line, zero configuration.

Import Facts and Ask with Memory

// Import facts into memory
var facts = new[]
{
    "Gisela's favourite super hero is Batman",
    "Gisela watched Venom 3 2 weeks ago",
    "Bruno's favourite super hero is Invincible",
    "Bruno went to the cinema to watch Venom 3",
    "Bruno doesn't like the super hero movie: Eternals",
    "ACE and Goku watched the movies Venom 3 and Eternals",
};

for (var i = 0; i < facts.Length; i++)
    await memory.ImportTextAsync(facts[i], (i + 1).ToString());

// ✅ Ask WITH memory — now the model knows!
Console.WriteLine("\nAnswer WITH memory:");
await foreach (var result in memory.AskStreamingAsync(question))
{
    Console.Write(result.Result);

    if (result.RelevantSources.Count > 0)
        foreach (var source in result.RelevantSources)
            Console.WriteLine($" [source: #{source.Index}] {source.SourceUrl}");
}

When you call ImportTextAsync, Kernel Memory automatically:

  1. Chunks the text (256 tokens per paragraph, 50 overlapping)
  2. Embeds each chunk using our local ONNX model
  3. Stores the chunks and vectors in its built-in store

When you call AskStreamingAsync, it:

  1. Embeds the question
  2. Retrieves the most relevant chunks
  3. Builds a prompt with the context
  4. Streams the LLM response from Ollama

All in one call. The answer now correctly says “Bruno’s favourite super hero is Invincible” — with source citations! 🎉

Prerequisites

  • Ollama running locally with phi4-mini pulled

🏗 Sample 3: RagFoundryLocal — Manual RAG with Foundry Local

The idea: Build the entire RAG pipeline by hand — embed facts, search with FindClosest(), construct a prompt template, and stream the LLM response. This sample uses only the core ElBruno.LocalEmbeddings package (no companion packages) and Microsoft AI Foundry Local for the LLM.

This is the “full control” approach — every step is explicit.

Start the Model and Ask Without Context

using ElBruno.LocalEmbeddings;
using ElBruno.LocalEmbeddings.Extensions;
using Microsoft.AI.Foundry.Local;
using Microsoft.Extensions.AI;
using OpenAI;
using System.ClientModel;

var modelAlias = "phi-4-mini";
var question = "What is Bruno's favourite super hero?";
const int topK = 3;

// Start Foundry Local model
await using var manager = await FoundryLocalManager.StartModelAsync(modelAlias);

// Resolve the alias to the actual model ID registered on the server
var modelIdChat = await ResolveModelIdAsync(manager.Endpoint, modelAlias);

var openAiClient = new OpenAIClient(
    new ApiKeyCredential(manager.ApiKey),
    new OpenAIClientOptions { Endpoint = manager.Endpoint });

IChatClient chatClient = openAiClient
    .GetChatClient(modelIdChat)
    .AsIChatClient();

// ❌ Ask without context (baseline)
await foreach (var update in chatClient.GetStreamingResponseAsync(
    [new ChatMessage(ChatRole.User, question)]))
    Console.Write(update.Text);

Foundry Local starts a local inference server and exposes an OpenAI-compatible API. We use IChatClient from Microsoft.Extensions.AI — the same abstraction you’d use with Azure OpenAI or any other provider.

Build the RAG Pipeline Step by Step

// Same facts as the Ollama sample
string[] facts =
[
    "Gisela's favourite super hero is Batman",
    "Gisela watched Venom 3 2 weeks ago",
    "Bruno's favourite super hero is Invincible",
    "Bruno went to the cinema to watch Venom 3",
    "Bruno doesn't like the super hero movie: Eternals",
    "ACE and Goku watched the movies Venom 3 and Eternals",
];

// Step 1: Embed all facts locally
using var embeddingGenerator = new LocalEmbeddingGenerator();
var factEmbeddings = await embeddingGenerator.GenerateAsync(facts);

// Step 2: Zip facts with their embeddings
var indexedFacts = facts.Zip(
    factEmbeddings,
    (fact, embedding) => (Item: fact, Embedding: embedding));

// Step 3: Embed the question and find closest matches
var queryEmbedding = await embeddingGenerator.GenerateEmbeddingAsync(question);
var contextDocs = indexedFacts
    .FindClosest(queryEmbedding, topK: topK)
    .Select(match => match.Item);

Here we use two key extension methods from the core library:

  • GenerateEmbeddingAsync(string) — convenience method that returns a single Embedding<float> directly (no array indexing needed)
  • FindClosest() — extension on IEnumerable<(T Item, Embedding<float>)> that performs cosine similarity ranking and returns the top-K matches

No vector store, no DI container — just LINQ and extension methods.

Build the Prompt and Stream the Response

// Step 4: Build the prompt with retrieved context
static string BuildPrompt(string question, IEnumerable<string> contextDocs)
{
    var context = string.Join("\n- ", contextDocs);
    return $"""
        You are a helpful assistant. Use the provided context
        to answer briefly and accurately.
        Context:
        - {context}
        Question: {question}
        """;
}

// Step 5: Ask the LLM with context ✅
await foreach (var update in chatClient.GetStreamingResponseAsync(
    [new ChatMessage(ChatRole.User, BuildPrompt(question, contextDocs))]))
    Console.Write(update.Text);

We build a simple prompt template using C# raw string literals, inject the retrieved context, and stream the response. The LLM now has the relevant facts and answers correctly.

Prerequisites


📊 Comparison: Which Approach Should You Use?

Aspect                | RagChat                                      | RagOllama                             | RagFoundryLocal
LLM                   | None (retrieval only)                        | Ollama phi4-mini                      | Foundry Local phi-4-mini
Embedding integration | DI + VectorData                              | Kernel Memory companion               | Core library directly
RAG orchestration     | Manual (VectorData SearchAsync)              | Automatic (Kernel Memory)             | Manual (embed → search → prompt)
Vector store          | InMemoryVectorStore (built-in)               | Kernel Memory's built-in store        | In-memory via LINQ
Companion packages    | ElBruno.LocalEmbeddings.VectorData           | ElBruno.LocalEmbeddings.KernelMemory  | None — core only
Key extension method  | AddLocalEmbeddingsWithInMemoryVectorStore()  | .WithLocalEmbeddings()                | FindClosest()
Lines of RAG code     | ~20                                          | ~15                                   | ~25
Best for              | Search-only, FAQ, no LLM cost                | Turnkey RAG with minimal code         | Full pipeline control

My recommendation:

  • Start with RagChat if you just need semantic search and don’t want an LLM dependency
  • Use RagOllama if you want a complete RAG system with minimal plumbing
  • Go with RagFoundryLocal if you need to customize every step of the pipeline

All three share the same foundation: embeddings generated locally on your machine, no cloud calls, no API keys for the embedding part.


🔗 References and Resources

Project

Sample Source Code

External Projects

Happy coding!

Greetings

El Bruno

More posts in my blog ElBruno.com.

More info in https://beacons.ai/elbruno







Exploring .NET 11 Preview 1 Runtime Async: A dive into the Future of Async in .NET


.NET 11 Preview 1 ships a groundbreaking feature: Runtime Async. Instead of relying solely on the C# compiler to rewrite async/await methods into state machines, the .NET runtime itself now understands async methods as a first-class concept. This article explores what Runtime Async is, why it matters, what changed in Preview 1, and how you can experiment with it today.
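The summary is high level, so to ground it: the programming model itself does not change. The method below is ordinary C#; the difference is only whether the compiler lowers it into a state-machine class (today's behavior) or the runtime executes the async method natively (the Preview 1 experiment, which is opted into via preview configuration I won't guess at here).

using System.Net.Http;
using System.Threading.Tasks;

// Ordinary async/await: source like this is unchanged by Runtime Async.
// Today the C# compiler rewrites it into a state machine; with the .NET 11
// Runtime Async experiment enabled, the runtime itself understands the async
// method as a first-class concept and can execute it more directly.
static class Sample
{
    public static async Task<string> FetchAsync(HttpClient client, string url)
    {
        var response = await client.GetAsync(url);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}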
