Microsoft's social media pages aren't ones to shy away from bold claims on platforms like X, but unfortunately, the comments and replies are usually negative toward the company, mostly because of people's distaste for AI in Windows 11, and the issues the OS had in 2025 didn't help either.
This time, however, the official Microsoft Surface account made a lighthearted jab that was far less controversial than the reaction it received. The original post by an X user, who is a developer, showed a side-by-side image of a new MacBook charger and an old Dell Windows laptop charger, with the caption “First reason to switch to MacBook”.

The Surface social media team replied with “Surface has entered the chat”, which hints at the compact chargers that all portable Surface PCs come with. And in all fairness, most Windows laptops ship with compact chargers these days, except the dirt-cheap ones and the powerful gaming laptops, both of which are understandable.
Microsoft's reply has gone on to garner almost a million views, but despite being accurate, the comments section is filled with hostility toward Microsoft and Windows, which isn't new, to be honest. The bigger issue is that the developer's original post has reached a staggering 3.8 million views despite resting on a false premise.

Of course, we are not denying the frustration many users feel toward Microsoft, and a lot of it stems from the rough patches Windows 11 went through over the past year. The company has already signaled that 2026 will focus heavily on performance tuning, reliability, and security changes to rebuild confidence in the platform.
But when criticism spills over into situations where the claim itself is accurate, it risks reinforcing outdated perceptions.
Someone casually coming across that viral post could scroll away thinking all Windows laptops still ship with bulky adapters and treat that as a deciding factor, just like the original poster said, without realizing that charger size is dictated by hardware class and power demand.
Over the past several years, most thin-and-light and mid-range Windows laptops have already moved to USB-C Power Delivery adapters that are very similar in size and output to what ships with a MacBook. In fact, Windows laptops in the price range of MacBooks have more compact chargers than what Apple provides, with even higher power delivery.
For example, the following image shows the charger that comes with a Dell XPS 14 from 2025. Yes, the developer in his X post also shows a Dell charger, but we’re not sure which decade it is from.

Note that this charger is already smaller than the MacBook charger shown in the X post, while also delivering 100W of power output. It’s neatly designed with a detachable power cord that connects to the mains, and the adapter can be safely placed on a table or wherever convenient.
In comparison, the following image shows a 96W USB-C Power Adapter by Apple. It’s not something that looks compact, and it connects directly to the mains, so it wouldn’t be ideal if there is furniture (desk, table, or sofa) right in front of the power outlet.

If you think that this is compact, then look at the absolute behemoth shown below.

One look at the tiny USB-C port is enough to understand that this is a proper brick that Apple expects us to connect directly to the power outlet. Honestly, not very practical. Apple is known for siding with form over function, and this is one such example. Such a large adapter would have been better with a separate cable connecting to the power outlet, like what Dell did here:

This is Dell’s 130W power adapter which is much more compact and practical to use. But Dell isn’t done here. The following image shows an incredible 280W power adapter from Dell. It’s double the power delivery of Apple’s charging brick, while still being actually compact.

Yes, these are what Dell offers for its laptops priced similarly to the MacBook Pro. The company has even smaller ones, but you've seen enough of Dell already, so here is one from Samsung:


No further evidence is needed to show how mistaken the original poster was, so let's close with a short explanation of how the Windows laptop industry beat Apple at its own game of minimalist charging adapters.
How Windows laptop power adapters became compact
While the change was gradual, Windows laptops have been shipping compact chargers for a few years now. So, it isn’t anything new.
Apple still doesn’t ship GaN power adapters with MacBooks, unlike Dell, Samsung, or pretty much all mainstream PC brands that sell laptops around the same price as MacBooks.
Of course, Apple certainly helped popularize the idea of minimalist laptop charging, but it would be wildly inaccurate to suggest Windows laptops still use bulky adapters, as the original poster mistakenly claims.
Note that charger size is directly tied to how much power a laptop needs. A thin ultrabook designed for Office work and light development might draw 45W to 65W, which is easy to deliver through a very small USB-C adapter.
A high-performance laptop or gaming laptop can demand well over 140W sustained power to run high-performance CPUs and GPUs without throttling. That extra power has to come from somewhere, and physics still applies, which means larger components, more thermal headroom, and therefore a bigger adapter.
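To put rough numbers on it (assuming standard USB-C Power Delivery limits): a 65W ultrabook charger only needs to supply 20 V at about 3.25 A, classic USB-C PD tops out at 100 W (20 V at 5 A), and the newer PD 3.1 Extended Power Range reaches 240 W (48 V at 5 A), which is why the very highest-wattage gaming bricks often still rely on proprietary connectors.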
This is why gaming laptops from every brand, whether it is ASUS ROG, Alienware, Lenovo Legion, or MSI, still include sizeable power bricks. They are not designed to be minimalist travel machines. They are effectively portable desktops, so a comparison with MacBooks would be futile.
The Surface account’s response was pointing to the category it actually competes in. Surface Laptop, Surface Pro, and similar devices are thin productivity machines that already come with compact USB-C-based chargers (or the Surface Connect), very much in line with other premium ultraportables on the market.

Microsoft’s socials team was highlighting that modern premium Windows hardware, including Surface devices, has already moved to the same class of compact, travel-friendly charging adapters people associate with MacBooks. The viral comparison used an older, high-wattage barrel charger that does not represent what most ultrabooks ship with today.
What followed online, however, reflected a broader sentiment about Windows rather than a response to the specific claim being made.
This lingering negativity toward Microsoft and Windows also means some genuinely important developments risk getting buried in the noise. The PC ecosystem is in the middle of a major hardware transition, with Intel Panther Lake processors and continued momentum behind ARM-based Windows PCs, all targeting better performance per watt and longer battery life. Those advances matter far more than the size of a charger, yet they rarely get the same attention when discourse is dominated by past frustrations.
At the same time, Microsoft has already said that 2026 will focus on stabilizing Windows, with work underway to address reliability problems, reduce AI interventions, improve File Explorer, and refine background behavior during gaming. The company has even begun reconsidering long-requested changes like a movable and resizable taskbar.

Microsoft undeniably dropped the ball in 2025. But the current roadmap suggests a course correction for Windows 11. If those promised improvements land as expected, 2026 may end up being remembered less for social media debates and more for Windows getting back to fundamentals.

This blog post was created with the help of AI tools. Yes, I used a bit of magic from language models to organize my thoughts and automate the boring parts, but the geeky fun and the code in C# are 100% mine.
Hi! 
One of the questions I get most often is: “Bruno, can I build a RAG (Retrieval-Augmented Generation) app in .NET without sending my data to the cloud?”
The answer is a resounding YES. 
In this post, I’ll walk you through three different ways to build RAG applications using ElBruno.LocalEmbeddings — a .NET library that generates text embeddings locally using ONNX Runtime. No external API calls for embeddings. Everything runs on your machine.
Each approach uses a different level of abstraction:
| # | Sample | Pattern | LLM | Complexity |
|---|---|---|---|---|
| 1 | RagChat | Retrieval-only (no LLM) | None | VectorData + DI |
| 2 | RagOllama | Turnkey RAG | Ollama (phi4-mini) | Kernel Memory orchestrates everything |
| 3 | RagFoundryLocal | Manual RAG pipeline | Foundry Local (phi-4-mini) | Full control, core library only |
The Library: ElBruno.LocalEmbeddings

Before we start, here's the quick setup. The core NuGet package:
dotnet add package ElBruno.LocalEmbeddings
And the companion packages we’ll use across the samples:
# For Microsoft.Extensions.VectorData integration (Sample 1)
dotnet add package ElBruno.LocalEmbeddings.VectorData

# For Microsoft Kernel Memory integration (Sample 2)
dotnet add package ElBruno.LocalEmbeddings.KernelMemory
The library implements IEmbeddingGenerator<string, Embedding<float>> from Microsoft.Extensions.AI, so it plugs into any .NET AI pipeline that uses that abstraction. It downloads and caches HuggingFace sentence-transformer models automatically — no manual model management needed.
Default model: sentence-transformers/all-MiniLM-L6-v2 — 384-dimensional embeddings, ~90 MB download, cached locally after the first run.
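Before diving into the samples, here is a minimal sketch of the generator used on its own. It relies only on calls that appear later in this post (LocalEmbeddingGenerator and GenerateEmbeddingAsync); the dimension check assumes the default MiniLM model:

using ElBruno.LocalEmbeddings;
using ElBruno.LocalEmbeddings.Extensions;

// Downloads and caches sentence-transformers/all-MiniLM-L6-v2 on the first run
using var generator = new LocalEmbeddingGenerator();

// Embed a single string entirely on-device, with no API keys and no cloud calls
var embedding = await generator.GenerateEmbeddingAsync("Local embeddings in .NET!");
Console.WriteLine($"Dimensions: {embedding.Vector.Length}"); // 384 for the default model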
Sample 1: RagChat — Semantic Search with VectorData (No LLM!)

The idea: Embed a set of FAQ documents, store them in an in-memory vector store, and let the user search by typing natural language queries. The system returns the most relevant documents ranked by cosine similarity. No LLM is involved — this is pure embedding-based retrieval.
This sample uses the ElBruno.LocalEmbeddings.VectorData companion package, which integrates with Microsoft.Extensions.VectorData abstractions and includes a built-in InMemoryVectorStore.
First, we define a Document class using VectorData attributes:
using Microsoft.Extensions.VectorData;

public sealed class Document
{
    [VectorStoreKey]
    public required string Id { get; init; }

    [VectorStoreData]
    public required string Title { get; init; }

    [VectorStoreData]
    public required string Content { get; init; }

    [VectorStoreVector(384, DistanceFunction = DistanceFunction.CosineSimilarity)]
    public ReadOnlyMemory<float> Vector { get; set; }

    [VectorStoreData]
    public string? Category { get; init; }
}
Notice the [VectorStoreVector(384)] attribute — that matches the 384 dimensions of the default MiniLM model. The DistanceFunction.CosineSimilarity tells the vector store how to rank results.
using ElBruno.LocalEmbeddings.VectorData.Extensions;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.VectorData;

// Step 1: Configure DI
var services = new ServiceCollection();
services.AddLocalEmbeddingsWithInMemoryVectorStore(options =>
{
    options.ModelName = "sentence-transformers/all-MiniLM-L6-v2";
    options.MaxSequenceLength = 256;
    options.EnsureModelDownloaded = true;
}).AddVectorStoreCollection<string, Document>("faq");

using var serviceProvider = services.BuildServiceProvider();

// Step 2: Resolve embedding generator + vector collection
var embeddingGenerator = serviceProvider
    .GetRequiredService<IEmbeddingGenerator<string, Embedding<float>>>();
var faqCollection = serviceProvider
    .GetRequiredService<VectorStoreCollection<string, Document>>();
One line — AddLocalEmbeddingsWithInMemoryVectorStore() — registers both the local embedding generator and the in-memory vector store. Then we add a typed collection called "faq" for our Document model.
// Step 3: Load FAQ documents, batch-embed, upsert into vector store
var documents = SampleData.GetFaqDocuments(); // 20 FAQ docs
var embeddings = await embeddingGenerator
    .GenerateAsync(documents.Select(d => d.Content).ToList());

for (var i = 0; i < documents.Count; i++)
    documents[i].Vector = embeddings[i].Vector;

await faqCollection.UpsertAsync(documents);
We batch-embed all 20 documents at once (efficient!), assign vectors, and upsert them into the vector store.
while (true)
{
    var input = Console.ReadLine();

    // Embed the user query
    var queryEmbedding = (await embeddingGenerator.GenerateAsync([input]))[0];

    // Search the vector store
    var results = await faqCollection
        .SearchAsync(queryEmbedding, top: 3)
        .ToListAsync();

    // Filter by minimum similarity score
    results = results
        .Where(r => (r.Score ?? 0d) >= 0.2d)
        .OrderByDescending(r => r.Score ?? 0d)
        .ToList();

    foreach (var result in results)
        Console.WriteLine($"  [{result.Score:P0}] {result.Record.Title}");
}
That’s it! The user types a question, we embed it, search the vector collection with SearchAsync, and display matches with their similarity scores. No LLM, no cloud calls, no API keys.
Sample 2: RagOllama — Full RAG with Kernel Memory + Ollama

The idea: Use Microsoft Kernel Memory to orchestrate the entire RAG pipeline — chunking, embedding, storage, retrieval, prompt building, and LLM response — with a single .WithLocalEmbeddings() call for the embedding part and Ollama running phi4-mini locally for text generation.
This is the “turnkey” approach — Kernel Memory handles everything. You just import text and ask questions.
This sample first asks the question without any memory (baseline), then asks the same question with RAG to show the difference:
using ElBruno.LocalEmbeddings.KernelMemory.Extensions;
using Microsoft.KernelMemory;
using Microsoft.KernelMemory.AI.Ollama;
using Microsoft.KernelMemory.Configuration;
using OllamaSharp;

var ollamaEndpoint = "http://localhost:11434";
var modelIdChat = "phi4-mini";
var question = "What is Bruno's favourite super hero?";

// ❌ Ask WITHOUT memory — the model doesn't know the answer
var ollama = new OllamaApiClient(ollamaEndpoint)
{
    SelectedModel = modelIdChat
};

Console.WriteLine("Answer WITHOUT memory:");
await foreach (var token in ollama.GenerateAsync(question))
    Console.Write(token.Response);
Without context, the LLM just guesses. Now let’s build the RAG pipeline:
// Configure Ollama for text generation
var config = new OllamaConfig
{
    Endpoint = ollamaEndpoint,
    TextModel = new OllamaModelConfig(modelIdChat)
};

// Build Kernel Memory: Ollama for chat + local embeddings for vectors
var memory = new KernelMemoryBuilder()
    .WithOllamaTextGeneration(config)
    .WithLocalEmbeddings() // 👈 This is the magic line!
    .WithCustomTextPartitioningOptions(new TextPartitioningOptions
    {
        MaxTokensPerParagraph = 256,
        OverlappingTokens = 50
    })
    .Build();
.WithLocalEmbeddings() is an extension method from the ElBruno.LocalEmbeddings.KernelMemory companion package. Under the hood, it creates a LocalEmbeddingGenerator with default options and wraps it in a LocalEmbeddingTextGenerator adapter that implements Kernel Memory's ITextEmbeddingGenerator interface. One line, zero configuration.
// Import facts into memory
var facts = new[]
{
    "Gisela's favourite super hero is Batman",
    "Gisela watched Venom 3 2 weeks ago",
    "Bruno's favourite super hero is Invincible",
    "Bruno went to the cinema to watch Venom 3",
    "Bruno doesn't like the super hero movie: Eternals",
    "ACE and Goku watched the movies Venom 3 and Eternals",
};

for (var i = 0; i < facts.Length; i++)
    await memory.ImportTextAsync(facts[i], (i + 1).ToString());

// ✅ Ask WITH memory — now the model knows!
Console.WriteLine("\nAnswer WITH memory:");
await foreach (var result in memory.AskStreamingAsync(question))
{
    Console.Write(result.Result);

    if (result.RelevantSources.Count > 0)
        foreach (var source in result.RelevantSources)
            Console.WriteLine($"  [source: #{source.Index}] {source.SourceUrl}");
}
When you call ImportTextAsync, Kernel Memory automatically chunks the text according to the partitioning options, generates an embedding for each chunk with the local embedding generator, and stores the chunks and their vectors in its memory store.
When you call AskStreamingAsync, it embeds the question, retrieves the most relevant chunks, builds the prompt with that context, and streams the answer back from the Ollama model.
All in one call. The answer now correctly says “Bruno’s favourite super hero is Invincible” — with source citations! 
(Screenshot: the phi4-mini model pulled and ready in Ollama)
Sample 3: RagFoundryLocal — Manual RAG with Foundry Local

The idea: Build the entire RAG pipeline by hand — embed facts, search with FindClosest(), construct a prompt template, and stream the LLM response. This sample uses only the core ElBruno.LocalEmbeddings package (no companion packages) and Microsoft AI Foundry Local for the LLM.
This is the “full control” approach — every step is explicit.
using ElBruno.LocalEmbeddings;
using ElBruno.LocalEmbeddings.Extensions;
using Microsoft.AI.Foundry.Local;
using Microsoft.Extensions.AI;
using OpenAI;
using System.ClientModel;

var modelAlias = "phi-4-mini";
var question = "What is Bruno's favourite super hero?";
const int topK = 3;

// Start Foundry Local model
await using var manager = await FoundryLocalManager.StartModelAsync(modelAlias);

// Resolve the alias to the actual model ID registered on the server
var modelIdChat = await ResolveModelIdAsync(manager.Endpoint, modelAlias);

var openAiClient = new OpenAIClient(
    new ApiKeyCredential(manager.ApiKey),
    new OpenAIClientOptions { Endpoint = manager.Endpoint });

IChatClient chatClient = openAiClient
    .GetChatClient(modelIdChat)
    .AsIChatClient();

// ❌ Ask without context (baseline)
await foreach (var update in chatClient.GetStreamingResponseAsync(
    [new ChatMessage(ChatRole.User, question)]))
    Console.Write(update.Text);
Foundry Local starts a local inference server and exposes an OpenAI-compatible API. We use IChatClient from Microsoft.Extensions.AI — the same abstraction you’d use with Azure OpenAI or any other provider.
// Same facts as the Ollama sample
string[] facts =
[
    "Gisela's favourite super hero is Batman",
    "Gisela watched Venom 3 2 weeks ago",
    "Bruno's favourite super hero is Invincible",
    "Bruno went to the cinema to watch Venom 3",
    "Bruno doesn't like the super hero movie: Eternals",
    "ACE and Goku watched the movies Venom 3 and Eternals",
];

// Step 1: Embed all facts locally
using var embeddingGenerator = new LocalEmbeddingGenerator();
var factEmbeddings = await embeddingGenerator.GenerateAsync(facts);

// Step 2: Zip facts with their embeddings
var indexedFacts = facts.Zip(
    factEmbeddings,
    (fact, embedding) => (Item: fact, Embedding: embedding));

// Step 3: Embed the question and find closest matches
var queryEmbedding = await embeddingGenerator.GenerateEmbeddingAsync(question);
var contextDocs = indexedFacts
    .FindClosest(queryEmbedding, topK: topK)
    .Select(match => match.Item);
Here we use two key extension methods from the core library:
- GenerateEmbeddingAsync(string) — convenience method that returns a single Embedding<float> directly (no array indexing needed)
- FindClosest() — extension on IEnumerable<(T Item, Embedding<float>)> that performs cosine similarity ranking and returns the top-K matches

No vector store, no DI container — just LINQ and extension methods.
// Step 4: Build the prompt with retrieved context
static string BuildPrompt(string question, IEnumerable<string> contextDocs)
{
    var context = string.Join("\n- ", contextDocs);
    return $"""
        You are a helpful assistant.
        Use the provided context to answer briefly and accurately.

        Context:
        - {context}

        Question: {question}
        """;
}

// Step 5: Ask the LLM with context ✅
await foreach (var update in chatClient.GetStreamingResponseAsync(
    [new ChatMessage(ChatRole.User, BuildPrompt(question, contextDocs))]))
    Console.Write(update.Text);
We build a simple prompt template using C# raw string literals, inject the retrieved context, and stream the response. The LLM now has the relevant facts and answers correctly.
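If you are curious what a ranking like FindClosest() is conceptually doing, here is an illustrative sketch of a cosine-similarity top-K ranking in plain LINQ. This is not the library's actual implementation, just the idea behind it:

// Illustrative only: rank items against a query vector by cosine similarity
// and keep the top-K. FindClosest() encapsulates this kind of logic for you.
static IEnumerable<(string Item, double Score)> RankByCosine(
    IEnumerable<(string Item, float[] Vector)> docs, float[] query, int topK) =>
    docs.Select(d => (d.Item, Score: Cosine(d.Vector, query)))
        .OrderByDescending(x => x.Score)
        .Take(topK);

static double Cosine(float[] a, float[] b)
{
    double dot = 0, normA = 0, normB = 0;
    for (var i = 0; i < a.Length; i++)
    {
        dot += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
    }
    return dot / (Math.Sqrt(normA) * Math.Sqrt(normB));
}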
(Screenshot: the phi-4-mini model available in Foundry Local)
Comparison: Which Approach Should You Use?

| Aspect | RagChat | RagOllama | RagFoundryLocal |
|---|---|---|---|
| LLM | None (retrieval only) | Ollama phi4-mini | Foundry Local phi-4-mini |
| Embedding integration | DI + VectorData | Kernel Memory companion | Core library directly |
| RAG orchestration | Manual (VectorData SearchAsync) | Automatic (Kernel Memory) | Manual (embed → search → prompt) |
| Vector store | InMemoryVectorStore (built-in) | Kernel Memory’s built-in store | In-memory via LINQ |
| Companion packages | ElBruno.LocalEmbeddings.VectorData | ElBruno.LocalEmbeddings.KernelMemory | None — core only |
| Key extension method | AddLocalEmbeddingsWithInMemoryVectorStore() | .WithLocalEmbeddings() | FindClosest() |
| Lines of RAG code | ~20 | ~15 | ~25 |
| Best for | Search-only, FAQ, no LLM cost | Turnkey RAG with minimal code | Full pipeline control |
My recommendation: start with RagOllama if you want a working RAG app with the least code, use RagChat when you only need semantic search without an LLM, and reach for RagFoundryLocal when you want full control over every step of the pipeline.
All three share the same foundation: embeddings generated locally on your machine, no cloud calls, no API keys for the embedding part.
References and Resources

Happy coding!
Greetings
El Bruno
More posts in my blog ElBruno.com.
More info in https://beacons.ai/elbruno

.NET 11 Preview 1 ships a groundbreaking feature: Runtime Async. Instead of relying solely on the C# compiler to rewrite async/await methods into state machines, the .NET runtime itself now understands async methods as a first-class concept. This article explores what Runtime Async is, why it matters, what changed in Preview 1, and how you can experiment with it today.
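To make the distinction concrete, here is an ordinary async method of the kind affected. Under the classic model, the C# compiler lowers it into a hidden state-machine type at compile time; with Runtime Async, the runtime itself understands the suspension points. The snippet below is a plain, standard sketch that compiles on current .NET; the specific switches for opting into Runtime Async in Preview 1 are not shown here.

using System.Net.Http;
using System.Threading.Tasks;

// An ordinary async/await method. Classic compilation lowers this into a
// compiler-generated state machine; Runtime Async moves that responsibility
// into the runtime itself.
var length = await GetPageLengthAsync("https://example.com");
Console.WriteLine($"Downloaded {length} characters.");

static async Task<int> GetPageLengthAsync(string url)
{
    using var client = new HttpClient();
    var body = await client.GetStringAsync(url); // suspension point
    await Task.Delay(100);                       // another suspension point
    return body.Length;
}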