Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Boolean Comparison Operators - The Tiny and Mighty Code Tidbits Running your Cloud World

1 Share
As with previous blog posts, all example code can be found on my GitHub account. You will find all the code here! Cloud engineering looks complicated from the outside, but when you peel back a few layers, it isn't as complicated as it seems. I'm also a big believer in breaking down trickier concepts in a way that a six-year-old could understand. We don't need to make everything so complicated that no one understands it...to me, that's the quickest way to lose the hearts and minds of...
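The excerpt stops short of the examples themselves, but the kind of tidbit the title promises can be sketched quickly (in Python here; the post's own samples may use a different language):

```python
# Boolean comparison "tidbits": each comparison expression evaluates to True or False.

is_equal = (2 + 2 == 4)          # equality -> True
is_not_equal = ("a" != "b")      # inequality -> True
in_range = (0 <= 5 < 10)         # chained comparison: 0 <= 5 and 5 < 10 -> True

# Comparisons compose with boolean operators to drive decisions, e.g. in a
# deployment script: "is the replica count within bounds AND is this prod?"
replicas, env = 3, "prod"
safe_to_deploy = (1 <= replicas <= 5) and (env == "prod")

print(is_equal, is_not_equal, in_range, safe_to_deploy)  # True True True True
```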

Read the whole story
alvinashcraft
21 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

F# Weekly #47, 2025 – F# 10 & last #FsAdvent slots


Welcome to F# Weekly,

A roundup of F# content from this past week:

News

Only 4 slots left in the main timeline #fsharp #FsAdvent

Sergey Tihon 🦔🦀 (@sergeytihon.com) 2025-11-20T07:00:51.556Z

Videos

Blogs

F# vNext

Introducing F# 10 #fsharp devblogs.microsoft.com/dotnet/intro…

Jon Sagara (@jonsagara.com) 2025-11-17T18:14:31.057Z

Highlighted projects

New Releases

That’s all for now. Have a great week.

If you want to help keep F# Weekly going, click here to jazz me with Coffee!

Buy Me A Coffee






WebAssembly Still Expanding Frontend Uses 10 Years Later


It’s been 10 years since Mozilla, Microsoft, Apple and Google announced WebAssembly (Wasm) as a collaborative effort. Back then, the goal seemed clear: Create a low-level binary instruction format for compiling older, non-web languages to run in the browser.

Its uses have grown beyond that initial goal, but there’s much more that Wasm offers developers on the frontend. The list expanded even more with September’s release of Wasm 3.

The New Stack spoke with Thomas Steiner, developer relations engineer at Google, about the common uses for WebAssembly.

Wasm for Business Logic

One of the most popular uses is writing an application's business logic once and then reusing that code across platforms via WebAssembly, according to Steiner.

“That’s a common pattern that we see where people outsource the business logic to a WebAssembly module, and then they pull that module in from various contexts, which can be web applications, native applications, some even use [applications on] the server side,” Steiner told The New Stack. ”If you have a logic that runs on a server that doesn’t necessarily need a frontend, you can also use WebAssembly there.”

He pointed to Snapchat, which he said more or less has the same app on web and mobile platforms. Rather than recoding the business logic for every platform, Snapchat writes the business logic in one language and then translates it to WebAssembly.

“WebAssembly can be run on the web, but also on the native platforms,” he said. “They can have the same business logic run in different contexts, and they save themselves a lot of development work.”

JavaScript and WebAssembly

WebAssembly can actually run faster than JavaScript code in many cases, Steiner said. For instance, tasks that are very computationally intense can run faster inside WebAssembly.

It can also be used to address text hyphenation, that is, where words may break across lines. For some languages, English and German, for instance, the browser already knows how to handle hyphenation.

“One thing that we’ve seen used quite frequently is hyphenation,” he said. “There’s some languages where the browser doesn’t know the hyphenation rules.”

When those rules are implemented in libraries and the developer wants to render strings on a web page that are written in a nonsupported language, the developer can do a hyphenation in the WebAssembly module, and then just output the hyphenated text to the web page.

“You input the text that you want to hyphenate. The WebAssembly module does its logic, tells you where it would split words and so on, and then you take those and render it on the screen,” he said. “That’s a common example.”
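That flow can be sketched, with the caveat that the rules table below is a made-up stand-in for a real hyphenation library (shown in Python rather than a compiled Wasm module):

```python
# Sketch of the pattern Steiner describes: a hyphenation "module" that, given
# a word in a language the browser can't hyphenate, reports where it may be
# split. TOY_RULES is a hypothetical stand-in for a real hyphenation library.

TOY_RULES = {  # word -> allowed split points (character indexes), made up
    "documentation": [4, 8],   # docu / ment / ation
    "assembly": [2, 5],        # as / sem / bly
}

def hyphenate(word: str, soft_hyphen: str = "\u00ad") -> str:
    """Insert soft hyphens at the allowed split points, if the word is known."""
    points = TOY_RULES.get(word.lower())
    if not points:
        return word  # unknown word: return unchanged
    pieces, prev = [], 0
    for p in points:
        pieces.append(word[prev:p])
        prev = p
    pieces.append(word[prev:])
    return soft_hyphen.join(pieces)

print(hyphenate("documentation"))  # "docu" + SHY + "ment" + SHY + "ation"
```

The hyphenated string is then handed back to the page for rendering, exactly as in the quoted workflow.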

Another common use case for WebAssembly is with cryptography, if you need to encrypt or decrypt something. It can be used to implement features that are already implemented elsewhere, he added.

“I, for example, maintain a library that converts from raster images and turns them into vector images,” he said. “You can imagine this is a relatively costly operation, so with WebAssembly, we can outsource that processing cost into Assembly, [and] make it run in a separate thread in the browser.”

This allows developers to have a fully interactive frontend that, at the same time, runs very computationally intense jobs in the background.

A New Approach to JavaScript Strings

Wasm 3 offers a more efficient way to handle JavaScript strings. We asked Steiner about the significance of this change.

“The core idea of this feature is essentially you have a language, JavaScript, that already has built-in functions for dealing with strings, for example,” he said.

One example is handling Unicode strings, which is quite complex. This kind of functionality is already implemented in the JavaScript language, but if a developer wanted the same in WebAssembly, they would previously need to compile their own implementation into the Wasm module.
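As a taste of why Unicode string handling is hard (shown in Python for brevity; the same pitfalls exist in any language):

```python
import unicodedata

# "é" can be one code point (U+00E9) or two ("e" + a combining acute accent);
# naive length and equality checks disagree until the strings are normalized.
composed = "\u00e9"        # é as a single precomposed code point
decomposed = "e\u0301"     # "e" followed by U+0301 COMBINING ACUTE ACCENT

print(len(composed), len(decomposed))   # 1 2
print(composed == decomposed)           # False
print(unicodedata.normalize("NFC", decomposed) == composed)  # True
```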

The new method provides an easier option.

“You could just borrow the implementation that already exists in JavaScript-land, import it into WebAssembly, make it usable from there, and save yourself some of the compilation work, since it already exists in the hosting language, in this case JavaScript,” he said.

The new feature creates a mechanism that lets the Wasm module simply call or import the existing, built-in JavaScript string function directly.

New Languages Due to Garbage Collection

Thanks to the September Wasm 3 update that incorporates garbage collection, more higher-level languages are adding support for WebAssembly. Java, OCaml, Scala, Kotlin, Scheme and Dart are some of the languages that now target Wasm for compilation, according to the WebAssembly.org blog announcing the improvement.

In Wasm 3, the WebAssembly team added support for a new and separate form of storage that is automatically managed by the Wasm runtime via a garbage collector. Since Wasm is a low-level language, Wasm GC adds low-level primitives to Wasm, allowing compilers to target Wasm more easily for garbage-collected languages.

“It can declare the memory layout of its runtime data structures in terms of struct and array types, plus unboxed tagged integers, whose allocation and lifetime is then handled by Wasm. But that’s it,” the Wasm blog post stated. “Everything else, such as engineering suitable representations for source-language values, including implementation details like method tables, remains the responsibility of compilers targeting Wasm.”

That means no built-in object systems, closures or other higher-level constructs, which, the post added, “would inevitably be heavily biased towards specific languages.”

“Instead, Wasm only provides the basic building blocks for representing such constructs and focuses purely on the memory management aspect,” the post noted.

Wasm, Serverless and Backend Liberation

Wasm can also be used to support serverless functions, although Steiner pointed out “serverless” is a misnomer.

“Of course, there is a server, but the idea is the server is not constantly running,” he said.

Developers will write their business logic and the WebAssembly runtime (which is run on the server) quickly spins up, handles the request and then goes to sleep again, he said.

“WebAssembly has a bunch of unique features that make it very fast to spin up in such contexts,” Steiner said. “That’s why, in the WebAssembly ecosystem, a lot of startups are working around supporting WebAssembly on the server as well.”

Fastly is one such company, he added. Fastly is an edge cloud platform and offers a Content Delivery Network (CDN). The high-performance, open source web server Nginx also supports Wasm on the server, he added.

There is an entire service stack that runs everything on the server in WebAssembly, he continued. This allows developers to switch the backend provider easily, so the developer isn’t locked into a particular backend technology stack.

“As long as your stack supports WebAssembly, everything that is beyond this WebAssembly runtime, you don’t need to care,” he said.

Tools for Compiling Wasm

If you see a use for Wasm that might work for your application, here are a few options for compiling to WebAssembly.

One of the most popular tools is Emscripten. Originally designed to port video games written in C and C++ to the browser, specifically the first-person shooter “Sauerbraten” (or “Syntensity” on the web), Emscripten was created by Alon Zakai, a former Mozilla engineer who now works at Google. It is open source under both the MIT license and the University of Illinois/NCSA Open Source License, is built on LLVM/Clang, and leverages Binaryen.

LLVM can serve as the compiler backend when targeting WebAssembly, handling code generation and optimization. It has frontends for C, C++ and Rust and leverages advanced analysis and transformation passes, according to KodeKloud Notes’ introductory class on Wasm.

Binaryen lets developers assemble, optimize and transform Wasm binaries, which makes it ideal for minimizing code size and fine-tuning low-level performance, according to KodeKloud.

wasm-pack can compile, test, and publish Rust-based Wasm packages to npm.

AssemblyScript provides a TypeScript-flavored syntax that compiles directly to Wasm. It exposes WebAssembly-specific types (e.g., i32, f64) for predictable performance, according to KodeKloud.

The post WebAssembly Still Expanding Frontend Uses 10 Years Later appeared first on The New Stack.


I tested the new WhatsApp for Windows 11 (a web wrapper) and it’s a performance nightmare


WhatsApp has ironically “updated” its app to a newer version that we found performs much worse than the one it replaced. Version 2.2584.3.0 replaces the native (WinUI/UWP) app with a web wrapper that runs on Microsoft’s WebView2 rendering engine.

On October 31, Windows Latest first reported that WhatsApp may switch to a Chromium web app from November 5. As expected, Meta hasn’t said anything about the transition from UWP to a Chromium-based web wrapper. All it gave was a warning that you’ll have to log in again after the next update.

The transition has been in the works for several months now, as reported by Windows Latest in late July. Back then, the WhatsApp Beta version alone had moved to web.whatsapp.com.

On November 5, Meta kept its sinister promise and started rolling out the update to all WhatsApp users on Windows. We received it on day one and started testing it alongside the beta version. Both are essentially the same: each simply loads web.whatsapp.com in a WebView2 container.

Testing the new WhatsApp for Windows

We’re testing the Beta version of WhatsApp, which transitioned to the Chromium web wrapper a few months ago. If you expect it to have gained some stability and performance by now, then you’d be absolutely wrong.

Even before logging in, the WhatsApp Beta web app uses 3X more RAM than WhatsApp UWP, which is already logged in

In the image above, I haven’t even logged in yet. None of my chats have started syncing. And yet, it already uses 300MB of RAM. If you think it’s alright, then take a look at the UWP WhatsApp below that, which is already logged in, open, and running, with RAM usage just shy of 100MB.

It is my everyday WhatsApp, with more than 100 one-to-one chats and about 30 active groups, and yet the UWP app was so optimized that it used less than 100MB of RAM. Let’s see how the WebView2 WhatsApp fares…

Logging in was a familiar affair, but WhatsApp gave a pop-up before showing my chats, which said “We’Voe updated how WhatsApp Beta looks and works. This includes adding Channels and more functionality to Status and Communities.”

WhatsApp pop-up tells that they've changed the design

It’s true that the UWP WhatsApp didn’t have access to Channels and Communities, but I really liked that it didn’t have either, and I wish I could turn them off on my phone as well.

As for the part that says “We’Voe”, it’s honestly fine, since this is a beta version and things like this happen. Except for the fact that this is a web wrapper, which theoretically means that Meta can fix it for web.whatsapp.com, and it would be fixed here as well. It’s also easier and cheaper. Wasn’t that the whole point of moving to a web wrapper model?

Anyway, for the first 10 minutes of using the new WhatsApp, it looked like it was running at 10fps, like a choppy video game. Clicking on each chat would take a good long second to open it. I thought it was because the chats were loading in the background.

But the lag persisted even after a few hours of using it.

When compared to the old UWP WhatsApp, the delay is clear as day. For someone like me, who uses WhatsApp on my PC regularly, this kind of delay isn’t tolerable.

While switching between chats, the animation starts a fraction of a second after I click. In the UWP version, chat switching happens the same instant I click. Of course, this might be because I have already maxed out my RAM, but I never had this problem with the UWP version, so there is no reason for me to settle.

Splitscreen with WhatsApp is not as good as it used to be

What’s even more annoying is that I can’t resize WhatsApp just enough for only the current chat to be visible.

For context, I put WhatsApp on the left side, with my browser occupying most of the screen. This setup is usually reserved for when I’m engaged in some group chats about a specific topic, so I’ll research on the browser, type and send the information on WhatsApp, without being distracted by other chats.

Split screen in UWP WhatsApp allows me to open just one chat while reading articles

The new WhatsApp, based on Chromium, doesn’t allow me to do this. No matter how hard I drag on the edges, it doesn’t shrink to a point where I can view only the current chat.

How much RAM does the new WhatsApp use?

Coming back to the performance part, my laptop, despite having 16GB of RAM, always hovers around 90 to 95% RAM utilization. I want to blame it on Edge, but I do have close to 100 tabs open at all times. And the RAM is non-upgradeable, something I thought wouldn’t be a problem when I bought it.

Either way, I wish to save all the RAM that I can, without closing tabs (because, research!). But WhatsApp now uses RAM as if it’s a professional grade application, starting off with a minimum of 300 to 600MB.

But the issue here is that the 600MB RAM usage is only when the app is open and idle. If I scroll through a chat, then the RAM usage instantly doubles and reaches about 1.2GB, which is absurd for a messaging application.

Also, scrolling through messages is nowhere near as smooth as in the UWP WhatsApp, which, by the way, uses less than 300MB of RAM even while scrolling quickly through messages.

Scrolling messages in UWP WhatsApp uses around 250MB RAM only

The worst part is that even if I close WhatsApp, it will still continue to use nearly 400MB, and it’s because closing the app simply takes it to the system tray and makes it run in the background, so that you’ll get notifications.

But the native UWP WhatsApp notifications come straight from Windows using the system’s built-in notification APIs, so it doesn’t have to necessarily run in the background.

The Chromium-based WebView2 wrapper WhatsApp uses the browser engine to receive push notifications through service workers, so it must stay active in the background.

However, WhatsApp allows you to prevent it from running in the background. To do that, click the gear icon on the left (Settings) in WhatsApp, select General, and turn off “Minimise to system tray”. You can also turn off “Start WhatsApp at login”, especially if you have older or weaker hardware.

How to disable WhatsApp from running in the background

New WhatsApp can potentially slow down older hardware

If you have a PC with older hardware and 4 or 8 GB of RAM, then you’re in luck, because WhatsApp just gave you another reason to upgrade to newer hardware.

My dad’s PC has 10-year-old hardware, with 8GB RAM, and it runs Windows 11. Just keeping WhatsApp open, which my dad always does, is now using up more than 600MB of RAM. In the UWP version, WhatsApp used just around 100MB of RAM.

The latest WhatsApp using 600MB RAM on a PC with 8GB RAM, while doing nothing

As I mentioned earlier, closing the new Chromium-powered WhatsApp doesn’t really close it, so a person like my dad, who doesn’t know how to exit it from the system tray, will have to deal with poorer performance.

But if you do happen to Exit it, then if you receive a message in WhatsApp, you’ll still get a notification, but it just says “You may have new messages”. Of course, this isn’t an issue with the UWP WhatsApp.

However, high RAM usage isn’t the only concern. You might have noticed in the screenshots how the CPU usage for the new Chromium-based WhatsApp ranges anywhere between 10 and 35%, while the UWP WhatsApp barely goes beyond 5%.

Even while doing nothing, with just one chat open, WhatsApp on my dad’s old PC uses 22.4% of the CPU. Scrolling through messages and making video calls will use even more CPU and RAM.

Video calls in the new WhatsApp use 3X the RAM

Without doing anything else, like scrolling messages, a video call through the latest Chromium-based WhatsApp uses almost 900MB of RAM and 25.8% of the CPU.

Video calls in Chromium-based WhatsApp use almost 900MB RAM

However, the UWP WhatsApp uses only 316MB of RAM and barely any CPU. Funnily enough, even while closed, the Chromium-based WhatsApp (WhatsApp Beta in the screenshot) uses around 550MB of RAM.

Video call in UWP WhatsApp uses only 300MB RAM

Other issues with the new WhatsApp

I’ve used the Chromium-based WhatsApp Beta version on my PC and the public version on my dad’s PC, and most of the following issues exist on both versions:

Certain Statuses don’t show properly in the newer version of WhatsApp, and the app says that my version of WhatsApp doesn’t support it. However, the older UWP version of WhatsApp shows the same Status updates without an issue.

Some Status updates don't show properly in the web wrapper WhatsApp

If you haven’t opened WhatsApp for a while, the next time you open it, you may see a “Computer not connected” warning. However, this disappears in a few seconds.

Computer not connected error message in WhatsApp

Sometimes, opening the app will just show the startup screen for a few seconds. I have had instances where it took almost 10 seconds to display the messages.

WhatsApp stuck in the loading screen

Waking the PC from hibernation has logged off WhatsApp a couple of times. However, restarting the PC didn’t log me out of WhatsApp.

If your PC already has the hardware prowess, you might not experience any performance hiccups, and apart from the issues I mentioned, which aren’t in any way trivial, everything seems to work all right.

However, there is a looming feeling of heaviness in the app that makes the overall experience sluggish. The sad part is that the UWP app was super snappy, despite a few minor issues.

Sometimes, opening an image someone has sent will make the text box unresponsive. A Status update, which I have already viewed on my phone, will show again as an update in WhatsApp UWP. These are but minor issues that Meta could’ve fixed with an update. But instead, they chose to evolve backwards.

Why does the new WhatsApp for Windows perform poorly?

When you open the new Chromium-based WhatsApp on Windows, it doesn’t run as a single lightweight process like the old UWP version. Instead, it behaves like a mini-browser running web.whatsapp.com inside your PC. So, it would have the entire stack of WebView2 processes, specifically the ones you see in the task manager.

WhatsApp Beta WebView

The main WebView2 Manager coordinates all child processes that such Chrome-based apps need to run. The WebView2 GPU Process handles all rendering and animations. Services like WebView2 Utility: Network, Audio, and Storage exist because Chromium splits networking, audio playback, file access, and databases into isolated processes for security and sandboxing.

WhatsApp Beta appears as the app shell, and Runtime Broker still manages permissions at the OS level. Crashpad is Chrome’s own crash-reporting client, which also runs constantly in the background.

As expected, all these moving parts take more fuel than the UWP app, which just has the WhatsApp sub-process and Runtime Broker.

So, if WhatsApp already had an optimized UWP app, why did they move to the WebView2 model?

Why did Meta kill the UWP WhatsApp?

A few years ago, Meta had massive plans for virtual reality and augmented reality. They invested billions of dollars into it, called it Metaverse, and even rebranded Facebook to Meta.

But a couple of years ago, the world had a different idea: AI. Although Meta had been working on its own Llama models, they still haven’t achieved the popularity that ChatGPT and its peers have garnered.

Long story short, Meta lost a ton of money, and their solution was layoffs. As part of the process, the company found departments that were not needed anymore, and the Windows software development team was the right candidate.

Note that WhatsApp for Windows wasn’t always a UWP app; it began as a web wrapper and fully transitioned to UWP in 2022, with development starting sometime in 2021.

Of course, the company hasn’t announced publicly that they have laid off Windows developers, but the removal of the Messenger app, the transition from UWP to WebView for WhatsApp, and the Facebook app, which has already migrated to a web wrapper model, are all signs that the company doesn’t need Windows developers anymore. The Instagram and Threads apps are also web wrappers in case you are wondering.

Meta might say that their goal was to simplify development by reusing the existing WhatsApp Web codebase instead of maintaining a separate native client. However, the real reason is just cost.

But what baffles me is that this cost-cutting argument doesn’t square with the WhatsApp app for macOS. Yes, macOS, which has a minuscule market share compared to Windows, still has a native WhatsApp application. And it’s not like Meta is going to lay off Apple platform engineers, especially since they made a native WhatsApp app for Apple Watch.

According to Meta, WhatsApp messages on your wrist are more important than those on a full-blown, powerful desktop OS like Windows.

But can we blame Meta alone for this? How is Microsoft treating its own social app?

LinkedIn, which Microsoft spent $26.2 billion to acquire, doesn’t have a native app. I find it preposterous that a nearly $4 trillion company can’t channel the resources to keep its own development environment alive.

It wasn’t like this before…

The golden time of UWP was during the Windows Phone days, when Microsoft paid from their pockets to large companies to bring their apps to the Microsoft Store!

I still remember rocking my Windows Phone, which got a lot of new WhatsApp features much before Android or iOS.

When it came to Windows Phone, Microsoft was always on a spending spree instead of filling in the gaps, but for Windows 11, Microsoft doesn’t seem to care that they’re losing app support.

The reason might be AI. With all hands on deck for an agentic OS, Microsoft believes that in the future people won’t be using apps with a mouse or keyboard. The company believes we’ll be talking to our computers to get stuff done. If that’s the case, would we need apps?

Perhaps not. But that future is a long time away. And if Microsoft is losing customers for poor experiences, like a slow File Explorer in Windows, then can the company guarantee a future where people would talk to a Windows PC?

The post I tested the new WhatsApp for Windows 11 (a web wrapper) and it’s a performance nightmare appeared first on Windows Latest.


Check-In Doc MCP Server: A Handy Way to Search Only the Docs You Trust


Ever wished you could ask a question and have the answer come only from a handful of trusted documentation sites—no random blogs, no stale forum posts? That’s exactly what the Check-In Doc MCP Server does. It’s a lightweight Model Context Protocol (MCP) server you can run locally (or host) to funnel questions to selected documentation domains and get a clean AI-generated answer back.

What It Is

The project (GitHub: https://github.com/fboucher/check-in-doc-mcp) is a Dockerized MCP server that:

  • Accepts a user question.
  • Calls the Reka AI Research API with constraints (only allowed domains).
  • Returns a synthesized answer based on live documentation retrieval.

You control which sites are searchable by passing a comma‑separated list of domains (e.g. docs.reka.ai,docs.github.com). That keeps results focused, reliable, and relevant.
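The allow-list idea can be sketched like this; note this is an illustration of the concept, not the project's actual code:

```python
from urllib.parse import urlparse

def parse_allowed(domains_csv: str) -> list[str]:
    """Parse a comma-separated domain list like the server's ALLOWED_DOMAINS value."""
    return [d.strip().lower() for d in domains_csv.split(",") if d.strip()]

def is_allowed(url: str, allowed: list[str]) -> bool:
    """Accept exact host matches and subdomains of an allowed domain."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in allowed)

allowed = parse_allowed("docs.reka.ai,docs.github.com")
print(is_allowed("https://docs.github.com/en/actions", allowed))  # True
print(is_allowed("https://example.com/blog", allowed))            # False
```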

What Is the Reka AI Research API?

Reka AI’s Research API lets you blend language model reasoning with targeted, on‑the‑fly web/document retrieval. Instead of a model hallucinating an answer from static training data, it can:

  • Perform limited domain‑scoped web searches.
  • Pull fresh snippets.
  • Integrate them into a structured response.

In this project, we use the research feature with a web_search block specifying:

  • allowed_domains: Only the documentation sites you trust.
  • max_uses: Caps how many retrieval calls it makes per query (controls cost & latency).

Details used here:

  • Model: reka-flash-research
  • Endpoint: http://api.reka.ai/v1/chat/completions
  • Auth: Bearer API key (generated from the Reka dashboard: https://link.reka.ai/free)

How It Works Internally

The core logic lives in ResearchService (src/Domain/ResearchService.cs). Simplified flow:

  1. Initialization
    Stores the API key + array of allowed domains, sets model & endpoint, logs a safe startup message.

  2. Build Request Payload
    The CheckInDoc(string question) method creates a JSON payload:

    var requestPayload = new {
      model,
      messages = new[] { new { role = "user", content = question } },
      research = new {
        web_search = new {
          allowed_domains = allowedDomains,
          max_uses = 4
        }
      }
    };
    
  3. Send Request
    Creates an HttpRequestMessage (POST), adds Authorization: Bearer <APIKEY>, and sends the JSON to Reka.

  4. Parse Response
    Deserializes into a RekaResponse domain object, returns the first answer string.

Adding It to VS Code (MCP Extension)

You can run it as a Docker-based MCP server. Two simple approaches:

Option 1: Via “Add MCP Server” UI

  1. In VS Code (with MCP extension), click Add MCP Server.
  2. Choose type: Docker image.
  3. Image name: fboucher/check-in-doc-mcp.
  4. Enter allowed domains and your Reka API key when prompted.

Option 2: Via mcp.json (Recommended)

Alternatively, you can configure it manually in your mcp.json file. This ensures your API key isn't displayed in plain text. Add or merge this configuration:

{
  "servers": {
    "check-in-docs": {
      "type": "stdio",
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "ALLOWED_DOMAINS=${input:allowed_domains}
        ",
        "-e",
        "APIKEY=${input:apikey}",
        "fboucher/check-in-doc-mcp"
      ]
    }
  },
  "inputs": [
    {
      "id": "allowed_domains",
      "type": "promptString",
      "description": "Enter the comma-separated list of documentation domains to allow (e.g. docs.reka.ai,docs.github.com):"
    },
    {
      "id": "apikey",
      "type": "promptString",
      "password": true,
      "description": "Enter your Reka Platform API key:"
    }
  ]
}

How to Use It

To use it, ask your assistant to “check in doc” something, or invoke the SearchInDoc tool directly in your MCP-enabled environment. Just ask a question, and it will search only the specified documentation domains.

Final Thoughts

It’s intentionally simple—no giant orchestration layer. Just a clean bridge between a question, curated domains, and a research-enabled model. Sometimes that’s all you need to get focused, trustworthy answers.

If this sparks an idea, clone it and adapt away. If you improve it (citations, richer error handling, multi-turn context)—send a PR!

Watch a quick demo


Links & References


Reading Notes #674


This week: Cake v6.0.0 is out, Docker Desktop adds helpful debugging tools, and .NET 10 brings a ton of changes worth exploring. Plus some thoughts on working with AI coding assistants and a great cybersecurity podcast.

AI

DevOps

  • Cake v6.0.0 released - Great news! I will have to upgrade my pipeline. Hopefully, the upgrade will be smooth.

  • Docker Desktop 4.50 Release (Deanna Sparks) - Nice update, and oh wow! I'm looking forward to trying that debug feature; that's great news.

Programming

Podcasts


Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week.

If you have interesting content, share it!

~frank

