Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Sal Khan: Companies Should Give 1% of Profits To Retrain Workers Displaced By AI

"I believe artificial intelligence will displace workers at a scale many people don't yet realize," says Sal Kahn (founder/CEO of the nonprofit Khan Academy). But in an op-ed in the New York Times he also proposes a solution that "could change the trajectory of the lives of millions who will be displaced..." "I believe that every company benefiting from automation — which is most American companies — should... dedicate 1 percent of its profits to help retrain the people who are being displaced." This isn't charity. It is in the best interest of these companies. If the public sees corporate profits skyrocketing while livelihoods evaporate, backlash will follow — through regulation, taxes or outright bans on automation. Helping retrain workers is common sense, and such a small ask that these companies would barely feel it, while the public benefits could be enormous... Roughly a dozen of the world's largest corporations now have a combined profit of over a trillion dollars each year. One percent of that would create a $10 billion annual fund that, in part, could create a centralized skill training platform on steroids: online learning, ways to verify skills gained and apprenticeships, coaching and mentorship for tens of millions of people. The fund could be run by an independent nonprofit that would coordinate with corporations to ensure that the skills being developed are exactly what are needed. This is a big task, but it is doable; over the past 15 years, online learning platforms have shown that it can be done for academic learning, and many of the same principles apply for skill training. "The problem isn't that people can't work," Khan writes in the essay. "It's that we haven't built systems to help them continue learning and connect them to new opportunities as the world changes rapidly." To meet the challenges, we don't need to send millions back to college. We need to create flexible, free paths to hiring, many of which would start in high school and extend through life. Our economy needs low-cost online mechanisms for letting people demonstrate what they know. Imagine a model where capability, not how many hours students sit in class, is what matters; where demonstrated skills earn them credit and where employers recognize those credits as evidence of readiness to enter an apprenticeship program in the trades, health care, hospitality or new categories of white-collar jobs that might emerge... There is no shortage of meaningful work — only a shortage of pathways into it. Thanks to long-time Slashdot reader destinyland for sharing the article.



Zellij — A Modern Terminal Multiplexer Built for Developers


Zellij home screen

If you’re a developer who spends most of the day inside a terminal, your workflow probably depends on managing multiple shells, logs, servers, and editors simultaneously. Traditionally, tmux has been the go-to solution for this problem. It’s powerful, battle-tested, and ubiquitous — but also notoriously hard to learn, configure, and maintain.

Zellij enters this space with a clear goal:

Provide a first-class terminal workspace without sacrificing usability.

Written in Rust, Zellij is a next-generation terminal multiplexer that combines performance, sane defaults, and discoverability — something terminal tools have historically ignored.

Core Concepts: Sessions, Tabs, and Panes

Before diving deeper, let’s quickly clarify the core building blocks of any terminal multiplexer.

Sessions

A session is a persistent workspace. Think of it as a long-running terminal environment that survives terminal closures, SSH disconnects, or even system reboots.

With Zellij:

  • Sessions are persistent by default
  • You can detach and reattach at will
  • Ideal for remote servers, DevOps workflows, and long builds

Example use case:

Start a backend server, a frontend dev server, and a log tail — disconnect — come back hours later to the exact same state.

Sessions make Zellij extremely useful for SSH-heavy and production-like workflows.
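
A rough sketch of that flow on a remote box (the session name is just an example; check zellij --help if flag spellings differ in your Zellij version):

    # start a named session over SSH and kick off your servers
    zellij --session backend

    # detach (see Session Controls below) or simply lose the SSH connection;
    # the session keeps running on the server

    # later: see what's alive and jump back in
    zellij list-sessions
    zellij attach backend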

Tabs (Windows)

Tabs (similar to windows in tmux terminology) allow you to separate concerns within a session.

For example:

  • Tab 1: Editor + Git
  • Tab 2: Backend services
  • Tab 3: Logs & monitoring

Tabs help keep your mental model clean and prevent pane overload.

Panes

Panes are splits inside a tab. You can divide your terminal vertically or horizontally to run multiple processes side-by-side.

Typical pane layout:

  • Left pane: nvim
  • Right pane: test runner
  • Bottom pane: application logs

Zellij makes pane management intuitive and visual, even for beginners.

Discoverability: The Killer Feature

One of Zellij’s most underrated features is keybinding discoverability.

Unlike tmux — where you’re expected to memorize cryptic shortcuts — Zellij shows a context-aware keybinding bar at the bottom of the screen. When you enter a mode, available actions are displayed instantly.

This dramatically reduces cognitive load and makes onboarding painless.

You don’t guess shortcuts.

You see them.

Keybindings You’ll Actually Use

Zellij uses a modal keybinding system, similar to Vim, which keeps shortcuts ergonomic and conflict-free.

Pane Management

  • Ctrl + p → Enter Pane Mode
  • n → New pane
  • x → Close pane
  • h / j / k / l → Move between panes
  • Arrow keys (← ↑ ↓ →) → Resize panes

Tab Management

  • Ctrl + t → Enter Tab Mode
  • n → New tab
  • x → Close tab
  • Arrow keys (← / →) → Switch tabs

Session Controls

  • Ctrl + o → Detach from session
  • zellij list-sessions → View running sessions
  • zellij attach <name> → Reattach

All of this is visible in real time via the help bar — no docs required.

Layouts: Reproducible Workspaces

Zellij introduces layout files, which let you define complex terminal setups declaratively.

A layout can:

  • Create multiple tabs
  • Define pane splits
  • Run commands automatically

This is extremely powerful for:

  • Project bootstrapping
  • Consistent dev environments
  • Team-wide workflow sharing

Example:

One command opens your editor, starts Docker containers, tails logs, and launches tests — every time.
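
As a rough illustration, a small layout file might look like this (KDL syntax; the tab names, commands, and arguments here are made up for this example, and details vary between Zellij versions):

    layout {
        tab name="editor" {
            pane command="nvim"
        }
        tab name="services" {
            pane split_direction="vertical" {
                pane command="docker" {
                    args "compose" "up"
                }
                pane command="npm" {
                    args "run" "dev"
                }
            }
        }
        tab name="logs" {
            pane command="tail" {
                args "-f" "app.log"
            }
        }
    }

Saved as dev.kdl, something like zellij --layout dev.kdl brings the whole workspace up in one go.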

Layouts turn your terminal into infrastructure.

Plugins and Extensibility

Zellij ships with a plugin system that runs inside the terminal UI itself. These plugins handle things like:

  • Status bars
  • Tab indicators
  • Session managers
  • Custom UI widgets

Unlike tmux, you don’t need external scripts or shell hacks. Plugins are first-class citizens and integrate cleanly with the core system.

Performance and Reliability

Because Zellij is written in Rust:

  • It’s fast
  • Memory-efficient
  • Crash-resistant

This matters when you’re:

  • Running dozens of panes
  • SSH’ing into remote machines
  • Keeping sessions alive for days

Zellij feels stable under load — an underrated but critical feature for production-grade workflows.

Zellij vs tmux (Realistically)

tmux isn’t going anywhere — and that’s fine. It’s mature, deeply customizable, and widely available.

But Zellij offers:

  • Better UX
  • Visual feedback
  • Less configuration debt
  • Faster onboarding

For many developers, Zellij is the 90% solution with 10% effort.

Who Should Use Zellij?

  • Backend and systems developers
  • DevOps engineers and SREs
  • Rust and Linux enthusiasts
  • Developers tired of managing massive tmux configs
  • Anyone who wants productivity without friction

Final Thoughts

Zellij doesn’t just modernize tmux — it rethinks how developers interact with terminal workspaces. By prioritizing discoverability, sane defaults, and performance, it removes unnecessary complexity while preserving power.

If your terminal is your primary IDE, Zellij might just be the best upgrade you didn’t know you needed.

Install it once. Use it everywhere.


I Thought Compilers Were Scary. So I Built Sauce.


I’ve been writing code for years. I type cargo run or npm start, hit enter, and meaningful things happen. But if you asked me what actually happens between hitting enter and seeing "Hello World," I’d have mumbled something about "machine code" and changed the subject.

That bothered me. I rely on these tools every day, but I didn't understand them.

So I started building Sauce, my own programming language in Rust. Not because the world needs another language, but because I needed to stop treating my compiler like a black box.

Turns out, a language isn't magic. It's just a pipeline.

Why We Think It's Hard

We usually see a compiler as this big, scary brain that judges our code. You feed it text, and it either gives you a working program or yells at you with an error.

I spent years thinking you needed to be a math genius to build one. I was wrong. You just need to break it down into small steps.

What Sauce Actually Is

Strip away the hype, and a programming language is just a way to move data around.

Sauce is a statically typed language that feels like a simple script. I wanted something that was clear and honest. The core ideas are simple:

  • Pipelines (|>) are the default: Data flows explicitly from one step to the next, like a factory line.
  • Effects are explicit (toss): No hidden surprises or secret jumps in your code.

But to get there, I had to build the engine. And thanks to Rust, I learned that the engine is actually pretty cool.

The Architecture: It’s Just an Assembly Line

I used to think compilation was one giant, messy function. In reality, it’s a disciplined process. I’m following a strict architecture for Sauce that separates "understanding the code" (Frontend) from "running the code" (Backend).

Sauce architecture diagram

Here is exactly how Sauce works under the hood:

The diagram above isn't just a sketch; it's the map I'm building against. It breaks down into two main phases.

Phase 1: Frontend (The Brain)

This phase is all about understanding what you wrote. It doesn't run anything yet; it just reads and checks.

  1. Lexer (Logos): The Chopping Block
     • The Job: Computers don't read words; they read characters. The Lexer's job is to group those characters into meaningful chunks called "Tokens."
     • In Plain English: Imagine reading a sentence without spaces: thequickbrownfox. It's hard. The Lexer adds the spaces and labels every word. It turns grab x = 10 into a list: [Keyword(grab), Ident(x), Symbol(=), Int(10)].
     • The Tool: I used a Rust tool called Logos. It’s incredibly fast, but I learned a hard lesson: computers are dumb. If you don't explicitly tell them that "grab" is a special keyword, they might think it's just a variable name like "green." You have to set strict rules. (A minimal lexer sketch follows this list.)

  2. Parser (Chumsky): The Grammar Police
     • The Job: Now that we have a list of words (tokens), we need to check if they make a valid sentence. The Parser organizes these flat lists into a structured tree called the AST (Abstract Syntax Tree).
     • In Plain English: A list of words like [10, =, x, grab] contains valid words but makes no sense. The Parser ensures the order is correct (grab x = 10) and builds a hierarchy: "This is a Variable Assignment. The name is 'x'. The value is '10'."
     • The Tool: I used Chumsky, which lets you build logic like LEGOs. You write a tiny function to read a number, another for a variable, and glue them together.
     • The "Aha!" Moment: I learned how much structure matters. Instead of trying to parse everything in one giant loop, breaking the grammar into small, composable pieces made the language way easier to extend and reason about. It’s not magic; it’s just organizing data.

  3. Type Checking: The Logic Check
     • The Job: Just because a sentence is grammatically correct doesn't mean it makes sense. "The sandwich ate the Tuesday" is a valid sentence, but it's nonsense. The Type Checker catches these logical errors.
     • In Plain English: If you write grab x = "hello" + 5, the Parser says "Looks like a valid math operation!" But the Type Checker steps in and says, "Wait. You can't add a Word to a Number. That's illegal." Sauce currently has a small, explicit system that catches these basic mistakes before you ever try to run the code.
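
To make the Lexer step concrete, here is a minimal sketch of what a Logos-based token definition can look like. This is not Sauce's actual token set, and the attribute details differ a bit between Logos versions:

    use logos::Logos;

    #[derive(Logos, Debug, PartialEq)]
    #[logos(skip r"[ \t\r\n]+")] // whitespace is noise, not a token
    enum Token {
        #[token("grab")] // explicit rule: "grab" is a keyword...
        Grab,
        #[token("=")]
        Eq,
        #[regex("[0-9]+")]
        Int,
        #[regex("[A-Za-z_][A-Za-z0-9_]*")] // ...otherwise it would just match as an identifier
        Ident,
    }

    fn main() {
        // "grab x = 10" comes out as [Grab, Ident, Eq, Int]
        for token in Token::lexer("grab x = 10") {
            println!("{:?}", token);
        }
    }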

Phase 2: Backend (The Muscle)

Once the Frontend gives the "thumbs up," we move to the Backend. This phase is about making the code actually run.

  1. Codegen (Inkwell/LLVM): The Translator
     • The Job: This is where we leave the high-level world of "Variables" and "Pipelines" and enter the low-level world of CPU instructions. We translate our AST into LLVM IR (Intermediate Representation).
     • In Plain English: Sauce is like a high-level manager giving orders ("Calculate this pipeline"). The CPU is the worker who only understands basic tasks ("Move number to register A," "Add register A and B"). LLVM is the translator that turns the manager's orders into the worker's checklist.
     • Why LLVM? It's the same industrial-grade machinery that powers Rust, Swift, and C++. By using it, Sauce gets decades of optimization work for free. Once you figure out how to tell LLVM to "print a number," the rest stops feeling so scary. (A minimal codegen sketch follows this list.)

  2. Native Binary: The Final Product
     • The Job: The final step is bundling all those CPU instructions into a standalone file (like an .exe on Windows or a binary on Linux).
     • In Plain English: This is what lets you send your program to a friend. They don't need to install Sauce, Rust, or anything else. They just double-click the file, and it runs. (Currently, this works for simple, effect-free programs).
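
To give a feel for the Codegen step, here is a heavily simplified sketch using the inkwell crate. It only emits a function that returns a constant (nothing like Sauce's real pipeline lowering), and inkwell's builder methods have changed signatures across versions, so treat it as a sketch rather than code tied to a specific release:

    use inkwell::context::Context;

    fn main() {
        // Build an LLVM module roughly equivalent to: fn main() -> i64 { 10 }
        let context = Context::create();
        let module = context.create_module("sauce_demo");
        let builder = context.create_builder();

        let i64_type = context.i64_type();
        let fn_type = i64_type.fn_type(&[], false);
        let function = module.add_function("main", fn_type, None);
        let entry = context.append_basic_block(function, "entry");
        builder.position_at_end(entry);

        // The "worker's checklist": return the constant 10
        let ten = i64_type.const_int(10, false);
        let _ = builder.build_return(Some(&ten));

        // Print the generated LLVM IR; this is what later becomes a native binary
        println!("{}", module.print_to_string().to_string());
    }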

What Works Right Now (v0.1.0)

Sauce isn't just an idea anymore. The core compiler pipeline is alive.

  • Pipelines: You can write grab x = 10 |> _ and it understands it perfectly. The data flows left-to-right, just like reading a sentence.
  • Real Output: I can feed it real .sauce source code, and it parses it into a type-safe syntax tree.
  • Explicit Effects: You can use toss to signal a side effect. This currently works in the interpreter, while the LLVM backend intentionally rejects effects for now.

The Road Ahead

I have a clear plan for where this is going. Since the core architecture is stable, the next updates are about making it usable.

  • v0.1.x (UX): Right now, if you make a mistake, the error messages are a bit cryptic. I'm adding a tool called Ariadne to give pretty, helpful error reports (like Rust does).
  • v0.2.0 (Effects): This is the big one. I'll be finalizing how "Effects" work—defining rules for when you can resume a program after an error and when you have to abort.
  • v0.3.0 (Runtime): Merging the Interpreter and LLVM worlds so they behave exactly the same, plus adding a standard library so you can do more than just print numbers.

Why You Should Try This

I avoided building a language for years because I thought I wasn't smart enough.

But building Sauce taught me that there's no magic. It's just data structures. A Lexer is just regex. A Parser is just a tree builder. An Interpreter is just a function that walks through that tree.

If you want to actually understand how your code runs, don't just read a book. Build a tiny, broken compiler. Create a Lexer. Define a simple tree. Parse 1 + 1.
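
If "just a function that walks through that tree" sounds too good to be true, here is the whole idea in plain Rust (a hand-rolled toy, not Sauce code): an AST for 1 + 1 and an interpreter that evaluates it.

    // A tiny expression tree and an interpreter that walks it.
    enum Expr {
        Int(i64),
        Add(Box<Expr>, Box<Expr>),
    }

    fn eval(expr: &Expr) -> i64 {
        match expr {
            Expr::Int(n) => *n,
            Expr::Add(lhs, rhs) => eval(lhs) + eval(rhs),
        }
    }

    fn main() {
        // The tree a parser would produce for "1 + 1"
        let ast = Expr::Add(Box::new(Expr::Int(1)), Box::new(Expr::Int(1)));
        println!("{}", eval(&ast)); // prints 2
    }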

You'll learn more in a weekend of fighting syntax errors than you will in a year of just using cargo run.

Check out Sauce on GitHub. It's small, it's honest, and we are officially cooking.


What I Learned Testing MCPs Across Claude, Cursor, and Replit


Model Context Protocols (MCPs) are a powerful pattern for connecting AI assistants to real-world data sources and tools. Over the past few months, I’ve been experimenting with MCPs across multiple environments, in particular Claude, Cursor, and Replit, and I want to share what actually works, where things tend to break, and how I think about validation.

MCPs are not magic; they are just a standard for letting LLM-based tools talk to external systems in a reliable way. This is similar to how a USB-C port lets different devices connect to your computer: the protocol itself doesn’t do anything special, but it makes integration possible when done right.

What I observed

1. Not all MCPs behave the same everywhere

A lot of MCP examples that feel solid in one environment print nothing or fail silently in another. This usually comes down to how each environment handles execution, context, filesystem access, or tool arguments.

2. Small config differences matter

Sometimes an MCP that runs in Claude breaks in Cursor, not because of a logic bug but because of subtle differences in how CLI tools, paths, or environment-specific settings are handled.
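
As a concrete illustration of how small those differences can be, the same stdio-based MCP server is usually registered separately in each client, and the blocks look almost identical. The package name and path below are placeholders, not a recommendation. In Claude Desktop (claude_desktop_config.json) it might look like:

    {
      "mcpServers": {
        "filesystem": {
          "command": "npx",
          "args": ["-y", "@modelcontextprotocol/server-filesystem", "/absolute/path/allowed"]
        }
      }
    }

Cursor reads a very similar mcpServers block from .cursor/mcp.json, but how it resolves npx, relative paths, and environment variables is not guaranteed to match, which is exactly where the "works in one client, silent in the other" failures creep in.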

3. Validation is hard

There isn’t a silver bullet yet for catching silent failures. Most of the time, I find myself running the same MCP in minimal contexts, checking raw outputs and side effects, and isolating tool chains until I understand exactly where it fails.
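
One low-tech version of that minimal-context check is to talk to the server over stdio yourself and read the raw JSON-RPC that comes back, with no client in the loop. A rough sketch only: the server command is a placeholder, the protocol version string may be out of date, and a well-behaved client would also send an initialized notification afterwards.

    echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"probe","version":"0.0.1"}}}' \
      | npx -y @modelcontextprotocol/server-filesystem /tmp

If the initialize response already looks wrong here, no amount of client-side configuration will fix it.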

4. Trending MCPs you might find useful

From systems that access files or GitHub repos to tooling that helps with analytics or Redis access, MCPs are being built for a variety of workflows. Treat them as reusable modules, not one-off scripts.

What this means for you

If you’re building with MCPs, treat them as execution boundaries with behavioral contracts: validate each one in each environment before using it in production workflows.

For folks discovering MCPs or trying to find working examples across tools like Claude, Cursor, GitHub Copilot and more, I’ve been aggregating what actually runs in multiple environments.

Here’s a place where that’s organized for easy reference:

https://ai-stack.dev/

Would love to hear how others validate MCPs in their workflows.


Building a Retro CRT Terminal Website with WebGL and GitHub Copilot (Claude Opus 4.5)


During the holidays, I was browsing the internet when I came across cool-retro-term, an open-source terminal that mimics the visuals of old cathode ray tube (CRT) displays.

Terminal preview

I loved the look of it—it has an awesome retro sci-fi atmosphere that reminds me of the Alien and Fallout fictional universes. I thought it would be amazing if I could make my personal website look like that. I took a look at the source code and quickly realized that it's implemented using QML and C++. I'm not experienced with either, so I wondered if it would be possible to port it to web technologies using either WebGL or Emscripten. I asked GitHub Copilot, and it advised me to try the WebGL route because there were fewer technical challenges involved.

Understanding the Architecture

I have no experience with QML, but I have worked with Three.js in the past. I don't have a deep understanding of how shaders work internally, but with the help of GitHub Copilot, I was able to understand the architecture of the original source code. I cloned the repo, added a new web directory, and started providing instructions to Claude. The original application has two sets of shaders: the first one handles the static frame, and the second one is a series of effects that are influenced by the current time.

Step 1: The Static Frame

I started by asking Claude to implement the static frame using Three.js while ignoring the terminal emulation for now. This gave me a foundation to build upon without getting overwhelmed by complexity.

Step 2: Text Rendering

The second step was to ask Claude to render some basic text from a hardcoded text file in the Three.js scene using the appropriate retro font.

Step 3: Migrating the Visual Effects

Then I started to migrate the visual effects—starting with the background noise and then moving on to the other effects:

  • Bloom – A glow effect that makes bright areas bleed into surrounding pixels
  • Brightness – Controls the overall luminosity of the display
  • Chroma Color – Adds color tinting to simulate phosphor characteristics
  • RGB Shift – Separates color channels slightly to mimic CRT color misalignment
  • Screen Curvature – Warps the image to simulate the curved glass of old monitors
  • Burn-In – Simulates phosphor burn-in from static images left on screen too long
  • Flickering – Adds subtle brightness fluctuations like real CRT displays
  • Glowing Line – Renders a scanning beam effect moving across the screen
  • Horizontal Sync – Simulates horizontal sync issues causing image distortion
  • Jitter – Adds small random movements to simulate signal instability
  • Rasterization – Renders visible scan lines characteristic of CRT displays
  • Static Noise – Adds animated noise/grain to the image

At this point, there were some visual bugs that required a bit of trial and error until the LLM was able to fix them without introducing new issues. The main one was a problem related to the position of the screen reflections.
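
For a sense of how these time-driven effects are typically wired up in Three.js (a generic sketch, not the project's actual shader code), the terminal image is drawn onto a full-screen quad whose ShaderMaterial exposes a time uniform, and the render loop keeps advancing it:

    import * as THREE from 'three';

    // Full-screen quad with a fragment shader that fakes two CRT effects:
    // scan lines (rasterization) and a subtle time-based flicker.
    const material = new THREE.ShaderMaterial({
      uniforms: { uTime: { value: 0 } },
      vertexShader: `
        varying vec2 vUv;
        void main() {
          vUv = uv;
          gl_Position = vec4(position.xy, 0.0, 1.0);
        }
      `,
      fragmentShader: `
        uniform float uTime;
        varying vec2 vUv;
        void main() {
          // The real renderer samples the rendered terminal texture here;
          // a flat phosphor-green stands in for it in this sketch.
          vec3 color = vec3(0.2, 1.0, 0.4);
          color *= 0.85 + 0.15 * sin(vUv.y * 800.0); // scan lines
          color *= 0.97 + 0.03 * sin(uTime * 60.0);  // flicker
          gl_FragColor = vec4(color, 1.0);
        }
      `,
    });

    const scene = new THREE.Scene();
    scene.add(new THREE.Mesh(new THREE.PlaneGeometry(2, 2), material));
    const camera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);
    const renderer = new THREE.WebGLRenderer();
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);

    renderer.setAnimationLoop((time) => {
      material.uniforms.uTime.value = time / 1000; // seconds drive the animated effects
      renderer.render(scene, camera);
    });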

Step 4: Integrating Xterm.js

Once I was able to get the terminal frame and the effects ported from OpenGL to WebGL, I asked the LLM to replace the hardcoded text with the output of Xterm.js. Xterm.js is an open-source project designed to be a web-based front-end for terminals. It’s used in tools like Visual Studio Code: since VS Code is a web application running inside Electron, Xterm.js is the front-end within VS Code that accesses a real terminal instance on your machine.

In my case, I don't need a real terminal, so I asked Claude to create a terminal emulator with a bunch of basic commands such as clear, ls, cd, and cat.
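
For context, wiring Xterm.js up to a toy command handler is surprisingly little code. This is a generic sketch rather than the author's implementation; the package name, element id, and command set are illustrative:

    import { Terminal } from '@xterm/xterm';

    const term = new Terminal({ cursorBlink: true });
    term.open(document.getElementById('terminal')!);

    let line = '';
    const prompt = () => term.write('\r\n$ ');

    // A toy "shell": no real process behind it, just string handling.
    function run(command: string) {
      switch (command.trim()) {
        case 'clear': term.clear(); break;
        case 'ls':    term.write('\r\nprojects  about.txt'); break;
        case '':      break;
        default:      term.write(`\r\ncommand not found: ${command}`);
      }
    }

    term.onData((data) => {
      if (data === '\r') {            // Enter: run the buffered line
        run(line);
        line = '';
        prompt();
      } else if (data === '\u007f') { // Backspace
        if (line.length > 0) {
          line = line.slice(0, -1);
          term.write('\b \b');
        }
      } else {                        // echo printable input
        line += data;
        term.write(data);
      }
    });

    prompt();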

Terminal LS

Step 5: Building Games

At this point, everything was almost complete, so I asked Claude to implement multiple text-based games. I implemented games like Pong, Tetris, Snake, Minesweeper, Space Invaders, and Arkanoid—and most of them worked almost perfectly on the first attempt. Some of the games experienced minor visual issues, but I was able to solve everything by describing the issue in detail to Claude and what I thought was the root cause.

Screenshots of the text-based games running in the terminal

Step 6: Adding Media Playback with ffplay

I also wanted to add support for playing audio and video files directly in the terminal, similar to how ffplay works in a real terminal. I asked Claude to implement an ffplay command that could render video with all the effects previously implemented.

Terminal ffplay

Step 7: Refactoring and Publishing

The final step was to ask Claude to refactor the code to clearly separate the library code (the WebGL retro terminal renderer) from my application code (the terminal emulator and games). The goal was to publish the WebGL terminal renderer as a standalone npm module, and Claude was able to do it with zero issues in just one attempt.

Conclusion

Overall, the entire implementation took around 10–15 hours. Without an LLM, it would have taken me several weeks. I think this project has been an interesting way to demonstrate how powerful LLMs can be as development tools—especially when working with unfamiliar technologies like shader programming.

By the end of the experiment, I consumed about 50% of my monthly GitHub Copilot for Business tokens ($21/month), which means the entire project cost me roughly $10.50. When you consider that this would have taken weeks of work otherwise, the cost savings enabled by Claude Opus are absolutely insane.

If you're curious, you can check out the result at https://remojansen.github.io/ or browse the source code on GitHub.


Supercharging Application Performance with Intelligent Client-Side Caching

This excerpt discusses enhancing Microsoft .NET application performance by minimizing network calls. The author emphasizes client-side caching with Spargine’s InMemoryCache, which drastically improves responsiveness and scalability for costly operations like reflection. While significant speed gains are noted, developers are advised to benchmark changes, as caching may not always be beneficial.


