If you’re a developer who spends most of the day inside a terminal, your workflow probably depends on managing multiple shells, logs, servers, and editors simultaneously. Traditionally, tmux has been the go-to solution for this problem. It’s powerful, battle-tested, and ubiquitous — but also notoriously hard to learn, configure, and maintain.
Zellij enters this space with a clear goal:
Provide a first-class terminal workspace without sacrificing usability.
Written in Rust, Zellij is a next-generation terminal multiplexer that combines performance, sane defaults, and discoverability — something terminal tools have historically ignored.
Before diving deeper, let’s quickly clarify the core building blocks of any terminal multiplexer.
A session is a persistent workspace. Think of it as a long-running terminal environment that survives terminal closures, SSH disconnects, or even system reboots.
With Zellij:
Example use case:
Start a backend server, a frontend dev server, and a log tail — disconnect — come back hours later to the exact same state.
Sessions make Zellij extremely useful for SSH-heavy and production-like workflows.
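The session workflow amounts to a few commands. A sketch of the round trip (the session name dev is just an example):

```shell
# Start (or create) a named session
zellij --session dev

# ...work, then detach, or simply lose the SSH connection...

# Later, even from a fresh login:
zellij list-sessions   # shows "dev" still running
zellij attach dev      # back to your panes, exactly as you left them
```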
Tabs (similar to windows in tmux terminology) allow you to separate concerns within a session.
For example:
Tabs help keep your mental model clean and prevent pane overload.
Panes are splits inside a tab. You can divide your terminal vertically or horizontally to run multiple processes side-by-side.
Typical pane layout: nvim in one pane, with your dev servers and log tails in adjacent splits.
Zellij makes pane management intuitive and visual, even for beginners.
One of Zellij’s most underrated features is keybinding discoverability.
Unlike tmux — where you’re expected to memorize cryptic shortcuts — Zellij shows a context-aware keybinding bar at the bottom of the screen. When you enter a mode, available actions are displayed instantly.
This dramatically reduces cognitive load and makes onboarding painless.
You don’t guess shortcuts.
You see them.
Zellij uses a modal keybinding system, similar to Vim, which keeps shortcuts ergonomic and conflict-free.
Ctrl + p → Enter Pane Mode
n → New pane
x → Close pane
h / j / k / l → Move between panes
← ↑ ↓ → → Resize panes

Ctrl + t → Enter Tab Mode

n → New tab
x → Close tab
← / → → Switch tabs

Ctrl + o → Detach from session

zellij list-sessions → View running sessions
zellij attach <name> → Reattach

All of this is visible in real time via the help bar — no docs required.
Zellij introduces layout files, which let you define complex terminal setups declaratively.
A layout can:
This is extremely powerful for:
Example:
One command opens your editor, starts Docker containers, tails logs, and launches tests — every time.
Layouts turn your terminal into infrastructure.
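Layout files are small KDL documents. A hedged sketch of what a setup like the one above might look like (file name, commands, and paths are illustrative, not from the Zellij docs verbatim):

```kdl
layout {
    tab name="edit" {
        pane command="nvim"
    }
    tab name="run" {
        pane split_direction="vertical" {
            pane command="docker" {
                args "compose" "up"
            }
            pane command="tail" {
                args "-f" "logs/app.log"
            }
        }
    }
}
```

Launch it with zellij --layout dev.kdl and the whole workspace comes up in one command.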
Zellij ships with a plugin system that runs inside the terminal UI itself. These plugins handle things like:
Unlike tmux, you don’t need external scripts or shell hacks. Plugins are first-class citizens and integrate cleanly with the core system.
Because Zellij is written in Rust:
This matters when you’re:
Zellij feels stable under load — an underrated but critical feature for production-grade workflows.
tmux isn’t going anywhere — and that’s fine. It’s mature, deeply customizable, and widely available.
But Zellij offers:
For many developers, Zellij is the 90% solution with 10% effort.
Zellij doesn’t just modernize tmux — it rethinks how developers interact with terminal workspaces. By prioritizing discoverability, sane defaults, and performance, it removes unnecessary complexity while preserving power.
If your terminal is your primary IDE, Zellij might just be the best upgrade you didn’t know you needed.
Install it once. Use it everywhere.
I’ve been writing code for years. I type cargo run or npm start, hit enter, and meaningful things happen. But if you asked me what actually happens between hitting enter and seeing "Hello World," I’d have mumbled something about "machine code" and changed the subject.
That bothered me. I rely on these tools every day, but I didn't understand them.
So I started building Sauce, my own programming language in Rust. Not because the world needs another language, but because I needed to stop treating my compiler like a black box.
Turns out, a language isn't magic. It's just a pipeline.
We usually see a compiler as this big, scary brain that judges our code. You feed it text, and it either gives you a working program or yells at you with an error.
I spent years thinking you needed to be a math genius to build one. I was wrong. You just need to break it down into small steps.
Strip away the hype, and a programming language is just a way to move data around.
Sauce is a statically typed language that feels like a simple script. I wanted something that was clear and honest. The core ideas are simple:
Pipelines (|>) are default: Data flows explicitly from one step to the next, like a factory line.

Explicit effects (toss): No hidden surprises or secret jumps in your code.

But to get there, I had to build the engine. And thanks to Rust, I learned that the engine is actually pretty cool.
I used to think compilation was one giant, messy function. In reality, it’s a disciplined process. I’m following a strict architecture for Sauce that separates "understanding the code" (Frontend) from "running the code" (Backend).
Here is exactly how Sauce works under the hood:
The diagram above isn't just a sketch; it's the map I'm building against. It breaks down into two main phases.
This phase is all about understanding what you wrote. It doesn't run anything yet; it just reads and checks.
Lexer (Logos): The Word Splitter

The Job: Break the raw source text into labeled chunks called tokens.

In Plain English: Imagine reading thequickbrownfox. It's hard. The Lexer adds the spaces and labels every word. It turns grab x = 10 into a list: [Keyword(grab), Ident(x), Symbol(=), Int(10)].

The Tool: I used a Rust tool called Logos. It’s incredibly fast, but I learned a hard lesson: computers are dumb. If you don't explicitly tell them that "grab" is a special keyword, they might think it's just a variable name like "green." You have to set strict rules.
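You don't need Logos to see the idea. Here is a hand-rolled toy lexer for grab x = 10 (the token names are my guesses at the shape of Sauce's internals, not its real API):

```rust
#[derive(Debug, PartialEq)]
enum Token {
    Keyword(String), // e.g. "grab"
    Ident(String),
    Symbol(char),
    Int(i64),
}

fn lex(src: &str) -> Vec<Token> {
    // Split on whitespace; a real lexer works character by character.
    src.split_whitespace()
        .map(|word| {
            if word == "grab" {
                // The hard-won lesson: check keywords *before* identifiers.
                Token::Keyword(word.to_string())
            } else if let Ok(n) = word.parse::<i64>() {
                Token::Int(n)
            } else if word.len() == 1 && !word.chars().next().unwrap().is_alphanumeric() {
                Token::Symbol(word.chars().next().unwrap())
            } else {
                Token::Ident(word.to_string())
            }
        })
        .collect()
}

fn main() {
    let tokens = lex("grab x = 10");
    println!("{:?}", tokens);
    // [Keyword("grab"), Ident("x"), Symbol('='), Int(10)]
}
```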
Parser (Chumsky): The Grammar Police
The Job: Now that we have a list of words (tokens), we need to check if they make a valid sentence. The Parser organizes these flat lists into a structured tree called the AST (Abstract Syntax Tree).
In Plain English: A list of words like [10, =, x, grab] contains valid words but makes no sense. The Parser ensures the order is correct (grab x = 10) and builds a hierarchy: "This is a Variable Assignment. The name is 'x'. The value is '10'."
The Tool: I used Chumsky, which lets you build logic like LEGOs. You write a tiny function to read a number, another for a variable, and glue them together.
The "Aha!" Moment: I learned how much structure matters. Instead of trying to parse everything in one giant loop, breaking the grammar into small, composable pieces made the language way easier to extend and reason about. It’s not magic; it’s just organizing data.
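In miniature, "building a hierarchy" just means turning the flat token list into an enum. A toy version of the idea (Sauce's real AST and Chumsky combinators are richer than this):

```rust
// A toy AST node: the structured form of `grab x = 10`.
#[derive(Debug, PartialEq)]
enum Expr {
    Int(i64),
}

#[derive(Debug, PartialEq)]
enum Stmt {
    // "This is a Variable Assignment. The name is 'x'. The value is '10'."
    Assign { name: String, value: Expr },
}

// A tiny hand-written parser over whitespace-separated words.
// Chumsky's trick is composing many small parsers like this one.
fn parse(src: &str) -> Result<Stmt, String> {
    let words: Vec<&str> = src.split_whitespace().collect();
    match words.as_slice() {
        ["grab", name, "=", value] => {
            let n = value.parse::<i64>().map_err(|e| e.to_string())?;
            Ok(Stmt::Assign { name: name.to_string(), value: Expr::Int(n) })
        }
        _ => Err("expected: grab <name> = <int>".to_string()),
    }
}

fn main() {
    println!("{:?}", parse("grab x = 10"));
    // Ok(Assign { name: "x", value: Int(10) })
    println!("{:?}", parse("10 = x grab")); // valid words, wrong order -> Err
}
```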
Type Checking: The Logic Check
The Job: Just because a sentence is grammatically correct doesn't mean it makes sense. "The sandwich ate the Tuesday" is a valid sentence, but it's nonsense. The Type Checker catches these logical errors.
In Plain English: If you write grab x = "hello" + 5, the Parser says "Looks like a valid math operation!" But the Type Checker steps in and says, "Wait. You can't add a Word to a Number. That's illegal." Sauce currently has a small, explicit system that catches these basic mistakes before you ever try to run the code.
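The check itself can be tiny. A hedged sketch of the rule that rejects "hello" + 5 (Sauce's actual type system is more involved; Word is my stand-in name for its string type):

```rust
#[derive(Debug, PartialEq, Clone, Copy)]
enum Type {
    Int,
    Word, // stand-in for a string type
}

// The type of `left + right`: addition is only legal between two Ints.
fn check_add(left: Type, right: Type) -> Result<Type, String> {
    match (left, right) {
        (Type::Int, Type::Int) => Ok(Type::Int),
        (l, r) => Err(format!("can't add a {:?} to a {:?}", l, r)),
    }
}

fn main() {
    // grab x = 1 + 2        -> fine
    assert_eq!(check_add(Type::Int, Type::Int), Ok(Type::Int));
    // grab x = "hello" + 5  -> caught before the program ever runs
    assert!(check_add(Type::Word, Type::Int).is_err());
    println!("type check demo passed");
}
```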
Once the Frontend gives the "thumbs up," we move to the Backend. This phase is about making the code actually run.
Why LLVM? It's the same industrial-grade machinery that powers Rust, Swift, and C++. By using it, Sauce gets decades of optimization work for free. Once you figure out how to tell LLVM to "print a number," the rest stops feeling so scary.
Native Binary: The Final Product
The Job: The final step is bundling all those CPU instructions into a standalone file (like an .exe on Windows or a binary on Linux).
In Plain English: This is what lets you send your program to a friend. They don't need to install Sauce, Rust, or anything else. They just double-click the file, and it runs. (Currently, this works for simple, effect-free programs).
Sauce isn't just an idea anymore. The core compiler pipeline is alive.
You can write grab x = 10 |> _ and it understands it perfectly. The data flows left-to-right, just like reading a sentence.

You can feed it .sauce source code, and it parses it into a type-safe syntax tree.

You can use toss to signal a side effect. This currently works in the interpreter, while the LLVM backend intentionally rejects effects for now.

I have a clear plan for where this is going. Since the core architecture is stable, the next updates are about making it usable.
I avoided building a language for years because I thought I wasn't smart enough.
But building Sauce taught me that there's no magic. It's just data structures. A Lexer is just regex. A Parser is just a tree builder. An Interpreter is just a function that walks through that tree.
If you want to actually understand how your code runs, don't just read a book. Build a tiny, broken compiler. Create a Lexer. Define a simple tree. Parse 1 + 1.
You'll learn more in a weekend of fighting syntax errors than you will in a year of just using cargo run.
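The whole weekend project fits on one page. Parsing and evaluating 1 + 1 can be as small as this sketch:

```rust
// The smallest possible pipeline: parse "1 + 1" into a tree, then walk it.
#[derive(Debug)]
enum Expr {
    Int(i64),
    Add(Box<Expr>, Box<Expr>),
}

// Parser: handles exactly `<int>` or `<int> + <int>` -- enough to learn from.
fn parse(src: &str) -> Option<Expr> {
    let parts: Vec<&str> = src.split('+').map(str::trim).collect();
    match parts.as_slice() {
        [a, b] => Some(Expr::Add(
            Box::new(Expr::Int(a.parse().ok()?)),
            Box::new(Expr::Int(b.parse().ok()?)),
        )),
        [a] => Some(Expr::Int(a.parse().ok()?)),
        _ => None,
    }
}

// Interpreter: "just a function that walks through that tree".
fn eval(e: &Expr) -> i64 {
    match e {
        Expr::Int(n) => *n,
        Expr::Add(a, b) => eval(a) + eval(b),
    }
}

fn main() {
    let ast = parse("1 + 1").expect("parse error");
    println!("{}", eval(&ast)); // 2
}
```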
Check out Sauce on GitHub. It's small, it's honest, and we are officially cooking.
Model Context Protocols (MCPs) are a powerful pattern for connecting AI assistants to real-world data sources and tools. Over the past few months, I’ve been experimenting with MCPs across multiple environments (in particular Claude, Cursor, and Replit), and I want to share what actually works, where things tend to break, and how I think about validation.
MCPs are not magic; they are just a standard for letting LLM-based tools talk to external systems in a reliable way. This is similar to how a USB-C port lets different devices connect to your computer: the protocol itself doesn’t do anything special, but it makes integration possible when done right.
1. Not all MCPs behave the same everywhere
A lot of MCP examples that feel solid in one environment print nothing or fail silently in another. This usually comes down to how each environment handles execution, context, filesystem access, or tool arguments.
2. Small config differences matter
Sometimes an MCP that runs in Claude breaks in Cursor, not because of a logic bug but because of subtle differences in how CLI tools, paths, or environment settings are handled.
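As a concrete example of how small these differences are: registering the official filesystem server in Claude Desktop looks roughly like the entry below in claude_desktop_config.json, while Cursor expects an equivalent entry in its own MCP config file. Same server, different file, different path-resolution quirks. (The directory path here is illustrative.)

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/projects"]
    }
  }
}
```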
3. Validation is hard
There isn’t a silver bullet yet for catching silent failures. Most of the time, I find myself running the same MCP in minimal contexts, checking raw outputs and side effects, and isolating tool chains until I understand exactly where it fails.
4. Trending MCPs you might find useful
From systems that access files or GitHub repos to tooling that helps with analytics or Redis access, MCPs are being built for a variety of workflows. Treat them as reusable modules, not one-off scripts.
If you’re building with MCPs, treat them as execution boundaries with behavioral contracts: validate each one in each environment before using it in production workflows.
For folks discovering MCPs or trying to find working examples across tools like Claude, Cursor, GitHub Copilot and more, I’ve been aggregating what actually runs in multiple environments.
Here’s a place where that’s organized for easy reference:
https://ai-stack.dev/
Would love to hear how others validate MCPs in their workflows.
During the holidays, I was browsing the internet when I came across cool-retro-term, an open-source terminal that mimics the visuals of old cathode ray tube (CRT) displays.
I loved the look of it—it has an awesome retro sci-fi atmosphere that reminds me of the Alien and Fallout fictional universes. I thought it would be amazing if I could make my personal website look like that. I took a look at the source code and quickly realized that it's implemented using QML and C++. I'm not experienced with either, so I wondered if it would be possible to port it to web technologies using either WebGL or Emscripten. I asked GitHub Copilot, and it advised me to try the WebGL route because there were fewer technical challenges involved.
I have no experience with QML, but I have worked with Three.js in the past. I don't have a deep understanding of how shaders work internally, but with the help of GitHub Copilot, I was able to understand the architecture of the original source code. I cloned the repo, added a new web directory, and started providing instructions to Claude. The original application has two sets of shaders: the first one handles the static frame, and the second one is a series of effects that are influenced by the current time.
I started by asking Claude to implement the static frame using Three.js while ignoring the terminal emulation for now. This gave me a foundation to build upon without getting overwhelmed by complexity.
The second step was to ask Claude to render some basic text from a hardcoded text file in the Three.js scene using the appropriate retro font.
Then I started to migrate the visual effects—starting with the background noise and then moving on to the other effects:
At this point, there were some visual bugs that required a bit of trial and error until the LLM was able to fix them without introducing new issues. The main one was a problem related to the position of the screen reflections.
Once I was able to get the terminal frame and the effects ported from OpenGL to WebGL, I asked the LLM to replace the hardcoded text with the output of Xterm.js. Xterm.js is an open-source project designed to be a web-based front-end for terminals. It's used in tools like Visual Studio Code because VS Code is a web application that runs inside Electron—Xterm.js is the front-end within VS Code that accesses a real terminal instance on your machine.
In my case, I don't need a real terminal, so I asked Claude to create a terminal emulator with a bunch of basic commands such as clear, ls, cd, and cat.
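A terminal "emulator" at this level is mostly a lookup table from command names to handlers. A hedged sketch of the shape (command names match the post, but the file contents and outputs are illustrative, not the site's actual code):

```typescript
// Map each command name to a handler that returns the text to print.
type Command = (args: string[]) => string;

const files: Record<string, string> = {
  "readme.txt": "Welcome to the retro terminal.",
};

const commands: Record<string, Command> = {
  clear: () => "\x1b[2J\x1b[H", // ANSI: clear screen, move cursor home
  ls: () => Object.keys(files).join("  "),
  cat: (args) => files[args[0]] ?? `cat: ${args[0]}: No such file`,
};

function run(line: string): string {
  const [name, ...args] = line.trim().split(/\s+/);
  const cmd = commands[name];
  return cmd ? cmd(args) : `${name}: command not found`;
}

console.log(run("ls"));             // readme.txt
console.log(run("cat readme.txt")); // Welcome to the retro terminal.
```

Each handler's return value gets written to the Xterm.js instance, which then renders through the WebGL effect pipeline like any other text.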
At this point, everything was almost complete, so I asked Claude to implement multiple text-based games. I implemented games like Pong, Tetris, Snake, Minesweeper, Space Invaders, and Arkanoid—and most of them worked almost perfectly on the first attempt. Some of the games experienced minor visual issues, but I was able to solve everything by describing the issue in detail to Claude and what I thought was the root cause.
I also wanted to add support for playing audio and video files directly in the terminal, similar to how ffplay works in a real terminal. I asked Claude to implement an ffplay command that could render video with all the effects previously implemented.
The final step was to ask Claude to refactor the code to clearly separate the library code (the WebGL retro terminal renderer) from my application code (the terminal emulator and games). The goal was to publish the WebGL terminal renderer as a standalone npm module, and Claude was able to do it with zero issues in just one attempt.
Overall, the entire implementation took around 10–15 hours. Without an LLM, it would have taken me several weeks. I think this project has been an interesting way to demonstrate how powerful LLMs can be as development tools—especially when working with unfamiliar technologies like shader programming.
By the end of the experiment, I consumed about 50% of my monthly GitHub Copilot for Business tokens ($21/month), which means the entire project cost me roughly $10.50. When you consider that this would have taken weeks of work otherwise, the cost savings enabled by Claude Opus are absolutely insane.
If you're curious, you can check out the result at https://remojansen.github.io/ or browse the source code on GitHub.
