Ikea has released the most affordable version of its Sjöss USB-C chargers yet. Following an $8 30W version that launched in 2024 and a $35 65W option that debuted earlier this year, the new 20W Sjöss is now available online and through many of Ikea's physical US stores for just $3.99.
That's considerably cheaper than Apple's $19 20W USB-C Power Adapter while being comparable in size, and cheaper than Anker's $11.99 Nano Pro 20W USB-C charger. However, Ikea's new $4 charger is much larger than Anker's, and is about 26 grams heavier. If you're a traveler trying to pack as light as possible, Anker's may be the better choice.
Python has a feature that surprises developers coming from other languages: you can attach an else clause to while and for loops.
This isn't a bug or obscure syntax—it's an intentional design choice, and it solves a real problem.
The else block runs only if the loop completes without encountering a break.
```python
while condition:
    if found_target:
        break
    # keep searching
else:
    # This runs only if we never broke out
    print("Search completed without finding target")
```
Consider a search loop. Without else, you need a flag variable:
```python
found = False
index = 0
while index < len(numbers):
    if numbers[index] == target:
        found = True
        break
    index += 1

if not found:
    print("Not found")
```
That's three extra operations:

- found = False before the loop
- found = True when the target is found
- if not found after the loop

With else:
```python
index = 0
while index < len(numbers):
    if numbers[index] == target:
        print(f"Found at index {index}")
        break
    index += 1
else:
    print("Not found")
```
No flag. The else replaces the conditional check entirely.
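The same trick works with for loops, which are usually cleaner for searches. A minimal sketch of the search above, rewritten with for...else:

```python
numbers = [4, 8, 15, 16, 23]
target = 42

for index, value in enumerate(numbers):
    if value == target:
        print(f"Found at index {index}")
        break
else:
    print("Not found")
```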
Think of while...else as "while...nobreak":

- break executes → else is skipped
- Loop ends naturally (condition becomes False) → else runs

This pattern is common when connecting to APIs, databases, or external services:
```python
max_attempts = 3
attempt = 0
while attempt < max_attempts:
    attempt += 1
    print(f"Attempt {attempt}...")
    if try_connection():
        print("Connected!")
        break
else:
    print("All attempts failed. Service unavailable.")
```
The else fires only when all retries are exhausted—exactly when you need to handle failure.
Note: This is a simplification. Production retry logic typically includes exponential backoff and jitter to avoid overwhelming the server with simultaneous retries from multiple clients.
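For illustration, a minimal sketch of retry with backoff and jitter; try_connection is still a stand-in for your real connection call:

```python
import random
import time

max_attempts = 3
base_delay = 1.0  # seconds

for attempt in range(1, max_attempts + 1):
    print(f"Attempt {attempt}...")
    if try_connection():
        print("Connected!")
        break
    if attempt < max_attempts:
        # Exponential backoff (1s, 2s, 4s...) plus random jitter,
        # so many clients don't all retry at the same moment.
        time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.5))
else:
    print("All attempts failed. Service unavailable.")
```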
The naming is confusing. else after a loop doesn't sound like "run if no break." It sounds like it should be connected to a conditional.
Some Python developers have argued it should have been called nobreak or finally (though finally already means something different in Python's exception handling).
The syntax won't change, so the best approach is to internalize the mental model: else means nobreak.
Important: It's only useful for loops that use break for early exit. If your loop doesn't have a break, the else will always run—which is rarely what you want.
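A quick demonstration of that trap:

```python
for n in [1, 2, 3]:
    print(n)
else:
    print("Always runs -- there is no break above")
```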
Since we're talking about while loops, here are three bugs that waste debugging time:
Bug 1: Forgetting to update the variable
```python
count = 0
while count < 5:
    print(count)
    # Forgot: count += 1
```
Bug 2: Wrong update direction
```python
count = 10
while count > 0:
    print(count)
    count += 1  # Should be: count -= 1
```
Bug 3: Off-by-one with not-equals
```python
x = 3
while x != 10:
    x += 2  # x goes 3, 5, 7, 9, 11... never equals 10
```
Fix: Use x < 10 instead of x != 10.
Debugging tip: Add a safety counter during development:
```python
safety = 0
while condition and safety < 1000:
    safety += 1
    # your code

if safety >= 1000:
    print("WARNING: Loop hit safety limit!")
```
From my upcoming book "Zero to AI Engineer: Python Foundations."
By 2026, the debate between Playwright and Selenium has largely moved beyond "syntax preference" or "language support." For the senior automation architect, the choice is no longer about whether you prefer driver.find_element or page.locator; it is a fundamental decision about infrastructure topology and protocol efficiency.
Historically, browser automation was viewed as "scripting": sending a command to click a button and waiting for a result. Today, it is a critical layer of distributed infrastructure. We run tests in ephemeral containers, scrape data behind aggressive WAFs, and automate complex authentication flows protected by hardware-bound credentials. In this high-stakes environment, the underlying architecture of your automation tool dictates its reliability, speed, and maintainability.
This review dismantles the internal mechanics of Selenium (including the W3C WebDriver BiDi evolution) and Playwright. We will analyze why architectural decisions made in 2004 still constrain Selenium today, and how Playwright’s "headless-first" event loop aligns with the reality of the modern web.
The single most defining difference between the two frameworks is the communication protocol used to drive the browser. This is not an implementation detail; it is the root cause of nearly every performance and stability difference between the two.
Selenium’s architecture is built on the WebDriver W3C Standard. At its core, it is a RESTful HTTP protocol. Every single action in a Selenium script triggers a discrete HTTP request:
- POST /session/{id}/element (Find element)
- POST /session/{id}/element/{id}/click (Click element)

This "chatty" architecture introduces Control Loop Latency. Between every command, there is network overhead, serialization, and deserialization. In a local environment, this is negligible (milliseconds). In a distributed grid (e.g., running tests on Sauce Labs or BrowserStack from a CI runner), these round-trip times accumulate, creating a "stop-and-go" execution rhythm that is inherently slower and prone to race conditions.
Playwright abandons standard HTTP for a single, persistent WebSocket connection (leveraging the Chrome DevTools Protocol, or CDP). Once the connection is established, the channel remains open.
Senior engineers often cite Playwright’s "Auto-Wait" as a key feature. However, understanding how it works architecturally explains why Selenium struggles to replicate it, even with "Explicit Waits."
When you use WebDriverWait in Selenium, the logic lives in your client script (Python/Java/C#). The script effectively spams the browser driver with HTTP requests:
"Is it visible? No. (Wait 500ms). Is it visible? No. (Wait 500ms). Is it visible? Yes."
This polling happens outside the browser process. It creates network noise and, crucially, a race condition window. The element might flicker into visibility and back out between poll intervals, causing the test to fail or interact with a detaching element.
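For reference, this is what that external polling looks like in Selenium's Python bindings (the URL and locator are illustrative):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com")

# The client polls the driver over HTTP (every 500ms by default)
# until the condition passes or the 10-second timeout expires.
button = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.ID, "submit"))
)
button.click()

driver.quit()
```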
Playwright compiles your locator logic and injects it directly into the browser context via the CDP session. The "waiting" happens inside the browser's own event loop.
Playwright hooks into requestAnimationFrame and the browser’s painting cycle. It checks for element stability (is the bounding box moving?) and actionability (is it covered by a z-index overlay?) in the same render loop as the application itself. The command to "click" is only executed when the browser itself confirms the element is ready. This atomic "Check-and-Act" mechanism eliminates the race conditions inherent to external polling.
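The Playwright equivalent in Python needs no explicit wait; the actionability checks described above run inside the browser before the click (the URL and selector are illustrative):

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com")

    # click() auto-waits: visible, stable bounding box, not covered
    # by another element, enabled -- all checked in the render loop.
    page.locator("#submit").click()

    browser.close()
```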
In 2026, automation is rarely just about clicking buttons. It requires mocking API responses, injecting headers, and blocking analytics trackers.
Selenium historically required a "Man-in-the-Middle" (MITM) proxy (like BrowserMob) to intercept network traffic. This added a massive point of failure: certificate trust issues, decreased throughput, and complex infrastructure setup. While Selenium 4+ introduced NetworkInterceptor, it is a patchwork implementation on top of the WebDriver protocol, often limited in granularity and prone to compatibility issues across different browser drivers.
Playwright gains network control for free via its architecture. Because it communicates via CDP (or the Firefox/WebKit equivalents), it sits between the browser’s network stack and the rendering engine. It can pause, modify, or abort requests natively without a proxy server. This allows for mocking API responses, injecting headers, and blocking analytics trackers without any extra infrastructure.
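A minimal sketch of that native interception with Playwright's Python API (the URL patterns and stub payload are illustrative):

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()

    # Stub an API endpoint: the request never leaves the browser.
    page.route(
        "**/api/users",
        lambda route: route.fulfill(
            status=200,
            content_type="application/json",
            body='[{"id": 1, "name": "Test User"}]',
        ),
    )

    # Block analytics trackers outright.
    page.route("**/analytics/**", lambda route: route.abort())

    page.goto("https://example.com")
    browser.close()
```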
As of 2026, Selenium 5 has fully embraced WebDriver BiDi, a standardized effort to bring bi-directional communication (WebSockets) to the WebDriver standard. This is Selenium’s answer to Playwright.
The Reality of BiDi:
While BiDi allows Selenium to receive events (like console logs) without polling, it is fundamentally an "add-on" to a legacy architecture. The vast ecosystem of Selenium Grids, cloud providers, and existing test suites relies on the HTTP Request/Response model. Migrating a massive Selenium codebase to utilize BiDi features often requires significant refactoring, bringing the effort close to parity with a full migration to Playwright.
Playwright’s Advantage:
Playwright was designed after the Single Page Application (SPA) revolution. Its "Browser Context" model—which allows spinning up hundreds of isolated, incognito-like profiles within a single browser process—is an architectural leap over Selenium’s "One Driver = One Browser" resource-heavy model. This makes Playwright exponentially cheaper to run at scale in containerized environments (Kubernetes/Docker).
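For illustration, two fully isolated sessions sharing a single browser process via Playwright's Python API:

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()

    # Each context has its own cookies, storage, and cache,
    # like separate incognito profiles in one process.
    alice = browser.new_context()
    bob = browser.new_context()

    alice.new_page().goto("https://example.com/login")
    bob.new_page().goto("https://example.com/login")

    browser.close()
```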
When should you stick with the incumbent, and when should you adopt the challenger?
| Feature | Selenium (w/ BiDi) | Playwright |
|---|---|---|
| Primary Protocol | HTTP (RESTful) | WebSocket (Event-driven) |
| Wait Mechanism | External Polling | Internal Event Loop (RAF) |
| Language Support | Java, C#, Python, Ruby, JS | TS/JS, Python, Java, .NET |
| Legacy Browsers | Excellent (IE Mode support) | Non-existent (Modern engines only) |
| Mobile Support | Appium (Native Apps) | Experimental / Web only |
| Scale Cost | High (1 Process per Test) | Low (Contexts per Process) |
Stick with Selenium if:

- You must support legacy browsers (IE mode) or test native mobile apps through Appium
- You have a deep investment in an existing grid, cloud-provider contracts, or a mature suite built on the HTTP model
- You need a language Playwright does not cover, such as Ruby
Migrate to Playwright if:

- You target only modern engines (Chromium, Firefox, WebKit)
- You run tests at scale in containers, where per-context isolation cuts cost
- You need native network interception and mocking without a proxy
- Flaky, synchronization-heavy tests are your biggest maintenance burden
In 2026, Playwright is not just a "better Selenium"—it is a different species of tool. By coupling tightly with the browser internals via WebSockets, it removes the layers of abstraction that caused a decade of "flaky tests."
Selenium remains a titan of interoperability and standard compliance. Its W3C WebDriver standard ensures it will run on anything, forever. But for the engineering team tasked with building a reliable, high-speed automation pipeline for a modern web application, Playwright’s architecture offers the path of least resistance. It solves the hard problems of synchronization and latency at the protocol level, allowing you to focus on the test logic, not the sleep statements.
When we talk about improving as a developer, most advice focuses on writing code: learning new frameworks, mastering algorithms, or optimizing performance. But one of the most underrated skills in programming isn’t writing—it’s reading code.
The Hidden Superpower
Think about it. Every day, developers spend hours working with code they didn’t write: legacy systems, open-source libraries, or teammate contributions. Being able to quickly understand someone else’s code is like having a superpower. It helps you debug faster, avoid introducing bugs, and even learn new patterns you hadn’t considered.
Yet, many developers skip this practice. We’re wired to solve problems by coding, not by reading. But reading code can teach you how others think, reveal idiomatic uses of a language, and expose you to clever techniques that you can later apply in your own projects.
Start Small, Read Daily
You don’t need to dive into huge codebases right away. Start with something small:

- A utility library you already use
- A single module or class in an open-source project
- A teammate’s pull request
As you read, ask questions: Why did they structure it this way? Could it be simpler? How does this function interact with the rest of the system?
This approach trains you to think like a developer before you type a single line of code.
The Debugging Bonus
Reading code is also the secret to better debugging. Often, the bug isn’t where you think it is. By systematically reading through the code, you can understand the flow, spot edge cases, and find issues before they turn into hours of frustration.
It’s like being a detective: the more you read, the more clues you pick up, and the faster you solve the mystery.
Learning Beyond Tutorials
Tutorials and courses are great for learning syntax, but real growth comes from reading real-world code. You’ll see patterns, anti-patterns, trade-offs, and compromises that tutorials never teach. Over time, your own code starts to look cleaner, more maintainable, and more efficient because you’ve absorbed best practices by osmosis.
A Habit Worth Building
Try setting aside 20–30 minutes a day to read someone else’s code. Treat it like reading a book: analyze, reflect, and learn. Pair it with your coding time, and you’ll notice subtle improvements in your speed, design sense, and problem-solving skills.
Writing documentation is no longer just for humans.
Developers still read it, but AI reads it too. Large language models scan, summarize, and even generate code from your docs.
This shift does not make documentation less important; it changes how we write and structure it. In this article, you’ll learn how llms.txt helps your docs work for both humans and machines.
The Google Cloud Developer Experience team focuses on one goal: helping developers move from learning to launching as fast as possible.
As Google Cloud services grew, keeping documentation accurate and up to date became harder. Developers expect quick, correct answers. If docs fall behind, adoption suffers.
Google Cloud did not replace technical writers.
They augmented them with AI.
Generative AI is now part of their documentation workflow. It helps with formatting, markup translation, and validation. Some docs are even tested automatically by running the documented steps in real environments.
Documentation is treated like code: generated, tested, and continuously improved.
You may not work at Google Cloud scale, but the same pressures already exist in many teams today.
Developers still read documentation. But very often, AI reads it first.
Today, developers:

- Ask an AI assistant before they open the docs
- Paste documentation into LLMs to summarize it or generate code
- Rely on AI answers that are only as good as the docs behind them
Human readers are still important. But LLMs are now a primary consumer of documentation.
Documentation is no longer just read by humans. It is consumed by LLMs.
That reality changes how docs should be structured and published.
It is easy to worry that AI will replace tech writers.
In practice, the opposite is happening.
AI can generate text quickly. It cannot decide what matters, what is correct, or how concepts should be structured.
Tech writers provide that structure.
Tech writers do not need to compete with AI. They need to organize knowledge so AI can use it correctly.
This shift moves the role from writing pages to designing knowledge systems.
One common way to do this is by providing AI tools with a structured, machine-readable version of your docs. This is where llms.txt comes in.
What llms.txt Is and What It Is Not
llms.txt is a machine-readable version of your documentation. It is usually written in Markdown and designed for AI tools and LLMs.
Think of it as a translation layer:
- Human docs stay optimized for readers
- llms.txt gives AI a clean and structured view of the same content

A good llms.txt file often includes:

- A short summary of what the product or project does
- Links to the most important documentation pages
- Key constraints, versions, and conventions AI should respect
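For illustration, a small hypothetical llms.txt following the conventions of the llms.txt proposal (the product, URLs, and constraints are made up):

```markdown
# Example Product

> Example Product is a REST API for managing widgets. Current stable version: v2; v1 is deprecated.

## Docs

- [Quickstart](https://docs.example.com/quickstart.md): install, authenticate, first request
- [API Reference](https://docs.example.com/api.md): endpoints, parameters, error codes

## Constraints

- Rate limit: 100 requests per minute per API key
- Authentication: Bearer tokens only
```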
What it is not is just as important.
This does not replace documentation.
It protects it.
By giving AI its own context file, you avoid turning human docs into prompt-shaped content. Human readers get clarity. AI tools get structure.
Keep llms.txt Auto-Generated
One key lesson from Google Cloud is automation.
Their documentation is generated, validated, and tested continuously. llms.txt should follow the same idea.
Best practice is to auto-generate it whenever documentation changes.
Practical guidance:
- Generate llms.txt as part of your docs build process (a minimal sketch follows below)
- Regenerate it automatically whenever the docs change

This matters because:
One source of truth.
Two audiences.
No duplication.
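As a sketch of what that build step can look like, here is a minimal generator, assuming a docs/ folder of Markdown files whose first # heading is the page title (the paths, site URL, and product name are hypothetical):

```python
from pathlib import Path

DOCS_DIR = Path("docs")                 # assumed location of your Markdown sources
OUTPUT = Path("docs/llms.txt")
BASE_URL = "https://docs.example.com"   # hypothetical published docs root

def page_title(md_file: Path) -> str:
    """Return the first '# ' heading in a Markdown file, or the file stem."""
    for line in md_file.read_text(encoding="utf-8").splitlines():
        if line.startswith("# "):
            return line[2:].strip()
    return md_file.stem

lines = ["# Example Product", "", "> Auto-generated index of the documentation.", "", "## Docs", ""]
for md_file in sorted(DOCS_DIR.glob("*.md")):
    lines.append(f"- [{page_title(md_file)}]({BASE_URL}/{md_file.name})")

OUTPUT.write_text("\n".join(lines) + "\n", encoding="utf-8")
print(f"Wrote {OUTPUT}")
```

Run it in CI after every docs change, and the machine-readable view never drifts from the human one.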
Google Cloud also applied AI to code samples.
They faced thousands of APIs, many languages, and constant change. Manual maintenance did not scale.
Their solution used AI systems that:

- Generated code samples grounded in real API definitions
- Validated samples by building and running them
- Kept samples current as the APIs changed
The lesson is simple.
AI works best when knowledge is structured, grounded, and validated.
That same principle applies to documentation. llms.txt provides that structure for AI tools.
llms.txt in Practice
For AI tools with limited capabilities that cannot fetch docs on their own, llms.txt is especially useful.
A simple workflow:

- Fetch your published file at docs.example.com/llms.txt
- Paste its contents into the AI tool's context
- Ask questions grounded in that content
This keeps AI output aligned with real documentation and real constraints.
For llms.txt to be useful, it must be visible.
Recommended approach:

- Publish it at a predictable, stable URL: docs.example.com/llms.txt
- Link it from your documentation homepage so tools and people can find it
This is not a power-user trick.
It is basic documentation infrastructure.
AI is not removing the need for tech writers.
It is raising expectations.
The work shifts from writing more pages to:

- Structuring knowledge so machines can consume it correctly
- Automating generation, validation, and publishing
- Curating what AI tools read, through files like llms.txt
llms.txt is a small file, but it represents a real shift.
If you own documentation today, the question is not whether AI will read it.
It already does.
The real question is whether it is reading the right version.