1169. In this bonus segment, originally released in November, we look at Peter Sokolowski's "Tale of Two Dictionaries," tracing the word "dictionary" back to a 16th-century Latin work by a monk named Calepino. We look at how this original source led to the first monolingual dictionaries in both English and French, all within a year of each other.
Agent skills are modular folders of instructions, scripts, and assets that enable progressive disclosure, so agents load precise knowledge only when needed. Anthropic's Claude Code taxonomy groups skills into data fetching, business automation, code quality, verification, and incident runbooks, while emphasizing gotchas, small skill bodies, and filesystem-style packaging. Skill Creator adds testing, benchmarking, and auto-description tuning to preserve trigger reliability across model updates and to distinguish capability uplifts from encoded-preference workflows.
There's an ongoing narrative that Windows is worse than ever today and people are leaving in droves. Paul does not see that, and will simply point to Windows 8 and remind folks that it can be (and was) worse. Also, PowerToys 0.98 adds a major new feature to Command Palette, big changes to Keyboard Manager and CursorWrap, and about 100 other updates. This is a big one. Plus, Mozilla Firefox is staging a comeback and may be worth another look.
Windows
Rajesh Jha is retiring and Microsoft is reorging its Experiences + Devices team
Release Preview: A peek at next week's Week D update (and April's Patch Tuesday) shows we're getting improvements to Narrator, Settings, Smart App Control, Pen settings, Display, File Explorer, and the Windows Recovery Environment (WinRE). The trend continues!
New Canary, Dev, and Beta builds - Nothing new in Canary. Dev/Beta: Drag Tray is being renamed to Drop Tray, you can change the user folder name during Setup, and Restore points are finally getting a modern update
Intel goes nuts with new "Arrow Lake refresh" processors; these are not Copilot+ PC capable and it's unclear what the Panther Lake comparison looks like
IDC now expects an 11.3 percent decline in the PC market in 2026, and a 7.6 percent decline for tablets
AI
Microsoft may sue OpenAI for contract breach - the best Microsoft divorce since IBM
Major reorg in Microsoft's AI businesses
Former Snap exec in charge of consolidated Copilot offerings across consumer and commercial
Mustafa Suleyman to focus on Microsoft's foundational models
There has been a lot of retiring and a lot of outside hires for top-level executive positions in Microsoft over the past year or more. Curious.
Rumors vs. reality in Microsoft scaling back AI ambitions in Windows
Rumor: Microsoft is backtracking on some Copilot features
Reality: Microsoft is not backtracking on its AI ambitions, it's just going to try to do a better job with branding and positioning
Microsoft launches Copilot Health in the U.S.
Google Personal Intelligence ships in the U.S.
OpenAI releases GPT-5.4 mini and nano models
GPT-5 mini is available as a reasoning model on Duck.ai
Xbox and gaming
Rumor vs. reality in Xbox strategy
Rumor: Microsoft removed "This is an Xbox" messaging from website so it must be focusing on consoles again
Reality: Literally nothing has changed
Xbox Insiders is testing per-game Quick Resume toggle
Also more groups on Home, custom colors, profile badges in guide
Big half-month for Game Pass, with Resident Evil 7: Biohazard and more coming
Starfield is coming to PS5 on April 7
NVIDIA launches DLSS 5, changes existing games, people are freaking out
Tips and picks
Tip of the week: The grass is always greener
App pick of the week: PowerToys 0.98
RunAs Radio this week: Sustainable AI with Darshna Shah
Brown liquor pick of the week: Teeling Small Batch Whiskey
The Windows Weekly theme music is courtesy of Carl Franklin.
Join Club TWiT for Ad-Free Podcasts!
Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit
Navigating the modern JavaScript ecosystem can often feel like trying to catch a moving train. With the official stabilization of React Server Components (RSC), the way we think about data fetching and component boundaries has fundamentally shifted for the better.
In this guide, we will explore how RSCs allow us to write faster, leaner applications by offloading heavy logic to the server. If you are looking to stay ahead in the Indian tech landscape, understanding this paradigm is no longer optional.
Exploring the Shift Toward Server-First React Architectures
The fundamental architectural difference between Server and Client components in the modern React tree.
How to drastically reduce your client-side bundle size by moving heavy dependencies to the server.
Effective strategies for data fetching directly within your component logic without using useEffect.
Practical patterns for composing Server and Client components while maintaining strict security boundaries.
Insights into why React Server Components are the standard for high-performance enterprise applications in 2026.
The Fundamental Shift: Understanding React Server Components
For years, React developers relied heavily on Client-Side Rendering (CSR), which often led to bloated JavaScript bundles and sluggish "loading spinners" on slower Indian mobile networks. React Server Components (RSC) change this by executing exclusively on the server, sending only the final UI structure to the browser.
This means your code can interact directly with your database or file system without exposing sensitive logic to the client. By utilizing server-side execution, we eliminate the need for complex API layers for simple data fetching tasks.
If you are familiar with older patterns, you might want to check out my previous post on Modern React Patterns to see how far the ecosystem has evolved recently. This shift is not just about syntax; it is a complete rethink of the request-response cycle.
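To make this concrete, here is a minimal sketch of a Server Component that fetches its own data. The `db.query` helper, its file path, and the `products` table are hypothetical; the point is that the component can be `async` and await data directly on the server, with no client-side fetch code or API route in between.

```tsx
// Hypothetical data-access helper; in a real app this might be a SQL
// client or an ORM. Because this file is a Server Component, the helper
// never ships to the browser.
import { db } from "./lib/db";

// Server Components can be async functions: data is awaited on the
// server and only the rendered UI is sent to the client.
export default async function ProductList() {
  const products = await db.query("SELECT id, name FROM products");

  return (
    <ul>
      {products.map((p: { id: string; name: string }) => (
        <li key={p.id}>{p.name}</li>
      ))}
    </ul>
  );
}
```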
Drawing the Line: Server vs. Client Components
One of the most common points of confusion for developers is knowing when to use the 'use client' directive. While Server Components are the default, they cannot handle interactivity like click listeners, state hooks, or browser-only APIs.
Server Components: These are ideal for data-heavy sections, SEO-critical content, and components that use large third-party libraries that shouldn't be sent to the user's browser.
Client Components: Use these for parts of your app that require immediate feedback, such as search bars, toggle switches, or complex animations using Framer Motion.
Shared Components: Components that don't use server-only or client-only features can actually run in both environments depending on where they are imported.
By keeping the interactivity at the leaves of your component tree, you ensure that the majority of your application remains lightweight and fast, providing a premium user experience.
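As an illustration, a small interactive leaf component might look like the sketch below; the component and file names are made up. Everything this file imports becomes part of the client bundle, which is why you want these leaves to stay small.

```tsx
// AddToCartButton.tsx
// The 'use client' directive marks this file (and everything it imports)
// as client code, so state and event handlers are allowed here.
"use client";

import { useState } from "react";

export function AddToCartButton() {
  const [added, setAdded] = useState(false);

  return (
    <button onClick={() => setAdded(true)} disabled={added}>
      {added ? "Added to cart" : "Add to cart"}
    </button>
  );
}
```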
Unlocking Massive Performance with Zero-Bundle Size
The "Zero-Bundle Size" promise of React Server Components is perhaps the most exciting feature for performance enthusiasts. In a traditional setup, if you used a heavy library like 'moment.js' or 'markdown-it', that entire code would be downloaded by the user.
With RSC, these libraries stay on the server. The client only receives the generated HTML and a tiny bit of metadata. This drastically improves the First Contentful Paint (FCP) and Time to Interactive (TTI) metrics, which are crucial for SEO ranking.
We've discussed similar optimization techniques in our guide on Web Performance Optimization. Implementing RSC is like getting a free performance upgrade without having to manually split your code into a million tiny chunks.
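As a sketch of the idea, a Server Component can import a heavy dependency like `markdown-it` without any of that parser's JavaScript reaching the browser (assuming a framework with RSC support; the component shape here is illustrative):

```tsx
// markdown-it runs only on the server; the client receives the rendered
// HTML output, not the parser's code.
import MarkdownIt from "markdown-it";

const md = new MarkdownIt();

export default function Article({ source }: { source: string }) {
  // Render trusted markdown to HTML on the server.
  const html = md.render(source);
  return <article dangerouslySetInnerHTML={{ __html: html }} />;
}
```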
Implementation Best Practices for 2026
To truly master RSC, you must follow established patterns to avoid common pitfalls like "waterfall" requests. Always aim to fetch data in parallel rather than sequentially whenever possible.
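The difference between a waterfall and parallel fetching can be sketched in plain TypeScript. The timers below stand in for real database or API calls; the names are illustrative.

```typescript
// Simulate a slow query with a timer.
const delay = (ms: number, value: string): Promise<string> =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

// Parallel: both "queries" start immediately, so the total wait is the
// slower of the two (~50ms), not the sum.
async function loadPageData() {
  const userPromise = delay(50, "user");
  const postsPromise = delay(50, "posts");
  const [user, posts] = await Promise.all([userPromise, postsPromise]);
  return { user, posts };
}

// A sequential version (await the first delay, then the second) would
// take ~100ms, because the second wait starts only after the first ends.
loadPageData().then(console.log);
```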
Key Strategies for Success
Colocate Data Fetching: Keep your database queries inside the component that needs the data. This makes the code easier to maintain and prevents the "prop-drilling" nightmare we all hate.
Use Suspense Boundaries: Wrap slow-loading server components in React Suspense to show an elegant loading state while the rest of the page remains interactive for the user.
Secure Your Actions: When using Server Actions to mutate data, always validate the user session and input data to prevent common security vulnerabilities like CSRF.
Remember, security is a shared responsibility. Just because the code runs on the server doesn't mean it's automatically shielded from malicious input or unauthorized access attempts.
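For example, a server-side mutation might validate both the session and the payload before touching the database. The types and checks below are illustrative assumptions, not a specific framework's API:

```typescript
type UpdateProfileInput = { userId: string; displayName: string };

// Returns an error message, or null when the request is acceptable.
function validateUpdateProfile(
  input: UpdateProfileInput,
  sessionUserId: string | null
): string | null {
  // Reject unauthenticated requests.
  if (!sessionUserId) return "Not signed in";
  // Prevent one user from editing another user's profile.
  if (sessionUserId !== input.userId) return "Forbidden";
  // Never trust client-supplied data, even inside a server-side action.
  const name = input.displayName.trim();
  if (name.length === 0 || name.length > 50) return "Invalid display name";
  return null;
}
```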
The Future Outlook: Is RSC Always Necessary?
While React Server Components offer incredible benefits, they aren't a silver bullet for every single project. Simple, static websites or highly interactive dashboards might still benefit from older, more straightforward architectures.
However, for content-rich sites, e-commerce platforms, and enterprise tools, RSC is becoming the industry standard. As the ecosystem matures, we expect even better integration with tools like Vite and various meta-frameworks.
Stay updated with the latest trends by following our Software Development Blog where we regularly break down complex tech concepts into simple, actionable advice for our community.
Frequently Asked Questions (FAQ)
1. What is the difference between SSR and RSC?
SSR renders an HTML snapshot for the client but still ships and hydrates the full component JavaScript; RSC lets Server Components stay on the server permanently, so their code never enters the client-side bundle.
2. Can I use hooks like useState in Server Components?
No, hooks that require client-side state or lifecycle methods can only be used in Client Components marked with the 'use client' directive.
3. Do Server Components improve SEO?
Yes, because the content is rendered on the server, search engine crawlers can easily index the fully populated HTML, improving your search visibility.
4. Are Server Components exclusive to Next.js?
While Next.js popularized them, RSC is a React feature that can be implemented by other frameworks like Remix, Waku, or custom implementations.
5. How do I fetch data in RSC?
You can simply use async/await directly within your functional component to fetch data from an API or a database without needing useEffect.
6. Can Server Components be nested inside Client Components?
No, you cannot import a Server Component into a Client Component. However, you can pass a Server Component as a 'child' to a Client Component.
7. Does RSC replace the need for Redux?
Not necessarily, but it reduces the need for global state for fetched data, as that data can now live directly on the server components.
8. What is the 'use client' directive?
It is a convention used at the top of a file to signal to the bundler that the following code and its imports belong to the client-side bundle.
9. Is it hard to migrate an existing app to RSC?
It requires a rethink of your component tree, but you can adopt it incrementally by migrating small, data-heavy sections of your app first.
10. Will RSC make my website faster?
In most cases, yes, especially on mobile devices, because the amount of JavaScript the browser needs to parse and execute is significantly lower.
I hope this deep dive into React Server Components helps you build faster and more efficient applications. The transition might feel challenging at first, but the performance rewards are well worth the effort. Happy coding, and feel free to share your thoughts in the comments!
I wrote code without tests that ran in production without defects, and I wrote buggy code with TDD (Test Driven Development). Time to look back at 35 years of coding and when tests help, and when there is something better. And especially, what these better things are.
In this part, we look at how to make manual testing easier and why it matters.
Make it easy to test manually
Our stack
Our production system runs on the Azure cloud.
Web-Apps respond to requests from clients and provide an HTTP API to other systems. Azure Functions execute asynchronous tasks. There is also a dedicated Identity Provider Service running.
Data is stored in SQL Server, blob storage, or table storage. A Redis cache is also used.
Communication between different processes is either HTTP or goes over a Service Bus.
Finally, we use Telemetry to see what’s going on in our system.
Using all of these Azure services is great for us because they provide out-of-the-box, dynamic, load-based scaling and are easy enough for a small team like ours to run.
Why simple local testing is important
The downside is that spinning up test systems on the same stack is rather tricky, especially for a quick exploratory testing session or for running a backend while working on frontend code.
That’s why we made it possible to run almost the whole system locally, ideally in a way that makes debugging easy. The easiest way to debug code is when it all runs inside a single process.
Now we can quickly spin up the system on a local machine, do exploratory testing, or hunt down a bug. This ability makes our daily developer lives much simpler.

The most important effect, however, is that we do not have to test everything with automated tests. Manual testing is so simple that it is good enough for many scenarios. This saves a lot of time that would otherwise be spent on automated tests.

Keep in mind that we write automated tests for regression purposes. But that is only relevant if the code will change, to ensure we don’t break it. With our extreme slicing approach, most code doesn’t change for a very long time after release. And if a slice changes, chances are high that the automated test has to be changed as well – and, therefore, it isn’t a true regression test anymore.
Of course, we can’t test the infrastructure, like the communication over the bus, this way, but most testing is for business-logic code, not infrastructure, in our context. Infrastructure testing occurs in a dedicated test environment or in the production system. We use feature toggles on test tenants to enable testing new infrastructure components without risk for real tenants.
Making it run locally in a single process
Of course, we can’t use a real service bus or real Azure Functions locally. So we decided to fake the storage services and the service bus.
We created a console application that hosts the backend as a Web App. Instead of using a real service bus sender, we inject a fake that short-circuits the sender to the handler, keeping it in the same process. We do the same for HTTP calls to Azure functions. The calls are not sent over HTTP, but the fake calls the handler code directly.
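The short-circuiting idea can be sketched like this (in TypeScript for brevity; the names are illustrative, and the authors' actual system is .NET-based):

```typescript
type Handler = (message: unknown) => Promise<void>;

// A fake bus sender: instead of publishing to a real service bus, it
// calls the registered handler directly, in the same process. That makes
// end-to-end flows debuggable in a single breakpoint session.
class InProcessBusSender {
  private handlers = new Map<string, Handler>();

  register(topic: string, handler: Handler): void {
    this.handlers.set(topic, handler);
  }

  async send(topic: string, message: unknown): Promise<void> {
    const handler = this.handlers.get(topic);
    if (!handler) throw new Error(`No handler registered for: ${topic}`);
    await handler(message); // handled in-process, no network involved
  }
}
```

In production, the real sender that publishes to the service bus is injected instead; the handler code itself stays identical in both configurations.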
For table storage, we use the Azurite emulator. And we can either run with a real local SQL Database or a hand-written simulator (we just keep lists of data in-memory).
Next time
In the next and last post, I’ll talk a bit about LLMs and their impact on our testing strategy and conclude this series.
We built our policy engine on Rego, the language of the Open Policy Agent (OPA), to give you control over your deployments and runbooks. Rego is powerful and flexible, but it comes with a learning curve. Until now, getting a policy working meant switching between your Octopus instance and our documentation, with no guarantee a snippet would behave as expected. That kind of friction discourages teams from adopting the guardrails that keep deployments safe.
Starter Policies are designed to change that.
How it works
A guided wizard takes the guesswork out of writing your first policy. When you add a new policy in Platform Hub, you’ll find a library of starter policies built around common compliance use cases we see across our community.
Open Platform Hub, head to Policies, and select a starter policy that matches your goal. Give it a name, and Octopus takes it from there. It generates the boilerplate code with the name, scope, and conditions already filled in.
From there, it’s yours to customize. The editor includes Rego syntax highlighting to make the logic easier to read, and inline comments explain exactly which parts to modify for your team.
Governance in Platform Hub
Starter policies are a new addition to the Policies feature in Platform Hub, available on the Enterprise tier.
Platform Hub is your central home for software delivery, insights, and governance in Octopus. If you’re on Enterprise, you can use Policies to define and enforce standards at scale across all your deployments and Runbook runs. This includes things like ensuring production deployments always have an approval workflow, or preventing deployments from using outdated package versions.
Starter policies make it easier to get started with all of this, regardless of how familiar you are with Rego.
Learning by doing
We want you to understand the code, not just copy it. Each starter policy includes inline comments explaining how the Rego logic works, highlighting the specific sections you need to modify to customize the policy for your team.
This gives you something you can run immediately, not just read. Direct links to the policy schema are built into the editor too, so if you want to go deeper, the reference material is already there.
To see this in practice, here is a real-world example: enforcing an approval workflow for all deployments and runbook runs in your Production environments.
This policy is made up of two parts.
The policy scope defines when the policy applies. In this case, it targets any Environment named Production.
package require_manual_intervention

# Default: Do not evaluate unless conditions are met
default evaluate := false

evaluate if {
    input.Environment.Name == "Production"
}
The default evaluate := false line is worth noting. Unlike a policy that applies everywhere by default, this one’s off unless the condition input.Environment.Name == "Production" is met. Scoping it to Production means the approval requirement only activates in environments with that name, leaving other environments unaffected.
The policy conditions define what is enforced. It checks that at least one manual intervention step exists in the Deployment or Runbook run, and that none of those steps are being bypassed:
package require_manual_intervention

# Default: Deny all
default result := {"allowed": false}

# Helper: True if any manual intervention step is present in the skipped steps list
manual_intervention_skipped if {
    some step in input.Steps
    step.Id in input.SkippedSteps
    step.ActionType == "Octopus.Manual"
}

# Allow: Manual intervention steps exist and none are being bypassed
result := {"allowed": true} if {
    some step in input.Steps
    step.ActionType == "Octopus.Manual"
    not manual_intervention_skipped
}

# Deny: Block Deployment if manual intervention is bypassed
result := {
    "allowed": false,
    "reason": "Manual intervention steps cannot be skipped in this Environment"
} if {
    manual_intervention_skipped
}
There are three possible outcomes this policy can produce:
Deny by default: If no manual intervention step is present, the policy is non-compliant. The default result := {"allowed": false} line ensures this is the baseline behavior.
Allowed: A manual intervention step exists and has not been skipped. The approval workflow is intact, and the Deployment or Runbook run proceeds.
Non-compliant with a reason: A manual intervention step exists but has been explicitly skipped. Octopus surfaces the reason to the team: “Manual intervention steps cannot be skipped in this Environment.”
Violation actions
How Octopus responds to a non-compliant policy (the first and third outcomes above) depends on the configured Violation Action:
Block (default): The Deployment or Runbook run is stopped from progressing until the issue is resolved.
Warning: The execution continues, but the team is shown a warning so the non-compliance is visible without being a hard stop.
The right violation action will depend on your team’s processes and how strictly you need to enforce compliance in an Environment. Either way, the outcome is recorded. Every policy evaluation is captured in your audit log, including the policy name, verdict, action taken, and the reason. When something is blocked, your team can see exactly why and what needs to change.
Get started
Starter policies are available now. Head over to the Platform Hub section of your Octopus instance and start building out your guardrails.
We’re continuing to expand policy management in Platform Hub, and your feedback shapes what we build next. If you have suggestions, share them on the Octopus Deploy Roadmap.