
CES has always carried the maker spirit, but this year it felt more baked in than ever. We didn’t see rows of 3D printers running, as in years past. Instead, we saw their output (printed parts, custom enclosures, snapped-together modules) embedded in products. Same with Raspberry Pis and dev boards. They’re not being demoed. They’re […]
The post CES 2026: Maker Highlights from the Biggest Tech Show on Earth appeared first on Make: DIY Projects and Ideas for Makers.
When we read crime novels, we want to find out who the baddie is. This means that we need a number of suspects from which to choose. But how many suspects do you need in a crime novel?
Crime fiction is usually divided into mysteries, thrillers, and horror stories.
Whatever type of crime fiction (and their many sub-genres) you choose, you need suspects for the protagonist to interview, chase down, or stop. And you need enough of them to keep the reader interested.
Must-Read: Mystery, Horror, Thriller – What’s The Difference?
Now that we’ve established there is a crime, we need to include great suspects to keep the story going for 80 000 words.
‘A suspect is a known person accused or suspected of committing a crime. … The term is also related to the terms defendant, the accused, person of interest, and prime suspect.’ (Wikipedia)
Fewer than five suspects will not challenge the reader enough. We want readers to wonder who committed the crime and to follow the detective or sleuth for the full length of the novel. More than five can be too many for one book; this can bore your reader, and they will lose interest.
People can usually remember four suspects well. So use five, but concentrate more on the four most likely culprits. We can easily remember the names, addresses, connections to the victim, and motives of four people.
(In films, it is common to have more suspects, sometimes up to 10. Glass Onion: A Knives Out Mystery has eight suspects.)
The General Red Herring Suspect: This person of interest could be any of the above and is designed to look guilty with a strong motive, a lack of alibi, and a compromising situation. This diverts attention from the real killer. They are often mistakenly arrested during the course of the story.
Tip: Use red herrings that seem to implicate or clear a suspect. A good red herring could either implicate the wrong person or give a reliable alibi for the real murderer. This makes your protagonist think they’ve caught the killer or discount them as a suspect.
These characters must be interesting. We should remember their interviews, their lies, their truths, and their false leads. At the end of the novel, there will be one suspect left who is guilty. The trip towards this destination should be intriguing.

by Amanda Patterson
© Amanda Patterson
If you liked this blogger’s writing, you may enjoy:
Top Tip: Find out more about our workbooks and online courses in our shop.
The post How Many Suspects Do You Need In A Crime Novel? appeared first on Writers Write.
I had some fun agentic coding sessions over the weekend as I wanted to test a couple of hypotheses about how the tools worked. I learned some things, and hope to publish some short blogs this week!
[blog] The Blood Dimmed Tide of Agents. More agents for coding, or business outcomes? Yay! How are we supposed to manage them all? *crickets*
[blog] Don’t fall into the anti-AI hype. Don’t listen to me; listen to great engineers who are doing better work, while staying eyes-wide-open about the possible implications. The fun of building is untouched, though. More from Simon.
[blog] Start your meetings at 5 minutes past. It’s the only system that works. My group does it too. If you want to avoid the back-to-back meeting mania, force them to start five minutes late.
[blog] Under the Hood: Universal Commerce Protocol (UCP). We announced this yesterday and it looks like it already has great industry backing. Browse and checkout via agents.
[blog] The AI platform shift and the opportunity ahead for retail. UCP was one of a few things we talked about at the National Retail Federation event.
[article] Google Cloud: A Deep Dive into GKE Sandbox for Agents. We want a safer way to run untrusted workloads. This subsystem is open source, and cleanly baked into our Kubernetes service.
[blog] AWS in 2026: The Year of Proving They Still Know How to Operate. Did our AWS friends figure some things out last year? Sure. Corey also points out that Google is their actual competition, not the revenue-obfuscating chaps in Redmond.
[blog] Joint statement from Google and Apple. Apple likes Gemini, and is betting on it for Siri and other experiences. News story here.
[blog] Cowork: Claude Code for the rest of your work. Very cool. This reminds me of some other things from us and others. Raw, but great potential.
[blog] Go 1.26 interactive tour. This release is a big one, and I enjoy Anton’s posts that let you interact with the new language features.
[blog] Increased file size limits and expanded inputs support in Gemini API. Reference cloud storage buckets and other sources when shipping context to Gemini.
[article] The biggest obstacle for engineer productivity in 2026. An AI agent can help you stay in the zone longer by keeping you from bouncing around different tools. But there’s also constant interruption as you wait for prompt results.
[blog] A2UI for Google Apps Script. This framework that lets agents generate dynamic UIs is pretty cool. Here, it’s implemented in a way that bakes into Google Workspace.
Want to get this update sent to you every day? Subscribe to my RSS feed or subscribe via email below:
New from Anthropic today is Claude Cowork, a "research preview" that they describe as "Claude Code for the rest of your work". It's currently available only to Max subscribers ($100 or $200 per month plans) as part of the updated Claude Desktop macOS application.
I've been saying for a while now that Claude Code is a "general agent" disguised as a developer tool. It can help you with any computer task that can be achieved by executing code or running terminal commands... which covers almost anything, provided you know what you're doing with it! What it really needs is a UI that doesn't involve the terminal and a name that doesn't scare away non-developers.
"Cowork" is a pretty solid choice on the name front!
The interface for Cowork is a new tab in the Claude desktop app, called Cowork. It sits next to the existing Chat and Code tabs.
It looks very similar to the desktop interface for regular Claude Code. You start with a prompt, optionally attaching a folder of files. It then starts work.
I tried it out against my perpetually growing "blog-drafts" folder with the following prompt:
Look at my drafts that were started within the last three months and then check that I didn't publish them on simonwillison.net using a search against content on that site and then suggest the ones that are most close to being ready

It started by running this command:
find /sessions/zealous-bold-ramanujan/mnt/blog-drafts \
  -type f \( -name "*.md" -o -name "*.txt" -o -name "*.html" \) \
  -mtime -90 -exec ls -la {} \;
That /sessions/zealous-bold-ramanujan/mnt/blog-drafts path instantly caught my eye. Anthropic say that Cowork can only access files you grant it access to - it looks to me like they're mounting those files into a containerized environment, which should mean we can trust Cowork not to be able to access anything outside of that sandbox.
It turns out I have 46 draft files from the past three months. Claude then went to work with its search tool, running 44 individual searches against site:simonwillison.net to figure out which of my drafts had already been published.
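The two-step workflow Claude ran (find recent drafts, then check each one against the live site) can be approximated in a few lines of Python. This is a rough sketch, not Cowork's implementation: `published_slugs` is a hypothetical stand-in for the results of Claude's per-draft `site:` searches.

```python
import time
from pathlib import Path

DRAFT_EXTS = {".md", ".txt", ".html"}  # same extensions as the find command

def recent_drafts(folder: str, days: int = 90) -> list[Path]:
    """Return draft files under `folder` modified in the last `days` days
    (the same filter as the -mtime -90 find invocation above)."""
    cutoff = time.time() - days * 86400
    return sorted(
        p for p in Path(folder).rglob("*")
        if p.is_file() and p.suffix in DRAFT_EXTS and p.stat().st_mtime >= cutoff
    )

def unpublished(folder: str, published_slugs: set[str]) -> list[Path]:
    """Flag drafts whose filename stem matches no published slug.
    `published_slugs` stands in for Claude's 44 site-search results."""
    return [p for p in recent_drafts(folder) if p.stem not in published_slugs]
```

The interesting part of what Claude did isn't this filtering, of course - it's that it decided on the two steps itself and ran the searches to build the "published" set.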
Here's the eventual reply:
Based on my analysis, here are your unpublished drafts that appear closest to being ready for publication:
🔥 Most Ready to Publish (substantial content, not yet published)
freq-argued-llms.md - "Frequently Argued Questions about LLMs" (22,602 bytes)
- This is a meaty piece documenting common arguments about LLMs with your counterpoints
- Well-structured with a TL;DR and multiple sections
- No matching published article found on your site
- Very close to ready - just needs a final review pass
claude-code-timeline.md - "Claude Code Timeline and Codex Timeline" (3,075 bytes)
- About viewing JSONL session logs from Claude Code and Codex
- You published claude-code-transcripts on Dec 25, but this appears to be a different/earlier piece about timeline viewing tools
- Shorter but seems complete
datasette-1a20-upgrade-instructions.md - Plugin Upgrade Guide (3,147 bytes)
- Technical guide for plugin authors
- You published the main 1.0a20 announcement but this companion upgrade guide appears unpublished
- Would be valuable for plugin maintainers
That's a good response! It found exactly what I needed to see, although those upgrade instructions are now published elsewhere (in the Datasette docs) and were never intended for my blog.
Just for fun, and because I really like artifacts, I asked for a follow-up:
Make me an artifact with exciting animated encouragements to get me to do it
Here's what I got:

I couldn't figure out how to close the right sidebar, so the artifact ended up cramped into a thin column, but it did work. I expect Anthropic will fix that display bug pretty quickly.
I've seen a few people ask what the difference between this and regular Claude Code is. The answer is not a lot. As far as I can tell Claude Cowork is regular Claude Code wrapped in a less intimidating default interface and with a filesystem sandbox configured for you without you needing to know what a "filesystem sandbox" is.
Update: It's more than just a filesystem sandbox - I had Claude Code reverse engineer the Claude app and it found out that Claude uses VZVirtualMachine - the Apple Virtualization Framework - and downloads and boots a custom Linux root filesystem.
I think that's a really smart product. Claude Code has an enormous amount of value that hasn't yet been unlocked for a general audience, and this seems like a pragmatic approach.
With a feature like this, my first thought always jumps straight to security. How big is the risk that someone using this might be hit by hidden malicious instructions somewhere that break their computer or steal their data?
Anthropic touch on that directly in the announcement:
You should also be aware of the risk of "prompt injections": attempts by attackers to alter Claude's plans through content it might encounter on the internet. We've built sophisticated defenses against prompt injections, but agent safety — that is, the task of securing Claude's real-world actions — is still an active area of development in the industry.
These risks aren't new with Cowork, but it might be the first time you're using a more advanced tool that moves beyond a simple conversation. We recommend taking precautions, particularly while you learn how it works. We provide more detail in our Help Center.
That help page includes the following tips:
To minimize risks:
- Avoid granting access to local files with sensitive information, like financial documents.
- When using the Claude in Chrome extension, limit access to trusted sites.
- If you choose to extend Claude’s default internet access settings, be careful to only extend internet access to sites you trust.
- Monitor Claude for suspicious actions that may indicate prompt injection.
I do not think it is fair to tell regular non-programmer users to watch out for "suspicious actions that may indicate prompt injection"!
I'm sure they have some impressive mitigations going on behind the scenes. I recently learned that the summarization applied by the WebFetch function in Claude Code and now in Cowork is partly intended as a prompt injection protection layer via this tweet from Claude Code creator Boris Cherny:
Summarization is one thing we do to reduce prompt injection risk. Are you running into specific issues with it?
But Anthropic are being honest here with their warnings: they can attempt to filter out potential attacks all they like but the one thing they can't provide is guarantees that no future attack will be found that sneaks through their defenses and steals your data (see the lethal trifecta for more on this.)
The problem with prompt injection remains that until there's a high profile incident it's really hard to get people to take it seriously. I myself have all sorts of Claude Code usage that could cause havoc if a malicious injection got in. Cowork does at least run in a filesystem sandbox by default, which is more than can be said for my claude --dangerously-skip-permissions habit!
I wrote more about this in my 2025 round-up: The year of YOLO and the Normalization of Deviance.
Security worries aside, Cowork represents something really interesting. This is a general agent that looks well positioned to bring the wildly powerful capabilities of Claude Code to a wider audience.
I would be very surprised if Gemini and OpenAI don't follow suit with their own offerings in this category.
I imagine OpenAI are already regretting burning the name "ChatGPT Agent" on their janky, experimental and mostly forgotten browser automation tool back in August!
bashtoni on Hacker News:
Simple suggestion: logo should be a cow and an orc to match how I originally read the product name.
I couldn't resist throwing that one at Nano Banana:

Tags: sandboxing, ai, prompt-injection, generative-ai, llms, anthropic, claude, ai-agents, claude-code, lethal-trifecta
Anthropic released Cowork on Monday, a new AI agent capability that extends the power of its wildly successful Claude Code tool to non-technical users — and according to company insiders, the team built the entire feature in approximately a week and a half, largely using Claude Code itself.
The launch marks a major inflection point in the race to deliver practical AI agents to mainstream users, positioning Anthropic to compete not just with OpenAI and Google in conversational AI, but with Microsoft's Copilot in the burgeoning market for AI-powered productivity tools.
"Cowork lets you complete non-technical tasks much like how developers use Claude Code," the company announced via its official Claude account on X. The feature arrives as a research preview available exclusively to Claude Max subscribers — Anthropic's power-user tier priced between $100 and $200 per month — through the macOS desktop application.
For the past year, the industry narrative has focused on large language models that can write poetry or debug code. With Cowork, Anthropic is betting that the real enterprise value lies in an AI that can open a folder, read a messy pile of receipts, and generate a structured expense report without human hand-holding.
The genesis of Cowork lies in Anthropic's recent success with the developer community. In late 2024, the company released Claude Code, a terminal-based tool that allowed software engineers to automate rote programming tasks. The tool was a hit, but Anthropic noticed a peculiar trend: users were forcing the coding tool to perform non-coding labor.
According to Boris Cherny, an engineer at Anthropic, the company observed users deploying the developer tool for an unexpectedly diverse array of tasks.
"Since we launched Claude Code, we saw people using it for all sorts of non-coding work: doing vacation research, building slide decks, cleaning up your email, cancelling subscriptions, recovering wedding photos from a hard drive, monitoring plant growth, controlling your oven," Cherny wrote on X. "These use cases are diverse and surprising — the reason is that the underlying Claude Agent is the best agent, and Opus 4.5 is the best model."
Recognizing this shadow usage, Anthropic effectively stripped the command-line complexity from their developer tool to create a consumer-friendly interface. In its blog post announcing the feature, Anthropic explained that developers "quickly began using it for almost everything else," which "prompted us to build Cowork: a simpler way for anyone — not just developers — to work with Claude in the very same way."
Unlike a standard chat interface where a user pastes text for analysis, Cowork requires a different level of trust and access. Users designate a specific folder on their local machine that Claude can access. Within that sandbox, the AI agent can read existing files, modify them, or create entirely new ones.
Anthropic offers several illustrative examples: reorganizing a cluttered downloads folder by sorting and intelligently renaming each file, generating a spreadsheet of expenses from a collection of receipt screenshots, or drafting a report from scattered notes across multiple documents.
"In Cowork, you give Claude access to a folder on your computer. Claude can then read, edit, or create files in that folder," the company explained on X. "Try it to create a spreadsheet from a pile of screenshots, or produce a first draft from scattered notes."
The architecture relies on what is known as an "agentic loop." When a user assigns a task, the AI does not merely generate a text response. Instead, it formulates a plan, executes steps in parallel, checks its own work, and asks for clarification if it hits a roadblock. Users can queue multiple tasks and let Claude process them simultaneously — a workflow Anthropic describes as feeling "much less like a back-and-forth and much more like leaving messages for a coworker."
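The loop described above can be sketched in a few lines of Python. This is purely illustrative (and sequential, where Anthropic describes parallel execution): the `plan`, `execute`, `verify`, and `ask_user` callables are hypothetical stand-ins for the model and its tools, not anything from Anthropic's actual implementation.

```python
def agentic_loop(task, plan, execute, verify, ask_user, max_steps=20):
    """Minimal plan/act/check/clarify loop.

    All four callables are hypothetical stand-ins: `plan` drafts steps
    for the task, `execute` performs one step, `verify` checks the
    result, and `ask_user` requests clarification at a roadblock.
    """
    results = []
    for step in plan(task)[:max_steps]:  # cap steps so the loop terminates
        outcome = execute(step)
        ok, issue = verify(outcome)      # the agent checks its own work
        if not ok:
            # Roadblock: ask the user for clarification instead of
            # guessing, then retry with the clarified step.
            outcome = execute(ask_user(issue))
        results.append(outcome)
    return results
```

The "leaving messages for a coworker" feel comes from running many of these loops concurrently, one per queued task, with the clarification questions surfacing asynchronously.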
The system is built on Anthropic's Claude Agent SDK, meaning it shares the same underlying architecture as Claude Code. Anthropic notes that Cowork "can take on many of the same tasks that Claude Code can handle, but in a more approachable form for non-coding tasks."
Perhaps the most remarkable detail surrounding Cowork's launch is the speed at which the tool was reportedly built — highlighting a recursive feedback loop where AI tools are being used to build better AI tools.
During a livestream hosted by Dan Shipper, Felix Rieseberg, an Anthropic employee, confirmed that the team built Cowork in approximately a week and a half.
Alex Volkov, who covers AI developments, expressed surprise at the timeline: "Holy shit Anthropic built 'Cowork' in the last... week and a half?!"
This prompted immediate speculation about how much of Cowork was itself built by Claude Code. Simon Smith, EVP of Generative AI at Klick Health, put it bluntly on X: "Claude Code wrote all of Claude Cowork. Can we all agree that we're in at least somewhat of a recursive improvement loop here?"
The implication is profound: Anthropic's AI coding agent may have substantially contributed to building its own non-technical sibling product. If true, this is one of the most visible examples yet of AI systems being used to accelerate their own development and expansion — a strategy that could widen the gap between AI labs that successfully deploy their own agents internally and those that do not.
Cowork doesn't operate in isolation. The feature integrates with Anthropic's existing ecosystem of connectors — tools that link Claude to external information sources and services such as Asana, Notion, PayPal, and other supported partners. Users who have configured these connections in the standard Claude interface can leverage them within Cowork sessions.
Additionally, Cowork can pair with Claude in Chrome, Anthropic's browser extension, to execute tasks requiring web access. This combination allows the agent to navigate websites, click buttons, fill forms, and extract information from the internet — all while operating from the desktop application.
"Cowork includes a number of novel UX and safety features that we think make the product really special," Cherny explained, highlighting "a built-in VM [virtual machine] for isolation, out of the box support for browser automation, support for all your claude.ai data connectors, asking you for clarification when it's unsure."
Anthropic has also introduced an initial set of "skills" specifically designed for Cowork that enhance Claude's ability to create documents, presentations, and other files. These build on the Skills for Claude framework the company announced in October, which provides specialized instruction sets Claude can load for particular types of tasks.
The transition from a chatbot that suggests edits to an agent that makes edits introduces significant risk. An AI that can organize files can, theoretically, delete them.
In a notable display of transparency, Anthropic devoted considerable space in its announcement to warning users about Cowork's potential dangers — an unusual approach for a product launch.
The company explicitly acknowledges that Claude "can take potentially destructive actions (such as deleting local files) if it's instructed to." Because Claude might occasionally misinterpret instructions, Anthropic urges users to provide "very clear guidance" about sensitive operations.
More concerning is the risk of prompt injection attacks — a technique where malicious actors embed hidden instructions in content Claude might encounter online, potentially causing the agent to bypass safeguards or take harmful actions.
"We've built sophisticated defenses against prompt injections," Anthropic wrote, "but agent safety — that is, the task of securing Claude's real-world actions — is still an active area of development in the industry."
The company characterized these risks as inherent to the current state of AI agent technology rather than unique to Cowork. "These risks aren't new with Cowork, but it might be the first time you're using a more advanced tool that moves beyond a simple conversation," the announcement notes.
The launch of Cowork places Anthropic in direct competition with Microsoft, which has spent years attempting to integrate its Copilot AI into the fabric of the Windows operating system with mixed adoption results.
However, Anthropic's approach differs in its isolation. By confining the agent to specific folders and requiring explicit connectors, they are attempting to strike a balance between the utility of an OS-level agent and the security of a sandboxed application.
What distinguishes Anthropic's approach is its bottom-up evolution. Rather than designing an AI assistant and retrofitting agent capabilities, Anthropic built a powerful coding agent first — Claude Code — and is now abstracting its capabilities for broader audiences. This technical lineage may give Cowork more robust agentic behavior from the start.
Claude Code has generated significant enthusiasm among developers since its initial launch as a command-line tool in late 2024. The company expanded access with a web interface in October 2025, followed by a Slack integration in December. Cowork is the next logical step: bringing the same agentic architecture to users who may never touch a terminal.
For now, Cowork remains exclusive to Claude Max subscribers using the macOS desktop application. Users on other subscription tiers — Free, Pro, Team, or Enterprise — can join a waitlist for future access.
Anthropic has signaled clear intentions to expand the feature's reach. The blog post explicitly mentions plans to add cross-device sync and bring Cowork to Windows as the company learns from the research preview.
Cherny set expectations appropriately, describing the product as "early and raw, similar to what Claude Code felt like when it first launched."
To access Cowork, Max subscribers can download or update the Claude macOS app and click on "Cowork" in the sidebar.
For technical decision-makers, the implications of Cowork extend beyond any single product launch. The bottleneck for AI adoption is shifting — no longer is model intelligence the limiting factor, but rather workflow integration and user trust.
Anthropic's goal, as the company puts it, is to make working with Claude feel less like operating a tool and more like delegating to a colleague. Whether mainstream users are ready to hand over folder access to an AI that might misinterpret their instructions remains an open question.
But the speed of Cowork's development — a major feature built in ten days, possibly by the company's own AI — previews a future where the capabilities of these systems compound faster than organizations can evaluate them.
The chatbot has learned to use a file manager. What it learns to use next is anyone's guess.