Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
150121 stories
·
33 followers

What a security audit of 22,511 AI coding skills found lurking in the code

1 Share

AI coding agents have spawned a new software supply chain, and a new study suggests the proliferation of new agents is outpacing the security infrastructure around them.

Mobb.ai has released findings from a large-scale security audit of 22,511 public skills — reusable instruction sets for AI coding agents like Claude Code, Cursor, GitHub Copilot, and Windsurf — collected across four public registries: skills.sh, ClawHub, GitHub, and Tessl.

The audit produced 140,963 security findings and identified a structural gap that no registry has fully closed. That is, skills are scanned at publish time, but once they land on a developer’s machine, they execute with that developer’s full system permissions and almost no runtime verification, Mobb says.

Eitan Worcel, CEO of Mobb, tells The New Stack that “AI coding agents are becoming the default way developers write software.”

“When a developer installs a skill or plugin for their agent, they’re giving that skill the same access they have — their source code, their credentials, and their production systems,” Worcel says.

Worcel said the research was motivated by the absence of any systematic review of the ecosystem. “We noticed no one had systematically reviewed the ecosystem, so we did.”

A new kind of supply chain risk

Skills are typically markdown files — most commonly formatted as SKILL.md — that contain natural language instructions an AI agent follows, along with shell commands, MCP (Model Context Protocol) server configurations, IDE settings, and references to companion scripts. They are distributed through public registries and installed with a single command.

The supply chain Mobb maps runs from developer to registry to skill file to agent to system access. If any link in that chain is compromised, the attacker gains whatever access the developer has — source code, API keys, SSH credentials, cloud provider tokens, and the ability to push code into CI/CD pipelines, Worcel says.

Most skills scanned (66%) showed no findings under the patterns Mobb targeted. But among the 34% who did flag, 27% of all scanned skills contain command execution patterns, Worcel explains. One in six contains a curl | sh remote code execution pattern directly in skill instruction files, the classic attack of downloading a script from the internet and piping it straight into a shell interpreter. Nearly 15% reference consent bypass mechanisms that disable or circumvent the safety confirmations built into agent tools.

“The good news is that outright malware is rare; the ecosystem is largely healthy,” Worcel says, crediting in part the work of Paul McCarty and the OpenSourceMalware team. “But what concerns us is the attack surface. More than a quarter of skills contain instructions for agents to execute shell commands. One in six includes patterns that download and run remote scripts.”

The gap in protection

Each of the four registries has invested in security, though with varying approaches. Skills.sh, operated by Vercel, runs three independent scanners — Gen Agent Trust Hub, Socket, and Snyk — visible on a public audit page. ClawHub uses an AI-based classification system that labels skills as CLEAN, SUSPICIOUS, or MALICIOUS, though suspicious skills remain installable; the classification is informational, not enforced. Tessl uses Snyk and, notably, is the only registry that blocks installations with high or critical findings at the client side.

GitHub, which hosts the source repositories for most skills and accounts for 7,379 of the skills Mobb collected, provides standard repository security features like Dependabot and secret scanning, but those tools do not analyze SKILL.md instructions, MCP configurations, or agent hook definitions.

“The registries are doing real work — multiple security scanners, AI-based classification, risk scoring,” Worcel says. “But that protection lives on the registry’s servers. Once a skill reaches the developer’s machine, there are no guardrails. No signature verification, no runtime scanning, no way to know if what you installed is the same version that was audited.”

Worcel draws a parallel to earlier issues in the package ecosystem: “This is the same gap that hit the npm and PyPI ecosystems years ago, and the industry learned those lessons the hard way. We’re publishing this research so the AI agent ecosystem can learn them proactively.”

The gap Mobb identifies is consistent across all four registries: scanning happens at the registry boundary, at publish time. Once a developer installs a skill, no scan runs on the machine until the agent reads the files. There is no cryptographic signing to verify that the installed version matches the audited version. A skill that passes review today can be updated tomorrow with malicious content, and that window is exploitable.

Hooks — commands that execute automatically when specific agent events occur, such as a file edit or a new session — pose a particular persistence risk. A malicious skill can install a hook that continues operating after the skill itself is removed, and no registry currently audits hook configurations specifically.

What the Audit Found

Beyond statistical patterns, Mobb documented several concrete cases. A key one is a confirmed API traffic hijacking: a skill published on GitHub under the repository flyingtimes/podcast-using-skill contains a .claude/settings.json file that overrides the Anthropic API endpoint, redirects all traffic to Zhipu AI’s BigModel platform in China, swaps in a hardcoded third-party API token, and changes the model to glm-4.6. A developer who cloned that repository and opened it in Claude Code would have their entire conversation — all code context, prompts, and responses — silently routed through a third-party server with no visible indication that anything had changed.

“We found API traffic silently redirected to third-party servers, hardcoded credentials in public repositories, and invisible characters encoding hidden data in files that appear completely normal to the human eye,” Worcel says. “These aren’t theoretical risks — we documented each one with the exact file and line of code.”

Researchers also found 159 skills with hidden HTML comment payloads. HTML comments are invisible when markdown is rendered in a browser or IDE but are fully visible to an AI agent reading the raw file.

One example — found in a repository named claude-world/claude-skill-antivirus In a file labeled as a malicious skill example, it contained a classic prompt injection: a comment instructing the agent to ignore previous instructions and execute what followed. Another, found in a separate repository, contained a comment reading <!– security-allowlist: curl-pipe-bash –> — an attempt to suppress scanner warnings about piping curl to bash.

One hundred twenty-seven skills contained invisible Unicode zero-width characters, which can encode hidden data readable by any program processing raw text but invisible to human reviewers. One case, in a repository called copyleftdev/sk1llz, placed a long sequence of alternating zero-width spaces and zero-width joiners immediately after a heading — a pattern consistent with binary steganographic encoding.

On the MCP front, 37 skills auto-approve MCP server connections without user consent, and researchers found live API credentials committed directly into public repository MCP configuration files. One case involved a personal Apify actor endpoint — meaning a developer’s API token would be transmitted to a third-party individual’s infrastructure, not the vendor’s own servers.

The plan of attack

Mobb outlines the kill chain an attacker would follow: Publish a plausible-looking skill, embed malicious instructions in files that developers are unlikely to review manually, let registries distribute it, and wait for an agent to execute it.

What makes this attack surface unusual is that the instructions are in plain English — indistinguishable from legitimate skill content by binary signature scanning — and the agent is the executor. The attacker does not write exploit code. They write instructions, and the AI agent executes them using the developer’s credentials.

“The developer is in the loop, but may not be watching,” the Mobb report notes. “AI agents are designed to work autonomously. Developers increasingly trust agent actions without reviewing every step.”

Recommendations

Mobb directs its recommendations to three audiences.

  1. For registry operators, the report calls for client-side enforcement at install time, cryptographic signing, continuous re-scanning on update, and specific analysis of hook configurations. For developers, it recommends manually reviewing SKILL.md, .claude/settings.json, and .mcp.json before installing any skill, and treating MCP auto-approval settings as a red flag.
  2. For AI agent tool vendors — the makers of Claude Code, Cursor, Windsurf, and similar tools — the report argues for sandboxing skill execution so skills do not automatically inherit full developer permissions, requiring explicit consent before environment variables or MCP connections are applied, and surfacing hook visibility so developers can see what is running in the background.
  3. At the industry level, Mobb calls for the equivalent of npm audit or Docker Content Trust for the skill ecosystem, which includes standardized security metadata, shared vulnerability databases across registries, and trust chains with revocation mechanisms.

Context

The timing of the report follows a real-world incident at ClawHub, one of the four registries audited. In February 2026, 341 malicious skills were discovered on the platform in what researchers call the “ClawHavoc” incident. Skills.sh, the largest registry, reports more than 89,000 total skill installations to date.

Mobb concludes that the ecosystem is largely healthy, as outright malware is rare, and the findings skew toward risky patterns rather than confirmed attacks. But the infrastructure for abuse is in place, Worcel says.

The post What a security audit of 22,511 AI coding skills found lurking in the code appeared first on The New Stack.

Read the whole story
alvinashcraft
8 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Will AI force code to evolve or make it extinct?

1 Share
Street art stencil by The Pink Bear Rebel depicting a caveman in a business suit jacket holding a spear and briefcase, painted in black on a white tiled wall.

What would an AI-first language look like? Last year, a developer in Spain warned that our human-friendly syntax consumed an “excessive” number of tokens — thereby increasing costs — and prevented complex programs from fitting within existing AI context windows. “I asked Claude to invent a programming language where the sole focus is for LLM efficiency,” the developer explained on Reddit, “without any concern for how it would serve human developers.”

And their attempt at an “AI-first native language wasn’t the last. Just last week, a developer announced plans for a new language that addressed the needs of autonomous AI agents with “deterministic” syntax that clarified developer intent and a small language surface to reduce edge cases.

Andrea Griffiths, a senior developer advocate at GitHub and a writer for the newsletter Main Branch, has seen experiments with “AI-first” languages. Still, nothing with meaningful adoption yet, Griffiths tells The New Stack.

“And I think that says a lot,” Griffiths says. “The gravitational pull of existing ecosystems is enormous — libraries, tooling, community knowledge, production infrastructure. A new language doesn’t just need to be better for AI. It needs to justify abandoning everything developers already have, and that shift is not gonna happen overnight.”

“A new language doesn’t just need to be better for AI. It needs to justify abandoning everything developers already have, and that shift is not gonna happen overnight.”

Will we one day develop an AI-optimized language at the expense of human readability? Or will AI coding agents make it easier to use our existing languages — especially typed languages with built-in safety advantages? Could we even imagine a world with AI-first languages that abstract away everything, generating compiler-ready modules without source code?

Developers, language designers, and developer advocates are now beginning to ask those questions…

Chris Lattner’s Mojo vs. Rust

How should programming languages look in the age of AI? There’s more than one answer. During a recent episode of The Hanselminutes Podcast hosted by Scott Hanselman, Microsoft’s VP of developer community, Hanselman broached this topic with Chris Lattner, the co-founder and CEO of the AI tools company Modular AI.

Lattner’s career includes creating the Swift programming language and the LLVM compiler toolchain — but he’s focusing on how hardware is changing, arguing that with today’s multi-core and AI-optimized chips.

“We have all these crazy GPUs and all this compute out there that nobody knows how to program!” 

“We have all these crazy GPUs and all this compute out there that nobody knows how to program!”

So while Lattner’s company builds AI tools for developers, it’s also working on its new programming language Mojo, which Lattner suggests is “LLVM but for AI chips, basically… a way to program it that scales across all the silicon.”

Hanselman’s podcast dubbed it “a programming language for an AI world.”

But others still see AI nudging coders toward existing programming languages with built-in memory safety — including Peter Jiang, the founding engineer of Datacurve (which sells high-quality/complexity data). Writing earlier this month in Forbes, Jiang describes Rust as ” the unlikely engine of the vibe coding era… When AI writes the code, Rust’s strictness stops being a hurdle and becomes free quality assurance,” with Rust’s compiler acting as “a guardrail that forces the LLM to prove its logic is sound.”

It’s an attractive advantage, notes GitHub’s senior director for developer advocacy, Cassidy Williams. In January, Williams cited a 2025 academic study that found 94% of LLM-generated compilation errors were type-check failures.”

Typed languages for the win?

There’s data suggesting developers are acting on those advantages — and not just by moving to Rust. Williams added that TypeScript “is now the most used language on GitHub, overtaking both Python and JavaScript as of August 2025,” crediting as one factor “a boost from AI-assisted development… TypeScript grew by over 1 million contributors in 2025 (+66% YoY, Aug ’25 vs. Aug ’24) with an estimated 2.6 million developers total.” And other typed languages prove the trend, Williams believes, sharing more examples from GitHub’s data:

  • “Luau, Roblox’s scripting language, saw >194% YoY growth as a gradually typed language.”
  • “Typst, often compared to LaTeX, but with functional design and strong typing, saw >108% YoY growth.”
  • “Even older languages like Java, C++, and C# saw more growth than ever in this year’s report.”

So while AI may be impacting programming languages, Griffiths says, it’s not necessarily through a move toward new AI-optimized languages. “What actually happens is more subtle: languages that are already structured, strongly typed, and explicit become more attractive because AI tools work better with them. TypeScript over JavaScript. Rust over C. Python’s type hints are becoming standard practice. The change isn’t a new language. It’s a shift in which existing languages win.”

“The change isn’t a new language. It’s a shift in which existing languages win.”

Griffiths spelled it out last month on GitHub’s blog, writing that strongly typed languages like Rust impose “clearer constraints” on AI, resulting in “more reliable, contextually correct code.” And at the same time, “the penalty for choosing powerful but complex languages disappears” with AI handling the syntax. In fact, GitHub’s most recent figures, released in October, showed that even shell scripting in AI-generated projects leaped by 206%, Griffiths pointed out. “AI absorbed the friction that made shell scripting painful.

“So now we use the right tool for the job without the usual cost.”

Existing languages — or no language at all?

One person watching all this closely is Stephen Cass, the special projects editor at IEEE Spectrum. Since 2019, he’s ranked programming languages by popularity for IEEE Spectrum (a tradition they began in 2013). But will the popularity of today’s languages now remain frozen in time, Cass asked in September?

In a world with AI-powered coding tools, could emerging languages always face a handicap, since LLMs work best when they’re trained on large codebases with years of historical examples? Cass wondered if AI could also stymie the development of new languages in other ways — since “if an AI is soothing our irritations with today’s languages, will any new ones ever reach the kind of critical mass needed to make an impact?”

But Cass is also among those people intrigued by the possibility of a new language created specifically for AI agents. Languages basically create human-friendly abstractions (and safety precautions), Cass’s essay argued — but “how much abstraction and anti-foot-shooting structure will a sufficiently advanced coding AI really need?” Cass dared the ultimate question about our future: “Could we get our AIs to go straight from prompt to an intermediate language that could be fed into the interpreter or compiler of our choice? Do we need high-level languages at all in that future?”

Cass acknowledged the obvious downsides. (“True, this would turn programs into inscrutable black boxes, but they could still be divided into modular testable units for sanity and quality checks.”) But “instead of trying to read or maintain source code, programmers would just tweak their prompts and generate software afresh.”

This leads to some mind-boggling hypotheticals, like “What’s the role of the programmer in a future without source code?” Cass asked the question and announced “an emergency interactive session” in October to discuss whether AI is signaling the end of distinct programming languages as we know them.

What if…?

In that webinar, Cass said he believes programmers of the future would still suggest interfaces, select algorithms, and make other architecture design choices. And obviously the resulting code would need to pass tests, Cass said, and “has to be able to explain what it’s doing.”

But what kind of abstractions could go away? And then “What happens when we really let AIs off the hook on this?” Cass asked — when we “stop bothering” to have them code in high-level languages. (Since, after all, high-level languages “are a tool for human beings.”)

“What if we let the machines go directly into creating intermediate code?” (Cass thinks the machine-language level would be too far down the stack, “because you do want a compile layer too for different architecture…”)

These ideas drew skepticism from the webinar’s co-host, Dina Genkina (the site’s associate editor focused on computing/hardware). Genkina agreed that today’s programming languages are offering “guard rails for the human to not do dumb stuff.” But even in a world trying new languages with AI-friendly micro-optimizations, “I feel like it’s an open question whether the AI will need more guard rails or [fewer] guard rails… I’m not saying it’s not possible, but I don’t quite see a path to there… from where we are right now.”

IEEE Spectrum webinar - IEEE Spectrum Webinar - Will AI End Distinct Programming Languages

So, regardless of any moves toward a more AI-friendly language, Genkina concluded that, in the end, our new machine-powered pair programmer will still have to undergo code reviews. “There’s definitely a camp of people that believe that you need a human in the loop… indefinitely… I think if we don’t understand what it’s doing, that contributes to the fear… Interpretability of AI is going to become more and more important, especially in things like this.”

With a laugh, Cass noted the possibility that we could even introduce “brand new failings,” like what he sees as “the driverless car dilemma… It’s like, ‘Well, you know, maybe it kills different people, but if it kills [fewer] people overall… In this future, the question might become ‘What if you make fewer mistakes, but they’re different mistakes?'”

Cass said he’s keeping an eye out for research papers on designing languages for AI, although he agreed that it’s not a “tomorrow” thing — since, after all, we’re still digesting “vibe coding” right now. But “I can see this becoming an area of active research.”

Although he also agrees that a sandboxed environment would be good for AI…

Code-free programming remains “speculative”

Reached for comment this week by The New Stack, co-host Dina Genkina remained skeptical, saying, “To my knowledge, code-free programming is still speculative.”

And back at MainBranch.dev, senior developer Andrea Griffiths remains unconvinced as well. “Will we see languages optimized for AI readers, not human maintainers? I’d push back on that. Code still needs to be debugged, audited, and understood by humans, especially when things go wrong in production. No engineering team is going to deploy code they can’t inspect.”

But a more likely outcome, Griffiths suggests, is a world where AI “changes what humans need to read.”

What we should be imagining, Griffiths says, is a future where “You spend less time reading boilerplate — and more time reviewing architecture decisions, edge cases, and security boundaries!”

The post Will AI force code to evolve or make it extinct? appeared first on The New Stack.

Read the whole story
alvinashcraft
8 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Android Weekly Issue #719

1 Share
Articles & Tutorials
Sponsored
We reach out to more than 80k Android developers around the world, every week, through our email newsletter and social media channels. Advertise your Android development related service or product!
alt
Jaewoong Eum traces the full pipeline from @Preview annotation to rendered pixels inside Android Studio.
James Cullimore's guide to improving your startup speed.
Nav Singh takes a quick look at name-based destructuring declarations.
Stefan examines how to use CompositionLocal.
Julien Salvi shows how to automate Android Vitals monitoring using the Play Developer Reporting API and a Gradle task.
Tim Malseed argues for learning coroutine testing fundamentals before reaching for Turbine, to avoid flaky, over-specified tests.
Saurabh Arora shows how to keep AI coding guidelines consistent across Android Studio, CLI tools, and CI bots using a git pre-commit hook.
Libraries & Code
A Kotlin Multiplatform flight recorder for Compose apps that captures gestures, screen views, breadcrumbs, and crashes for replay and reporting.
alt
A Compose Multiplatform navigation library built around pure reducers and immutable state for predictable, testable navigation on Android, iOS, and Desktop.
An MCP server that gives AI tools on-demand access to AOSP and AndroidX source code for accurate framework understanding.
A collection of AI agent skills for Kotlin projects, installable via the Agent Skills standard.
News
Google launches desktop experience design guidance and the Android Design Gallery for adaptive app development.
alt
JetBrains releases Kotlin 2.3.20 with Gradle 9.3 compatibility, name-based destructuring, and new C/Objective-C interop.
The Kotlin Foundation announces participation in GSoC 2026, with four projects open for contributor applications until March 31.
Google announces an advanced sideloading flow and free limited-distribution accounts as part of upcoming Android developer verification requirements.
JetBrains highlights selected talks from the full KotlinConf 2026 schedule, spanning language design, multiplatform, server-side, and AI.
Videos & Podcasts
Philipp Lackner covers six practical techniques for speeding up Gradle builds in Android projects.
Philipp Lackner tests the Figma MCP server to vibe-code Android UI designs using Claude Code.
Read the whole story
alvinashcraft
9 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

The art of influence: The single most important skill that AI can’t replace | Jessica Fain (Webflow, ex-Slack)

1 Share

Jessica Fain is a product leader at Webflow and former Chief of Staff to the CPO at Slack, where she worked alongside April Underwood and many past podcast guests including Stewart Butterfield, Annie Pearl, Tamar Yehoshua, and Noah Weiss. She’s spent her career learning how executives actually make decisions—and why most people completely misunderstand the process.

We discuss:

1. Why great ideas often don’t get buy-in

2. Why executive calendars are “like strobe lights” and why the first 30 seconds of a meeting matter so much

3. Why executives are usually optimizing for a global maximum while you are often optimizing locally

4. The best question Jessica uses when a leader says something that seems wrong: “That’s so interesting. What led you to believe that?”

5. Why you should go in to learn, not to convince

6. Why showing only one option is a mistake

7. Why AI will make influence more important, not less

Brought to you by:

Omni—AI analytics your customers can trust

Lovable—Build apps by simply chatting with AI

Vanta—Automate compliance, manage risk, and accelerate trust with AI

Episode transcript: https://www.lennysnewsletter.com/p/the-art-of-influence-jessica-fain

Archive of all Lenny's Podcast transcripts: https://www.dropbox.com/scl/fo/yxi4s2w998p1gvtpu4193/AMdNPR8AOw0lMklwtnC0TrQ?rlkey=j06x0nipoti519e0xgm23zsn9&st=ahz0fj11&dl=0

Where to find Jessica Fain:

• LinkedIn: https://www.linkedin.com/in/jessica-fain-79b8989

Where to find Lenny:

• Newsletter: https://www.lennysnewsletter.com

• X: https://twitter.com/lennysan

• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:

(00:00) Introduction to Jessica Fain

(03:53) Why influence is the highest-leverage skill in product

(04:47) Why great ideas fail without executive buy-in

(06:00) How executives actually think

(09:05) The fundamentals: context-setting, communication, and empathy

(10:22) Stop pitching for approval—start co-creating with execs

(12:59) Influence vs. politics (and why people get it wrong)

(15:44) How to disagree with execs without losing trust

(17:20) Going in to learn, not to convince

(19:08) How to present ideas

(26:05) The Minto-style approach and tailoring your communication to each exec

(28:22) Why Jessica doesn’t like the question “What’s top of mind for you?”

(30:24) Understanding incentives to unlock buy-in

(32:10) Aligning product work with company strategy

(35:10) Quick summary

(37:31) Disarming the executive

(40:49) Speed matters: why fast follow-up builds momentum

(43:32) How to run high-impact meetings (the 60-second rule)

(47:00) Why influencing execs is part of your job

(49:15) Asking for more resources and thinking in 10x bets

(52:23) What to do when your idea gets rejected

(54:18) Clarifying information

(56:50) How to build trust and make ideas stick

(58:30) Shrinking big ideas into experiments

(01:02:27) Common mistakes people make when influencing leaders

(01:06:00) How to grow into your next role

(01:09:32) How AI is changing influence and product work

(01:17:55) Using AI to simulate exec feedback and improve pitches

(01:21:15) Protecting our brains from overwhelm

(01:22:44) Lightning round and final thoughts

Referenced:

• Box: https://www.box.com

• Slack: https://slack.com

• Brightwheel: https://mybrightwheel.com

• Webflow: https://webflow.com

• April Underwood on LinkedIn: https://www.linkedin.com/in/aprilunderwood

• Lessons in product leadership and AI strategy from Glean, Google, Amazon, and Slack | Tamar Yehoshua (Product at Glean, ex-Google and Slack): https://www.lennysnewsletter.com/p/you-dont-need-to-be-a-well-run-company-to-win-tamar-yehoshua

• Atlassian: https://www.atlassian.com

• Behind the scenes of Calendly’s rapid growth | Annie Pearl (CPO): https://www.lennysnewsletter.com/p/behind-the-scenes-of-calendlys-rapid

• Calendly: https://calendly.com

• Glassdoor: https://www.glassdoor.co.in/index.htm

• The 10 traits of great PMs, how AI will impact your product, and Slack’s product development process | Noah Weiss (Slack, Foursquare, Google): https://www.lennysnewsletter.com/p/the-10-traits-of-great-pms-how-ai

• Ethan Eismann on X: https://x.com/eeismann

• Slack founder: Mental models for building products people love ft. Stewart Butterfield: https://www.lennysnewsletter.com/p/slack-founder-stewart-butterfield

• Ilan Frank on LinkedIn: https://www.linkedin.com/in/ilanfrank

• Checkr: https://checkr.com

• Ali Rayl on LinkedIn: https://www.linkedin.com/in/alirayl

• Rachel Wolan on LinkedIn: https://www.linkedin.com/in/rachelwolan

• How Webflow’s CPO built an AI chief of staff to manage her calendar, prep for meetings, and drive AI adoption | Rachel Wolan: https://www.lennysnewsletter.com/p/how-webflows-cpo-built-an-ai-chief

• Barbara Minto’s website: https://www.barbaraminto.com

• How Slack invests in big little details through Customer Love Sprints: https://slack.design/articles/sweating-the-small-stuff

• Building product at Stripe: craft, metrics, and customer obsession | Jeff Weinstein (Product lead): https://www.lennysnewsletter.com/p/building-product-at-stripe-jeff-weinstein

• The Enneagram Institute: https://www.enneagraminstitute.com/type-descriptions

The Pitt on Prime Video: https://www.amazon.com/The-Pitt-Season-1/dp/B0DNRR8QWD

• Towel warmer: https://www.amazon.com/FLYHIT-Large-Towel-Warmer-Bathroom/dp/B0CB5K34L2

• Casa: https://getcasa.com

• Jimi Hendrix: https://en.wikipedia.org/wiki/Jimi_Hendrix

• Greek Theatre: https://en.wikipedia.org/wiki/Greek_Theatre_(Los_Angeles)

Recommended books:

Pachinko: https://www.amazon.com/Pachinko-National-Book-Award-Finalist/dp/1455563927

Homegoing: https://www.amazon.com/Homegoing-Yaa-Gyasi/dp/1101971061

A History of Burning: https://www.amazon.com/History-Burning-Janika-Oza/dp/1538724243

The Overstory: https://www.amazon.com/Overstory-Novel-Richard-Powers/dp/039335668X

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

Lenny may be an investor in the companies discussed.



To hear more, visit www.lennysnewsletter.com



Download audio: https://api.substack.com/feed/podcast/189787031/e33636fb65141b736d62d710df3bbac6.mp3
Read the whole story
alvinashcraft
9 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

The Dullest $3,000 I Ever Spent on Technology

1 Share

Disclaimer: This blog post includes referral links to products mentioned. I may get a small commission if you buy the products via these links. This doesn’t influence what I write about these products, though.

By the year 2020, I was down to my preferred number of computers — one. Then the pandemic hit right when I was thinking about upgrading, and a “genius” thought crossed my mind — who needs laptops when you can’t go anywhere? So, I upgraded to a desktop.

Then it was over, and I had to buy a laptop and still kept the desktop. Two computers are more than one, but still manageable.

As a developer of a browser-centric JavaScript library working on Windows, I periodically get reports of issues in Safari, and while BrowserStack is amazing for solving a quick and simple one, the weirder problems are much easier to debug on a physical machine. So, after years of procrastination, I bought a Mac Mini just for that. That’s three computers to manage.

I had my Windows desktop at home, a Mac Mini at a co-working space, and a laptop for travel. Too many machines, but it makes some sense… sort of.

Then I got a job, a work MacBook, moved out of the co-working space, and ended up with 3 computers at home + a work one — something got to change.

When did Apple become the safe, boring choice?

I’m a Windows person. I don’t want to start any religious debates about the OS, but even the hardcore macOS fan would admit that there’s more hardware variety on the Windows side.

My main complaint against Macs is not about macOS. As someone who works at a desk and needs mobility in less than 10% of my working time, I find the traditional laptop design suboptimal (to say the least). The “always there” keyboard part is not in use 90% of the time but collects dust, takes valuable desk space, places the display further away than you may prefer, etc.

The best computer design for my use case, in my opinion, is the one popularized by Microsoft Surface Pro and perfected in Asus ROG Flow Z13 (Amazon: UK, Germany, France, SpainItaly).

And having my conviction was relatively easy, as it was always a more pragmatic choice. Until Apple Silicon that is…

MacBook Pro is the cheapest choice, huh?

So I decided to pull the plug and “downsize” to one (or maximum two) computers. My specific caveat would be that if I wanted to get a Windows machine and still have “native” access to macOS, I would have to keep the Mini.

I assembled a shortlist of computers I may want to get to cover all my bases (programming and music production would be the most demanding). It included everything from a new version of ROG Flow Z13, to a new Dell XPS, to the same MacBook Pro I have at work, to the most exciting of them all — Asus Zenbook DUO (Amazon: UK, Germany, SpainItaly):

Then I fed my requirements and the list to Deep Research of multiple AI vendors, and all came back with the same [sad] conclusion — there is no pragmatic explanation for getting anything but a MacBook Pro for my situation. And even if it wasn’t for the Mac Mini that I can sell if I get an MBP, it was still in MacBook’s favor. And selling the Mac Mini made it such a no-brainer that I wouldn’t be able to justify it to even myself.

So, I ended up getting a MacBook Pro with the same specs as my work machine, just in silver (or whatever the official name of the color is). The only exciting thing about it is that I resolved my “three computers, one desk” problem. Other than that, this purchase was as much fun as replacing a dishwasher.

Solving MacBook deficiencies and other extras

As mentioned above, “classic laptop” is a subpar form-factor for the way I use computers — at a desk with external monitor, keyboard, and mouse. I partially solved it (should I say worked around it?) by buying a desk-mounted laptop shelf. I got this one (Amazon: UK, Germany, France, SpainItaly):

It’s nothing special and requires way too much assembly for such a simple thing, but it does the job well.

Everyone knows that Apple is notorious for its “predatory” SSD upgrade prices — it starts reasonable, but every next step up comes at a ridiculous premium. I “splurged” on the 1TB MBP (would be painful to go lower with my music production hobby), but instead of being a sucker for more, I just pulled a second 1TB SSD from my desktop and bought this enclosure (Amazon: UK, Germany, Spain, Italy) — it works well for a secondary drive and doesn’t break the bank.

And, finally, something that I didn’t need to do but did anyway — got myself a Thunderbolt 5 docking station from Kensington (Amazon: UK, Germany, SpainItaly).

In the end, three thousand euros later, I have a computer that is less exciting than what I had before, but it is an adult, pragmatic choice. Who could have thought that choosing an Apple product would be the dullest thing you could do?


The Dullest $3,000 I Ever Spent on Technology was originally published in </dev> diaries on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read the whole story
alvinashcraft
9 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

General Performance: Exploring Thread ID Retrieval Methods

1 Share
This article explains two methods to obtain the current thread ID in .NET and shows which method is more performant.



Read the whole story
alvinashcraft
9 minutes ago
reply
Pennsylvania, USA
Share this story
Delete
Next Page of Stories