Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

The Macintosh changed computers forever

A photo of a 1984 Macintosh on a gray background.

Apple's most legendary computer has two legacies: there's the computer itself, and there's the commercial. That commercial. Only a couple of days before Steve Jobs debuted the computer that would both help cement his legacy and contribute to his unceremonious exile from Apple, the company dropped a Super Bowl ad that is still one of the most iconic commercials of all time. It raised both the hype and the stakes for the Macintosh in a big way.

The Macintosh wasn't a great computer, at least at first. It didn't have enough memory; there wasn't enough software that supported it; it wasn't customizable in the ways PC users needed at the time. I …

Read the full story at The Verge.

Read the whole story
alvinashcraft
14 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

The Era of Vertical AI Models

From: AIDailyBrief
Duration: 14:57
Views: 1,119

Analysis contrasts Sutton's Bitter Lesson with a rising era of vertical AI models trained on last-mile interaction data, exemplified by Intercom's Apex and Cursor's Composer Two. Post-training on proprietary interaction datasets and reinforcement learning on curated quality data can elevate open-weight base models to meet or exceed frontier-model performance for specific tasks. Resulting effects include model speciation, a shift to in-house fine-tuning on open models, erosion of API-based moats, and a renewed premium on proprietary evaluation data.

The AI Daily Brief helps you understand the most important news and discussions in AI.
Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Get it ad free at http://patreon.com/aidailybrief
Learn more about the show https://aidailybrief.ai/


Random.Code() - Managing Properties From Records in C#, Part 4

From: Jason Bock
Duration: 1:43:04
Views: 20

Now that I've changed the attributes and the approach, I'm hoping in this stream I can finally get some code generated and tested.

https://github.com/JasonBock/Transpire/issues/44

#dotnet #csharp


From skeptic to true believer: How OpenClaw changed my life | Claire Vo


Claire Vo is the host of our sister podcast, “How I AI,” a former product executive and engineer, and founder of an AI startup called ChatPRD. Claire now runs her business, podcast, and family life with the help of nine OpenClaw agents running on multiple Mac Minis and old laptops. In this episode, Claire shares her journey from OpenClaw skeptic (it deleted her family calendar the first time she tried it) to true believer, and gives a masterclass in using AI agents in real life.

We discuss:

1. The exact step-by-step process to install and set up OpenClaw (it’s easier than you think)

2. How to avoid the biggest OpenClaw mistakes (don’t install it on your main computer)

3. Actual use cases that have changed Claire’s life (e.g. family scheduling, inbound sales, podcast prep, and course management)

4. Why multiple specialized agents beat one general-purpose agent

5. The security risks everyone worries about—and how to handle them

6. Browser limitations, memory issues, and practical workarounds

Brought to you by:

Mercury—Radically different banking

Omni—AI analytics your customers can trust

Orkes—The enterprise platform for reliable applications and agentic workflows

Where to find Claire Vo:

• X: https://x.com/clairevo

• LinkedIn: https://www.linkedin.com/in/clairevo

• Podcast: https://www.youtube.com/@howiaipodcast

• Website: https://clairevo.com

• ChatPRD: https://www.chatprd.ai

Where to find Lenny:

• Newsletter: https://www.lennysnewsletter.com

• X: https://twitter.com/lennysan

• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:

(00:00) Introduction to Claire and OpenClaw

(08:00) The journey from OpenClaw skeptic to believer

(11:50) What OpenClaw actually does that’s useful

(13:35) OpenClaw vs. other AI agent products

(17:05) How to actually install OpenClaw: the basics

(18:49) Setting up like you’d onboard a real assistant

(20:41) Security and privacy considerations

(24:53) Live demo: Installing OpenClaw step-by-step

(28:47) Setting up Q: an agent for her kids’ homework

(34:08) Understanding “soul,” “identity,” and “memory”

(40:40) The unlock: multiple agents, not just one

(45:02) How to run multiple agents on one machine

(47:28) Jesse Genet’s homeschooling use case

(49:58) Real examples and use cases

(56:41) Finn, Claire’s family agent

(1:00:05) Sage the Course Bot

(1:02:15) Common issues and workarounds

(1:08:08) The Exa/Perplexity web search workaround

(1:09:29) Memory management and context overload

(1:12:09) Pro tip: Screen sharing to manage Mac Minis

(1:14:18) Using Google Workspace for agent collaboration

(1:16:24) What makes OpenClaw special

(1:20:15) The “yappers API” and ramble mode

(1:22:04) Using Claude Code as your OpenClaw brain surgeon

(1:25:16) Bringing management skills to AI agents

(1:29:32) Why this matters

(1:32:37) Lightning round and final thoughts

Referenced:

• OpenClaw: https://openclaw.ai

• Claude Cowork: https://claude.com/product/cowork

• Fry’s Electronics: https://en.wikipedia.org/wiki/Fry%27s_Electronics

• Peter Steinberger on LinkedIn: https://www.linkedin.com/in/steipete

• Telegram: https://telegram.org

• WhatsApp: https://www.whatsapp.com

• Fin: https://fin.ai

• Why OpenClaw feels alive even though it’s not (this AI has a heartbeat but not a brain): https://x.com/clairevo/status/2017741569521271175

• 5 OpenClaw agents run my home, finances, and code | Jesse Genet: https://www.youtube.com/watch?v=96Vl8s3EQhk

• Executive Playbook for AI in Engineering, Product, and Design: https://maven.com/clairevo/ai-native-epd-org

• Zach Davis on LinkedIn: https://www.linkedin.com/in/zach-m-davis/

• ChatGPT Atlas: https://chatgpt.com/atlas

• Perplexity Comet: https://www.perplexity.ai/comet

• Browser (OpenClaw-managed): https://docs.openclaw.ai/tools/browser

• Buffer: https://buffer.com

• Brave: https://brave.com/search/api/

• Exa: https://exa.ai

• Hilary Gridley on X: https://x.com/yourgirlhils

• How to become a supermanager with AI: https://www.lennysnewsletter.com/p/how-to-become-a-supermanager-with

• How custom GPTs can make you a better manager | Hilary Gridley (Head of Core Product at Whoop): https://www.youtube.com/watch?v=xDMkkOC-EhI

• How to debug a team that isn’t working: the Waterline Model: https://www.lennysnewsletter.com/p/how-to-debug-a-team-that-isnt-working

• Jensen Huang on LinkedIn: https://www.linkedin.com/in/jenhsunhuang

• How I built a 1M+ subscriber newsletter and top 10 tech podcast | Lenny Rachitsky: https://www.lennysnewsletter.com/p/how-i-built-a-1m-subscriber-newsletter

• Age of Attraction on Netflix: https://www.netflix.com/title/81779095

• Oura Ring: https://ouraring.com/

• Eight Sleep: https://www.eightsleep.com

• Hoopsalytics: https://hoopsalytics.com

• DJI Osmo smartphone gimbal: https://www.amazon.com/DJI-Stabilizer-Tracking-Extension-Stabilization/dp/B0FJ2L67HJ?ref_=ast_sto_dp

• Silent basketball: https://www.amazon.com/Rzkipdy-Silent-Basketball-Size-27-5/dp/B0FHFSQWPP/ref=sr_1_9

• Marc Andreessen: The real AI boom hasn’t even started yet: https://www.lennysnewsletter.com/p/marc-andreessen-the-real-ai-boom

Recommended books:

Treasure Island: https://www.amazon.com/Treasure-Island-Robert-Louis-Stevenson/dp/1505297400

Alice’s Adventures in Wonderland: https://www.amazon.com/Alices-Adventures-Wonderland-Illustrated-Illustrations/dp/991673268X

Charts for Babies: A Picture Book: https://www.amazon.com/Charts-Babies-Picture-Book/dp/1419785184

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

Lenny may be an investor in the companies discussed.



To hear more, visit www.lennysnewsletter.com



Download audio: https://api.substack.com/feed/podcast/192012054/5a00347ccd52fb300cac2ea6e28874e8.mp3

WebAssembly is now outperforming containers at the edge

Abstract digital visualization of blue glowing data streams and light streaks, illustrating WebAssembly's high-speed, low-latency code deployment at the edge.

The mass adoption of WebAssembly has yet to be realized. 

The true turning point for WebAssembly — specifically its ability to ship lightweight code to any number of endpoints with millisecond latency — rests on finalizing the component model.


Standardizing the component model will allow WebAssembly to replace containers in areas where they typically struggle, regardless of whether Kubernetes is involved. Wasm is better suited for edge devices, serverless environments, and event-driven deployments that require pushing updates to an unlimited number of endpoints simultaneously.

Indeed, WebAssembly has moved far beyond the browser. It shows its maturity via reliable production use across servers, CDNs, and backend services, as well as its broad applicability. 

While core WebAssembly is intentionally low-level and difficult to use directly, recent specification work enables higher-level abstractions. Reference types and interface types allow components to expose meaningful APIs without developers needing to understand Wasm internals, making the technology more accessible to engineers.

In his talk, “Towards a Component Model 1.0,” at Wasm I/O in Barcelona last week, Luke Wagner of Fastly described efforts to make the Component Model easier to adopt, including motivating native browser implementations and closing a few remaining functionality gaps.


While technical improvements like debugging and threading are important, Wagner said the “higher order bit” holding back explosive Wasm adoption is the lack of upstream support in popular languages and frameworks.

Achieving a “just works” developer experience requires standards-based answers to coordinated problems, such as how a standard library performs IO or how multiple modules are bundled and linked at runtime. To address this, the strategy involves two layers: the component model, which provides foundational answers for computation and virtualization, and WASI, which defines modular standard APIs for various types of IO, Wagner said.

“I’m going to claim, perhaps contentiously, that a lack of upstream support for all the popular languages, tools, factors, and frameworks so that Wasm can just work both inside and outside the browser is holding up Wasm’s adoption,” Wagner said.

Wagner said WebAssembly Preview 2 factored out the component model layer, while the upcoming Preview 3 extends it to handle concurrency with async functions, streams, and futures. This concurrency work will serve as a major milestone toward completing the component model.

Also planned is a move from “eager” memory allocation to a “lazy” API that inverts control flow to reduce heap fragmentation and improve performance. Other planned improvements for 1.0 include supporting multi-value returns, adding error-context values, and introducing an optional GC API for languages that use garbage-collected memory, Wagner said.

“With Preview 3, we’re extending a Wasm module to provide answers to a lot of concurrency questions, and as part of that, defining async functions, streams, and futures as first-class concepts,” Wagner said. “So, lots of benefits come from this lazy API. But how do we change the API while maintaining that all-important stability guarantee that I just mentioned?”

Meanwhile, the component model provides standards-based answers to open questions, allowing for “upstream support everywhere, so the host can just work,” Wagner said. “We’ve got a preview release coming very soon, followed by cooperative threads and a minor release that gives us answers to a bunch of hard concurrency questions.”

To encourage native browser support, Wagner highlighted JCO, a tool that transpiles components into JavaScript and core WebAssembly that runs in browsers today. Native support would offer performance gains by avoiding JS glue code and allowing direct calls from Wasm into browser code. 
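The JCO workflow Wagner pointed to can be sketched from the command line. This assumes the jco CLI from the `@bytecodealliance/jco` npm package; the component filename and output directory are placeholders:

```shell
# Install the jco CLI (requires Node.js/npm).
npm install -g @bytecodealliance/jco

# Transpile a Wasm component into JavaScript bindings plus core
# Wasm modules that run in browsers today, without native support.
jco transpile component.wasm -o ./out
```

The emitted JavaScript glue is exactly what native browser support would eventually make unnecessary, which is the performance argument Wagner makes.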

Wagner concluded his talk with a call to the community to submit pull requests that help simplify the component model by building shared tooling around guest and host APIs. The project can also use documentation contributions to keep pace with commits.

Contributions are also needed for upstreaming and cross-language tooling, and for closing key expressivity gaps with features like optional imports, callbacks, subtyping, and more, Wagner said.

“And so what I’d ask from everyone here is to use Preview 3 once it’s released, and use JCO to simplify your web developer experience with Wasm,” Wagner said. “And if any of these many Bytecode Alliance projects I mentioned sound interesting, please contribute and say hi to us on the Bytecode Alliance Zulip, and you can read and discuss the component model spec on the GitHub repo.”

The post WebAssembly is now outperforming containers at the edge appeared first on The New Stack.


96% of codebases rely on open source, and AI slop is putting them at risk

Colorful cartoon illustration of cleaning supplies including a mop in a green bucket, pink floor scrub detergent, blue spray bottle, dust spray, broom, rubber gloves, and cleaning rags, representing the cleanup of AI-generated slop in open source software.

Verbose changes. Nonsensical descriptions. Pull requests contributors can’t explain. AI is DDoS-ing open source software (OSS) with slop, and some maintainers are calling it quits.

As Steve Croce, field CTO at Anaconda, a Python data science platform, tells The New Stack, “It’s having a profound effect on maintainer workload.” In response, maintainers are canceling bug bounty programs and introducing stricter contributor guidelines, he adds.

Some projects, like Jazzband, have been forced to sunset altogether. Jannis Leidel, the lead maintainer and Python Software Foundation chairperson, writes that the “flood of AI-generated spam PRs and issues” made his project unsustainable. 

According to Kate Holterhoff, Ph.D., a senior analyst at the consultancy RedMonk, the barrier to entry is now extremely low, making it easier to game the traditional incentive model for participating in open source. As she tells The New Stack, “It’s putting the contract between maintainers and contributors in peril in ways that haven’t existed before.”

For example, Rémi Verschelde, who oversees the open source Godot game engine, shares on Bluesky that dealing with AI slop is “draining and demoralizing.” Other project maintainers report growing apathy and wasted time responding to the deluge.

To be fair, nearly all software developers now use AI, and many communities rely on it to produce legitimate fixes and contributions. But the volume of low-quality submissions is becoming unsustainable, especially given that 60% of maintainers are unpaid volunteers.

GitHub is aware of the issue and has released tools to aid maintainers and even suggested disabling PRs entirely while it explores long-term solutions. For now, however, fixes to the core problem remain elusive.

Below, we’ll look at the issue and consider the strategies emerging to manage the crisis — hopefully before it overwhelms the open-source ecosystem that most of the world depends on.

AI slop betrays the premise of open source

Open source has faced existential threats before, including licensing shifts, funding gaps, and maintainer burnout. But Slopmageddon introduces a new kind of strain.

The most immediate risk is wasted maintainer time. One developer estimates that it takes a reviewer 12 times longer to review and correct a pull request than to generate one with AI.

Generating clean, readable, and maintainable code remains difficult. Low-effort AI contributions require a disproportionate amount of time to evaluate and respond to, decreasing morale and potentially drowning out high-value submissions.

Security risks are another concern. “AI-generated contributions can introduce subtle vulnerabilities, poorly understood dependencies, or incomplete fixes that expand the attack surface,” adds Anaconda’s Croce.

The situation can quickly spiral. In one twisted tale, a vindictive AI agent published a scathing hit piece on an open source maintainer after its code suggestion was rejected. The maintainer, Scott Shambaugh, founder of Leonid Space and contributor to matplotlib, says he felt compelled to respond quickly to protect his reputation.

Shambaugh tells The New Stack, “There was a real sense of ‘Oh, I need to get ahead of the story’ so my version of the truth gets out on top.”

For him, the episode reflects a broader erosion of authenticity in open source. In the past, your reputation was tied to your contributions, and people participated to give back to the community, gain recognition, and learn through a collaborative feedback loop, he says.

Maintainers, in turn, took pride in stewardship. But nowadays, attempts to quickly game bug bounty systems or gain credentials in open source with rapidly generated PRs undermine that dynamic.

“If you just point an AI agent at a GitHub issue, it can solve it and write a PR in 30 seconds,” says Shambaugh. “If that’s what we really wanted, the maintainers could do that themselves.” 

Ways to manage AI-generated contributions in open source

So, what can open source maintainers and the tech industry at large do to manage the influx of AI slop?

No single fix exists. Instead, it’ll likely take a combination of new contributor policies, platform tooling, reputation and verification systems, and guidance from foundations and other community-led initiatives.

Set AI policies for contributors 

One response is clearer contributor guidelines. The goal isn’t typically to close the door on external contributions or ban AI outright, but to ensure its use leads to higher-quality submissions.

Effective policies spell out expectations such as which types of AI use are allowed, when disclosure is required, and how contributors should validate their work before submitting.

RedMonk’s Holterhoff recently assembled research on AI policies in the open source community, identifying 63 formal approaches across foundations and projects. These include efforts from Blender, Fedora, Firefox, Ghostty, the Linux kernel, and WordPress, as well as guidance from the Eclipse Foundation, the Linux Foundation, the Electronic Frontier Foundation, and others.

While approaches vary, most organizations permit AI usage as long as it is disclosed. Others restrict AI-assisted contributions to approved issues only. Fourteen projects ban AI contributions outright, while 12 are undecided.

The data also suggests that standards become stricter the closer you are to critical infrastructure. “The farther down the stack you go, the less permissive with AI you have to be,” Holterhoff tells The New Stack.

Still, enforcement remains a gray area. For Holterhoff, policies should remain grounded in community norms, regardless of how permissive they are. Every project is different, too, so AI policies will depend on context.

As such, the issue isn’t so much AI itself, but how it’s used and the intention behind it. “It’s only slop when you don’t understand it or when it’s just thrown out there,” says Holterhoff.

Similarly, for Ahmet Soormally, principal solutions engineer at Wundergraph, the focus should be on reinforcing good-faith contributions.

“It’s not about whether AI helped you to write a PR,” Soormally tells The New Stack. “It’s about what you hand to the next human or model. If it’s bloated, unclear, or hard to reason about, you are not helping; you are just adding noise.”

Use the platform tools

Another option is to use GitHub’s own tooling to respond to what it calls open source’s “eternal September.” Maintainers can limit PRs to collaborators, disable them entirely, or introduce criteria-based gating.

Some are building custom defenses. One developer has created an Anti-Slop GitHub Action to filter out sketchy PRs automatically.

Writing on her personal blog, Angie Jones, VP of developer experience at the Agentic AI Foundation, recommends using an AGENTS.md file, deploying AI to moderate AI submissions, maintaining good tests, and automating the detection of low-quality PRs.
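That last recommendation, automated detection of low-quality PRs, can be illustrated with a small heuristic. Everything below, from the signal names to the thresholds, is invented for this sketch and is not taken from any real anti-slop tool:

```python
# Hypothetical heuristic for flagging likely low-effort pull requests.
# Signals and thresholds are illustrative only.

def slop_score(pr: dict) -> int:
    """Return a count of suspicious signals for a pull request."""
    signals = 0
    # Very large diffs paired with tiny descriptions are a common slop pattern.
    if pr["lines_changed"] > 500 and len(pr["description"]) < 80:
        signals += 1
    # Boilerplate phrases that AI tools tend to emit verbatim.
    boilerplate = ("this pr improves", "as an ai", "comprehensive solution")
    if any(p in pr["description"].lower() for p in boilerplate):
        signals += 1
    # First-time contributors touching many files at once warrant closer review.
    if pr["author_prior_merged"] == 0 and pr["files_changed"] > 20:
        signals += 1
    return signals

pr = {
    "lines_changed": 900,
    "description": "This PR improves the code.",
    "author_prior_merged": 0,
    "files_changed": 25,
}
print(slop_score(pr))  # 3
```

A real GitHub Action would compute signals like these from the PR event payload and then label, comment on, or close submissions above a threshold.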

Still, for some, these measures aren’t enough. As Flux CD maintainer Stefan Prodan notes on LinkedIn, GitHub itself lacks a clear incentive to curb AI slop, given its investment in AI-assisted coding.

“This platform incentivizes this kind of behavior,” adds developer Yuri Sizov, posting on Bluesky, noting that it “inherently invites more low-quality contributions from drive-by devs.”

As a result, some projects are exploring alternative hosts. For instance, the Linux distribution Gentoo is migrating from GitHub to Codeberg.

Contributor reputation systems

Another approach to maintaining quality and trust in open source is to introduce reputation systems.

One such example is vouch, a trust management system designed by HashiCorp co-founder Mitchell Hashimoto. The Ghostty project is currently experimenting with it.

As Hashimoto writes in the vouch README, AI tools make it easy to “trivially create plausible-looking but extremely low-quality contributions.” Vouch addresses this by requiring contributors to be vouched for by a trusted party before interacting with a project.

Another project, good-egg, assigns scores to GitHub contributors based on their contribution history, which could be used to validate reputation and authenticity.
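good-egg’s actual scoring model is its own; as a rough illustration of the general idea of history-based contributor scoring, here is a hypothetical sketch in which the fields and weights are invented:

```python
# Illustrative contributor-reputation score in the spirit of tools like
# good-egg. The fields and weights below are invented for this sketch.

def reputation(history: dict) -> float:
    merged = history["merged_prs"]
    rejected = history["rejected_prs"]
    total = merged + rejected
    # Acceptance rate, smoothed so brand-new accounts start near neutral
    # rather than at 0 or 1.
    acceptance = (merged + 1) / (total + 2)
    # Older accounts earn more trust, capped at three years of tenure.
    tenure = min(history["account_age_days"] / 365, 3) / 3
    return round(0.7 * acceptance + 0.3 * tenure, 3)

# A long-standing contributor with a strong merge record scores high;
# a days-old account with only rejected PRs scores near zero.
print(reputation({"merged_prs": 40, "rejected_prs": 5, "account_age_days": 1200}))
print(reputation({"merged_prs": 0, "rejected_prs": 6, "account_age_days": 10}))
```

A maintainer could use such a score as one input to triage, not as a hard gate, since new contributors legitimately start with no history.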

Cryptographic proofs of identity

Beyond human attestation, some argue for tying AI-generated contributions to verifiable identities.

For Shambaugh, the issue of AI agentic identity extends beyond open source to trust across the broader internet. “Ephemeral identity can change at a keystroke, can be endlessly copied, and is nearly impossible to trace,” he tells The New Stack. “I don’t think we’re ready for a million more of these things to be on the internet at scale.”

Emerging approaches aim to address this issue through cryptographic verification. Treeship, for example, is an open-source project that uses blockchain-based techniques to create privacy-preserving proofs of AI agent actions.

As Revaz Tsivtsivadze, founder of Treeship, tells The New Stack, “There’s a trust issue when adopting AI agents. It’s a black box; nobody knows what goes into agents’ decision-making, memory, or tool calls.”

“You could get all kinds of AI agents, like malicious, rogue, or untrusted parties,” he adds. “Cryptographic attestation of AI agents is the key to trusting AI agents as economic actors.”

Tsivtsivadze says that a tamperproof record of agent actions could be used within open source projects to track agent identities, actions, timestamps, and the underlying decision process.
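As a simplified illustration of a tamper-evident action record — not Treeship’s actual blockchain-based design — a hash chain links each log entry to the previous one, so any later edit to history invalidates the chain:

```python
# Minimal hash-chain sketch of a tamper-evident agent action log.
# This illustrates the general idea only; it is not Treeship's design.
import hashlib
import json

def append_action(log: list, agent_id: str, action: str, ts: float) -> None:
    """Append an entry whose hash covers its content and the previous hash."""
    prev = log[-1]["hash"] if log else "0" * 64
    entry = {"agent": agent_id, "action": action, "ts": ts, "prev": prev}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)

def verify(log: list) -> bool:
    """Recompute every hash; any edited entry or broken link fails."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev"] != prev:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log = []
append_action(log, "agent-1", "opened PR #123", 1700000000.0)
append_action(log, "agent-1", "edited README", 1700000060.0)
print(verify(log))                 # True
log[0]["action"] = "deleted repo"  # tamper with history
print(verify(log))                 # False
```

Systems like the one Tsivtsivadze describes add cryptographic signatures and privacy-preserving proofs on top of this basic tamper-evidence property.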

While technologies like Treeship have broader potential applications in agentic commerce, he believes such verification could help reduce AI slop in open source by ensuring agents are tied to real human actors.

Other community support

Other community efforts aim to establish higher standards for accountability within open source at large. 

One example is the Open Source AI Manifesto, spearheaded by Wundergraph, which sets expectations for how generative AI is used in open source, emphasizing ownership, responsibility, and authenticity. The project also provides a badge that maintainers can use to signal responsible AI usage.

“AI can scale code generation, but it can’t scale accountability,” says Wundergraph’s Soormally. “That part still belongs to us.”

Croce also points to a more fundamental issue: many open source projects remain underfunded and understaffed. Initiatives like NumFOCUS and the Open Source Endowment (OSE) aim to provide much-needed support.

“Finding ways to provide more resources and capacity for those reviews is definitely a stopgap and absolutely required for the future of OSS,” Croce adds.

The future of open source hinges on accountability

Open source is still being adopted at a rapid pace, with more pronounced use in the EU than in the US, according to the 2026 State of Open Source Report. Amid rising digital sovereignty concerns, avoiding vendor lock-in is now a top driver for open source.

There’s no doubt that open source is widely relied upon — 96% of commercial codebases contain open source, according to a 2024 Synopsys report. But the slopocalypse presents a messy challenge to tackle.

So, the question for open-source maintainers is whether it’s all worth it. 

“If you make life a living hell, they won’t do it anymore,” says Holterhoff. “If their labor is not compensated for and they throw in the towel, then the OSS community loses out.”

Worryingly, although maintainers have sounded the alarm, it remains unclear how foundations or platforms will respond to sustain the ecosystem.


“If we do not actively manage contribution quality in an AI-driven world, we are not just risking security issues or technical debt,” says Croce. “We are putting the ecosystem itself at risk.”

For now, it comes down to contributor accountability. “Accountability is the real standard,” Croce adds. “Contributors need to understand and stand behind what they submit.”

Without a single technical fix, perhaps an appeal to humans to ‘do what’s right’ will help. Because without that basic accountability and trust, the open source model itself starts to break down.

The post 96% of codebases rely on open source, and AI slop is putting them at risk appeared first on The New Stack.
