Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

cURL’s Daniel Stenberg: AI slop is DDoSing open source

1 Share

At FOSDEM 2026 in Brussels, Belgium, Daniel Stenberg, creator of the popular open source data transfer program, cURL, described AI as a force that “augments us humans” in two directions: “The bad way or the good way.”

On the one hand, AI enables a flood of bogus, AI‑written security reports that burn out maintainers. On the other hand, advanced AI analyzers in the right hands are quietly uncovering deep bugs in cURL and other critical open source projects that no previous tool ever found.

How cURL’s bug bounty program created the incentives for AI slop security reports

His current stance came after Stenberg called a stop to cURL’s bug bounty program. He made this move because both he and the cURL security team were overwhelmed by “AI slop.” That is, long, confident, and often completely fabricated vulnerability reports generated with LLMs.

He describes one report about a supposed HTTP/3 “stream dependency cycle exploit,” which, if true, would have been a “critical, the world is burning” security hole, he said. It came complete with GDB [GNU Debugger] sessions and register dumps — which turned out to reference a function that does not exist in cURL at all. It was all bogus.

Stenberg links much of the surge in low‑quality AI reports to cURL’s HackerOne bounty program, which offered up to $10,000 for a critical issue and $500 for low‑severity bugs. That payout schedule, he argued, encouraged reporters to ask AI tools to “find a security problem,” paste whatever they got into a report, mark it “critical,” and hope to hit the jackpot, with little or no attempt at verification.

The result, Stenberg points out, is that until early 2025, roughly one in six security reports to cURL was real. He says, “Before, in the old days, you know, someone actually invested a lot of time [in] the security report. There was a built-in friction here, but now there’s no effort at all in doing this. The floodgates are open. Send it over.”

So, by late 2025, Stenberg observes, “The rate has gone up to now it’s more like one in 20 or one in 30, that is accurate.” This has turned security bug report triage into “terror reporting,” draining time, attention — and the “will to live” — from the project’s seven‑person security team. He warned that this AI‑amplified noise doesn’t just waste volunteer effort but ultimately risks the broader software supply chain: if maintainers become numb because of these junk reports, real vulnerabilities in code will be missed.

By officially shutting down the cURL bug bounty, the team hopes “removing the money” will at least end that particular incentive, although Stenberg knows it won’t stop all AI misuse.

“AI is a tool”

All that said, Stenberg stressed that “AI is a tool” and that AI is already delivering real wins for open source security when used by experienced engineers. He explains, “We work with several AI-powered analyzing tools now […] They certainly find a lot of things no other tools previously found, and in ways no other tools previously could find.”

With the help of these tools, they have fixed “more than 100 bugs” that have surfaced, even after years of using aggressive compiler flags, fuzzers, traditional static analysis, and multiple human security audits.

That’s because these AI tools, he said, can reason across protocols, specs, and third‑party libraries in ways that feel “almost magical.” For example, one tool flagged a particular octet used in the Telnet implementation as invalid according to a Telnet spec; as Stenberg put it, “no one has read it since 2012.” He notes that the tools can also flag inconsistencies between function comments and implementations that hint at subtle logic errors.

He also uses three different AI review bots on his pull requests. They run “at two in the morning” when no human reviewer is awake, find distinct classes of issues, and often catch missing tests or flawed assumptions about external library behavior, even though they are not a replacement for test suites.

While that’s all well and good, Stenberg remains deeply skeptical of using AI to generate production code, saying he does not use AI coding tools, is “not impressed,” and does not believe anyone on the cURL project relies on them for serious development. He also comments that code‑fix suggestions emitted by AI analyzers are “never good” enough to accept blindly, likening them instead to a sometimes‑useful “eager junior” whose ideas must be carefully cherry‑picked and backed by thorough testing.

On the legal side, he told the FOSDEM audience that AI‑generated contributions do not fundamentally change cURL’s risk model. The project has always had to trust that contributors have the right to submit the code they send, whether it is “someone made that code, copied that code, generated that code with AI, or copied it from Stack Overflow five years ago.”

What do you do with all the real bugs AI can find?

Stenberg also places cURL’s experience within a broader open source context. He points to other projects that have been deluged by both AI-generated garbage and a large volume of genuine vulnerabilities uncovered by internal security teams at hyperscalers. Citing the FFmpeg–Google saga without naming the companies directly, he described how a “giant company” can now use AI and large security departments to find many real bugs, then pressure tiny volunteer projects to fix them under strict disclosure deadlines without providing patches or funding.

Looking ahead, Stenberg says he expects AI to continue “augmenting everything we do in different directions.” He also urges projects to experiment with defenses against both spammy reports and AI scrapers, ranging from vetted‑reporter “secret clubs” to stricter submission requirements, even if those measures run against the traditional openness of open source.

Ultimately, his message was less about AI itself than about human choice. The same AI tools that enable “terror reporting” and Internet bandwidth‑hogging AI scrapers are also making cURL’s code measurably safer. It’s up to maintainers, companies, and communities to decide whether to use them for good or for bad.

 

The post cURL’s Daniel Stenberg: AI slop is DDoSing open source appeared first on The New Stack.

Read the whole story
alvinashcraft
51 minutes ago
Pennsylvania, USA

How I Built My 10 Agent OpenClaw Team

From: AIDailyBrief
Duration: 19:00
Views: 508

A 10-agent OpenClaw mission control built to test digital employees, persistent memory, heartbeats, and scheduled CR jobs. Agent roster includes a mobile builder, continuous research agents powering AI maturity maps and opportunity radars, project managers, a chief of staff, and an NLW Tasks interactive to-do agent. Practical takeaways cover Mac Mini and Tailscale setup, Claude as build partner, heartbeat reliability, security calibration, and the upfront time investment versus long-term automation gains.

The AI Daily Brief helps you understand the most important news and discussions in AI.
Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Get it ad free at http://patreon.com/aidailybrief
Learn more about the show https://aidailybrief.ai/


Android Weekly Issue #714

Articles & Tutorials
Sponsored
Vega OS delivers cross-device development with native performance, hot reloading, and built-in focus management. Vega Developer Tools provides you with the resources you need to develop, test, and distribute apps on Vega OS-powered devices. 
Jaewoong Eum dives deep into the internal mechanisms of the kotlinx.serialization compiler plugin.
Abhi says Compose "retain" lets you drop ViewModel ceremony by retaining simple injectable presenters and cleaning them up via RetainObserver.
Sponsored
Code 10x faster. Tell Firebender to create full screens, ship features, or fix bugs - and watch it do the work for you. It's been battle tested by the best android teams at companies like Tinder, Adobe, and Instacart.
Oğuzhan Aslan takes a closer look at the new Embedded Photo Picker.
Miguel Montemayor says Android 17 targeting forces large screen resizability and orientation support, pushing apps to adopt adaptive layouts, resilient camera previews, and robust state handling.
Leonidas Partsas implements a custom TopAppBarScrollBehavior that translates RecyclerView scroll into smooth collapse and expansion without partial rendering.
Pamela Hill says iOS-targeted multi-module KMP apps need an umbrella framework to prevent stdlib duplication and incompatible binaries across modules.
Mark Murphy warns that Android 17 Beta 1 mainly adds behavior hardening that can break apps using a small set of rare features.
Place a sponsored post
We reach out to more than 80k Android developers around the world, every week, through our email newsletter and social media channels. Advertise your Android development related service or product!
News
Google says Android 17 Beta 1 mostly advances adaptability and media, connectivity, and companion device tooling alongside ongoing privacy, security, and performance work.
Videos & Podcasts
Dave Leeds explores a Kotlin feature change allowing return keywords in expression bodies.
Amit Shekhar provides a detailed comparison of Retrofit and OkHttp, two popular libraries used by Android developers for networking.
Philipp Lackner explores the Media3 library along with its Jetpack Compose toolkit to build a custom-styled video player with its own UI for controlling media playback.
Alan Viverette and Aurimas Liutikas discuss the challenges and evolution of API design, particularly within the Android ecosystem.
Stevdza-San examines the new Koin Kotlin compiler plugin, which brings auto-detection of constructor parameters and compile-time code transformation, catching errors during the build process.
Peter Friese and Marina Coelho attempt to port their "Make It So" to-do list app from iOS to Android using AI-powered coding agents, specifically Antigravity and Stitch.
Daniel Atitienei presents a detailed AI-powered workflow for developing and launching profitable apps as a solo developer
Philipp Lackner explains structured concurrency in Kotlin coroutines, using a cooking analogy to illustrate concurrency concepts.

The CAP Theorem Is Why Your Cloud App Sometimes Feels Off

There is a moment every cloud engineer seemingly has, whether they admit it or not. You open an application and something feels strange. A record you just saved is not there yet, a dashboard shows two different answers depending on where you look, or a system insists an action never happened even though you just performed it. At some point, a smart sounding person says “eventual consistency,” everyone nods, and the conversation moves on without anyone actually feeling satisfied by the...


OpenClaw is being called a security “Dumpster fire,” but there is a way to stay safe

Conceptual 3D render of a row of dark protective shields with one shield glowing in bright gold, symbolizing advanced cybersecurity, data protection, and secure sandboxing.

In a blog post earlier this February, Snyk engineers said they scanned the entire ClawHub (the OpenClaw marketplace) and found that over 7 percent of the skills contained flaws that expose sensitive credentials. “They are functional, popular agent skills that instruct AI agents to mishandle secrets, forcing them to pass API keys, passwords, and even credit card numbers through the LLM’s context window and output logs in plaintext,” they reported.

OK, so we know OpenClaw is a security “Dumpster fire” right now, as we have reported.

I looked at Deno some time ago; it treats TypeScript as a first-class citizen. I couldn’t help noticing this detail in their recent Sandbox update:

You don’t want to run untrusted code (generated by your LLMs, your users’ LLMs, or even handwritten by users) directly on your server. It will compromise your system, steal your API keys, and call out to evil dot com. You need isolation.

Deno Sandbox gives you lightweight Linux microVMs (running in the Deno Deploy cloud) to run untrusted code with defense-in-depth security.

OK, sandboxes aren’t new, but Deno’s deployment environment caught my attention.

Deno and Deno Deploy

Well, it’s been a while since my last article about Deno and TypeScript, so I’ll speed through my example just to make sure I still remember everything before we check out the new sandbox stuff.

So let’s install Deno on my Mac. Fortunately, this looks the same as before:
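For the record, the installer is the same one-liner as before (the official script from the Deno docs; treat the exact URL as the one current at the time of writing):

```shell
# Download and run the official Deno installer script
curl -fsSL https://deno.land/install.sh | sh
```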

As before, Deno correctly detected my shell. After restarting it, I checked everything was hunky dory:
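The sanity check is just a version query (your version numbers will differ from mine):

```shell
# Print the Deno, V8, and TypeScript versions bundled with this install
deno --version
```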

So I’m not a TypeScript guy, and yet in that article, I wrote a bit of code to persuade myself that TypeScript just looks at contents for equivalence. (Check out the post for more on how an OOP developer can grok TypeScript.)

class Car {
  drive() {
    // hit the pedal to the floor
  }
}
class Golfer {
  drive() {
    // hit the ball far
  }
}
// No error?
let w: Car = new Golfer();

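To double-check that it really is shape-based, here is a quick variation of my own (not from the original post): give the car a member the golfer lacks, and the assignment flips to an error, while the reverse direction still compiles.

```typescript
class Car2 {
  wheels = 4; // a member Golfer2 does not have
  drive() {
    return "pedal to the floor";
  }
}
class Golfer2 {
  drive() {
    return "ball goes far";
  }
}
// Golfer2 is missing `wheels`, so this no longer type-checks:
// const w: Car2 = new Golfer2(); // error: Property 'wheels' is missing
// But a Car2 has everything a Golfer2 needs, so this direction is fine:
const g: Golfer2 = new Car2();
console.log(g.drive());
```

Same rule both times: compatibility is decided purely by whether the source shape supplies every member the target shape demands.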

So let’s do what we did last time and use a project initializer to run a TypeScript test.

I replace the main.ts with my drive method example from above, and run it:
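The commands, for reference (`deno init` scaffolds a `main.ts` along with a test file and config; `deno run` alone does not fully type-check, so I add an explicit check):

```shell
# Scaffold a fresh project in ./sandbox-demo
deno init sandbox-demo
cd sandbox-demo

# After swapping in the Car/Golfer snippet: type-check, then execute
deno check main.ts
deno run main.ts
```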

So Deno handles my TypeScript as a first-class object, and proves it is a structural type system. But let’s get to the good stuff, and sign into Deno itself:

Before we can use a sandbox, we need to hop through a small verification hoop:

Don’t worry — it just checks your credit card exists, using the handy Stripe Link prompt that appears on your phone like a phishing request. Now we can set up — I’ll be following the right-hand column with code integration:

Now, we have the typical problem of connecting our identity to our requests. You can create a sandbox directly in code, which is neat — but first, we need a token.

So I’ll create an organisation token to connect my identity to Deno. I installed the SDK as the panel above suggested and created a token using the nice blue button. One small gripe here is that the terms “access token”, “organisation token”, and “deploy token” seemed to be used interchangeably.

OK, after setting the DENO_DEPLOY_TOKEN environment variable in my shell, we should be ready to run some code and create our very own sandbox on Deno’s cloud.

I save the following code as main.ts. I’m going to assume await is some sort of promise, as this is clearly asynchronous code. (The term “await” is also familiar enough in Victorian prose.)

import { Sandbox } from "@deno/sandbox";

// Create a microVM on Deno Deploy; `await using` disposes of it
// automatically when the enclosing scope exits
await using sandbox = await Sandbox.create();

// Run a shell command inside the sandbox
await sandbox.sh`echo "Hello, world!"`;


Remember, to prove this happened, Deno will have to retain a record of the sandbox even after it has expired. And as we are dealing with a security solution, we do need to tell Deno explicitly that we are happy to use networking, with the right flags:
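In my case that meant granting network and environment access on the command line (these are Deno’s standard permission flags):

```shell
# --allow-net: the sandbox SDK calls out to Deno Deploy
# --allow-env: it reads DENO_DEPLOY_TOKEN from the environment
deno run --allow-net --allow-env main.ts
```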

OK, depending on how the statements are called, that appeared to work. Better proof comes from the appearance of the sandbox in my records:

We can see a little more detail in the instance from a nice filterable event log on the dashboard:

Well, that was just fine. I wrote some code on my laptop and ran it in a sandbox on Deno’s cloud. But we need to do a bit more to avoid the horrors of exfiltration.

Exfiltration shooter

What exactly is exfiltration? Of course, I could give the example of popular multiplayer games (you know them, or you don’t) whose very purpose is to appear as an avatar in the game server, steal things, then escape. This can happen accidentally in real life, too; you have seen it when a politician walks confidently out of a private meeting, exposing the notes they are holding to a press photographer’s lens. In this case, the politician has misunderstood their safe boundaries, or underestimated the camera’s zoom function.

This isn’t a security article, and I’m not Bruce Schneier — but you get the idea. You don’t want to run code in your cosy sandbox that captures and escapes with secrets. One way to combat this is to restrict exit points, but another is to obfuscate your private data while it resides within the sandbox. This is what Deno refers to as secret redaction and substitution.

Configured secrets never enter the sandbox environment variables. Instead, Deno Deploy substitutes them, only to reveal them when the sandbox makes outbound requests to an approved host.

I’ll show this process partway. We can set up a secret simply enough, along with the approved host to which it will be revealed:

await using sandbox = await Sandbox.create({
  secrets: {
    ANTHROPIC_API_KEY: {
      hosts: ["api.anthropic.com"],
      value: process.env.ANTHROPIC_API_KEY,
    },
  },
});


So this means that Deno will obfuscate the environment key it finds on my laptop, and reveal it only after the outbound request to Anthropic leaves the sandbox:

I won’t make a real call to the LLM in the Sandbox (I certainly could, as I can access the Sandbox via the CLI and have it last for as long as I need), but I’ll set up a secret in my laptop environment as if I were, alter my code accordingly, and run it to see what the value of the secret looks like inside the Sandbox.
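My altered code looked roughly like this (a sketch under the assumptions above: the check simply echoes the variable from inside the sandbox, where it should appear as a substituted placeholder rather than the real key):

```typescript
import { Sandbox } from "@deno/sandbox";
import process from "node:process";

// Create a sandbox with a secret that is only ever revealed
// on outbound requests to the approved host
await using sandbox = await Sandbox.create({
  secrets: {
    ANTHROPIC_API_KEY: {
      hosts: ["api.anthropic.com"],
      value: process.env.ANTHROPIC_API_KEY!,
    },
  },
});

// Inside the sandbox, the variable holds a substituted placeholder,
// not the real key from my laptop
await sandbox.sh`echo "ANTHROPIC_API_KEY inside the sandbox: $ANTHROPIC_API_KEY"`;
```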

As I said, to fully prove this, I’d have to contact Anthropic with my key to prove the process — but I’ll leave that to you.

From a Deno tutorial video. The diagram appears under the hosts as they demonstrate sandboxes.

Conclusion

I focused on just one aspect, obfuscation, but you can also control the allowed outgoing addresses just as easily. And we’ve already looked at other aspects of the Deno Deploy service.

Obviously, the timing couldn’t be better. With the exponential increase in generated and untrusted code (that people nevertheless wish to trust), this type of service is gold dust. I’m sure it will be appearing in different services pretty soon.

The post OpenClaw is being called a security “Dumpster fire,” but there is a way to stay safe appeared first on The New Stack.


Fake Job Recruiters Hid Malware In Developer Coding Challenges

2 Shares
"A new variation of the fake recruiter campaign from North Korean threat actors is targeting JavaScript and Python developers with cryptocurrency-related tasks," reports the Register. Researchers at software supply-chain security company ReversingLabs say that the threat actor creates fake companies in the blockchain and crypto-trading sectors and publishes job offerings on various platforms, like LinkedIn, Facebook, and Reddit. Developers applying for the job are required to show their skills by running, debugging, and improving a given project. However, the attacker's purpose is to make the applicant run the code...

[The campaign involves 192 malicious packages published in the npm and PyPi registries. The packages download a remote access trojan that can exfiltrate files, drop additional payloads, or execute arbitrary commands sent from a command-and-control server.]

In one case highlighted in the ReversingLabs report, a package named 'bigmathutils,' with 10,000 downloads, was benign until it reached version 1.1.0, which introduced malicious payloads. Shortly after, the threat actor removed the package, marking it as deprecated, likely to conceal the activity... The RAT checks whether the MetaMask cryptocurrency extension is installed on the victim's browser, a clear indication of its money-stealing goals... ReversingLabs has found multiple variants written in JavaScript, Python, and VBS, showing an intention to cover all possible targets. The campaign has been ongoing since at least May 2025...

Read more of this story at Slashdot.
