At FOSDEM 2026 in Brussels, Belgium, Daniel Stenberg, creator of the popular open source data transfer program, cURL, described AI as a force that “augments us humans” in two directions: “The bad way or the good way.”
On the one hand, AI enables a flood of bogus, AI‑written security reports that burn out maintainers. On the other hand, advanced AI analyzers in the right hands are quietly uncovering deep bugs in cURL and other critical open source projects that no previous tool ever found.
How cURL’s bug bounty program created the incentives for AI slop security reports
Stenberg’s current stance follows his decision to call a stop to cURL’s bug bounty program, a move he made because he and the cURL security team were being overwhelmed by “AI slop”: long, confident, and often completely fabricated vulnerability reports generated with LLMs.
He described one report about a supposed HTTP/3 “stream dependency cycle exploit,” which, if true, would have been a “critical, the world is burning” security hole. It came complete with GDB (GNU Debugger) sessions and register dumps that turned out to reference a function that does not exist in cURL at all. It was all bogus.
Stenberg links much of the surge in low‑quality AI reports to cURL’s HackerOne bounty program, which offered up to $10,000 for a critical issue and $500 for low‑severity bugs. That payout schedule, he argued, encouraged reporters to ask AI tools to “find a security problem,” paste whatever they got into a report, mark it “critical,” and hope to hit the jackpot, with little or no attempt at verification.
Until early 2025, Stenberg points out, roughly one in six security reports to cURL was real. He says, “Before, in the old days, you know, someone actually invested a lot of time [in] the security report. There was a built-in friction here, but now there’s no effort at all in doing this. The floodgates are open. Send it over.”
By late 2025, he observes, “The rate has gone up to now it’s more like one in 20 or one in 30, that is accurate.” This has turned security bug report triage into “terror reporting,” draining time, attention, and the “will to live” from the project’s seven-person security team. He warned that this AI-amplified noise doesn’t just waste volunteer effort but ultimately risks the broader software supply chain: if maintainers grow numb to these junk reports, real vulnerabilities in the code will be missed.
By officially shutting down the cURL bug bounty, the team hopes that “removing the money” will at least end that particular incentive, although Stenberg knows it won’t stop every misuse of AI by humans.
“AI is a tool”
All that said, Stenberg stressed that “AI is a tool” and that AI is already delivering real wins for open source security when used by experienced engineers. He explains, “We work with several AI-powered analyzing tools now […] They certainly find a lot of things no other tools previously found, and in ways no other tools previously could find.”
With the help of these tools, the project has fixed “more than 100 bugs” that surfaced despite years of aggressive compiler flags, fuzzers, traditional static analysis, and multiple human security audits.
That’s because these AI tools, he said, can reason across protocols, specs, and third-party libraries in ways that feel “almost magical.” For example, a particular octet used in the Telnet implementation turned out to be invalid according to a Telnet spec that, Stenberg said, no one has read since 2012. The tools can also flag inconsistencies between function comments and implementations that hint at subtle logic errors.
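As a rough illustration of that last point, here is a made-up C snippet (not cURL code) showing the kind of comment-versus-implementation mismatch such an analyzer might flag:

#include <stddef.h>

/* Hypothetical example for illustration only; this is not cURL code. */

/* Copy the next space-delimited token from 'src' into 'dst'.
   Never reads more than 'len' bytes from 'src'. */
static void copy_token(char *dst, const char *src, size_t len)
{
  size_t i = 0;
  /* Mismatch: src[i] is evaluated before the bound check, so when
     i == len the code reads one byte past the limit the comment
     promises. That gap between the stated contract and the actual
     behavior is the kind of subtle inconsistency an analyzer can
     surface. */
  while(src[i] != ' ' && i < len) {
    dst[i] = src[i];
    i++;
  }
  dst[i] = '\0';
}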
He also uses three different AI review bots on his pull requests. They run “at two in the morning” when no human reviewer is awake, find distinct classes of issues, and often catch missing tests or flawed assumptions about external library behavior, even though they are not a replacement for test suites.
While that’s all well and good, Stenberg remains deeply skeptical of using AI to generate production code, saying he does not use AI coding tools, is “not impressed,” and does not believe anyone on the cURL project relies on them for serious development. He also comments that code‑fix suggestions emitted by AI analyzers are “never good” enough to accept blindly, likening them instead to a sometimes‑useful “eager junior” whose ideas must be carefully cherry‑picked and backed by thorough testing.
On the legal side, he told the FOSDEM audience that AI‑generated contributions do not fundamentally change cURL’s risk model. The project has always had to trust that contributors have the right to submit the code they send, whether it is “someone made that code, copied that code, generated that code with AI, or copied it from Stack Overflow five years ago.”
What do you do with all the real bugs AI can find?
Stenberg also places cURL’s experience within a broader open source context. He points to other projects that have been deluged by both AI-generated garbage and a large volume of genuine vulnerabilities uncovered by internal security teams at hyperscalers. Citing the FFmpeg–Google saga without naming the companies directly, he described how a “giant company” can now use AI and large security departments to find many real bugs, then pressure tiny volunteer projects to fix them under strict disclosure deadlines without providing patches or funding.
Looking ahead, Stenberg says he expects AI to continue “augmenting everything we do in different directions.” He also urges projects to experiment with defenses against both spammy reports and AI scrapers. For example, he suggests measures ranging from vetted-reporter “secret clubs” to stricter submission requirements, even if those measures run against the traditional openness of open source.
Ultimately, his message was less about AI itself than about human choice. The same AI tools that enable “terror reporting” and Internet bandwidth-hogging AI scrapers are also making cURL’s code measurably safer. It’s up to maintainers, companies, and communities to decide whether to use them for good or for bad.