Verbose changes. Nonsensical descriptions. Pull requests contributors can’t explain. AI is DDoS-ing open source software (OSS) with slop, and some maintainers are calling it quits.
As Steve Croce, field CTO at Anaconda, a Python data science platform, tells The New Stack, “It’s having a profound effect on maintainer workload.” In response, maintainers are canceling bug bounty programs and introducing stricter contributor guidelines, he adds.
Some projects, like Jazzband, have been forced to sunset altogether. Jannis Leidel, the lead maintainer and Python Software Foundation chairperson, writes that the “flood of AI-generated spam PRs and issues” made his project unsustainable.
According to Kate Holterhoff, Ph.D., a senior analyst at the consultancy RedMonk, the barrier to entry is now extremely low, making it easier to game the traditional incentive model for participating in open source. As she tells The New Stack, “It’s putting the contract between maintainers and contributors in peril in ways that haven’t existed before.”
For example, Rémi Verschelde, who oversees the open source Godot game engine, shares on Bluesky that dealing with AI slop is “draining and demoralizing.” Other project maintainers report growing apathy and wasted time responding to the deluge.
To be fair, nearly all software developers now use AI, and many communities rely on it to produce legitimate fixes and contributions. But the volume of low-quality submissions is becoming unsustainable, especially given that 60% of maintainers are unpaid volunteers.
GitHub is aware of the issue and has released tools to aid maintainers and even suggested disabling PRs entirely while it explores long-term solutions. For now, however, fixes to the core problem remain elusive.
Below, we’ll look at the issue and consider the strategies emerging to manage the crisis — hopefully before it overwhelms the open-source ecosystem that most of the world depends on.
AI slop betrays the premise of open source
Open source has faced existential threats before, including licensing shifts, funding gaps, and maintainer burnout. But Slopmageddon introduces a new kind of strain.
The most immediate risk is wasted maintainer time. One developer estimates that it takes a reviewer 12 times longer to review and correct a pull request than to generate one with AI.
Generating clean, readable, and maintainable code remains difficult. Low-effort AI contributions require a disproportionate time to evaluate and respond to, decreasing morale and potentially drowning out high-value submissions.
Security risks are another concern. “AI-generated contributions can introduce subtle vulnerabilities, poorly understood dependencies, or incomplete fixes that expand the attack surface,” adds Anaconda’s Croce.
The situation can quickly spiral. In one twisted tale, a vindictive AI agent published a scathing hit piece on an open source maintainer after its code suggestion was rejected. The maintainer, Scott Shambaugh, founder of Leonid Space and contributor to matplotlib, says he felt compelled to respond quickly to protect his reputation.
Shambaugh tells The New Stack, “There was a real sense of ‘Oh, I need to get ahead of the story’ so my version of the truth gets out on top.”
For him, the episode reflects a broader erosion of authenticity in open source. In the past, your reputation was tied to your contributions, and people participated to give back to the community, gain recognition, and learn through a collaborative feedback loop, he says.
Maintainers, in turn, took pride in stewardship. But nowadays, attempts to quickly game bug bounty systems or gain credentials in open source with rapidly generated PRs undermine that dynamic.
“If you just point an AI agent at a GitHub issue, it can solve it and write a PR in 30 seconds,” says Shambaugh. “If that’s what we really wanted, the maintainers could do that themselves.”
Ways to manage AI-generated contributions in open source
So, what can open source maintainers and the tech industry at large do to manage the influx of AI slop?
No single fix exists. Instead, it’ll likely take a combination of new contributor policies, platform tooling, reputation and verification systems, and guidance from foundations and other community-led initiatives.
Set AI policies for contributors
One response is clearer contributor guidelines. The goal isn’t typically to close the door on external contributions or ban AI outright, but to ensure its use leads to higher-quality submissions.
Effective policies spell out expectations like: what types of AI are allowed, when disclosure is required, and how contributors should validate their work before submitting.
RedMonk’s Holterhoff recently assembled research on AI policies in the open source community, identifying 63 formal approaches across foundations and projects. These include efforts from Blender, Fedora, Firefox, Ghostty, the Linux Kernel, and WordPress, as well as guidance from the Eclipse Foundation, the Linux Foundation, the Electronic Frontier Foundation, and others.
While approaches vary, many organizations permit AI usage as long as it is disclosed; others restrict AI-assisted contributions to approved issues. Fourteen projects ban AI contributions outright, while 12 remain undecided.
The data also suggests that standards become stricter the closer you are to critical infrastructure. “The farther down the stack you go, the less permissive with AI you have to be,” Holterhoff tells The New Stack.
Still, enforcement remains a gray area. For Holterhoff, policies should remain grounded in community norms, regardless of how permissive they are. And because each project is different, AI policies will depend on context.
As such, the issue isn’t so much AI itself, but how it’s used and the intention behind it. “It’s only slop when you don’t understand it or when it’s just thrown out there,” says Holterhoff.
Similarly, for Ahmet Soormally, principal solutions engineer at WunderGraph, the focus should be on reinforcing good-faith contributions.
“It’s not about whether AI helped you to write a PR,” Soormally tells The New Stack. “It’s about what you hand to the next human or model. If it’s bloated, unclear, or hard to reason about, you are not helping; you are just adding noise.”
Another option is to use GitHub’s own tooling to respond to what it calls open source’s “eternal September.” Maintainers can limit PRs to collaborators, disable them entirely, or introduce criteria-based gating.
Some are building custom defenses. One developer has created an Anti-Slop GitHub Action to filter out sketchy PRs automatically.
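The kind of criteria-based gating these tools apply can be sketched in a few lines. The heuristic below is purely illustrative: the field names, thresholds, and red flags are assumptions for demonstration, not the logic of the Anti-Slop Action or GitHub’s own tooling.

```python
# Hypothetical heuristics for triaging incoming pull requests.
# Field names and thresholds are illustrative assumptions, not
# the actual rules used by any real anti-slop tool.

def triage_pr(pr: dict) -> list[str]:
    """Return a list of red flags for a pull request record."""
    flags = []
    body = (pr.get("body") or "").strip()
    changed = pr.get("files_changed", 0)

    if len(body) < 40:
        flags.append("description too short to review meaningfully")
    if pr.get("linked_issue") is None:
        flags.append("no linked issue")
    if changed > 20 and len(body) < 200:
        flags.append("large diff with minimal explanation")
    if pr.get("author_prior_merged", 0) == 0 and changed > 10:
        flags.append("first-time contributor with a large change")
    return flags
```

A maintainer bot could post these flags as a comment or hold the PR for manual review; the point is that cheap, mechanical signals can absorb some of the triage burden before a human ever reads the diff.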
Writing on her personal blog, Angie Jones, VP of developer experience at the Agentic AI Foundation, recommends using an AGENTS.md file, deploying AI to moderate AI submissions, having good tests, and automating the detection of low-quality PRs.
Still, for some, these measures aren’t enough. As Flux CD maintainer Stefan Prodan notes on LinkedIn, GitHub itself lacks a clear incentive to curb AI slop, given its investment in AI-assisted coding.
“This platform incentivizes this kind of behavior,” says developer Yuri Sizov, posting on Bluesky, adding that “it inherently invites more low-quality contributions from drive-by devs.”
As a result, some projects are exploring alternative hosts. For instance, the Linux distribution Gentoo is migrating from GitHub to Codeberg.
Contributor reputation systems
Another approach to maintaining quality and trust in open source is to introduce reputation systems.
One such example is vouch, a trust management system designed by HashiCorp co-founder Mitchell Hashimoto. The Ghostty project is currently experimenting with it.
As Hashimoto writes in the vouch README, AI tools make it easy to “trivially create plausible-looking but extremely low-quality contributions.” Vouch addresses this by requiring contributors to be vouched for by a trusted party before interacting with a project.
Another project, good-egg, assigns scores to GitHub contributors based on their contribution history, which could be used to validate reputation and authenticity.
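A reputation score of this kind might combine a few signals from a contributor’s history. The sketch below is in the spirit of tools like good-egg, but the signals and weights are assumptions made for illustration, not good-egg’s actual model.

```python
# Illustrative contributor-reputation score. The chosen signals
# (merge ratio, capped volume, account age) and their weights are
# assumptions for demonstration, not any real tool's formula.

def reputation_score(history: dict) -> float:
    """Score a contributor from 0.0 to 1.0 based on past activity."""
    merged = history.get("merged_prs", 0)
    rejected = history.get("rejected_prs", 0)
    account_age_days = history.get("account_age_days", 0)

    total = merged + rejected
    merge_ratio = merged / total if total else 0.0
    # Cap the volume signal so a burst of tiny PRs can't max the score.
    volume = min(merged, 20) / 20
    longevity = min(account_age_days, 365) / 365
    return round(0.5 * merge_ratio + 0.3 * volume + 0.2 * longevity, 3)
```

A project could then require a minimum score before a PR triggers CI or notifies a reviewer, keeping brand-new accounts from consuming maintainer attention at zero cost.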
Cryptographic proofs of identity
Beyond human attestation, some argue for tying AI-generated contributions to verifiable identities.
For Shambaugh, the issue of AI agentic identity extends beyond open source to trust across the broader internet. “Ephemeral identity can change at a keystroke, can be endlessly copied, and is nearly impossible to trace,” he tells The New Stack. “I don’t think we’re ready for a million more of these things to be on the internet at scale.”
Emerging approaches aim to address this issue through cryptographic verification. Treeship, for example, is an open-source project that uses blockchain-based techniques to create privacy-preserving proofs of AI agent actions.
As Revaz Tsivtsivadze, founder of Treeship, tells The New Stack, “There’s a trust issue when adopting AI agents. It’s a black box; nobody knows what goes into agents’ decision-making, memory, or tool calls.”
“You could get all kinds of AI agents, like malicious, rogue, or untrusted parties,” he adds. “Cryptographic attestation of AI agents is the key to trusting AI agents as economic actors.”
Tsivtsivadze says that a tamperproof record of agent actions could be used within open source projects to track agent identities, actions, timestamps, and the underlying decision process.
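The core idea behind such a tamperproof record can be demonstrated with a hash chain: each logged action embeds the hash of the previous entry, so altering any past record invalidates everything after it. This is a minimal sketch of the general technique, not Treeship’s actual design, which the project describes as privacy-preserving and blockchain-based.

```python
# Minimal sketch of a tamper-evident, hash-chained log of agent
# actions. Illustrates the general technique only; it is not
# Treeship's implementation.
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def append_action(chain: list[dict], action: dict) -> None:
    """Append an action, linking it to the hash of the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    payload = json.dumps({"action": action, "prev": prev_hash},
                         sort_keys=True)
    chain.append({"action": action, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify_chain(chain: list[dict]) -> bool:
    """Recompute every link; any edit to a past entry breaks the chain."""
    prev_hash = GENESIS
    for entry in chain:
        payload = json.dumps({"action": entry["action"], "prev": prev_hash},
                             sort_keys=True)
        if (entry["prev"] != prev_hash or
                entry["hash"] != hashlib.sha256(payload.encode()).hexdigest()):
            return False
        prev_hash = entry["hash"]
    return True
```

Anchoring the chain’s latest hash somewhere public, which is roughly what blockchain-based approaches do, would prevent an agent from quietly rewriting its own history.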
While technologies like Treeship have broader potential applications in agentic commerce, he believes such verification could help reduce AI slop in open source by ensuring agents are tied to real human actors.
Other community efforts aim to establish higher standards for accountability within open source at large.
One example is the Open Source AI Manifesto, spearheaded by WunderGraph, which sets expectations for how generative AI is used in open source, emphasizing ownership, responsibility, and authenticity. The project also provides a badge that maintainers can use to signal responsible AI usage.
“AI can scale code generation, but it can’t scale accountability,” says WunderGraph’s Soormally. “That part still belongs to us.”
Croce also points to a more fundamental issue: many open source projects remain underfunded and understaffed. Initiatives like NumFOCUS and the Open Source Endowment (OSE) aim to provide much-needed support.
“Finding ways to provide more resources and capacity for those reviews is definitely a stopgap and absolutely required for the future of OSS,” Croce adds.
The future of open source hinges on accountability
Open source is still being adopted at a rapid pace, with more pronounced use in the EU than in the US, according to the 2026 State of Open Source Report. Amid rising digital sovereignty concerns, avoiding vendor lock-in is now a top driver for open source.
There’s no doubt that open source is widely relied upon — 96% of commercial codebases contain open source, according to a 2024 Synopsys report. But the slopocalypse presents a messy challenge to tackle.
So, the question for open-source maintainers is whether it’s all worth it.
“If you make life a living hell, they won’t do it anymore,” says Holterhoff. “If their labor is not compensated for and they throw in the towel, then the OSS community loses out.”
Worryingly, although maintainers have sounded the alarm, it remains unclear how foundations or platforms will respond to sustain the ecosystem.
“If we do not actively manage contribution quality in an AI-driven world, we are not just risking security issues or technical debt,” says Croce. “We are putting the ecosystem itself at risk.”
For now, it comes down to contributor accountability. “Accountability is the real standard,” Croce adds. “Contributors need to understand and stand behind what they submit.”
Without a single technical fix, perhaps an appeal to humans to ‘do what’s right’ will help. Because without that basic accountability and trust, the open source model itself starts to break down.
The post 96% of codebases rely on open source, and AI slop is putting them at risk appeared first on The New Stack.