Big tech companies step in to support the open source security ecosystem - Help Net Security
The companies building AI are trying to fix a problem their own tools created.
The Linux Foundation announced $12.5 million in grant funding on Tuesday, collectively pledged by Anthropic, AWS, GitHub, Google, Google DeepMind, Microsoft, and OpenAI. The money goes to the Alpha-Omega Project and the Open Source Security Foundation (OpenSSF) to help open source maintainers handle a crisis that, in large part, came from AI: the flood of AI-generated bug report submissions that has overwhelmed security teams across the ecosystem.
The specific trigger came from cURL: in January, maintainer Daniel Stenberg announced he was ending the project's bug bounty program after being buried under AI-generated submissions. "Grant funding alone is not going to help solve the problem that AI tools are causing today on open source security teams," said Greg Kroah-Hartman of the Linux kernel project. "OpenSSF has the active resources needed to support numerous projects that will help these overworked maintainers with the triage and processing of the increased AI-generated security reports they are currently receiving."
The irony is structural, not incidental. AI-powered security tools can find vulnerabilities at scale — that's the promise. But at scale, the outputs become unmanageable for small maintainer teams. The same companies selling AI-assisted security tools are now funding the infrastructure to manage the outputs of those tools.
Google's blog post on the announcement was explicit about this. "Turn a flood of AI-generated findings into fast action," the post said. The company also noted that its internal AI tools, Big Sleep and CodeMender, developed by Google DeepMind, have found and fixed vulnerabilities in Chrome — including in the browser's complex security infrastructure. Those same tools, at scale across thousands of open source projects, are what generated the flood in the first place.
The funding will support integrating security tools into maintainer workflows, deploying automated fixes, and building long-term resilience. The initiative also includes Sec-Gemini, a Google project applying AI to security research, which is being extended to open source projects.
What is notable is the coalition: Anthropic and OpenAI, in the middle of their legal war with each other and the government, are on the same funding list alongside Google DeepMind and Microsoft. The open source security problem is shared infrastructure that none of them can opt out of. Every AI model depends on open source packages. If those packages collapse under the weight of AI-generated security noise, the foundation of the AI ecosystem weakens.