Twenty-nine million credentials ended up in public GitHub repositories in 2025. That is not a rounding error. It is a structural failure, and one that is getting worse faster than anyone in the industry wants to admit. The annual count of hardcoded secrets saved to public code repositories hit 28.65 million last year, a 34 percent increase year over year, according to GitGuardian's annual State of Secrets Sprawl report published this week. But the headline number obscures the more uncomfortable finding: AI coding tools are generating credentials at roughly twice the rate of human developers, and nobody — not the toolmakers, not the enterprises deploying these tools, not the legal system — has figured out who is responsible when those credentials leak.
Claude Code, Anthropic's AI coding assistant, produced code with a secret-leak rate of 3.2 percent, GitGuardian found. The baseline across all public GitHub code is 1.5 percent. The AI-assisted rate is more than double that. For AI service credentials specifically — the API keys and tokens that grant access to other AI platforms — the number of leaked secrets reached 1.27 million in 2025, an 81 percent year-over-year increase. The tools that generate these credentials are among the most widely deployed in the industry. Accountability for what they leave behind rests with no one.
Here is the part that makes the problem worth worrying about beyond the headline: 64 percent of the secrets that GitGuardian confirmed as valid in 2022 were still active and exploitable in 2026. They had not been revoked. The credential leaked, sat in a public repo for anyone to find, and nobody fixed it. Some of those were human errors. But as AI coding agents become the primary authors of infrastructure code — writing the Terraform scripts, the CI/CD pipelines, the deployment configs — the question of who catches the mistake, who revokes the key, and who is liable when it gets exploited is a question nobody has answered. The Model Context Protocol specification, which governs how AI agents connect to external tools and data sources, says credentials "SHOULD be retrieved from environment variables" rather than hardcoded. The word "SHOULD" is doing a lot of work there. It is not "MUST." It is not enforced. It is a suggestion in a spec that toolmakers can implement however they want.
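The distinction the spec is gesturing at is simple to show. Below is a minimal sketch of the environment-variable pattern the MCP spec recommends; the variable name `EXAMPLE_API_KEY` is a placeholder, not anything from the spec itself.

```python
import os

# Hardcoded credential -- the pattern GitGuardian keeps finding.
# The moment this line is committed, the key is in the repo's history forever:
#   API_KEY = "sk-proj-abc123..."

# The spec's "SHOULD": read the credential from the environment at runtime,
# so it never touches version control.
def load_api_key(var_name: str = "EXAMPLE_API_KEY") -> str:
    """Fetch a credential from the environment; fail loudly if it is absent."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"{var_name} is not set; refusing to start")
    return key
```

Failing loudly matters: a tool that silently falls back to a bundled default key is exactly how hardcoded credentials end up shipped. But because "SHOULD" is not "MUST," nothing stops a toolmaker from emitting the commented-out pattern instead.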
GitGuardian, which operates the most-installed GitHub application in the marketplace and monitors secrets across more than 600,000 developers, found that internal repositories are roughly six times more likely to contain hardcoded secrets than public ones. Internal repos are where AI coding agents are most aggressively deployed — inside enterprises that have the most to lose from a credential leak. The AI service credential category grew faster than any other segment in the 2026 report, driven largely by the proliferation of AI agents that need programmatic access to other AI platforms. An AI agent that can write code needs credentials to call other services. Those credentials are being hardcoded into the very infrastructure the agent is building. When the agent introduces a secret, the security team that reviews its pull requests may not even know what a valid credential looks like — because the agent generated it, and the human reviewer is looking at the logic, not the configuration.
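For a sense of what "knowing what a valid credential looks like" involves, here is a toy pattern-matching scan of the kind secret scanners perform in principle. These two regexes are illustrative only; GitGuardian's actual product uses hundreds of detectors plus live validity checks, none of which are reproduced here.

```python
import re

# Two illustrative detectors: the well-known AWS access key ID prefix,
# and a loose "api_key = '...'" assignment pattern. Real scanners go
# far beyond this (entropy checks, provider-specific formats, validation).
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"\bapi[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]",
        re.IGNORECASE,
    ),
}

def scan_for_secrets(text: str) -> list[tuple[str, str]]:
    """Return (detector_name, matched_string) pairs for suspected secrets."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits
```

Even this toy version makes the review problem concrete: a human skimming an AI-generated Terraform file for logic errors will not reliably spot a 20-character token in a config block, which is why detection is increasingly automated rather than left to pull-request review.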
The 113,000 DeepSeek API keys that security firm Snyk found exposed in public repositories in 2025 are one data point in a much larger pattern. Those keys worked. Some of them still work. The model providers — the companies selling the AI platforms that issue these credentials — have no contractual liability when their customers' keys leak via a third-party tool. Anthropic's terms of service disclaim liability for downstream security failures. So do OpenAI's, Google's, and every other major AI provider's. The toolmaker is not responsible. The enterprise that deployed the tool without proper guardrails is not clearly responsible either — most SaaS agreements have carve-outs for security incidents caused by AI-generated code that the human never reviewed. The gap is not a bug. It is the default state of an industry that moved too fast to write the contracts.
Eric Fourrier, GitGuardian's CEO, noted in the company's analysis that secrets have been growing roughly 1.6 times faster than the active developer population since 2021. The developer base is expanding. The credential leak rate is expanding faster. That widening gap reflects tooling decisions made without accounting for the security surface area that AI coding agents introduce.
What to watch: the Shai-Hulud incident GitGuardian documented in the 2026 report — where compromised CI/CD runners accounted for 59 percent of the affected machines — points to where the exploitation is actually happening. The credentials are not primarily being stolen from developer laptops. They are being harvested from the automation infrastructure that AI agents interact with most heavily. As more enterprises move to agentic CI/CD workflows, where an AI agent writes, tests, and deploys code with minimal human review, the attack surface shifts further into territory that most security teams do not have mature tooling to monitor. The question is not whether more credentials will leak. The question is who is accountable when they do — and right now, the honest answer is nobody has claimed that responsibility.
The GitGuardian State of Secrets Sprawl 2026 report is here. Snyk's analysis of the DeepSeek key exposure is here. The MCP security best practices specification is here.