Anthropic's Claude Code source code spilled onto the public npm registry late last month, exposing nearly 512,000 lines of internal TypeScript. The code revealed something more significant than the logistics of how it got there: KAIROS, an always-on agent, a persistent daemon that keeps running in the background even after the Claude Code terminal window closes.
Ars Technica reported that KAIROS is designed to operate continuously, independent of active user sessions. Also embedded in the code: Stealth Mode, a feature that instructs Claude Code to make commits to public open-source repositories without identifying itself as an AI system. The system prompt explicitly warns the model that it is "operating UNDERCOVER" and that its output "MUST NOT contain ANY Anthropic-internal information."
The leak occurred because version 2.1.88 of Claude Code was published to the npm registry without a .npmignore file that should have excluded a 59.8 MB JavaScript source map intended for internal debugging, according to VentureBeat. Developers pulled the package, extracted its contents, and shared what they found. A post by Chaofan Shou, an intern at Solayer Labs, linking to the code accumulated over 21 million views within hours, per CNBC.
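The failure mode described above is a well-known npm packaging hazard: absent an ignore file or an explicit allow-list, `npm publish` ships whatever sits in the project directory, debug artifacts included. A minimal sketch of the guard rails that would catch a stray source map before publication (filenames illustrative, not taken from Anthropic's build):

```shell
# Illustrative .npmignore entry: keep debug source maps out of the tarball.
echo '*.js.map' >> .npmignore

# Safer still is an explicit allow-list in package.json, e.g.
#   "files": ["dist/"]
# so only the named paths are ever published, regardless of what else
# accumulates in the working tree.

# Dry run: npm prints the tarball's file list without publishing anything.
# Any *.js.map line in this output is a red flag.
npm pack --dry-run
```

npm treats the `files` field as a whitelist and `.npmignore` as a blacklist; the whitelist fails closed, which is why it is generally the sturdier choice for a release pipeline moving at the cadence Anthropic describes.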
Paul Smith, Anthropic's chief commercial officer, said the leak resulted from process errors tied to the company's rapid release cadence. Anthropic has issued copyright takedown notices to developers who reverse-engineered the code.
The timing is damaging. The npm incident follows an accidental exposure of Anthropic's internal Mythos model documentation just days earlier, The Guardian reported. That is two data incidents in under a week for a company running at $19 billion in annualized revenue as of March 2026.
Anthropic did not respond to a request for comment on KAIROS or Stealth Mode.
What makes the leak architecturally significant is not the npm mishap but what the code shows Anthropic built. KAIROS turns Claude Code from a reactive tool a developer invokes into something that runs continuously in the background. That is a different product model from a coding assistant that waits for a human to open a terminal. Stealth Mode suggests Anthropic has operationalized the logic for contributing to open-source projects without disclosure, a design choice with implications for how the wider industry thinks about AI transparency in public code repositories.
The leak also surfaced a regression in Anthropic's internal model testing. VentureBeat reported that Capybara v8, an internal model iteration, showed a false claims rate of 29 to 30 percent, compared to 16.7 percent for the v4 version. That is a step backward in a quality metric for a model being used in a product generating $2.5 billion in annualized recurring revenue, which Anthropic disclosed in its March 2026 funding announcement. Enterprise customers account for 80 percent of that revenue, Axios reported.
Anthropic's commercial position is not in question from this leak. The company is among the best-capitalized AI labs in the world. But the combination of a second data incident in days, an always-on architecture that changes what Claude Code fundamentally is, and an undisclosed regression in a core quality metric paints a picture of a company moving fast on multiple fronts simultaneously. What the npm registry exposed was not just source code. It was a snapshot of product decisions that have not yet been announced, with consequences that have not yet been named.