On March 31, security researcher Chaofan Shou published findings showing that @anthropic-ai/claude-code version 2.1.88, pushed to the npm registry on March 30 at 22:36 UTC, shipped with a bundled source map file, cli.js.map, inside the published tarball. Anyone who downloaded the package could extract the map and read the original TypeScript source it reconstructed. Shou documented the finding, and the exposed code was subsequently mirrored to a public GitHub repository.
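Recovery of this kind requires no exploit, because a source map whose `sourcesContent` field is populated carries the full text of every original file inline as JSON. The following is a minimal sketch of the general technique, not a tool Shou published; the function name and paths are illustrative.

```python
import json
import os

def extract_sources(map_path: str, out_dir: str = "recovered") -> list[str]:
    """Dump any original source files embedded in a source map.

    Maps that populate `sourcesContent` (per the Source Map v3 format)
    carry the originals inline, so recovery needs only the .map file.
    """
    with open(map_path) as f:
        sm = json.load(f)
    recovered = []
    sources = sm.get("sources", [])
    contents = sm.get("sourcesContent") or []
    for name, content in zip(sources, contents):
        if content is None:
            continue
        # NB: a real tool should sanitize names against "../" traversal.
        dest = os.path.join(out_dir, name.lstrip("./"))
        os.makedirs(os.path.dirname(dest) or ".", exist_ok=True)
        with open(dest, "w") as out:
            out.write(content)
        recovered.append(dest)
    return recovered
```

Run against a map like cli.js.map, this writes the reconstructed tree to disk, which is essentially what the public mirror contains.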
The source map revealed a production-grade system: 1,902 TypeScript and TSX files comprising 523,991 lines of source code, per a count of the files and line data in the exposed archive. Architectural documentation preserved in the mirror describes a modular tool-based architecture with around 40 built-in tools; a roughly 46,000-line query engine handling API calls and caching; a multi-agent orchestration layer internally referred to as "swarms"; a bidirectional IDE bridge connecting VS Code and JetBrains to the CLI via JWT-authenticated channels; and a persistent memory system storing user and project context across sessions. The tool system, permission gates, prompt injection defenses, and caching logic are all readable in the exposed source.
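Counts like these are straightforward to reproduce against any checkout of the mirrored tree; a sketch of the obvious approach, with `repo_dir` as a placeholder path, looks like this:

```python
from pathlib import Path

def count_ts_sources(repo_dir: str) -> tuple[int, int]:
    """Return (file_count, total_lines) for .ts/.tsx files under repo_dir."""
    files = [p for p in Path(repo_dir).rglob("*")
             if p.is_file() and p.suffix in {".ts", ".tsx"}]
    # errors="replace" keeps the count robust to any odd encodings.
    total = sum(len(p.read_text(errors="replace").splitlines()) for p in files)
    return len(files), total
```

Exact totals will vary slightly with what one counts (generated files, tests, blank lines), which is worth keeping in mind when comparing figures.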
This was the second operational security incident at Anthropic in five days. On March 26, Fortune reported that Anthropic had left nearly 3,000 internal assets publicly accessible via a misconfigured content management system, including details of an unreleased AI model, draft blog posts, and information about an exclusive CEO retreat. The unreleased model, according to documents Fortune reviewed, was described internally as the most capable Anthropic had trained, with "significantly better performance in reasoning, coding, and cybersecurity" than prior versions. An Anthropic spokesperson attributed the CMS exposure to human error in the CMS configuration and said it was unrelated to Claude or other AI tools. Anthropic secured the data after Fortune notified the company.
Neither incident involved a sophisticated attack. The source map exposure was a build configuration error: source maps should not ship in production packages. The CMS exposure was a default-permissions error: assets uploaded to the content store were public by default unless explicitly set private. Both are the kind of basic configuration mistakes that happen in engineering organizations of any size. The package remains on npm; whether Anthropic has pushed a fixed version is unclear from public records.
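This class of mistake is also cheap to catch before publish, since an npm tarball is just a gzipped tar. A pre-publish CI guard along these lines (a sketch, not Anthropic's actual build tooling) would have flagged cli.js.map before it reached the registry:

```python
import tarfile

def find_source_maps(tarball_path: str) -> list[str]:
    """List any .map files packed into an npm tarball (a gzipped tar)."""
    with tarfile.open(tarball_path, "r:gz") as tf:
        return [m.name for m in tf.getmembers() if m.name.endswith(".map")]
```

A CI step would run this against the output of `npm pack` and fail the build if the returned list is non-empty.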
The broader context is harder to dismiss. Anthropic has marketed Claude Code as a tool it uses internally to automate its own software development, a pitch that rests on an implicit claim of operational rigor. Two exposures in five days, one of the company's own codebase and one of unreleased product and model information, form a pattern worth noting regardless of how cleanly each incident is individually explained.
Fortune also noted that AI coding tools, including Claude Code, make exposed misconfigurations easier to find by automating pattern detection across publicly accessible data. Anthropic has said the March 26 CMS exposure was unrelated to its AI tools. The point is not that Anthropic's tools caused this incident; it is that increasingly capable AI-assisted security research means misconfigurations anywhere are found faster. A company's own tooling is part of what widens the surface area for discovery.
The code remains publicly readable at github.com/nirholas/claude-code; the repository credits Chaofan Shou with discovery rather than maintenance.