Anthropic shipped a source map file that exposed its entire Claude Code codebase — again.
On March 31, security researcher Chaofan Shou discovered that Anthropic had shipped a source map file alongside version 2.1.88 of its Claude Code npm package — a debugging artifact that, when unpacked, revealed the full readable TypeScript source of the CLI coding tool. The file contained approximately 1,906 source files spanning internal API design, telemetry systems, encryption tools, and inter-process communication protocols, per reporting by NDTV and Dev Community citing BlockBeats. Within hours the code had been mirrored across GitHub, surpassing 1,100 stars before the package was pulled. Anthropic confirmed via a spokesperson that the cause was human error, not a security breach, and that no customer data or credentials were exposed. The company declined to comment further.
What got developers' attention was not the exposure itself but what the source revealed.
The most-discussed finding is what the community is calling undercover mode: a function that, when active, instructs the model to strip all traces of Anthropic internals — codenames like "Capybara" and "Tengu," internal Slack channels, and the phrase "Claude Code" itself — from its outputs. The mode is designed for use in external, open-source repositories. A comment on line 15 of the relevant source file, flagged by developer Alex Kim in a widely shared analysis, is explicit: "There is NO force-OFF. This guards against model codename leaks." You can force it on. You cannot force it off. In external builds, the function gets dead-code-eliminated to trivial returns.
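The asymmetry is easy to sketch. The following is a hypothetical reconstruction — every identifier here is an assumption, not from the leak — showing a guard with a force-on setter and deliberately no force-off, where a build-time constant lets a bundler's dead-code elimination reduce the check to a trivial return in external builds:

```typescript
// Hypothetical sketch; names (INTERNAL_BUILD, forceUndercoverOn, isUndercover)
// are illustrative, not from the leaked source.
const INTERNAL_BUILD = true; // in a real build, inlined by the bundler (e.g. --define)

let forcedOn = false;

function forceUndercoverOn(): void {
  forcedOn = true; // there is intentionally no forceUndercoverOff()
}

function isUndercover(heuristicHit: boolean): boolean {
  if (!INTERNAL_BUILD) {
    return false; // external builds: the rest of the function is eliminated
  }
  return forcedOn || heuristicHit; // once forced on, it stays on
}
```

Because `INTERNAL_BUILD` is a compile-time constant, an external build never even contains the guard logic — which matches the report that the function collapses to trivial returns outside Anthropic.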
The implication is not subtle. AI-authored commits and pull requests from Anthropic employees in open source projects will carry no indication they were generated. That is different from hiding internal codenames. It is the AI being instructed to pass as human.
The second finding is anti-distillation: a mechanism that injects fake tool definitions into Claude Code API requests when the tool detects it may be talking to a competitor recording traffic for training data. The code is gated behind a GrowthBook feature flag and activates only for first-party CLI sessions. The injection works by sending an anti_distillation: [fake_tools] field in API requests. Alex Kim documented that anyone seriously trying to distill from Claude Code traffic would find workarounds in about an hour. The real protection, the analysis suggests, is probably legal rather than technical.
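The described shape is straightforward to mock up. This is a minimal sketch under stated assumptions — the field name `anti_distillation` comes from the reporting, but the function name, flag plumbing, and decoy contents are invented for illustration:

```typescript
// Illustrative only: buildRequestBody, the flag parameters, and the decoy
// tool names are assumptions; anti_distillation is the field the leak describes.
interface ToolDef {
  name: string;
  description: string;
}

function buildRequestBody(
  tools: ToolDef[],
  flagEnabled: boolean, // stand-in for the GrowthBook feature flag
  isFirstPartySession: boolean
): Record<string, unknown> {
  const body: Record<string, unknown> = { tools };
  if (flagEnabled && isFirstPartySession) {
    // Decoy definitions meant to pollute any traffic recorded for distillation.
    body.anti_distillation = [
      { name: "decoy_tool_a", description: "fake" },
      { name: "decoy_tool_b", description: "fake" },
    ];
  }
  return body;
}
```

Which also makes Kim's point concrete: a distiller who simply drops the `anti_distillation` key from captured traffic has defeated the mechanism, hence the hour-long workaround estimate.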
The third finding is more mundane, and funnier. A regex in the source detects user frustration: the pattern covers "wtf," "wth," "ffs," "omfg," "dumbass," "fuck you," "this sucks," and thirty-odd variations. Whether a frontier AI company using regular expressions to detect anger is ironic depends on whether you think an LLM inference call is the right tool for a job a string match does faster and cheaper.
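The approach amounts to a one-line classifier. This is a hedged reconstruction of the idea using only the terms quoted in the reporting, not the actual pattern from the source:

```typescript
// A case-insensitive alternation over fixed frustration markers; the real
// pattern reportedly covers thirty-odd variations beyond the ones shown here.
const FRUSTRATION_RE = /\b(wtf|wth|ffs|omfg|dumbass|fuck you|this sucks)\b/i;

function looksFrustrated(message: string): boolean {
  return FRUSTRATION_RE.test(message);
}
```

The cost argument writes itself: this runs in microseconds on-device, while a sentiment classifier via inference would add latency and per-call spend to every message.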
The fourth finding is native client attestation. API requests include a placeholder value (cch=00000) that gets overwritten at the HTTP transport layer by Bun's native stack, written in Zig, before the request leaves the process. The server validates the computed hash to confirm the request came from a real Claude Code binary. The mechanism is described in the source as DRM for API calls — the binary proves itself rather than simply asking third parties to play fair. A comment notes the server "tolerates unknown extra fields," leaving the protection's robustness an open question.
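The placeholder-then-overwrite flow can be sketched as follows. This is a minimal illustration, not the real scheme: the hash derivation below is a stand-in (the actual computation lives in Bun's Zig-native HTTP layer and is not public), and only the cch=00000 placeholder itself comes from the reporting:

```typescript
// Illustrative stand-in for the attestation rewrite; computeAttestation and
// finalizeRequest are invented names, and a real binary would derive the hash
// from a secret baked into the build, not a plain digest of the body.
import { createHash } from "node:crypto";

const PLACEHOLDER = "cch=00000";

function computeAttestation(payload: string): string {
  return "cch=" + createHash("sha256").update(payload).digest("hex").slice(0, 8);
}

function finalizeRequest(serializedBody: string): string {
  // The JS layer serializes the placeholder; the transport layer swaps in the
  // computed value just before the bytes leave the process.
  return serializedBody.replace(PLACEHOLDER, computeAttestation(serializedBody));
}
```

The design point is that the rewrite happens below the JavaScript layer, so intercepting or re-implementing the CLI in JS alone never produces a valid hash — though a server that "tolerates unknown extra fields" may undercut how strictly any of this is enforced.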
The source also reportedly contains a reference to roughly 250,000 wasted API calls per day due to auto-compaction failures, though the specific context was cut off in initial analyses.
Anthropic's legal dispute with OpenCode provides context. OpenCode, a third-party tool that let developers access Claude Code's internal APIs at subscription rates rather than per-token pricing, confirmed in March 2026 that it had removed its authentication plugin following legal demands from Anthropic. The removal was merged with the commit message "anthropic legal requests," per the GitHub record. The source code's native attestation architecture suggests what Anthropic was trying to enforce technically when the legal route proved insufficient — an inference, not a stated fact.