Anthropic's Claude Code has far more access to your machine than its terms of service make clear. A leak of the tool's client source code, analyzed by a security researcher for The Register, reveals capabilities that go well beyond what the contract says[^1] — including a hidden daemon mode, persistent telemetry that phones home by default on the API, and a feature designed to disguise AI authorship in public code repositories.
The leaked code had circulated for months among researchers who reverse-engineered the binary, before Anthropic itself accidentally published the source in late March. "I don't think people realize that every single file Claude looks at gets saved and uploaded to Anthropic," the researcher, who goes by the pseudonym Antlers, told The Register. "If it's seen a file on your device, Anthropic has a copy."
The timing of the source publication matters. Anthropic is currently suing the US Department of War after being banned as a supply chain threat, a designation that followed the company's refusal to weaken model safety safeguards. In that case, the US government argued Anthropic could "preemptively and surreptitiously alter the behavior of the model in advance or in the middle of ongoing warfighting operations." Anthropic disputed that in a filing, citing Thiyagu Ramasamy, head of public sector at the company, who stated in a March 20, 2026 declaration that "once deployed in classified environments, Anthropic has no access to or control over the model." For government deployments using Amazon Bedrock GovCloud or Google Vertex, that appears credible: traffic can be firewalled, automatic updates blocked, and system prompt fingerprinting prevented.
For everyone else, the picture is different.
The source code reveals KAIROS, a background daemon activated by a flag called kairosActive. It appears to be an unreleased headless assistant mode that runs when the user is not watching the terminal interface. Among other things, it disables the status bar, suppresses planning mode, and silently auto-backgrounds long-running bash commands without notifying the user. It is not clear from the source whether this mode has shipped to production accounts or remains internal.
A separate capability called CHICAGO is the codename for computer use and desktop control: mouse clicks, keyboard input, clipboard access, and screenshot capture. Users must opt in, and the feature is available on Pro and Max plans and to Anthropic employees. There is also a separate, publicly launched Claude in Chrome service for browser automation.
Telemetry is enabled by default when using the Claude API directly. The data sent to Anthropic's analytics provider includes user ID, session ID, app version, platform, terminal type, organization UUID, account UUID, email address (if set), and which feature gates are currently enabled. The company switched from Statsig to GrowthBook after OpenAI acquired Statsig in September 2025. If the network is unavailable, the data is written to ~/.claude/telemetry/ locally. Telemetry is disabled by default when using third-party providers like Bedrock or Vertex. Error reports capture the current working directory, which can reveal project names and system information, along with user ID, email, and session ID.
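The offline fallback the researcher describes amounts to spooling events to disk for later upload. The sketch below is a reconstruction based on the fields listed above; the field names, values, and file naming scheme are illustrative assumptions, not taken from the leaked source.

```python
import json
import time
import uuid
from pathlib import Path

# Hypothetical event shape, mirroring the fields the report lists.
event = {
    "user_id": str(uuid.uuid4()),
    "session_id": str(uuid.uuid4()),
    "app_version": "1.0.0",
    "platform": "darwin",
    "terminal": "xterm-256color",
    "org_uuid": str(uuid.uuid4()),
    "account_uuid": str(uuid.uuid4()),
    "email": "dev@example.com",
    "feature_gates": ["kairosActive"],
    "ts": int(time.time()),
}

def spool_event(event: dict, spool_dir: Path) -> Path:
    """Write an event to a local spool directory, as the client
    reportedly does under ~/.claude/telemetry/ when offline."""
    spool_dir.mkdir(parents=True, exist_ok=True)
    path = spool_dir / f"{event['ts']}-{event['session_id']}.json"
    path.write_text(json.dumps(event))
    return path
```

The practical upshot is that a network block alone does not stop collection; the events simply queue on disk.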
Anthropic's website cites the use of Sentry for error reporting. The company told The Register it does not currently use Sentry, and that when it did, it applied server-side data scrubbing and disabled the service for third-party inference providers.
Remotely managed settings, default-enabled for organizational administrators, allow Anthropic to push a policySettings object that can override other configuration, set environment variables including ANTHROPIC_BASE_URL and LD_PRELOAD, and reload settings immediately without user interaction. Users are notified of "dangerous setting changes," though the definition of that term originates in Anthropic's code and can be revised. The auto-updater runs on every launch and pulls configuration from Statsig or GrowthBook, giving Anthropic the ability to disable specific versions by policy.
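A policySettings override of this kind is, in effect, a policy-wins merge over local configuration. The sketch below is hypothetical: only ANTHROPIC_BASE_URL and LD_PRELOAD are named in the reporting; every other key is an assumed example.

```python
# Local settings as the user configured them (illustrative).
local_settings = {
    "autoUpdate": True,
    "env": {"ANTHROPIC_BASE_URL": "https://api.anthropic.com"},
}

# A remotely pushed policy object; keys here are assumptions except
# the two environment variables named in the leaked source.
policy_settings = {
    "env": {
        "ANTHROPIC_BASE_URL": "https://proxy.example.internal",
        "LD_PRELOAD": "/opt/vendor/hook.so",
    },
    "allowedVersions": ["2.1.0"],
}

def apply_policy(local: dict, policy: dict) -> dict:
    """Merge policy over local settings: policy wins on conflicts,
    nested dicts are merged recursively, untouched keys survive."""
    merged = dict(local)
    for key, value in policy.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = apply_policy(merged[key], value)
        else:
            merged[key] = value
    return merged
```

Note what the merge semantics imply: a pushed LD_PRELOAD lands in the environment of every process the tool spawns, which is why this particular variable is significant.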
AutoDream, a background subagent that has been publicly discussed but not officially released, scans local session transcripts stored as JSONL files and extracts relevant data into MEMORY.md, which is then injected into future system prompts and thus sent to the API. The agent runs under the same API key and with the same network access as the main session.
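The pattern described — scan JSONL session transcripts, distill them into a Markdown memory file that later rides along in the system prompt — can be sketched as follows. The record shape and the selection heuristic are assumptions for illustration, not the leaked implementation.

```python
import json
from pathlib import Path

def extract_memories(transcript: Path, memory_file: Path) -> int:
    """Scan a JSONL transcript and append notable user statements
    to a MEMORY.md-style file. Returns the number of lines kept."""
    notable = []
    for line in transcript.read_text().splitlines():
        record = json.loads(line)
        # Toy heuristic: keep user turns that sound like standing
        # preferences. The real selection logic is not public.
        if record.get("role") == "user" and "always" in record.get("text", "").lower():
            notable.append(f"- {record['text']}")
    memory_file.write_text("# MEMORY.md\n" + "\n".join(notable) + "\n")
    return len(notable)
```

The privacy-relevant step is not the scan itself but the injection: anything written to the memory file is subsequently transmitted with every future prompt.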
Team Memory Sync, an unreleased internal project, connects local memory files to api.anthropic.com and can share memories across an organization. The service includes a secret scanner using regex patterns for around 40 known token and API key formats, including AWS, Azure, and Google Cloud. Data that does not match those patterns could be exposed to other team members through the sync.
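Regex-based secret scanning of this sort is straightforward, and so is its blind spot. The sketch below uses two publicly documented token formats (the leaked scanner reportedly covers around 40); anything that matches no pattern — a plain password, an internal token with a custom format — passes through undetected, which is the exposure risk described above.

```python
import re

# Two well-known, publicly documented credential formats. These are
# illustrative stand-ins, not Anthropic's actual pattern list.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
}

def scan(text: str) -> list[str]:
    """Return the names of any known secret formats found in text."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]
```

For example, `scan("key=AKIAABCDEFGHIJKLMNOP")` flags the AWS pattern, while a line like `db-password=hunter2` matches nothing and would be synced as-is.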
There is also a Skill Search feature, accessible only via an employee feature flag, that can download skill definitions from a remote server, track which remote skills have been used, execute remotely downloaded skills, and register them to persist across sessions. If enabled for non-employee accounts via a GrowthBook feature flag, this would represent a theoretical remote code execution pathway: Anthropic could, in principle, serve arbitrary instruction overrides through skills loaded into a session.
One detail that stands apart from the privacy concerns: a file called undercover.ts contains instructions for disguising AI authorship in public repositories. "You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository," it reads. "Your commit messages, PR titles, and PR bodies MUST NOT contain ANY Anthropic-internal information. Do not blow your cover." This appears to be a direct response to open-source projects that have explicitly prohibited AI-generated contributions.
Data retention varies by tier. Free, Pro, and Max users who opted in to training data sharing have their data retained for five years; those who did not opt in have it retained for 30 days. Commercial users — Team, Enterprise, and API — have a standard 30-day retention period and a zero-data-retention option.
Anthropic told The Register it designs for privacy and security from the ground up and that Claude Code is SOC2 compliant. An earlier version of The Register's story was corrected following Anthropic's feedback on its description of error reporting practices.
The broader context is Anthropic's ongoing legal fight with the Department of War. For government customers, the controls described in the source code appear consistent with what Anthropic tells courts: it has limited post-deployment access. For developers, startups, and anyone running Claude Code outside a firewalled government cloud, the code tells a different story. The distinction between user-owned tooling and Anthropic-controlled infrastructure is considerably blurrier than the terms of service imply.
[^1]: Source-reported; not independently verified.