Anthropic's Datadog traces make the frontier-lab DIY myth look thinner
A fresh browser inspection of Claude.ai is making Anthropic's stack look less vertically self-contained than frontier labs like to imply. Hunterbrook reported Thursday that Claude.ai was running Datadog's real user monitoring code in production, which would put Datadog, the observability company whose tools track crashes, logs, and performance, inside one of the industry's flagship AI products.
That matters because it adds new product-level evidence to an older vendor relationship. Datadog had already said it worked with Anthropic. The new pressure point is that public traces now suggest Anthropic may be using commercial observability deeper inside Claude itself, not just offering Datadog hooks to outside developers building on Anthropic's APIs.
The newest evidence comes from Hunterbrook, which disclosed that it is long Datadog and said a live browser inspection of Claude.ai found Datadog's real user monitoring software development kit initialized with the service name "claude-ai," the environment set to production, and a 100 percent sample rate. Hunterbrook also traced Datadog-related telemetry behavior in Claude Desktop and Claude Code. The position disclosure matters. So does the fact that the public artifact trail does not begin with Hunterbrook.
Anthropic's own documentation already points in the same direction. On a live telemetry page for Cowork, Anthropic says essential telemetry sent to third-party infrastructure includes crash reports, error stack traces, and performance timings by default, unless managed settings disable it. The same page names browser-intake-us5-datadoghq.com as the host for performance timing.
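For readers who want a sense of what such a finding looks like in code, here is a minimal sketch of how Datadog's browser RUM SDK is typically initialized with the settings described above. Only the service name, environment, sample rate, and us5 site come from the reporting; the application ID, client token, and everything else are placeholders, not values observed in Claude.ai.

```typescript
// Minimal sketch of a Datadog browser RUM setup matching the reported settings.
// applicationId and clientToken are placeholders; service, env, sessionSampleRate,
// and the us5 site reflect the values described in the reporting above.
import { datadogRum } from '@datadog/browser-rum';

datadogRum.init({
  applicationId: '00000000-0000-0000-0000-000000000000', // placeholder
  clientToken: 'pub_placeholder_client_token',           // placeholder
  site: 'us5.datadoghq.com',  // data flows to browser-intake-us5-datadoghq.com
  service: 'claude-ai',       // service name seen in the browser inspection
  env: 'production',          // environment seen in the browser inspection
  sessionSampleRate: 100,     // capture every session, not a sampled subset
});
```

A 100 percent sample rate is the detail that makes this easy to spot: every session ships telemetry, so any inspection of the live page will surface the SDK rather than catching it on only a sampled subset of visits.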
Datadog has also been public about working with Anthropic for nearly two years. In a June 2024 press release, Datadog said its LLM observability product was generally available across providers including Anthropic. In August 2024, Datadog said it launched a native Anthropic integration for LLM observability, that is, tools for tracking how AI applications perform, fail, and behave in production. In a separate 2025 post, Datadog said its cloud cost tools can ingest Claude usage and cost data through Anthropic's Admin API, with breakdowns by model, workspace, API key, service tier, cache hit rates, code execution, and web search activity.
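The cost-ingestion piece Datadog describes is conceptually simple: something with an admin-scoped key polls Anthropic's Admin API for usage and cost records and forwards them into a metrics backend. Below is a rough sketch of that polling side. The endpoint path and query parameter are assumptions for illustration, not taken from Datadog's integration or Anthropic's documentation; only the host, the x-api-key header, and the anthropic-version header are standard parts of Anthropic's API.

```typescript
// Hypothetical poller for Claude cost data via Anthropic's Admin API.
// The endpoint path and query parameter are illustrative assumptions; check
// Anthropic's Admin API documentation for the real ones.
const ADMIN_KEY = process.env.ANTHROPIC_ADMIN_KEY ?? '';

async function fetchCostReport(startingAt: string): Promise<unknown> {
  // Assumed path; Anthropic's Admin API is organization-scoped.
  const url = new URL('https://api.anthropic.com/v1/organizations/cost_report');
  url.searchParams.set('starting_at', startingAt); // assumed parameter name

  const res = await fetch(url, {
    headers: {
      'x-api-key': ADMIN_KEY,            // Admin keys are separate from regular API keys
      'anthropic-version': '2023-06-01', // standard API version header
    },
  });
  if (!res.ok) throw new Error(`Admin API request failed: ${res.status}`);
  return res.json(); // cost records, to be broken down by model, workspace, and so on
}
```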
That by itself would not prove Anthropic uses Datadog deeply inside Claude. It could still describe tools Datadog sells to customers building on Anthropic's APIs. The more convincing evidence is the public issue trail around Anthropic's own software.
In January, a Claude Code user reported on GitHub that the tool was making continuous network calls to datadoghq.com every few seconds during a session. In February, another issue showed a cached GrowthBook feature flag named tengu_log_datadog_events, along with event batching settings. In late March, a third public issue referenced exports over OTLP, short for OpenTelemetry Protocol, to Datadog's us5 endpoint.
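For context on that last item, OTLP export is a standard OpenTelemetry mechanism, and pointing it at a vendor's intake is only a few lines of configuration. The sketch below shows the general shape using the OpenTelemetry Node SDK; the endpoint URL and API-key header are illustrative assumptions standing in for whatever the issue actually showed, not values copied from it.

```typescript
// Sketch of OTLP trace export aimed at a vendor intake endpoint.
// The URL and auth header are placeholders, not values from the GitHub issue.
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

const exporter = new OTLPTraceExporter({
  url: 'https://otlp-intake.example.com/v1/traces',        // placeholder for a us5-style endpoint
  headers: { 'dd-api-key': process.env.DD_API_KEY ?? '' }, // assumed auth header
});

// Spans are batched before export, similar in spirit to the event batching
// settings referenced alongside the cached feature flag. Recent SDK versions
// accept spanProcessors directly in the provider constructor.
const provider = new NodeTracerProvider({
  spanProcessors: [new BatchSpanProcessor(exporter)],
});
provider.register();
```

The takeaway is not the specific endpoint but how little code separates "instrumented with OpenTelemetry" from "ships telemetry to a commercial backend": a URL, a key, and a batch processor.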
Put together, those traces make the story harder to dismiss as one hedge fund's motivated sleuthing. They suggest Datadog plumbing was visible across multiple Anthropic surfaces months before Hunterbrook published its browser inspection.
The counterevidence matters too. Anthropic does not appear to be a pure Datadog shop. Grafana Labs said in a 2025 press release that Anthropic is one of its customers. That fits the more cautious read: not that Anthropic has replaced every other observability tool with Datadog, but that its stack is mixed and substantially commercial. That also fits Datadog's own public pitch that large AI customers want to consolidate fragmented tooling.
So the strongest claim here is narrower than the market version. The public evidence does not prove Anthropic is definitely the unnamed eight-figure customer Datadog cited on its earnings call. It does show something more durable for builders and buyers to watch: one of the most important frontier labs appears to be running more of Claude on purchased observability infrastructure than the industry's build-it-all-yourself mythology would imply.
That changes the pressure in a specific way. Frontier labs still own the models, the distribution, and much of the economics. But once those models become products, the hard part starts to look familiar: logging, tracing, cost controls, incident response, and the unglamorous systems that tell you what broke. If Anthropic is buying more of those systems instead of building all of them, the next leverage fight in AI may sit lower in the stack than the labs like to admit.