On Monday, April 6, Claude went dark for roughly 90 minutes. Users trying to log in to the AI assistant via web, mobile, or the Claude Code developer tool hit errors starting around 15:00 UTC. By 16:30 UTC, Anthropic had resolved the issue, according to the company's status page. The same incident timeline shows it was the eighth disruption to the service in under a week.
But for most of those 90 minutes, Claude's own status page reported no problem at all. "All Systems Operational," it read. Meanwhile, user reports flooded Downdetector, a crowd-sourced outage tracker: 2,500 by 8:26 a.m. PDT, climbing to 10,000 by mid-morning, according to GV Wire. The gap between what the status page showed and what thousands of users were experiencing stretched to nearly an hour.
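That gap is measurable from the outside. As a minimal sketch, assuming Python, an illustrative login URL, and the standard JSON summary endpoint that Atlassian Statuspage-hosted pages expose (Anthropic's status page appears to be one), an independent probe that flags the disagreement might look like this:

```python
import json
import time
import urllib.request

LOGIN_URL = "https://claude.ai/login"  # assumed probe target, for illustration
# Statuspage-hosted sites publish a machine-readable summary at this path:
STATUS_URL = "https://status.anthropic.com/api/v2/status.json"

def login_reachable(timeout=10):
    """Return True if the login endpoint answers with a 2xx response."""
    try:
        with urllib.request.urlopen(LOGIN_URL, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except Exception:
        return False

def published_indicator(timeout=10):
    """Return the vendor's own severity indicator ("none" means all green)."""
    try:
        with urllib.request.urlopen(STATUS_URL, timeout=timeout) as resp:
            return json.load(resp)["status"]["indicator"]
    except Exception:
        return "unknown"

if __name__ == "__main__":
    while True:
        # Log only the disagreement: the probe fails while the vendor reports green.
        if not login_reachable() and published_indicator() == "none":
            print(f"{time.ctime()}: login probe failing, status page still green")
        time.sleep(60)
```

A probe like this captures exactly the discrepancy users saw on April 6: the login check failing for nearly an hour while the vendor's own indicator stays at "none."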
The discrepancy is not unique to Anthropic. On Hacker News, where engineers gathered to trade outage stories, one commenter dismissed status pages as "fake pr stuff." Another argued the real problem is that the industry has "accepted that whatever they define as 'functioning' is suitable." Google's Gemini has faced similar complaints.
The April 6 outage raises a question regulators and policymakers are beginning to ask: should AI services, which millions of people now depend on for work, coding, and communications, operate under reliability standards closer to those applied to power grids and telecommunications?
The comparison is not abstract. A business that can't access Claude to complete a contract, or a developer whose automated pipeline fails because Claude Code is unreachable, faces a problem structurally similar to one caused by a grid failure. The difference is that utility customers have recourse: tariff protections and, in some jurisdictions, guaranteed response times. AI users have none of that.
Anthropic declined to comment on the regulatory question. The company confirmed the outage duration and the status page timeline. The April 6 incident appears to have stemmed in part from an OAuth token expiry issue that hit Claude Code instances selectively, leaving some developers locked out while others on identical configurations continued working, according to posts on Hacker News.
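The Hacker News posts do not include Anthropic's code, but the pattern they describe matches how OAuth token caching ordinarily behaves. The sketch below is a hypothetical illustration in Python (the TokenManager class, the refresh_cb callback, and the 60-second margin are assumptions, not Anthropic's implementation): each client refreshes its cached token only when that token's own expiry approaches, so a fault in the refresh path strands only the sessions whose tokens happen to lapse while the fault lasts.

```python
import time

# Hypothetical sketch, not Anthropic's code: a client caches an OAuth access
# token and calls the refresh endpoint only once its own expiry approaches.
class TokenManager:
    def __init__(self, access_token, expires_at, refresh_cb):
        self.access_token = access_token
        self.expires_at = expires_at  # Unix timestamp when this token lapses
        self.refresh_cb = refresh_cb  # callable returning (new_token, new_expiry)

    def get_token(self):
        # Refresh a little early so requests never race the expiry boundary.
        if time.time() >= self.expires_at - 60:
            # If the refresh endpoint is failing, this call raises and the
            # session is locked out; sessions with fresher tokens never get here.
            self.access_token, self.expires_at = self.refresh_cb()
        return self.access_token
```

Because each session's expiry depends on when it last authenticated, two developers on identical configurations can diverge: the one whose token lapses mid-incident loses access, while the other never touches the failing refresh path at all.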
The broader context is a service under sustained stress. Eight incidents in under a week is not a normal operational record for a flagship AI product. The disruptions span elevated error rates on specific model versions, desktop app failures, and the login outage that hit both consumer and developer surfaces. Whether this represents a scaling problem, a deployment maturity gap, or something more systemic is a question Anthropic has not answered publicly.
The status page discrepancy is the part that bothers engineers most. When a grid operator says the lights are on, customers trust that. When an AI company says all systems are operational while thousands of users can't log in, that trust breaks differently. "Transparency is the key to build trust," one commenter on Hacker News wrote after Anthropic updated its status page. "No one expects a perfect service."
That tolerance, however, has limits. Regulators in the European Union and the United States are beginning to examine whether AI platforms that integrate into critical workflows should meet minimum uptime and disclosure standards. No rule exists yet. But a service that can't tell its users it's having a bad day may find that regulators start asking why not.