Cloud providers are running the same play they ran on enterprises in 2008: lock-in through a proprietary vertical compute stack, this time built around custom silicon, with switching costs baked in from day one. Anthropic's deal to access 3.5 gigawatts of next-generation TPU compute through Broadcom, disclosed in an SEC filing on April 6, 2026, is a concrete signal that the pattern is now being extended to AI labs.
Broadcom filed an 8-K with the SEC confirming two long-term agreements. The first grants Anthropic access to approximately 3.5 gigawatts of next-generation TPU-based AI compute capacity starting in 2027, part of a multi-gigawatt commitment Anthropic has already disclosed. SemiAnalysis had tracked the structure of the deal for roughly a year before it was publicly reported.
The second extends Broadcom's existing partnership with Google: a Long Term Agreement to develop and supply custom Tensor Processing Units for Google's future generations through 2031, plus a Supply Assurance Agreement covering networking and other components used in Google's next-generation AI racks through the same year. Neither deal is speculative — both are contracted, multi-year commitments.
The 3.5 GW is not general cloud access. It is capacity built around Google's TPU design, mediated through Broadcom, that will be deeply integrated into how Anthropic runs Claude. Switching away would mean rebuilding at equivalent scale on a different chip architecture — the same logic that made it expensive to migrate off AWS once enterprise workloads were locked into EC2 and S3.
Anthropic's negotiating leverage comes from a deliberate multi-chip strategy. The company has disclosed that it trains and runs Claude across Amazon Web Services Trainium chips, Google TPUs, and Nvidia GPUs simultaneously, with Amazon remaining its primary cloud provider and training partner. Anthropic has kept all three options open — and used that flexibility to extract favorable terms from each.
That flexibility arrived at a moment of extraordinary revenue acceleration. Anthropic's run-rate revenue grew from $14 billion in February 2026 to over $30 billion by April 2026, Reuters reported, more than doubling in roughly eight weeks. The company raised $30 billion in Series G funding in February at a $380 billion post-money valuation, and has committed $50 billion to building U.S. computing infrastructure with data center operator Fluidstack in Texas and New York, according to Anthropic's own announcements. Claude Code, Anthropic's agentic coding tool that launched publicly in May 2025, reached $2.5 billion in annualized revenue by February 2026.
The December 2025 disclosure that Anthropic was Broadcom's previously unnamed $10 billion chip customer, a deal granting access to up to 1 million Google TPUs, gave the first public glimpse of this strategy, CNBC reported at the time. The 8-K confirms its full scale.
Amazon built Trainium to reduce its dependence on Nvidia and to own the price-performance curve for its own workloads. Google has made TPUs a pillar of its cloud strategy for the same reason. Microsoft is building Maia. Each hyperscaler is constructing a vertical stack of custom silicon, proprietary tooling, and volume pricing, where the switching cost compounds over time. It is the same playbook AWS ran on enterprises in 2008, when the lock-in came from proprietary APIs and data gravity rather than silicon, and it worked because that lock-in was structural rather than contractual.
The risks in the Anthropic deals are real. The 8-K flags that both agreements are conditioned on Anthropic's continued commercial success. Anthropic's revenue figures are self-reported and unaudited. Broadcom has a history of aggressive guidance. The real test comes in 2027, when Anthropic begins drawing down the 3.5 GW and manufacturing yield on the next-generation TPUs has to meet the committed volumes.
But the direction is clear. Three hyperscalers are building proprietary vertical compute stacks, and the 3.5 GW Anthropic deal via Broadcom is the strongest evidence yet that these stacks are not a hedge. They are the next phase of cloud infrastructure, and AI labs are the first customers being locked in.