The Broadcom clause that makes Anthropic’s AI deals look different
A single sentence in a Broadcom filing is the only part of Anthropic's latest compute spree that feels genuinely new. Broadcom said Anthropic, the AI startup behind Claude, is set to access about 3.5 gigawatts of next-generation Google TPU capacity starting in 2027 (TPUs are Google's in-house AI chips), but added a condition: consumption of that expanded capacity depends on Anthropic's continued commercial success. That clause turns a familiar compute story into something sharper. Anthropic is not just lining up more chips. One of the companies in the supply chain is saying future access depends on the business holding up.
That is narrower than a grand theory about who now controls AI. It is one unusually explicit Anthropic case. But it is still revealing. Reuters reported that Google committed $10 billion in cash to Anthropic at a $350 billion valuation and may invest another $30 billion if Anthropic meets performance targets. Reuters also reported that Amazon will invest up to $25 billion while Anthropic committed to spend more than $100 billion over 10 years on Amazon cloud technologies. Read alongside the Broadcom filing, those deals make Anthropic look less like a startup simply buying compute and more like a customer whose future growth is being contractually threaded through a few giant infrastructure partners.
That distinction matters because the bigger Anthropic compute story has already been told. The new fact here is not that Anthropic needs huge amounts of capacity. It is that Broadcom said part of that future capacity comes with an explicit business condition attached.
The rest of the source trail explains why that clause is load-bearing. Anthropic said its run-rate revenue has surpassed $30 billion and that more than 1,000 customers now each spend over $1 million on an annualized basis. Those are company claims, not independently verified figures. Anthropic also said it plans to expand its use of Google Cloud technologies, including up to one million TPUs, bringing well over a gigawatt of capacity online in 2026. And Reuters reported that Google said it can string together 1 million chips for large training needs, while Alphabet chief executive Sundar Pichai said just over half of the company's machine-learning compute investment this year would be dedicated to the cloud business.
Taken together, those numbers show why Google and Amazon want Anthropic close. They do not prove a new industry order. They do show that once a frontier lab's growth is measured in gigawatts, its suppliers stop looking like background vendors. They become counterparties with their own conditions, economics, and leverage.
Amazon is testing a similar dynamic through its own hardware. Reuters reported that Anthropic expected roughly 1 gigawatt of Trainium2 and Trainium3 capacity by year-end and ultimately up to 5 gigawatts (Trainium is Amazon's in-house AI chip line). If those figures hold, Amazon is not just renting servers to Anthropic. It is trying to make custom silicon and long-term cloud spending part of the growth package.
The caution is still the important part. This is one Anthropic-centered case study, not proof that every frontier lab is now taking visibly conditional compute from suppliers. The strongest operating figures in the story come from Anthropic itself, and the broader thesis about infrastructure leverage remains analysis, not settled fact.
Still, Broadcom's clause is real news because it makes the dependency legible. A frontier AI lab can raise billions, sign giant cloud commitments, and talk about million-chip buildouts. But access to the next block of compute can still hinge on whether the supply chain believes the commercial story. That is a more specific, and more useful, way to read this moment than pretending one Broadcom filing already proves a new map of power across the whole industry.