Anthropic rents all 220,000 chips at SpaceX's Colossus 1, becomes Musk's biggest AI customer.
Anthropic needed compute. It found the one man who once called it a threat to civilization.
In February 2026, Elon Musk wrote on X that Anthropic "hates Western Civilization." Observers widely read the post as fundamental hostility from Musk toward a rival lab. Three months later, Anthropic signed an agreement to rent all of the compute capacity at SpaceX's Colossus 1 data center, more than 220,000 NVIDIA chips drawing 300 megawatts of power, becoming, by its own account, Musk's single largest AI customer. The reversal is not a partnership story. It is a supply-shock story: a company blacklisted by the U.S. government for defense contracts found itself with no viable path to the compute it needed to keep its models running at scale, and the only place with enough power immediately available belonged to the person who had most publicly vilified it.
The deal, announced by Anthropic on May 6, gives the lab access to more than 300 megawatts of new capacity within a month. Anthropic confirmed it is doubling the five-hour rate limits for Claude Code on Pro, Max, Team, and Enterprise plans as a result. Reuters reported that in May 2026, Musk spent time with the Anthropic team and called them competent and caring, saying "no one set off my evil detector," a stark shift from his February posts.
The backdrop is a compute crunch with no clean exit. The Pentagon's blacklist, which effectively bars Anthropic from federal AI procurement contracts, left it without the government compute partnerships that typically anchor a frontier lab's infrastructure planning. Since the Trump administration imposed the designation in late February, Claude has experienced multiple outages and the company has repeatedly tightened usage limits during peak hours; in March, Anthropic confirmed it had been reducing Claude session limits on weekdays. New infrastructure investment takes 12 to 24 months to translate into available capacity, and according to one analysis of Anthropic's capacity situation, limits will likely stay tight or worsen through the end of 2026.
xAI confirmed the partnership in a post on its own news page, framing it as a capacity agreement for Colossus, but did not respond to questions about whether its own training operations are paused, deprioritized, or unaffected by the arrangement. Anthropic's blog says it agreed to use "all of the compute capacity" at Colossus 1; xAI's announcement calls it a "capacity partnership" without specifying operational terms. Neither company would comment on exit provisions or on what constraints the agreement places on xAI's own use of the facility.
The financial terms remain undisclosed. Market rates for comparable contracts at this scale typically run roughly $2 to $3 per H100-equivalent GPU-hour, per market-rate analysis by MindStudio, suggesting a deal of this size could run to hundreds of millions of dollars per month at full utilization. Neither Anthropic nor SpaceX has confirmed a figure.
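The scale implied by those market rates can be checked with a rough back-of-envelope calculation. This is a sketch, not a reported figure: it assumes all 220,000 chips bill as H100 equivalents at the cited $2 to $3 hourly rate and run at full utilization, none of which either company has confirmed.

```python
# Back-of-envelope sizing of the Colossus 1 deal.
# All inputs below are assumptions, not disclosed terms.

CHIPS = 220_000                  # chip count from the announcement
RATE_LOW, RATE_HIGH = 2.0, 3.0   # USD per H100-equivalent GPU-hour (MindStudio range)
HOURS_PER_MONTH = 730            # ~24 hours x 365 days / 12 months

def monthly_cost(rate_per_gpu_hour: float) -> float:
    """Total monthly spend in USD at a given hourly rate, full utilization."""
    return CHIPS * rate_per_gpu_hour * HOURS_PER_MONTH

low, high = monthly_cost(RATE_LOW), monthly_cost(RATE_HIGH)
print(f"${low/1e6:,.0f}M - ${high/1e6:,.0f}M per month")
print(f"${low*12/1e9:.1f}B - ${high*12/1e9:.1f}B per year")
# prints:
# $321M - $482M per month
# $3.9B - $5.8B per year
```

Real contracts at this volume would likely include bulk discounts and utilization commitments that pull the effective number well below list rates, which is one reason neither company's silence on pricing is surprising.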
The reversal landed three weeks after Musk's own AI company, xAI, raised $15 billion in a Series B round that valued it at $120 billion. Colossus is xAI's primary training infrastructure. Renting the full facility to Anthropic would put xAI's training runs and Anthropic's inference workloads on the same chips, a configuration that raises the question of whether the deal reflects desperation, pragmatism, or a broader realignment between the two labs.
That ambiguity is not abstract. SpaceX is moving toward a public listing, and investors in that IPO cannot see the terms of Anthropic's dependency on infrastructure owned by the same man who founded its competitor. The undisclosed deal structure also means xAI's investors cannot assess whether their training schedule is now hostage to a rival's inference demand. The exposure is asymmetric: xAI's investors bear the operational risk of Colossus allocation without visibility into the contractual terms.
Compute is the load-bearing constraint in AI right now. Labs with the best models cannot always run them at scale, and the organizations with the most power capacity are not always the ones with the most goodwill in the industry. Anthropic solved its shortage by renting from the one company that had the capacity and the incentive to say yes.
What happens next: whether Anthropic discloses financial terms, whether the arrangement triggers any regulatory review given compute concentration at a single facility controlled by a Musk-founded company, and whether other labs hit with similar government designations follow the same path to the same landlord.