The most telling image from the Anthropic-OpenAI rivalry isn't a courtroom or a congressional hearing room. It's from an AI summit in India, where Prime Minister Narendra Modi tried to stage a group photo of the industry's CEOs holding hands. Sam Altman held Modi's hand. When it came to reaching for Dario Amodei on his other side, Altman didn't. Amodei, on cue, took the hand of the person beside him. Not Altman's.
That moment, staged, witnessed, and photographed, captures the Anthropic-OpenAI split better than any press release. What started as a personal rupture between two men in 2020 has become one of the most consequential rivalries in technology: worth an estimated $300 billion or more on paper, according to eWeek, and considerably more than that in the question of who controls the future of artificial intelligence.
The split's most recent expression is a fight over the Pentagon. Anthropic walked away from a contract worth up to $200 million because the agreement's "any lawful use" clause would have permitted its AI systems to power mass surveillance and autonomous weapons, NPR reported. OpenAI signed the same contract within hours. The episode crystallized a philosophical divide that runs deeper than any business competition — and raises a question neither company has fully answered: can AI labs govern themselves, or does the government eventually decide which companies get to run the most powerful systems?
The numbers are staggering by any measure. OpenAI closed a $110 billion raise in February 2026 at a $730 billion pre-money valuation, the largest private technology fundraising in history, with participation from Amazon, SoftBank, and Nvidia. Anthropic raised $30 billion the same month at a $380 billion post-money valuation. The two companies are collectively embedded in the most sensitive institutions on earth — defense agencies, intelligence services, financial systems — and neither seems willing to share the access the other has negotiated.
The personal fracture dates to 2020. That year, Altman summoned Dario Amodei and his sister Daniela, who ran policy at OpenAI, to a conference room and accused them of organizing negative board feedback against him. Amodei brought in the executive Altman had named as the source of the organized opposition; that executive denied it, and Altman then denied having said it. The meeting ended with both Altman and Amodei shouting at the executive. "I felt psychologically abused by Altman," Amodei told friends afterward, Business Insider reported.
Amodei's conditions for staying were straightforward: report directly to the board, and never work with Greg Brockman again. Both requests were rejected. On December 29, 2020, Amodei left OpenAI with seven colleagues and founded Anthropic in early 2021, eWeek reported. The institutional friction that produced the split, though, ran deeper than the personal.
Brockman had raised the question, in what internal sources described to The Decoder as a serious policy discussion, of whether OpenAI could eventually sell access to its most advanced AI systems to foreign governments, including nations on the UN Security Council. Amodei found the idea borderline treasonous. Separately, Amodei discovered a handshake deal, struck before he left, that gave both Brockman and Ilya Sutskever, OpenAI's chief scientist at the time, the power to fire him. Altman had assured him he would not report to them, eWeek reported.
The rivalry played out publicly over the following years through Super Bowl advertisements, congressional testimony, and a scramble for government contracts. The inflection point came in early 2026. On March 5, the Defense Department's acquisition arm formally designated Anthropic a supply chain risk — a designation that blocked the company from certain federal contracting categories, CNBC reported. Within hours, Altman announced OpenAI's own Pentagon agreement. The administration framed it as a commitment to responsible AI deployment. Altman posted that the agency "displayed a deep respect for safety." He acknowledged the negotiations were "definitely rushed," CNBC reported.
The consumer backlash was swift and measurable. U.S. uninstalls of ChatGPT's mobile app jumped 295 percent day-over-day on February 28, TechCrunch reported, and one-star reviews surged 775 percent the same day. Anthropic's Claude climbed to the top of the U.S. App Store as downloads jumped 51 percent day-over-day.
An internal memo Amodei sent to Anthropic staff — which TechCrunch reviewed — was unsparing. He called OpenAI's safety framing around the Pentagon deal "safety theater" and described Altman's public comments on the negotiations as "straight up lies" and "gaslighting." OpenAI disputes this characterization. In separate communications reported by the Wall Street Journal, Amodei compared the Altman-Musk legal fight to historical conflicts between powerful figures with incompatible visions. The Journal characterized those descriptions as paraphrased summaries of private communications, not direct quotes. The Journal's full investigation into the decade-long rivalry between the two companies is behind a paywall, limiting what can be independently verified through that reporting.
The personal dimension has sharpened as the stakes have grown. In communications with colleagues reported by the Wall Street Journal, Amodei called Greg Brockman's $25 million donation to a pro-Trump super PAC "evil." Brockman and OpenAI have not publicly responded to that characterization. Jan Leike, who led OpenAI's superalignment safety team before leaving for Anthropic in mid-2024, posted publicly that at OpenAI "safety culture and processes have taken a backseat to shiny products." OpenAI disputes this framing.
The two companies are now government AI vendors. Anthropic has had models operating on classified Defense Department networks since its July 2025 contract. OpenAI's new agreement, signed within hours of Anthropic's blacklisting, was described by Altman as a partnership for "defense and intelligence applications." The philosophical distinction Anthropic has long drawn — between itself as a company with principled constraints and OpenAI as a company willing to negotiate those constraints — faces an immediate test when both companies are simultaneously Defense Department contractors.
What the Pentagon episode made unavoidable is that the question of who governs frontier AI is not one the labs can answer among themselves. The $200 million that Anthropic turned down was real money. The contract OpenAI signed hours later was the same contract, with the same language. Whether "any lawful use" is a meaningful red line or a contractual formality that no AI company of sufficient scale will ultimately refuse may be the defining question of the next phase of the industry. Both companies are now positioned to help answer it — together, on the same side of the same contract, with fundamentally different public rationales for why they signed.
The Wall Street Journal contributed reporting to this story.