Palantir Has What OpenAI and Anthropic Want
The defense contractor became the bridge to Pentagon contracts—then triggered the wake-up call that ended Anthropic's exclusive access.
The Pentagon's dramatic breakup with Anthropic has revealed an uncomfortable truth for AI startups: defense contracts are valuable, but the defense establishment is wary of dependence on any single AI provider.
According to Pentagon under secretary Emil Michael, Anthropic's Claude was the only AI model authorized in classified settings—until the department realized how deeply embedded it had become.
"I went to Secretary Hegseth, I said this would happen and that was like a whoa moment for the whole leadership at the Pentagon that we're potentially so dependent on a software provider without another alternative," Michael recalled on the All-In podcast.
The wake-up call came after Anthropic asked Palantir whether its AI was used in the U.S. military's raid on Venezuela in early January. While Anthropic characterized the inquiry as routine, the Pentagon and Palantir interpreted it as a potential threat to their access.
"I'm like, holy shit, what if this software went down, some guardrail picked up, some refusal happened for the next fight like this one and we left our people at risk?" Michael said.
The incident triggered a rapid restructuring. OpenAI signed a deal on similar terms to Anthropic's. xAI's Grok was brought into classified systems. The Pentagon is also working to get Google's AI authorized for classified settings.
"I'm not biased," Michael said. "I just want all of them. I want to give them all the same exact terms because I need redundancy."
The irony: Palantir—the company with the defense contracts that OpenAI and Anthropic desperately wanted—ended up being the reason the Pentagon woke up to its dependency on Anthropic. Now everyone wants in, but the Pentagon is ensuring it never relies on just one again.
Primary source: Fortune