The Trump administration hit a legal wall trying to blacklist an AI company. Now it is testing a slower way to get what it wants.
On April 24, a federal appeals court rejected the Justice Department's request to delay enforcement of a preliminary injunction keeping Anthropic, the AI safety company behind the Claude chatbot, in good standing with federal contractors. Two days earlier, President Trump signaled openness to a deal with Anthropic — walking back months of confrontational rhetoric, according to The Atlantic. The sequence offered a rare look at the limits of executive pressure on the AI industry: direct coercion runs into constitutional problems, but Washington has not abandoned its goal of shaping which AI companies can access government markets.
The instrument the administration reached for first was the Defense Production Act, a Korean War-era statute giving the president broad authority to direct industrial production. On February 24, Defense Secretary Pete Hegseth invoked its primary compulsion power, known as Title I, against Anthropic, threatening to force the company to let the Pentagon deploy its models without restrictions on lethal autonomous weapons or domestic mass surveillance. Anthropic refused. The government designated it a supply-chain risk — a status typically reserved for foreign companies linked to adversary nations — effectively barring federal contractors from working with the company. Per the company's court filings, the designation put revenue ranging from hundreds of millions to several billion dollars at risk.
On March 26, U.S. District Judge Rita Lin blocked the designation. The government's conduct, she wrote, constituted classic illegal First Amendment retaliation: Anthropic was punished for going public with its dispute. The DOJ's own attorney later told the court that Hegseth's X post — declaring that no contractor could conduct commercial activity with Anthropic — carried no legal force. It was, the attorney said, a social media post.
Constitutional scholars note that full nationalization — the formal seizure of AI infrastructure requiring companies to hand over models — would almost certainly trigger a different problem. The Fifth Amendment prohibits taking private property without compensation, and AI infrastructure at scale is worth trillions. Charlie Bullock, a senior researcher at the Institute for Law and AI, put it directly: the government is unlikely to produce what the industry is collectively worth. The legislative route faces the same obstacle. Multiple senators have floated bills ordering federal agencies to study nationalization, but no proposal has advanced to a floor vote.
What is moving instead is the middle path. Sam Altman has described intelligence as a utility like electricity or water, sold by the meter. Jensen Huang has suggested AI should be treated as national infrastructure, like roads. Alan Rozenshtein, a law professor who has studied the DPA's application to AI, noted that invoking the act's compulsion power would amount to an effective partial nationalization of the industry. Utility regulation would treat frontier AI like the power grid — giving the administration leverage without requiring compensation at market rates. No formal seizure, no constitutional trigger, no trillion-dollar payout. Just rules about who can access compute, at what price, and under what conditions.
The injunction does not resolve the underlying policy question. What it established is the legal floor: the administration cannot blacklist a company for going public with a contract dispute. The question now is whether a regulatory framework, rather than a blacklist, can achieve what the DPA threat could not.