U.S. Says Anthropic Is an 'Unacceptable' National Security Risk
Government lawyers say AI startup could disable or alter technology to suit its own interests rather than U.S. priorities.
The U.S. government has called Anthropic an "unacceptable risk" to national security, saying the AI startup could disable or alter its technology to suit its own interests rather than the country's priorities in a time of war.
In a 40-page legal filing in U.S. District Court, government lawyers said they questioned whether Anthropic was a "trusted partner," noting that AI systems "are acutely vulnerable to manipulation." Giving Anthropic access to the Department of Defense's warfighting infrastructure would "introduce unacceptable risk into DoW supply chains."
The filing was the government's first response to lawsuits from Anthropic challenging Defense Secretary Pete Hegseth's decision to label the company a "supply chain risk" last month—a designation previously used only to bar foreign companies posing national security threats.
The rift began with negotiations over a $200 million contract to deploy the company's AI in classified systems. Anthropic said it did not want its AI used for mass surveillance of Americans or for autonomous lethal weapons. The Pentagon countered that it was not up to a private company to dictate how the government uses technology.
When the two sides could not agree, Hegseth declared Anthropic a supply chain risk, effectively cutting the company off from working with the U.S. government.
Primary source: The New York Times