Trump administration weighs mandatory pre-release AI vetting — built on an agency that does not legally exist
The Trump administration is weighing an executive order that would require frontier AI companies to submit their models for government review before public release — a sharp reversal from its deregulatory posture and a policy built on an agency that does not legally exist yet.
The proposed order would give the NSA, the Office of the National Cyber Director, and the Director of National Intelligence authority to review AI systems deemed sufficiently powerful before they reach the open market. It marks the most concrete step yet toward codifying the voluntary pre-release assessment program the White House has run for more than a year under an informal arrangement with major AI developers.
The catalyst, according to reporting from multiple outlets, was Anthropic's Mythos model. Mythos, an AI system built for cyber defense research, demonstrated the ability to find software vulnerabilities that had gone undetected for 27 years, including a bug in OpenBSD, a security-critical operating system used by governments and banks worldwide. When Anthropic sought to expand access to the model from roughly 50 organizations to about 120, the White House objected, The New York Times reported.
The administration appears to have decided that a model powerful enough to alarm the intelligence community was being distributed too broadly under a system with no legal teeth.
The program Anthropic has been participating in is run by CAISI — the Center for AI Standards and Innovation, formerly known as the AI Safety Institute. CAISI has reviewed more than 40 frontier AI models to date, including evaluations of unreleased systems from Google, Microsoft, and xAI, Tom's Hardware reported. The reviews happen under voluntary agreements. Companies choose to submit; CAISI has no statutory authority to compel them.
That is the legal gap at the center of the proposed policy. CAISI has no permanent legal standing — Congress has passed no legislation to codify it — yet the administration is now considering mandatory pre-release submission backed by executive order authority, Tom's Hardware reported. Some lawmakers have introduced draft bills to give CAISI a permanent mandate, but none has passed. The agency has been running on voluntary cooperation and administrative action alone.
The administration revoked President Biden's 2023 AI executive order on its first day in office in 2025, stripping away the prior framework for AI governance. David Sacks, who led the deregulatory push as the administration's AI czar, left the role in March 2026. His departure coincided with a shift in posture — the intelligence community had grown increasingly concerned about AI systems that could discover vulnerabilities faster than defenders could patch them, and about the absence of any legal mechanism to demand visibility into those systems.
The proposed executive order is, in effect, an attempt to build legal authority around a voluntary club that AI companies have been joining because the alternative — being frozen out of government partnerships and procurement — was worse than compliance. A mandatory submission requirement would put legal force behind what the administration already obtains through voluntary agreement.
Collin Burns, a former researcher at Anthropic and OpenAI, was installed as CAISI director and pushed out after four days when White House officials raised concerns about his ties to the AI companies he was meant to oversee, Tom's Hardware reported. The episode illustrated the structural tension: the agency reviewing AI labs for national security risk was itself staffed by people whose careers crossed back and forth through those same labs.
For now, the assessment regime remains voluntary. Google, Microsoft, and xAI have agreed to let the government test their models before public release under the existing CAISI framework, Tom's Hardware reported. Anthropic submitted Mythos under the same arrangement.
What to watch next is whether the executive order is signed — and whether it survives a legal challenge from companies arguing that mandatory pre-release review amounts to compelled speech or prior restraint. A mandatory submission regime would almost certainly be contested in court. A voluntary regime that functions because companies fear the alternative is harder to challenge legally, but also harder to defend as durable policy.
A majority of voters — 57 percent in an NBC poll conducted in March — said the risks of AI outweigh its benefits, Seoul Economic Daily reported. That sentiment is the political fuel for an executive action built on an agency that exists on paper but not in law.