The Government Just Killed Its Best Chance to Understand Anthropic

Collin Burns spent four days as director of the Center for AI Standards and Innovation. He was fired because he used to work at Anthropic.
That is the story. It is as simple and as strange as it sounds.
Burns, a former researcher at both OpenAI and Anthropic, was appointed to lead the federal body responsible for evaluating frontier AI models for national security risk. He started the job on Monday. By Thursday, the White House had forced him out. The reason, according to The Washington Post: his prior employment at the same lab whose flagship model CAISI is supposed to be assessing.
The replacement is Chris Fall, a former director of the Department of Energy's Office of Science who brings what the Commerce Department called "scientific leadership." Fall is a career government scientist. He is not a frontier AI researcher.
The contradiction at the center of this is not subtle. CAISI, originally stood up by the Biden administration as the AI Safety Institute in November 2023, was renamed under Trump and given a new mandate: evaluate frontier AI systems for national security risk. The body's own website describes its role as assessing models that "pose national security risks." That is not an academic exercise. Anthropic's own engineers have described their Mythos model as capable of identifying and exploiting zero-day vulnerabilities across every major operating system and browser. Anthropic has released Mythos to only roughly 40 organizations, precisely because its cyber capabilities are considered too dangerous for broad distribution.
CAISI's job, by design, is to understand systems exactly like Mythos. To do that job credibly requires people who have worked at the frontier labs building those systems. The moment those people walk through the door, their lab affiliation becomes grounds for removal.
Dean Ball, a former Trump administration AI adviser, put it plainly on X: Burns had "given up valuable Anthropic stock and moved across the country to take the government position." His reward, Ball wrote, was "a punch in the face."
The dismissal is the latest move in a running feud between the Trump administration and Anthropic. In February, President Trump called the company "left-wing nut jobs" after it refused to strip safeguards from its AI for use in autonomous weapons and surveillance. Defense Secretary Pete Hegseth subsequently labeled Anthropic a "supply chain risk to national security." Anthropic sued the Pentagon over the designation; that challenge is still working its way through the courts. And yet, last Friday, Anthropic CEO Dario Amodei walked into the West Wing for what both sides called a productive meeting with White House chief of staff Susie Wiles. Trump told CNBC afterward he thought they would "get along just fine."
They did not get along just fine. Burns was axed six days later.
The OTA Problem
In 1995, Congress eliminated the Office of Technology Assessment, the independent body that gave lawmakers technical expertise to understand what they were regulating. For more than two decades, legislators had leaned on OTA analysts to explain semiconductor physics, satellite capabilities, and the implications of encryption standards. When OTA vanished, the expertise vanished with it. The consequences showed up slowly, then all at once: legislation written by the industries being regulated, hearings where members visibly did not understand the technology at the table, rules that assumed capabilities that did not exist and missed ones that did.
CAISI is not OTA. But the structural logic is similar. The federal government has decided it needs to evaluate frontier AI systems for national security risk. Those systems are being built by a small number of labs, most notably Anthropic, OpenAI, and Google DeepMind. The people who understand those systems best are the ones who built them. Bringing those people into government service is the most direct path to credible technical assessment. And now, repeatedly, those people are being pushed out for exactly that background.
Kemba Walden, former national cyber director, told Fortune that Anthropic's Mythos represents both a leap in defensive AI capabilities and an inherent risk, because the same capabilities can expose vulnerabilities in critical infrastructure. That kind of dual-edged assessment requires deep technical knowledge to make. The DOE Office of Science, which Fall led, oversees national laboratories focused on basic research in physics, energy, and computing. That is not the same skill set.
The Commerce Department said Fall "brings the scientific leadership needed to ensure America leads the world in evaluating frontier AI models." That is a description that could apply to many government scientists. It does not address whether Fall has worked with or evaluated systems at the frontier of AI capability. The job CAISI is being asked to do, by its own published mandate, requires exactly that.
There is a version of this story in which the White House simply could not tolerate an Anthropic alumnus in a sensitive position, regardless of his qualifications. And there is a version in which the problem is more structural: the government wants to assess frontier AI but has made it structurally impossible to hire the people capable of doing the assessment.
Both versions are true at the same time. That is the story.
Burns did not respond to a request for comment. The Commerce Department declined to elaborate beyond its statement on Fall.