Three regulators. Three responses. One AI model.
That is the actual story of Anthropic's Mythos Preview — not the capability announcement itself, but the regulatory infrastructure being stress-tested by a single piece of software. In the week since Anthropic disclosed that Mythos could find and exploit vulnerabilities at scale, the US Treasury, the Bank of England, and the European Central Bank have each responded in a distinct mode. The US convened an emergency session. The UK sent a joint ministerial letter. The ECB is doing what the ECB does: asking questions through its regular supervisory process.
The three-speed response matters because it reveals something about how governments are learning to think about AI risk. There is no playbook for this. There is not yet a regulatory framework that says a model exceeding some capability threshold on a cyber-offense evaluation triggers a specific response. What happened instead was that officials with different mandates, different tolerances for public intervention, and different relationships with the banks they supervise each decided independently what to do.
The US moved fastest. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened an urgent meeting with bank chief executives last week, according to Reuters. The message was clear enough that multiple outlets characterized it as a warning.
The UK went further in public. Technology Secretary Liz Kendall and Security Minister Dan Jarvis sent a joint letter to banks stating that Mythos was substantially more capable at cyber offense than any model previously tested by the government's own AI Security Institute, Reuters reported. That is an extraordinary claim from a minister: not that the model might be dangerous, but that it represents a measurable step beyond anything the government had previously evaluated.
The ECB is taking the systematic approach. Supervisors are gathering information about the model and will ask banks about their preparedness through the regular supervisory dialogue — the ongoing process by which the ECB monitors bank risk management, Reuters wrote. No ad-hoc top-level meeting has been scheduled. The ECB had already listed technology risk as one of its supervisory priorities for 2026 through 2028, so this falls within an existing framework rather than requiring a new one.
Bank of England Governor Andrew Bailey put the case most plainly at a public event. It would be reasonable, he said, to think the recent wave of cyberattacks targeting financial institutions in the Gulf states was the most significant new challenge — until Anthropic's disclosure last Friday, which may have found a way to crack the whole cyber risk world open, Reuters reported.
The underlying evidence is real. Mythos Preview identified a sixteen-year-old vulnerability in FFmpeg, a widely used open-source multimedia library, according to Reuters. The Cloud Security Alliance, an industry coalition, warned that Mythos represents a step change: it lowers the cost and skill floor for discovering and exploiting vulnerabilities faster than organizations can patch them. Costin Raiu, a researcher with decades of vulnerability analysis experience, noted that banking systems running older IBM technology would be particularly exposed. TJ Marlin, CEO of the cybersecurity firm Guardrail Technologies, said Mythos can analyze complex legacy architectures, making undiscovered vulnerabilities accessible to threat actors in a way they were not before.
Anthropic itself responded by announcing Project Glasswing — a private evaluation program that invited JPMorgan Chase and dozens of other organizations to examine the model and develop defenses. JPMorgan confirmed participation. Anthropic disclosed the vulnerabilities it found privately to the affected organizations before going public.
The question regulators are now facing is whether the disclosure model that Anthropic used — private disclosure to affected vendors, followed by public acknowledgment — is sufficient, or whether a model with this level of demonstrated cyber-offensive capability requires something more. Mandatory disclosure to a government body. Capital requirements tied to AI-exposed infrastructure. Incident reporting within a fixed window.
Trump's position, delivered in a Fox Business interview, was the most direct from any world leader on the topic. Asked whether the government should have safeguards on AI technology in banking, including a kill switch, he said there should be, Reuters reported. It is not yet clear what that would mean in practice, or whether Congress has any appetite to legislate it.
What regulators are managing is a genuine unknown: the vulnerabilities Mythos found in a controlled evaluation exist in real systems, and the window between a model like this being disclosed and threat actors probing the same class of vulnerabilities is not measured in years. The parallel supervisory responses across the US, UK, and EU suggest governments understood this quickly. Whether they move from understanding to action before an incident forces the question is the open one.
What to watch: whether any of the three jurisdictions announces mandatory disclosure requirements, capital add-ons for AI-exposed banks, or incident reporting rules specific to frontier model capabilities. And whether Anthropic's Project Glasswing model — private evaluation followed by coordinated disclosure — becomes an industry standard or a regulatory target.