ASIC told firms to act on Mythos cyber risk. They have no way to measure their exposure.
Australia's corporate regulator told financial institutions today to start hardening their defenses against a frontier AI model that can find and exploit software vulnerabilities at machine speed — and immediately created a problem. The institutions being ordered to act have no access to the system being flagged as dangerous, meaning they cannot independently measure their own exposure.
ASIC commissioner Simone Constant wrote directly to the financial sector this week urging urgent action on frontier AI risks, specifically naming Anthropic's Mythos model, according to Reuters. ASIC has not published the full letter; its contents are known only through Reuters's reporting. That reporting described four required disclosures: an inventory of AI vendor relationships, a list of which third-party models are in use, an assessment of model access point security, and a definition of each firm's "exposure level" to frontier AI supply chains, all due by July 1. ASIC declined to comment on the record beyond what Reuters published. A separate letter from APRA, Australia's prudential regulator, sent April 30 to banks and superannuation funds, flagged the same class of risk and added specific prudential standards language, according to ABC News Australia, though APRA has also not published its full text.
The asymmetry Constant identified is structural. Australian banks, power providers, and critical infrastructure operators have no access to Mythos through Project Glasswing, Anthropic's program for sharing the model with vetted organizations for defensive testing, according to ABC News Australia. Major US firms, including Microsoft, Apple, Amazon, and Cisco, plus about 40 unnamed critical infrastructure operators, do have access. That means no Australian institution can verify its exposure by testing against the very model named as the threat. Whether Glasswing findings flow to downstream customers who run workloads on Microsoft or Amazon infrastructure remains an open question; Anthropic describes Glasswing as a restricted direct-participant program and declined to comment on whether defensive insights are shared beyond the vetted cohort.
The global picture Constant cited reinforces the structural exposure. A survey by the Cambridge Centre for Alternative Finance found that 81 percent of financial services firms are adopting AI at some level, while only 20 percent of regulatory authorities have adopted advanced AI tools themselves. Nearly half of the 130 regulatory authorities surveyed are still in the report's "exploring" stage of AI adoption or not engaged with AI at all. Only 24 percent of regulatory authorities globally collect data on industry AI adoption, and 43 percent have no plans to start within the next two years.
ASIC's core argument is that waiting for complete certainty before acting is itself a risk posture. Constant wrote: "Do not wait for perfect clarity to address the threat posed by new AI models. Instead, act now, and act with discipline."
What to watch next: whether the July 1 deadline produces meaningful disclosures, or whether firms tell ASIC they cannot comply with requirements to measure exposure to a model they cannot access. Australian banks and financial industry groups have not made public statements about their Mythos readiness or their ability to meet the reporting requirements; the gap between what the regulator is asking for and what institutions can actually produce is one of the story's central unresolved tensions. Whether Canberra will use the 2025 Anthropic MOU to negotiate Glasswing access for Australian firms also remains publicly unanswered. And if any Australian institution later discloses a cyber incident it suspects involved frontier AI, that will be the first real test of whether ASIC's guidance translates into enforceable controls, or remains the most specific warning yet issued by a major regulator with no domestic testing pathway to match it.