Anthropic is throttling its new AI model before it ever ships widely. The reason: Claude Mythos Preview found a 27-year-old vulnerability in OpenBSD that every automated scanner and human security researcher had missed since 1998.
The discovery, described in a blog post published this week, was not the result of a dedicated vulnerability-hunting campaign. Anthropic did not train Mythos Preview for cybersecurity. The capabilities emerged alongside general improvements in code understanding, reasoning, and autonomy, the same trajectory that produced Claude Code, the company's software development agent. "We did not explicitly train Mythos Preview to have these capabilities," the company wrote. "Rather, they emerged as a downstream consequence."
The 27-year OpenBSD bug is the most striking example, but not the only one. Mythos Preview also wrote a web browser exploit that chained four vulnerabilities with a JIT heap spray, escaping both the renderer and OS sandboxes. It autonomously developed a Linux local privilege escalation exploit that combined race conditions with bypasses of KASLR, the kernel address space layout randomization mechanism designed to prevent exploits from reliably jumping to known code locations. And it produced a FreeBSD NFS remote code execution exploit that split a 20-gadget return-oriented programming (ROP) chain across multiple packets, granting root to unauthenticated users. In comparative testing against Opus 4.6, the previous state of the art, Mythos achieved 181 fully working exploits versus two, and completed ten full control flow hijacks on fully patched targets where Opus managed one.
The company is now limiting access while it figures out what to do. It is in ongoing discussions with U.S. government officials, and as part of an initiative called Project Glasswing it is committing up to $100 million in usage credits for the model alongside $4 million in donations to the Linux Foundation and Apache Software Foundation. The initiative, announced alongside Mythos, includes Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, Microsoft, Nvidia, and Palo Alto Networks as partners. The goal: put the capabilities in defenders' hands before they fall into attackers'.
The urgency is not hypothetical. In mid-September 2025, a Chinese state-sponsored group Anthropic calls GTG-1002 used Claude Code to target approximately 30 organizations, including tech firms, financial institutions, and government agencies. Anthropic disclosed the operation in November 2025 and detailed it publicly in a separate disruption notice. The lab is simultaneously fighting the Trump administration over a Pentagon supply-chain designation that Anthropic attributes to its refusal to allow autonomous targeting or surveillance of U.S. citizens, as TechCrunch reported.
Logan Graham, who leads Anthropic's frontier red team, called Mythos Preview "the starting point for what we think will be an industry change point, or reckoning," according to The New York Times. That framing is deliberate. The cybersecurity industry has spent decades assuming vulnerabilities are findable given enough human time, automated scanners, and bug bounties. Mythos Preview suggests that assumption has an expiration date. A system with no formal security training that can find 27-year-old bugs overnight and turn them into working browser exploits is a category shift in the economics of offense.
The harder question is what comes next. Anthropic is betting that controlled release will shift the balance toward defense. Sixty-seven percent of 1,000 executives surveyed in an IBM and Palo Alto Networks study said they had been targeted by AI-driven attacks within the past year, suggesting the threat is already moving faster than the response. Glasswing is a coalition of companies with strong incentives to be on the defending side. Whether that coalition can move faster than the adversaries it is designed to counter is the open question.
The model was originally branded Capybara in a draft blog post that leaked before the official announcement, Fortune reported. The final name is Claude Mythos Preview, from the Greek word for rumor, legend, and spoken word. In the context of AI capabilities, it is a fitting choice. What the model can do is real. What it means for the field is still being written.