India Wants a Sovereign AI. Anthropic Won’t Give It One.
India wants Anthropic to build it a sovereign AI. Anthropic does not want to give India one.
That tension sits inside every paragraph of a May 5 regulatory circular from India’s securities regulator, SEBI, which named Anthropic’s Claude Mythos model specifically and ordered every broker, fund, exchange, and depository in Indian financial markets to overhaul its cybersecurity defenses against it. The directive, signed by Deputy General Manager Mamata Roy and addressed to the entire regulated universe of Indian securities, is sweeping: patch immediately, run AI-powered vulnerability scans, lock down APIs, join a centralized security monitoring platform, and model AI tools as a threat scenario in every future risk assessment. (ThePrint)
The order is also, structurally, impossible to comply with.
No Indian company, bank, or government agency has secured access to Claude Mythos. Not one. Project Glasswing, Anthropic’s restricted-access program that lets a curated group of organizations use the model for defensive cybersecurity work, has roughly 40 participants globally, including Amazon, Google, Apple, and Microsoft. India is not among them. MeitY Secretary S. Krishnan confirmed on April 28 that New Delhi is "still working out logistics with US authorities." (MediaNama)
That creates the contradiction on which the entire Indian regulatory response pivots: SEBI is ordering financial institutions to defend against a model they cannot legally access to evaluate or test. The advisory does not explain how an organization is supposed to verify its defenses against a capability it has never operated. (MediaNama)
The cybersecurity case for the order is real. SEBI's concern, stated plainly in the circular, is that Mythos can identify and exploit vulnerabilities "using speed and scale," threatening data confidentiality, application integrity, and system reliability across a market where every participant is interconnected. One breach cascading into a market-wide failure is the scenario the regulator is trying to prevent. (ThePrint)
The numbers behind that concern are not abstract. Mythos Preview, the restricted version Anthropic opened to controlled testing before halting the program, found more than 2,000 previously unknown software vulnerabilities in seven weeks, including flaws that had survived decades of human security review. It developed working exploits on the first attempt in more than 83 percent of cases. (ThePrint)
India's own government had already flagged the threat at the cabinet level. Finance Minister Nirmala Sitharaman chaired an April 23 meeting with the Reserve Bank of India, the National Payments Corporation of India, and CERT-In, the national computer emergency response team, and called the Mythos situation "unprecedented." (MediaNama)
What makes the circular significant is not the technical advisory, which mirrors guidance CERT-In issued to banks and telecoms on April 26. It is that SEBI went further than any Indian regulator before it by naming a specific commercial AI model in a formal directive, turning what had been a policy discussion into a compliance obligation. The regulator formed a dedicated task force, cyber-suraksha.ai, drawing representatives from market infrastructure institutions, transfer agents, and qualified regulated entities, to coordinate the response and report incidents on a priority basis. (MediaNama)
The indirect access path Anthropic has offered India does not close the gap. Claude Security, Anthropic's enterprise-grade defensive tool positioned as the compliant alternative to Mythos, runs on Opus 4.7, the previous-generation model. On the Firefox 147 benchmark, Opus 4.7 produces two working exploits to Mythos's 181, a roughly 90-fold capability gap. (MediaNama)
India's own data localization requirements compound the problem. The National Payments Corporation of India operates under 2018 rules requiring all transaction data to be stored on servers within India, while Mythos is hosted on US-based infrastructure. The circular does not address how regulated entities are meant to use a foreign-hosted model while satisfying domestic data-residency obligations, or whether that tension has been resolved at all. (MediaNama)
Globally, the pressure for access is intensifying. India Today reported that the International Monetary Fund has raised the alarm about financial-sector risks tied to Mythos, and that India, the European Union, and Canada are all seeking entry to Project Glasswing. (India Today)
The deeper question underneath the regulatory response is what India actually wants. Sovereign AI (the idea that a nation's AI infrastructure should operate under its own jurisdiction, laws, and strategic control) has become the stated goal of New Delhi, Brussels, and Ottawa simultaneously. But the term elides a distinction that matters: is India asking Anthropic to host Claude on Indian servers to protect citizen data, or to ensure the model's outputs align with Indian strategic interests? Those are not the same problem, and they do not have the same solution. (MediaNama)
Anthropic has not responded publicly to the data-residency question. The company has said only that Project Glasswing access decisions are based on criteria including an organization's security practices, regulatory environment, and alignment with Anthropic's own use-case restrictions. (MediaNama)
What to watch next is concrete: whether the US government approves India's Glasswing application, what the cyber-suraksha.ai task force produces in its first threat-sharing report, and whether CERT-In's next advisory, expected to address the data localization tension directly, changes the compliance picture for the regulated entities SEBI has just put on the clock.