Britain wants Anthropic to expand in London. The offer includes a dual listing on the London Stock Exchange.
That sentence, buried in a Financial Times report, is the most concrete signal yet that a Western allied government is trying to poach a major American AI company from its home market. The context is a genuine geopolitical rift over where the US government believes AI red lines should sit — and what happens when a company refuses to cross them.
Staff at the Department for Science, Innovation and Technology (DSIT), the UK ministry leading the work, have sketched out proposals for Anthropic ranging from an expanded London office to a dual listing, according to the FT. Downing Street has been supportive of the work. The Financial Times reported that London mayor Sadiq Khan wrote to Anthropic chief executive Dario Amodei in March pitching the UK capital as a stable base for the company.
The United States government blacklisted Anthropic in recent months, designating it a national security supply-chain risk after it refused to allow the American military to use its AI chatbot Claude for surveillance or autonomous weapons, Yahoo News reported. The Trump administration moved to restrict the company under 10 USC 3252, a supply chain security statute designed to address foreign threats to the integrity of defence systems. The mechanism is not subtle: the law was built for adversaries, not allies.
Trump attacked the company on Truth Social, as the FT reported. "THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS!" he wrote. A US judge temporarily blocked the blacklisting, according to Yahoo News. Anthropic has two lawsuits pending over the designation.
The legal fight is not academic. Oxford University academics who track AI and defence policy noted that the Pentagon is relying extensively on Claude in its ongoing war with Iran. The department awarded Anthropic a $200 million contract in July 2025, and the Trump administration originally accepted the company's usage restrictions at that time, according to Oxford. Those restrictions include two red lines that Amodei has said are non-negotiable: a prohibition on mass domestic surveillance and a prohibition on fully autonomous weapons systems. The academics described the US position as incoherent: relying on a company you have just declared a supply-chain threat to fight a war, while that same company refuses to let you use its models for the purposes you most want them for.
The UK pitch, by contrast, is straightforward: come somewhere your red lines are respected. The DSIT proposals include the London office expansion and a dual listing. Last month the UK announced plans for a separate £40 million state-backed research lab for fundamental AI work, according to the FT. The message to Anthropic is that Britain wants to be a home for the kind of AI the company wants to build, not the kind the US defence department wants it to build.
Amodei is visiting the UK in late May to meet European customers and policymakers, the FT reported. The offer will be put to him directly. Whether a $380 billion company with genuine sovereign customers on both sides of the Atlantic chooses to entangle itself further in transatlantic geopolitics is a different question. The UK is making the case that it should.
The deeper question is what this episode tells you about the current state of AI governance. The US blacklisted Anthropic under a law meant for foreign threats, called its red lines woke, and is simultaneously running a war with the company's models. The UK is offering a jurisdiction where the same red lines are a feature, not a liability. If Anthropic takes the UK seriously as a home, it would be the first major test of whether allied-country shopping is a real option for frontier AI labs that run into sovereign conflict with their home government. And if the US notices and objects, the answer to that question becomes much more complicated.
The Pentagon did not respond to a request for comment. Anthropic declined to comment beyond its public statements on the lawsuits.