AI Labs Want to Pause — But Only If Everyone Else Does First
A march from Anthropic to OpenAI to xAI landed the same weekend the Trump administration released a framework designed to keep AI innovation ungoverned by the states.

image from GPT Image 1.5
On Saturday, March 21, dozens of protesters marched through San Francisco's AI district — from Anthropic's headquarters at 500 Howard Street to OpenAI's offices at 1455 3rd Street to xAI's building at 3180 18th Street — with a demand that is structurally simpler than it sounds: they want Dario Amodei, Sam Altman, and Elon Musk to publicly commit to pausing frontier AI development if every other major lab agrees to do the same.
The timing was pointed. One day earlier, the White House released a four-page national AI legislative framework designed to accelerate AI deployment, shield companies from state-level regulation, and — according to tech policy analyst Ahmed Banafa of San Jose State, speaking to ABC7 — extend something close to Section 230-style liability immunity to AI platforms.
The Demand Is a Coordination Problem, Not a Unilateral One
The protest was organized by Stop the AI Race (stoptherace.ai), led by Michaël Trazzi, a filmmaker and former AI safety researcher. Trazzi isn't asking the labs to stop unilaterally — he's asking them to commit to stopping if everyone else does. It's a classic prisoner's dilemma framing: each player defects because they can't trust others to cooperate, even when cooperation benefits everyone. The demand is for a coordination mechanism, not a surrender.
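The game-theoretic logic here can be made concrete with a toy payoff model. The numbers below are illustrative only, not from the organizers or the labs; the structure is the standard two-player prisoner's dilemma applied to two hypothetical labs choosing between "race" and "pause":

```python
# Toy two-lab prisoner's dilemma; payoff numbers are illustrative only.
# payoff[(mine, theirs)] -> my utility given my choice and the other lab's.
payoff = {
    ("pause", "pause"): 3,   # mutual pause: shared safety margin
    ("race",  "pause"): 4,   # racing while others pause: competitive edge
    ("pause", "race"):  0,   # pausing alone: "weakest protections set the pace"
    ("race",  "race"):  1,   # mutual race: the status quo
}

def best_response(theirs):
    """The choice that maximizes my payoff, given the other lab's choice."""
    return max(("race", "pause"), key=lambda mine: payoff[(mine, theirs)])

# Without coordination, racing is the best response no matter what
# the other lab does...
assert best_response("pause") == "race"
assert best_response("race") == "race"

# ...even though both labs prefer mutual pause to the mutual-race outcome.
assert payoff[("pause", "pause")] > payoff[("race", "race")]
```

A conditional commitment ("I pause if you pause") changes the structure: each lab's pause is contingent on the others', so mutual pause becomes stable rather than exploitable. That is the coordination mechanism the protest is asking for.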
"Once we have everyone agreeing on this conditional pause, I think we can enforce this pausing of AI," Trazzi told ABC7.
The march drew academic voices. Dr. David Krueger — a University of Montreal AI professor, co-author with Yoshua Bengio and Geoffrey Hinton, and a co-founder of the UK AI Security Institute — spoke at the Anthropic rally. Nate Soares, president of the Machine Intelligence Research Institute and co-author of the NYT bestseller If Anyone Builds It, Everyone Dies, publicly endorsed the demand, writing that AI CEOs "clearly and plainly stating 'this is an emergency and it'd be better if we were all slowed' is a good first step." Will Fithian, a UC Berkeley statistics professor, also attended.
This wasn't the movement's first mobilization. A "QuitGPT" protest in early March drew more than 75 people to OpenAI's headquarters — described in press materials as the largest anti-OpenAI protest to date. A PauseAI protest in London last month brought out several hundred people. Trazzi also led the Google DeepMind hunger strike in September 2025, a three-week demonstration covered by The Verge, The Telegraph, and Sky News.
What the Labs Are Actually Doing
The financial stakes explain the pressure the protesters are pushing against. According to CB Insights' 2025 State of AI report, OpenAI, Anthropic, and xAI together raised $86.3 billion in 2025 alone — 38% of all global AI investment that year. In Q4, the three raised $46 billion, more than half of the quarter's total AI funding.
The protesters' frustration has a concrete policy backdrop. In February 2026, Anthropic released version 3.0 of its Responsible Scaling Policy — and dropped the hard safety limit that had previously barred the company from training more capable models before proven safety measures were in place. The new RSP actually argues against unilateral pausing: "If one AI developer paused development to implement safety measures while others moved forward... the developers with the weakest protections would set the pace," the document states.
That's not a dismissal of risk. It's an acknowledgment that the coordination problem Trazzi is trying to solve is real. Anthropic's argument is that a unilateral pause is counterproductive — the same logic that explains why the conditional pause demand is structured the way it is. Both sides agree: the bottleneck is coordination.
The organizers point to January 2026 as their prior win. On their event page, as reported by SFist, they claim DeepMind CEO Demis Hassabis said at Davos that he'd be open to a conditional pause. That characterization appears to come from the organizers themselves rather than from a direct quote. A Fortune account of the same Davos session covers Hassabis extensively — his AGI timeline estimates, his view that current LLMs won't reach human-level intelligence — but does not include any conditional-pause statement. Google DeepMind did not respond to a request to clarify the record.
OpenAI's original charter includes a clause about stopping competition if another company reaches AGI first — a commitment that has been progressively weakened as the company restructures into a for-profit entity.
The Administration's Countermovement
The White House framework, released Friday by AI czar David Sacks and science advisor Michael Kratsios, addresses six policy areas — children's safety, intellectual property, free speech, workforce development, energy infrastructure for data centers, and national security. On the regulatory architecture question, it is unambiguous: "A patchwork of conflicting state laws would undermine American innovation," the document states, calling on Congress to preempt state AI rules with a single national standard.
That puts the administration on a direct collision course with California. State Senator Scott Wiener, who has backed legislation requiring AI companies to publish safety protocols, told ABC7 that Trump is "not interested in having a smart public policy approach to AI where we promote and foster innovation while we assess and try to get ahead of some of the risk."
The framework is light-touch in the ways that matter to the protesters: it does not mention frontier model development limits, does not contemplate mandatory safety evaluations before deployment of more capable systems, and frames the core mission as winning the AI race rather than managing its risks.
Analysis: The protesters and the administration are responding to the same underlying reality — that there is no international coordination mechanism for frontier AI development — and drawing opposite conclusions. The protesters want to build one. The administration wants to run faster than everyone else before one becomes necessary. Whether either approach is coherent is a question that deserves more than a weekend's worth of news cycle.

