The people who built Google’s ethical AI brand are now bargaining to reclaim it
Google DeepMind workers have ten working days to find out whether the company will recognize their union — and so far, Google is not cooperating.
On May 5, workers delivered a letter to Google's UK management requesting formal recognition of the union their colleagues voted to form last month. Under UK employment law, the clock started immediately. If Google does not voluntarily recognize the union before the ten-day window closes, the dispute moves to a legal ballot administered by the UK's Central Arbitration Committee. The company says it has received no formal ballot and disputes that a valid election has occurred.
The vote, confirmed by the Communication Workers Union and reported by WIRED, passed with 98 percent of participating members in favor. If recognized, the union would represent roughly 1,000 DeepMind staff at Google's London office, making it the world's first union at a frontier AI lab.
The immediate pressure is the classified AI deal Google signed with the Pentagon in late April. On May 1, the Department of War announced it had signed agreements with Google, SpaceX, OpenAI, Nvidia, Microsoft, AWS, Oracle, and Reflection Learning to deploy AI on classified military networks. Google is required to honor any lawful government request for AI use, cannot veto specific applications, and must modify its safety filters if the government asks. There is no contractual barrier to mass domestic surveillance. Anthropic refused equivalent terms; the Pentagon responded by ordering the military and defense contractors to stop using Anthropic products within six months.
The escalation follows a seven-year erosion of the protections Google's own employees used to push back. In 2018, roughly 4,000 Google workers protested Project Maven, a Pentagon contract to build AI for analyzing drone footage. Google backed down and did not renew the contract. Workers had won. In 2021, Google signed the $1.2 billion Project Nimbus cloud contract with the Israeli government. When workers protested that contract in 2024, Google fired 50 employees. In February 2025, Google's parent company Alphabet removed the weapons development pledge from its published AI principles entirely. The text, which had committed Google not to use AI for "weapons or other technologies whose principal purpose is to cause injury," vanished without announcement.
Workers at the time described the removal as a signal that the ethical commitments were hollow. The union drive began shortly after, in February 2025, according to WIRED and The Guardian; the Pentagon deal in April then sharpened the pressure. More than 600 Google employees separately signed an open letter to CEO Sundar Pichai urging refusal of the classified AI agreement. Alphabet president Kent Walker defended the work in an internal communication, writing that Google had "proudly worked with defense departments since Google's earliest days."
The contrast with 2018 is the crux of why workers say the union is necessary now. "In 2018, we had enough leverage to make Google listen," one DeepMind worker told The Guardian, speaking anonymously. "Now the company has spent seven years removing every formal protection we used to push back." The union is not primarily a wages-and-hours drive. Workers are asking for three things: reinstatement of the weapons pledge, an independent ethics body with real authority to refuse projects, and a right to refuse work on contracts that violate human rights law.
Alex Turner, a research scientist at Google DeepMind, posted publicly: "I spent the last two months trying to prevent this. Google affirms it can't veto usage, commits to modify safety filters at government request, and uses aspirational language with no legal restrictions."
Independent analysts who spoke to Fortune described Google's deal as notably more permissive than the equivalent agreement OpenAI signed. Charlie Bullock, a partner at the law firm LawAI, told Fortune that OpenAI's contract appeared to include "some kind of contractual guarantee" against mass domestic surveillance, a protection Google explicitly declined to secure. Seán Ó hÉigeartaigh, a researcher at Cambridge's Centre for the Study of Existential Risk, called Google's terms "strictly weaker" than the OpenAI agreement.
The Department of War announcement said the GenAI.mil platform — the system through which the military uses these AI agreements — has 1.3 million users, has processed tens of millions of prompts, and has deployed hundreds of thousands of AI agents in the five months since launch.
The unionization effort covers only the roughly 1,000 UK-based DeepMind staff, not Google's broader global workforce. But workers and labor analysts say the scope reflects the concentration of AI research talent in the London office — the same office that produced Gemini and a significant share of Google's most capable models.
What happens next depends on whether Google voluntarily recognizes the union before the ten-day window closes. Even if the union wins recognition, it would represent only a subset of DeepMind's global workforce, and any contract-level governance rights would require separate negotiations.
Workers and analysts say the significance of the recognition effort goes beyond the UK. "This is the first time the people who build frontier AI have tried to claim a formal seat at the table on how it's used," Thomas-Benjamin Seiler wrote in The Next Web. "The 2018 protest worked because Google still pretended its ethics commitments meant something. Now the company has removed even that pretense."
The union drive will test whether collective bargaining can accomplish what individual ethical appeals no longer can — and whether the world's most capable AI lab will treat governance of its most powerful systems as a management prerogative, or as a question the people who build it deserve to answer.