Google struck a classified deal with the Pentagon that lets the government adjust the search giant's own AI safety filters at will, Reuters confirmed on Tuesday, citing The Information. More than 580 Google employees, including senior DeepMind researchers, have signed a letter urging CEO Sundar Pichai to refuse the work.
The agreement, signed this week, clears Google's Gemini AI models for any lawful government purpose, including on classified networks. Under its terms, Google must help the government adjust the AI safety settings and filters Google itself built, The Information reported, and the deal explicitly does not give Google the right to control or veto lawful government operational decisions. The Pentagon did not immediately respond to a request for comment.
The deal puts Google in direct competition with OpenAI and xAI for classified AI work. It also suggests the government secured from Google something OpenAI said rival labs had already conceded: reduced safety guardrails. OpenAI noted explicitly in its own Pentagon contract announcement that other AI labs have reduced or removed their safety guardrails and relied primarily on usage policies in national security deployments. Google declined to comment for this story.
The internal resistance is unusual in its breadth. More than 580 Google employees, among them more than 20 directors and VPs as well as senior DeepMind researchers, signed the letter, The Next Web reported; the signatories argued that proximity to AI technology creates a responsibility to prevent unethical uses. Demis Hassabis, the CEO of Google DeepMind, told staff he is comfortable with the deal and that it is consistent with responsible AI development. The letter was also reported by The Washington Post and Business Insider.
Google has pursued Pentagon work before. It won a share of the Pentagon's $9 billion Joint Warfighting Cloud Capability contract in December 2022 alongside Amazon, Microsoft, and Oracle. In March, Google deployed Gemini AI agents to the Pentagon's three-million-strong workforce at the unclassified level, with eight pre-built agents for tasks including summarizing meeting notes, building budgets, and checking actions against defense strategy. The classified work is a different category, and the safety filter provision is what triggered the internal revolt.
The history matters here. In 2018, a petition signed by more than 4,000 Google workers and at least a dozen resignations ended Google's involvement in Project Maven, a surveillance AI contract worth a few million dollars. The protest pushed Google to adopt AI principles pledging not to pursue weapons or surveillance technology and to let the Maven contract expire in March 2019. Palantir took it over, and that Maven work has since grown to $13 billion. Then, in February 2025, Google removed from its AI principles the pledge to avoid weapons or surveillance technologies, citing global competition for AI leadership in a blog post co-authored by Demis Hassabis.
The precedent for what happens when a lab pushes back is recent and stark. Anthropic refused to remove guardrails around autonomous weapons and mass surveillance in contract negotiations with the Defense Department, according to CSA Labs. The breakdown led to an extraordinary government action in early March 2026: Anthropic was designated a supply-chain risk and lost a $200 million contract. The Pentagon's fiscal 2027 budget request asks for $54.6 billion for the Defense Autonomous Warfare Group, a 24,000% increase over the prior year.
For the 580-plus employees who signed the letter, the concern is concrete: their work will now run on networks where Google has no visibility and no veto. The safety filters they helped build can be changed by the government without Google getting a say. OpenAI's own announcement made the trade explicit: other labs, it said, reduced or removed guardrails to win this kind of contract. Google, which spent years publishing safety research and co-founded the Frontier Model Forum, appears to have made the same calculation.
What comes next is unclear. The researchers' letter has been delivered. The deal is signed. Whether it triggers a deeper staff exodus, or gets treated as a settled question inside the company, is the thing to watch.