Google's AI principles no longer prohibit weapons work. Here's why
Google is back in the Pentagon's good graces — and its competitors made the opening

When thousands of Google employees signed a petition in 2018 demanding the company abandon Project Maven — a Pentagon program using Google's AI to analyze drone surveillance footage — leadership listened. Google pulled out. The company's public position, reinforced repeatedly over the following years, was that it would not allow its technology to be used for weapons or surveillance.
Eight years later, Google is signing contracts with the Department of Defense to deploy AI agents across unclassified Pentagon networks. Not because the political environment changed. Because its AI lab competitors made it possible.
The story of how Google became the Pentagon's most reliable AI partner is inseparable from the story of how its competitors stumbled. OpenAI faced internal backlash over its own defense work and amended the terms of its Pentagon agreement after employees pushed back. Anthropic, which built its reputation on constitutional AI and strict safety constraints, filed two federal lawsuits in March challenging a Pentagon designation that bars it from the defense contractor ecosystem. Google's AI principles, updated in 2025 to remove the commitment against weapons and surveillance applications, do not include those constraints.
"We are going to be leaning more into" national security work, Tom Lue, Google DeepMind's VP of global affairs, told employees at an all-hands meeting, according to Business Insider. CEO Demis Hassabis told staff he is "comfortable" with the balance Google is maintaining. Lue framed the company's updated position as a calculus: whether the benefits of a given contract substantially outweigh the risks.
That calculus has produced a business opportunity. Emil Michael, the Under Secretary of Defense for Research and Engineering, said of Google: "I have high confidence they're going to be a great partner on all networks." Google launched eight Gemini-powered AI agents on GenAI.mil — the Defense Department's enterprise AI platform — last week, expanding a relationship that began in December 2025 when more than 1.2 million government employees gained access. The company is already in discussions about moving to classified networks.
How the door opened
The rebuilding of Google's Pentagon relationship did not happen by accident. On February 26, Thomas Kurian, Google Cloud's CEO, sat down with Michael, the Pentagon official responsible for selecting AI tools for the Defense Department. By that point, two of Google's main competitors had problems.
Anthropic was in a formal dispute with the Pentagon after the Defense Department designated it a supply chain risk — a label normally reserved for foreign adversaries — effectively cutting it off from defense contracts. Anthropic refused to drop two commitments: that its AI would not be used for mass surveillance of Americans, and that its models would not power autonomous weapons without human oversight of targeting decisions. The Pentagon, through Defense Secretary Pete Hegseth, argued that a private company's terms of service should not constrain military options. Anthropic filed two federal complaints in March, arguing the designation constitutes political retaliation for protected speech.
OpenAI, which had signed a defense agreement, faced backlash from its own employees and amended the terms days later.
Google, by contrast, had no such complications. The company makes its own chips, runs its own cloud infrastructure, and sits on a $34.5 billion quarterly profit. It has the balance sheet to absorb controversy and no stated commitments that would require it to refuse work. When Kurian pitched Michael on Google as a defense AI partner, he was offering something Google's competitors could not: reliable, unconstrained access.
The irony embedded in the Palantir memo
On March 9, Deputy Secretary of Defense Steve Feinberg issued a memo making Palantir's Maven Smart System the official AI program of record for the U.S. military. The decision — expected to take effect by September 2026 — cements AI-enabled targeting and battlefield decision-making as a core Pentagon capability. Maven already handles command-and-control for the thousands of strikes the U.S. military has carried out against Iran over the last three weeks.
The memo also ordered oversight of Maven moved from the National Geospatial-Intelligence Agency to the Pentagon's Chief Digital and Artificial Intelligence Office within 30 days. Future Palantir contracts will be handled by the Army.
What Feinberg's memo does not mention — but Reuters reported separately — is that Palantir's Maven runs on Anthropic's Claude. Anthropic itself cannot sell to the Defense Department, yet its model is embedded in the DoD's primary AI targeting system. Anthropic's lawsuit challenges exactly this dynamic: that the supply chain designation is designed to sever a relationship the Pentagon's own systems already depend on.
The gap between what Google promised and what it is delivering
Google DeepMind employees are not uniformly comfortable with this trajectory. In February, a group wrote to Demis Hassabis and Sundar Pichai calling for a formal public commitment refusing DoD contracts involving weapons, autonomous targeting, or battlefield intelligence operations, and demanding an independent ethics review board. The letter drew a direct line to 2018: "The tension between Google's workforce and its leadership over military work is not new."
Google's updated position does not make that line of protest easy to sustain. The 2025 revision of Google's AI principles removed the earlier pledge not to use the company's technology for surveillance or weapons. What replaced it — a case-by-case benefits-versus-risks framework — is doing significant work. Lue described it at the all-hands as a process for analyzing use cases, not a set of bright lines.
What Google is delivering to the Pentagon right now is document summarization, contract review, and research support — the same category of administrative work it sells to any enterprise customer. The escalation is not in what the AI does today; it is in the relationship and the stated intention to expand.
What this means for builders
Two stories landed this week in close proximity. The Feinberg memo locks AI targeting into the permanent infrastructure of the U.S. military through Palantir. The Google expansion puts Gemini agents in front of three million Defense Department employees on a platform Google is building out. They are two parts of the same shift: defense AI is moving from procurement experiments to operational backbone, and the companies that can supply it without friction are the ones winning.
For anyone building AI products or infrastructure, this is a market signal with a short shelf life. The defense sector is real, large, and growing. The ethical lines that Anthropic and OpenAI drew are proving expensive to maintain. The question is not whether the big AI labs want defense contracts — it is which ones are willing to operate without the constraints that made them distinctive in the first place.
Google's answer, delivered in an all-hands meeting and confirmed in a cloud CEO's meeting with a Pentagon acquisition official, is that it will go as far as the contracts take it.

