The AI Ethicist Who Constrained Google Now Backs Its Defense Work
Demis Hassabis, who made Google sign an Ethics and Safety Review Agreement as a condition of DeepMind's 2014 sale, now tells staff he is "very comfortable" with the company's Pentagon work.

For most of a decade, Demis Hassabis stood as the clearest example of an AI researcher who had thought carefully about the ethics of powerful technology. When he sold DeepMind to Google in 2014 for $650 million, he made Google sign an Ethics and Safety Review Agreement as a condition of the deal, a mechanism that gave him unusual leverage over how the research he had built would be deployed. The specific terms of that agreement, and what it was designed to constrain, have been reported variously; what is documented is that Hassabis worried about what Google might do with DeepMind's technology if left unconstrained.
That version of Hassabis appears to have been revised. In a January 2026 internal DeepMind town hall — a recording of which was reviewed by Business Insider — Hassabis told staff that he was "very comfortable" with the balance Google was striking on defense work.
"Obviously it's a very complicated world as we can all see, but I think it's incumbent on us to work with democratically elected governments and to provide the unique capabilities we're world-class in to help the world be safer and be a benefit to the world," he told employees.
The statement landed without much fanfare inside the company. Outside it, set against Hassabis's history, the remark reads differently.
What Google is actually doing with the Pentagon
The contracts in question are not, at least officially, about weapons. At the same January town hall, Tom Lue, Google DeepMind's VP of global affairs, described the Pentagon work as summarizing information, extracting text from contracts, and other "back office type operations." This month, Google won a contract to deploy AI agents across the Department of Defense's unclassified networks. A Google DeepMind spokesperson pointed reporters to a blog post describing the tools as being used for document drafting and project planning.
Lue framed Google's guiding principle as whether the "benefits substantially exceed the risks." That language replaced a simpler, harder line: in February 2025, Google quietly removed from its AI principles the pledge not to develop AI for weapons or surveillance purposes, a change that drew sharp criticism from Amnesty International and others at the time.
The broader context: a running debate about what AI labs will and will not permit in war
The Google-Anthropic contrast has sharpened over the past year. Google re-engaged with the Pentagon in 2025, Business Insider reported, a move that preceded the more recent flare-up between the DoD and Anthropic. The two companies have taken different positions on military AI, a distinction that matters as the Defense Department becomes an increasingly significant customer for frontier AI labs.
Anthropic has drawn particular scrutiny. A Pentagon whistleblower alleged that the company could sabotage its own AI tools during active military operations — a claim Anthropic denies in court filings reviewed by WIRED. "Anthropic has never had the ability to cause Claude to stop working, alter its functionality, shut off access, or otherwise influence or imperil military operations," Thiyagu Ramasamy, Anthropic's head of public sector, wrote in a March 20 filing. "Anthropic does not have the access required to disable the technology or alter the model's behavior before or during ongoing operations." The company also proposed contract language explicitly disclaiming any right to control or veto lawful military decisions.
The two sides came close to a deal. In the negotiations before the relationship collapsed, the Pentagon told Anthropic the two were nearly aligned on terms, offering to accept the company's existing conditions if it deleted a specific phrase about "analysis of bulk acquired data," according to court filings. Anthropic declined. A week later, the Trump administration declared the relationship over, and OpenAI moved in.
The DoD has since designated Anthropic a supply-chain risk, a move that will prevent the department and its contractors from using Claude. Anthropic has sued; a hearing is scheduled for March 24, and customers have begun canceling deals, the company said.
Hassabis, for his part, has moved toward the Defense Department while many of his peers have moved away from it, or been pushed away. Google is not alone: OpenAI has sought deeper partnerships with consulting firms and the military, and Microsoft has continued and expanded its defense work. The question of where the line sits, of what AI labs will and will not build, is one the industry is working out in real time, one contract at a time.

