DeepMind published a paper from one of its own scientists arguing that AI can never be conscious. Then, after a journalist asked questions about it, the company quietly removed its name from the document.
That scrubbing happened April 20, according to 404 Media, which first reported the letterhead removal. The most telling fact in this story is not the philosophical argument itself. It is what DeepMind did after being asked to defend it.
The paper, by Alexander Lerchner, a Senior Staff Scientist at Google DeepMind, argues that no large language model (LLM), the type of AI system that powers chatbots and coding assistants, will ever experience anything. Consciousness is physical, the paper says. It arises from biological processes in the brain. Computation is a description imposed on physics by an outside observer, not an intrinsic property of any system. Therefore, AI will remain forever in the realm of behavioral mimicry, not inner experience.
The paper was originally posted with Google DeepMind letterhead, implying institutional endorsement. After 404 Media contacted Google for comment, the PDF was replaced. The letterhead was gone. A disclaimer was added: the views are the author's own and do not reflect official company position. Google did not respond to type0's request for comment.
The contrast with the company's public posture is difficult to ignore. CEO Demis Hassabis has said AGI will arrive as "10 times the Industrial Revolution at 10 times the speed, unfolding over a decade instead of a century," according to The Decoder. He has described it as the most transformative development in human history. DeepMind is simultaneously hiring for a "post-AGI research scientist," according to 404 Media, a role that presupposes AGI as an imminent reality. Lerchner's paper says that no system, however capable, will cross the consciousness threshold. It will always be, in his words, a "highly sophisticated, non-sentient tool."
That conclusion carries implications that have nothing to do with philosophy. If AI systems cannot be conscious, they cannot have moral status or rights. They cannot make claims on legal protection or liability. The EU AI Act, various state-level proposals, and emerging tort law have all circled the question of whether an AI system's inner states matter for legal purposes. A scientific paper from inside the world's most prominent AI lab that forecloses that category of inquiry is not merely a philosophical contribution. It is a regulatory asset.
Mark Bishop, a professor of cognitive computing at Goldsmiths, University of London, put it directly in an interview with 404 Media: "We can imagine many financial and legislative reasons why Google would be sanguine with a conclusion that says computations can't be consciousness. Because if the converse was true, and bizarrely enough here in Europe, we had some nutters who tried to get legislation through the European Parliament to give computational systems rights just a few years ago, which seems to be just utterly stupid. But you can imagine that Google will be quite happy for people to not think their systems are conscious. That means they might be less subject to legislation either in the US or anywhere in the world."
The legal stakes are concrete. If AI systems can be conscious, or if reasonable doubt exists about their capacity for inner experience, legal frameworks around the world would have to grapple with moral status, liability, and rights in ways that are deeply inconvenient for the companies building those systems. Lerchner's paper, even as a lone scientist's personal document, adds to a body of work that shuts that line of inquiry down before it starts.
The philosophy itself has real critics. Johannes Jaeger, an evolutionary systems biologist and philosopher, told 404 Media that Lerchner's core argument has been made before, in different forms, and that Lerchner arrived at it without engaging the existing literature. "I think he arrived at this conclusion on his own and he's reinvented the wheel and he's not well read, especially in philosophical areas and definitely not biology," Jaeger said. Bishop agrees with the conclusion but notes that all of these arguments were made years ago. A detailed rebuttal on Real Morality argues the paper treats a contested theory of meaning as established fact rather than the substantive philosophical commitment it is, one that reasonable people dispute.
Lerchner frames the stakes differently. His paper argues that consciousness is not a "software artifact that can be accidentally or deliberately created," and that treating it as such has trapped AI safety research into worrying about the welfare of systems that cannot have welfare. Remove that error, his argument goes, and safety work can focus on actual harms rather than imaginary inner lives.
DeepMind has not formally endorsed that position. The disclaimer makes clear it is Lerchner's alone. But the company has not distanced itself from the underlying conclusion either. That silence, combined with the quiet removal of the letterhead after press inquiries, suggests discomfort with the contradiction rather than rejection of the argument.
The result is a company maintaining two positions simultaneously: a public one in which AGI is the most important development in human history and DeepMind is racing toward it, and a private one in which the product being built can never have inner experience and therefore cannot make claims on moral consideration. Whether DeepMind plans to reconcile those positions, or whether it has already done so quietly, is a question the company has declined to answer.