Demis Hassabis tried to buy his way out of Google. Backed by Reid Hoffman, he and Mustafa Suleyman offered the company a billion dollars to let DeepMind go independent, then spent three years with lawyers and bankers trying to make it happen. Google said no.
That failed gambit is the least-told story in AI, and it explains more about where the industry is now than any product announcement this year. The deal never closed, but Hassabis ended up exactly where he is today: inside Google, more locked in than ever, running the research division that just might have the best structural position in the entire AI race.
The contrast is easier to see once you understand the pressure the others are under. Every major AI lab is running two competitions simultaneously. They are competing to build the most capable models, and they are competing to prove to investors that the business works before those investors demand their money back through an IPO. OpenAI is projecting $14 billion in losses for 2026, according to Axios, as it tries to demonstrate a sustainable business model before going public. Anthropic is on a path toward its own public offering at a reported $380 billion valuation. DeepMind is not.
DeepMind is the only major AI lab not running both races at the same time. That absence of a second front is looking less like a concession and more like a structural advantage. When you do not have to answer to public market investors, you do not have to justify every research bet by the quarter. Hassabis put it plainly at Davos, speaking to Axios' Ina Fried: "We do not feel any immediate pressure to make knee-jerk decisions regarding monetization through ads."
That comment is worth sitting with. OpenAI is testing advertising. Hassabis is not even considering it. That difference is not a product decision. It is a governance decision, and governance is the story the AI press keeps missing.
The $1 billion pledge from Hoffman is the specific detail that makes this more than an abstraction. Per Axios, a billionaire investor was willing to write a ten-figure check to separate DeepMind from Google. That a man with Hoffman's pattern recognition saw DeepMind as worth buying tells you something about what the lab already was. The fact that it did not happen means DeepMind stayed inside an institution whose incentive structure is different from any other lab's, and that difference is now showing up in the portfolio.
AlphaFold solved protein folding. Gemini forced OpenAI into what sources described as a "code red" frenzy to keep pace at the end of 2025. Isomorphic, the drug discovery subsidiary, has a pipeline that does not answer to any quarterly earnings call. Hassabis has either the best portfolio in AI or the best luck. The people who know him well suspect it is both.
The Mallaby biography surfaces the standard criticism, and it is worth including because the counterpoint is what makes this story durable. The classic read on Hassabis, Mallaby told Axios, is that he is consumed with winning the AI race, that his ambition is messianic and over the top, and that it distorts DeepMind's mission. The counterargument is that a person with that level of conviction, running a lab with that level of resources, inside an institution with that level of patience, is exactly the combination the field does not know how to reproduce.
The governance question that Sonny's brief raised is the right one to close on. Can one person steer AI when every investor is already in the driver's seat? The answer DeepMind offers is not a general theory. It is a specific case: a researcher who tried to escape his investors and ended up inside the biggest one, and right now that looks less like a loss and more like the setup the other labs would trade for.
The Axios reporting on the book is at axios.com. Sebastian Mallaby's biography is Scale. The Economist profile ran this week.