Europe's biggest seed round just backed an AI thesis with no public proof
Europe's largest seed financing to date is not just a startup trophy. It is a $1.1 billion bet, at a $5.1 billion valuation, that the current AI playbook of training models on huge piles of human-made text, images, and code may run out of road before it reaches something smarter than us, and that a different path can get there first.
That is the argument behind WIRED's report that David Silver, the DeepMind researcher behind AlphaGo and AlphaZero, has raised $1.1 billion for a new London startup called Ineffable Intelligence at a $5.1 billion valuation. Reuters reported that it is Europe's largest seed financing to date. The unusual part is that Silver is telling investors AI should learn mostly from its own experience rather than from human examples, and he has raised this money before showing a public product, benchmark, or paper.
On Ineffable's website, the company says it is building a "superlearner" that discovers knowledge from its own experience "without relying on human data." Silver put the case more sharply in his interview with WIRED: he thinks the mainstream large language model path to superintelligence will fail, and described human data as "a kind of fossil fuel" compared with systems that learn for themselves, which he called a "renewable fuel."
That matters because the past few years of AI have been built on the opposite assumption. The dominant labs trained giant models on internet-scale text, images, and code, then improved them with more feedback from people. Reinforcement learning, a training method where systems improve through trial, error, and reward, already plays a role inside that pipeline. Silver's pitch is harsher than that. He is arguing that the human-data era is a shortcut, not the end state.
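To make the "trial, error, and reward" loop concrete, here is a minimal sketch of tabular Q-learning, one of the classic reinforcement learning algorithms, on a hypothetical five-state corridor. Everything in it (the environment, the constants, the names) is illustrative, not drawn from any lab's actual system; it only shows the shape of learning from experience rather than from labeled examples.

```python
import random

# Hypothetical toy environment: states 0-4 in a corridor. The agent starts
# at state 0 and earns a reward of 1 only when it reaches state 4.
random.seed(0)
N_STATES = 5
ACTIONS = [-1, +1]                    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

# Q[state][action_index]: the agent's estimate of each action's value.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Trial: explore a random action with probability EPSILON,
        # otherwise exploit the current best estimate.
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] > Q[state][1] else 1
        next_state = max(0, min(N_STATES - 1, state + ACTIONS[a]))
        # Reward signal: 1 at the goal, 0 everywhere else.
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Error-driven update: nudge the estimate toward the observed
        # reward plus the discounted value of the next state.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

# The learned greedy policy: which way the agent moves from each state.
policy = ["right" if Q[s][1] >= Q[s][0] else "left" for s in range(N_STATES - 1)]
print(policy)
```

No human ever tells the agent which move is correct; it discovers the "always step right" policy purely from its own interactions and the reward signal, which is the property Silver's argument turns on.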
Investors clearly heard a frontier thesis, not a normal startup deck. Reuters reported that the round included Nvidia and Google, and cited the British Business Bank, which said it invested $20 million. Sovereign AI, a UK government-backed program, said its support also includes access to the country's largest AI supercomputers, visas, and "the unique levers of the British state." That turns the company into something more than a private moonshot. Britain is attaching industrial policy to a lab whose public technical proof is still missing.
Sequoia Capital summed up Ineffable's approach as "No pre-training. No imitation." That is the cleanest investor version of the bet. It is also where the skepticism should start. There is still no public demo, benchmark, paper, or product showing that a mostly experience-first system can escape the closed worlds where reinforcement learning has historically looked strongest, such as games and simulations.
Even Silver's broader case is better understood as a direction than as a proven recipe. ZDNET reported on earlier work from Silver and Richard Sutton arguing that future agents will live in "streams of experience" rather than short bursts of human-labeled interaction. That helps explain the worldview behind Ineffable. It does not prove the company has solved the hard part, which is turning that worldview into a general system that works outside a toy environment.
So the evidence on display today is financial and political. Sequoia, Lightspeed, Nvidia, Google, and the British state are all effectively saying this technical path is plausible enough to back before outsiders can inspect it. If Silver is right, labs built around ever-larger piles of human data will look like they optimized the wrong fuel source. If he is wrong, Britain just helped turn one of AI's most expensive prestige bets into public policy. The next thing to watch is simple: public proof.