Big Tech built the pipelines, trained the researchers, and published the research that made its own displacement possible. Six companies. Six months. Nearly $4.2 billion in seed and early-stage funding funnelled into AI startups founded by the people the major labs spent years developing — to pursue exactly the work those labs deprioritised.
The structural parallel venture historians reach for is Fairchild Semiconductor. In 1957, eight researchers left Shockley Semiconductor Laboratory — the first major semiconductor company, founded by Nobel laureate William Shockley — to start Fairchild, whose alumni went on to create Intel, AMD, Kleiner Perkins, and Sequoia. The original institution's resources became the raw material for its own displacement. What is happening now in AI has the same architecture.
On April 28, CNBC reported that investors have placed $18.8 billion into AI startups founded since the start of 2025, on track to surpass the $27.9 billion deployed for companies launched in 2024. But the more striking number is where that money is going: not into the next chatbot or coding assistant, but into the specific research territories the major labs have ceded while they chase the next benchmark.
The names read like a cross-section of where frontier AI research actually lives. David Silver spent 13 years building and leading Google DeepMind's reinforcement learning team before leaving to co-found Ineffable Intelligence, which announced a $1.1 billion seed round on April 27, with backing from Sequoia, Lightspeed, Nvidia, and Google's own AI fund, per CNBC. Yann LeCun, until recently Meta's chief AI scientist, launched AMI Labs and raised $1.03 billion at a $3.5 billion pre-money valuation, according to TechCrunch.

Tim Rocktäschel, a former DeepMind engineer, founded Recursive Superintelligence; the Financial Times reported the company was raising up to $1 billion, and deal intelligence firm Dealroom confirmed at least $500 million closed at a $4 billion pre-money valuation with GV (Google Ventures) and Nvidia as lead investors. Anna Goldie and Azalia Mirhoseini, both from the AlphaChip team that built AI-assisted chip design inside Google, raised $335 million for Ricursive Intelligence across two rounds.

Liam Fedus, a former OpenAI researcher, and Ekin Dogus Cubuk, formerly at Google Brain, launched Periodic Labs with a $300 million seed backed by Andreessen Horowitz, Nvidia, and Jeff Bezos, per TechCrunch. And ex-Anthropic and ex-xAI researchers founded Humans&, which raised $480 million in January at a $4.48 billion valuation, Reuters reported.
Core Automation, a seventh startup, has hired top researchers from both Anthropic and Google DeepMind in recent weeks, Business Insider reported.
The money is following a thesis, not a product. "When you're in a race, you narrow focus," an investor at Eurazeo, a European VC firm, told CNBC. "That creates a vacuum." Entire areas of research, from new architectures and agents to interpretability and vertical models, are being deprioritised, the investor argued, not because they don't matter, but because they don't win the immediate race. An HV Capital partner put it more bluntly: inside the large foundational labs, the pressure to deliver benchmark performance and maintain rapid release cycles leaves limited room for genuinely exploratory research, particularly outside the dominant LLM paradigm.
Google DeepMind spent a decade building Silver's reinforcement learning team; that team now runs the best-capitalised independent AI startup in Europe. Meta funded LeCun's research division for years; his new lab raised more in a single seed round than most universities receive in a decade. xAI has seen its researchers depart for the new labs, per CNBC. Anthropic is watching its researchers get recruited by startups that did not exist six months ago.
The counter-argument is real: all six companies are pre-product and pre-peer-review. The valuations reflect pedigree premiums as much as technical plans. None of them has shipped a system that a user can evaluate today. The compute and data advantages the incumbents retain are not nothing. Big Tech is not going away.
But the knowledge that left with these researchers — the institutional memory of what failed, what worked, and why certain approaches were abandoned — is also not nothing. The vacuum Eurazeo described is real. The question is who fills it, and on whose terms.
What to watch: whether any of these startups publish technical work (a paper, a demo, a benchmark result) before their next fundraise. That will answer whether the pedigree premium survives contact with actual evidence. It will also tell the incumbents whether the researchers they lost were carrying irreplaceable knowledge, or just the confidence to leave.