Periodic Labs, the AI science startup founded by two former OpenAI and DeepMind researchers, is in discussions to raise new funding at a valuation of about $7 billion — roughly five times the $1.3 billion valuation investors assigned the company just six months ago, according to people familiar with the matter. The San Francisco company, which employs about 40 people, has already signed its first paying customers in the semiconductor industry and is generating revenue, rare traction for a company whose stated mission is nothing less than automating scientific discovery itself.
The numbers are loud. But the interesting question underneath them is narrower: is this actually possible?
Liam Fedus, Periodic Labs co-founder and former vice president of research at OpenAI, helped build ChatGPT. Ekin Dogus Cubuk, the other co-founder, led the materials and chemistry team at Google Brain and DeepMind, where he co-developed GNoME (Graph Networks for Materials Exploration), a tool that discovered over 2 million new crystals in 2023. Both reached the same private conclusion at roughly the same time: frontier AI models are genuinely bad at physics, and reading more text won't fix it.
That conclusion is the product. Everything else follows from it.
The case against text-trained AI on scientific problems is concrete. Formation enthalpy labels — the thermodynamic data that tells you whether a chemical reaction releases or absorbs energy — carry enough measurement noise that training on published literature produces models that predict poorly. Negative results don't get published at scale. The iterative loop that actual science runs on — hypothesize, test, measure, revise — doesn't exist in a text corpus. You can read every published paper on superconductors and still not know whether a given material will superconduct at room temperature.
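The label-noise argument can be made concrete with a toy experiment. The sketch below is illustrative only: a linear "property" of a 5-dimensional descriptor stands in for formation enthalpy, and Gaussian noise on the training labels stands in for inconsistent published measurements. Nothing here reflects Periodic's actual data or models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for learning a material property from literature data:
# the true property is linear in a 5-dimensional descriptor, and the
# published labels carry heavy measurement noise.
n_train, n_test, d = 200, 1000, 5
w_true = rng.normal(size=d)

X_train = rng.normal(size=(n_train, d))
X_test = rng.normal(size=(n_test, d))

label_noise = 1.0  # std of measurement noise on published labels
y_noisy = X_train @ w_true + rng.normal(scale=label_noise, size=n_train)
y_clean = X_train @ w_true  # what a lab could measure directly

def fit_and_test_rmse(y):
    """Least-squares fit on the training labels, then RMSE against
    the noise-free test targets."""
    w_hat, *_ = np.linalg.lstsq(X_train, y, rcond=None)
    resid = X_test @ (w_hat - w_true)
    return float(np.sqrt(np.mean(resid**2)))

rmse_noisy = fit_and_test_rmse(y_noisy)
rmse_clean = fit_and_test_rmse(y_clean)
print(f"RMSE trained on noisy labels: {rmse_noisy:.3f}")
print(f"RMSE trained on clean labels: {rmse_clean:.3f}")
```

The clean-label run drives test error to essentially zero while the noisy labels impose an error floor that no amount of re-reading the same corpus removes — which is the sense in which a lab generating fresh, clean measurements beats training on the literature.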
"The internet has been exhausted," as Andreessen Horowitz, the company's lead seed investor, put it. "But training alone isn't enough. You can read and re-read the textbook, but eventually you need to run the experiment."
Periodic's answer is to build physical labs where AI agents run experiments, collect real data, and use nature as the training signal. Not simulated experiments. Not AI-generated summaries of published results. Robots that synthesize materials, characterize their properties, and generate experimental data that does not exist anywhere else. The company's first target is new superconductors — materials that conduct electricity without resistance — built in robotic powder synthesis labs where robots mix precursors, heat them, and characterize what comes out.
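What "using nature as the training signal" means mechanically is a closed loop: propose candidates, measure them, fold the measurements back into the model, repeat. Below is a minimal sketch under invented assumptions — `measure_in_lab` is a hypothetical stand-in for robotic synthesis and characterization (here just a hidden quadratic with a peak), and the nearest-neighbour surrogate is deliberately crude. None of it is Periodic's actual stack.

```python
import numpy as np

rng = np.random.default_rng(1)

def measure_in_lab(x):
    """Hypothetical stand-in for one robotic synthesis + characterization run."""
    return float(-np.sum((x - 0.7) ** 2))  # hidden optimum at x = 0.7

pool = rng.uniform(0, 1, size=(500, 3))              # candidate "compositions"
tried = [int(i) for i in rng.choice(len(pool), 5, replace=False)]
results = {i: measure_in_lab(pool[i]) for i in tried}
baseline = max(results.values())                     # best of the initial batch

for _ in range(20):                                  # 20 experiment cycles
    X = pool[tried]
    y = np.array([results[i] for i in tried])
    # Crude surrogate: predict each candidate by its nearest measured neighbour.
    dists = np.linalg.norm(pool[:, None, :] - X[None, :, :], axis=2)
    preds = y[np.argmin(dists, axis=1)]
    preds[tried] = -np.inf                           # never repeat an experiment
    pick = int(np.argmax(preds))                     # run the most promising one
    results[pick] = measure_in_lab(pool[pick])       # nature supplies the label
    tried.append(pick)

best = max(results.values())
print(f"best after {len(results)} experiments: {best:.4f} "
      f"(initial best {baseline:.4f})")
```

The point of the sketch is structural: every cycle the dataset grows by one measurement that existed nowhere until the robot produced it, which is exactly the data the company argues a text corpus cannot supply.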
The approach is more capital-intensive than a typical AI startup's. Physical labs, physical equipment, physical iteration cycles. Periodic is currently working with customers in the semiconductor, space, and defense industries on problems including heat dissipation in chip manufacturing, a domain where thermal management is a genuine engineering bottleneck. That the company has paying customers at this stage is a meaningful signal — it suggests at least one buyer outside Periodic believes the AI can usefully narrow a real experimental search space.
The team's résumé is the other signal. Cubuk and Fedus have hired Alexandre Passos, a co-creator of OpenAI's o1 and o3 reasoning models; Eric Toberer, a materials scientist who has published on superconductor discovery; and Matt Horton, who built Microsoft's MatterGen materials exploration tool. The team also includes contributors to the original transformer attention mechanism and to OpenAI's Operator agent product. The investor list reinforces the signal: Andreessen Horowitz led the seed, with follow-on participation from Coatue, DST Global, Nvidia's venture arm NVentures, Khosla Ventures, and individuals including Jeff Bezos, Eric Schmidt, and Google's Jeff Dean. According to the Los Angeles Times, Periodic had previously raised $200 million at a $1 billion valuation, also led by a16z, before the larger September round.
The $7 billion number is where skepticism is most warranted. The jump from $1.3 billion to $7 billion in six months is a math problem, not a product demonstration. Deal talks can collapse or change. And even accepting the thesis that automating science is a real and valuable goal, the path from here to a durable business — one where robotic labs generate reliable returns, where the AI actually outperforms human experimentalists at the margins that matter, where the model improvements compound faster than the hardware costs — has never been built by anyone. DeepMind's AlphaFold solved protein folding. GNoME mapped millions of crystals. Neither achievement has yet been shown to translate reliably into commercial products at scale.
The $15 trillion in global GDP that Periodic's investors cite — spanning semiconductors, advanced manufacturing, energy, and aerospace — is a real number. The gap between "this market is enormous" and "we will capture significant value from it" is where most ambitious science companies have come to grief.
The raised eyebrow here belongs on the technical thesis, not the valuation. If Fedus and Cubuk are right that the limiting factor in AI-assisted science is the absence of real experimental data — and the evidence from GNoME and from a16z's own research at Stanford suggests they have a point — then Periodic is working on a genuinely hard problem nobody else has solved. Whether $7 billion is the right price for the bet depends entirely on whether the hardware can iterate faster than human scientists, and whether that advantage compounds. Nobody knows the answer yet.
That uncertainty is not a reason to dismiss the company. It is the story.