In 1957, eight young scientists and engineers left Shockley Semiconductor Laboratory, then the most prestigious semiconductor lab in the world, and founded Fairchild Semiconductor. William Shockley called them traitors. They created the modern semiconductor industry. Sixty-nine years later, two researchers from two of the best-funded AI labs in the world left their posts within weeks of each other to join a startup with no product, no demo, and no named investors. The pitch is the same one that has seduced ambitious researchers since before Silicon Valley had a name: harder problems, not better pay.
The company is Core Automation. What it has going for it is Jerry Tworek, the former VP of Research at OpenAI and a seven-year veteran who most recently oversaw some of the company's most consequential research, including the project that became o1, the reasoning model that reshaped the AI field in late 2024. Tworek did not leave quietly. On Jan. 5, 2026, he posted to his 35,000 followers on X: "This is the note I have shared with my team today," and linked to what appeared to be a resignation letter. The post drew 733,000 views within hours. His stated reason, given in a subsequent interview on the Core Memory podcast, was blunt: OpenAI had become too conservative for the high-risk pioneering work he thought mattered. "I am leaving to try and explore types of research that are hard to do at OpenAI," he said.
The researchers who followed him did not come for the money. Rohan Anil spent years at Anthropic, the AI safety company behind Claude, and announced his departure on X with a verb that says everything: "Jerry Tworek nerdsniped me into starting this with him and others." Anmol Gulati, a research scientist at Google DeepMind working on Gemini, posted the same week that he was "starting something new with some exceptional people." Neither cited compensation. Neither cited equity. Both cited the same bait: unsolved problems.
The pitch they signed up for is extraordinary in its ambition. According to internal materials reviewed by The Information, AI CERTs News, and The Decoder, Core Automation is working on a system it calls Ceres: a model designed to learn continually in production rather than following the standard pretrain-and-fine-tune pipeline that every major AI lab uses today. The current paradigm trains a model once on a massive dataset and, when improvements are needed, trains a successor largely from scratch. This is expensive and slow, and naively updating a deployed model's weights instead tends to erase what it learned before. Tworek's team is proposing a model that learns from experience the way a human does, without that catastrophic forgetting. The internal pitch claims such a system would need one-hundredth the training data of today's frontier models. The deck goes further: it proposes revisiting the optimization algorithms at the foundation of deep learning, "up to and including gradient descent," the engine of almost every model deployed today and a target ambitious enough to unsettle any practitioner who has spent time trying to improve basic training dynamics.
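Catastrophic forgetting itself is easy to demonstrate. The following PyTorch sketch, which is purely illustrative and not drawn from Core Automation's materials, trains a small network on one synthetic task, fine-tunes it on a second, and measures how badly accuracy on the first collapses. Every task, layer size, and hyperparameter here is invented for the example.

```python
# Toy demonstration of catastrophic forgetting. Illustrative only:
# the tasks, network, and hyperparameters are all made up.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(offset):
    # Binary classification: which side of a shifted diagonal line?
    x = torch.randn(512, 2) + offset
    y = (x[:, 0] + x[:, 1] > 2 * offset).float().unsqueeze(1)
    return x, y

def train(model, x, y, steps=300):
    opt = torch.optim.SGD(model.parameters(), lr=0.1)  # plain gradient descent
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

def accuracy(model, x, y):
    with torch.no_grad():
        return ((model(x) > 0).float() == y).float().mean().item()

model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
task_a = make_task(offset=0.0)
task_b = make_task(offset=4.0)

train(model, *task_a)
print(f"task A accuracy after training on A: {accuracy(model, *task_a):.2f}")

train(model, *task_b)  # sequential update, no replay of task A data
print(f"task A accuracy after training on B: {accuracy(model, *task_a):.2f}")
```

On a run like this, accuracy on the first task typically falls from near-perfect toward chance after training on the second, because nothing in plain gradient descent protects the weights the first task depended on. Scaled up to frontier models, that fragility is a large part of why labs retrain from scratch rather than update in place.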
Core Automation is not selling anything yet. Its website, which went live in January 2026, consists of a manifesto, a team page, and a contact email. The company has no published model, no benchmark results, no evidence that its architecture works at frontier-model scale. What it has instead is a $500 million to $1 billion fundraising target at an implied valuation above $5 billion, according to The Information, which reviewed the internal materials. That is not a valuation for a product. That is a valuation for a team and a narrative.
The historical echo is real, but so is the gap between then and now. Fairchild had a working prototype within months. Core Automation has a website. The architectures it is proposing have been attempted in academic settings and have consistently hit the same wall: catastrophic forgetting, convergence instability, the fundamental difficulty of making a neural network learn new things without breaking old ones. These are not execution problems. They are open problems in the field. The $1 billion ask is partly a bet that this time, with this team, the wall gives.
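One representative line of those academic attempts is regularization-based continual learning, such as elastic weight consolidation (Kirkpatrick et al., 2017), which penalizes moving the weights that mattered most for an earlier task. Here is a minimal sketch, continuing the toy setup above and assuming the same model and tasks; the diagonal Fisher estimate and the `lam` penalty strength are standard simplifications.

```python
# Sketch of elastic weight consolidation (EWC), one published mitigation.
# Assumes the toy model and tasks from the previous example.
import torch
import torch.nn as nn

def fisher_diagonal(model, x, y):
    # Diagonal Fisher estimate: squared gradients of the old-task loss,
    # a cheap proxy for how much each weight matters to that task.
    model.zero_grad()
    nn.BCEWithLogitsLoss()(model(x), y).backward()
    return {n: p.grad.detach() ** 2 for n, p in model.named_parameters()}

def train_ewc(model, x, y, fisher, anchor, lam=100.0, steps=300):
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        for n, p in model.named_parameters():
            # Quadratic pull toward the old weights, scaled by importance.
            loss = loss + lam * (fisher[n] * (p - anchor[n]) ** 2).sum()
        loss.backward()
        opt.step()

# Usage, after training on task A:
#   fisher = fisher_diagonal(model, *task_a)
#   anchor = {n: p.detach().clone() for n, p in model.named_parameters()}
#   train_ewc(model, *task_b, fisher, anchor)
```

Methods like this trade plasticity for stability: raise the penalty and the model stops learning the new task, lower it and the old task erodes anyway. That trade-off, not any execution hurdle, is what a production continual learner would have to beat.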
What makes the pattern worth watching is not whether Core Automation succeeds. It is what its success would mean: if elite researchers can reliably leave the mega-labs to chase unsolved problems with substantial capital already committed, the concentration of talent that defines the current AI race breaks open. Every large lab becomes vulnerable to the same nerdsnipe. The researchers who joined Core Automation did not do so because they needed money or because they lost faith in their employers. They did it because Tworek made the unsolved problem sound more interesting than the comfortable one. That is the same calculation the traitorous eight made in 1957. Whether it produces the same result depends on whether the architecture actually works, a question that will take years to answer.