The AI Pioneer Behind Tesla's Autopilot Hasn't Written Code Since December
Andrej Karpathy has not typed a line of code since December.

Andrej Karpathy has not typed a line of code since December. He said so plainly on the No Priors podcast on Friday, and he is not anxious about it — he is euphoric in a way that reads, to an outside observer, as mildly destabilizing.
"I kind of feel like I was just in this perpetual — I still am often in this state of AI psychosis just like all the time," Karpathy told hosts Sarah Guo and Elad Gil. "Because there was a huge unlock in what you can achieve as a person, as an individual."
That is the quote that generated a wave of headlines Friday, almost all of them built around the same frame: brilliant AI pioneer in mental distress, nervous about the future, uncertain where the technology is taking us. The Mint/Google News wire version that hit desks this morning called it "why Andrej Karpathy is nervous about the future." That framing is wrong in a way that is not subtle.
The psychosis is capability FOMO. Read the transcript.
In December, Karpathy said, something flipped. He had been writing roughly 80% of his own code, delegating 20% to agents. That ratio inverted — and then inverted again. "I don't think I've typed like a line of code probably since December, basically, which is like an extremely large change," he said. "And I don't even think it's 20/80 by now. I think it's a lot more than that."
The nervousness he describes is not dread about AI's direction. It is the anxiety of a researcher who can see an enormous frontier and cannot move fast enough to explore it. "I want to be at the forefront of it," Karpathy said. "I'm very antsy that I'm not at the forefront of it. And I see lots of people on Twitter doing all kinds of things and they all sound like really good ideas. And I need to be at the forefront or I feel extremely nervous."
That is a very different thing than a founder frightened by the technology he helped build.
Five days before the No Priors interview, Karpathy released autoresearch on GitHub — a 630-line, single-GPU tool that puts an AI agent to work running ML experiments overnight, unattended. The agent modifies training code, runs a five-minute experiment, checks if the loss metric improved, keeps or discards, and loops. You prompt it, go to sleep, and wake up to a log.
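Stripped to its skeleton, that overnight loop is a simple propose-measure-keep cycle. The sketch below is a hypothetical illustration, not the actual autoresearch code: `propose_change` stands in for the LLM agent editing training code, and `run_experiment` stands in for a five-minute GPU training run, here replaced by a toy loss function.

```python
import random

def run_experiment(config):
    """Stand-in for a five-minute training run returning a validation loss.
    In the real tool this would launch training on the GPU."""
    # Toy loss: each accepted tweak shaves a bit off, plus measurement noise.
    return 3.0 - 0.1 * config["tweaks"] + random.uniform(-0.02, 0.02)

def propose_change(config):
    """Stand-in for the agent writing an arbitrary code change."""
    candidate = dict(config)
    candidate["tweaks"] += random.choice([0, 1])  # sometimes a no-op idea
    return candidate

def overnight_loop(n_experiments=50, seed=0):
    """Run n_experiments, keeping each change only if the loss improved."""
    random.seed(seed)
    best_config = {"tweaks": 0}
    best_loss = run_experiment(best_config)
    log = []
    for i in range(n_experiments):
        candidate = propose_change(best_config)
        loss = run_experiment(candidate)
        kept = loss < best_loss          # keep or discard
        if kept:
            best_config, best_loss = candidate, loss
        log.append((i, loss, kept))
    return best_config, best_loss, log
```

The real system adds the parts that matter: the agent reads the codebase, generates hypotheses, and writes arbitrary diffs rather than nudging a single knob. But the outer control flow is this loop, run unattended while you sleep.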
In his first run, the agent conducted 700 experiments over two days. It discovered 20 improvements. Applied together, they cut time-to-GPT-2-quality from 2.02 hours to 1.80 hours on the nanochat leaderboard — an 11% speedup on a training codebase Karpathy had already optimized by hand.
Tobi Lütke, the CEO of Shopify, tried it the same week. He pointed autoresearch at internal company data and let it run overnight. Thirty-seven experiments. Nineteen percent performance gain, with a smaller agent-optimized model outperforming a larger manually-configured one.
Karpathy's language about what this implies for frontier labs was unambiguous. "All LLM frontier labs will do this," he wrote on X. "It's the final boss battle." He acknowledged the gap between his 630-line toy setup and the codebases at OpenAI or Anthropic but dismissed it as an engineering problem rather than a conceptual one. "Doing it is 'just engineering' and it's going to work," he wrote. "You spin up a swarm of agents, you have them collaborate to tune smaller models, you promote the most promising ideas to increasingly larger scales, and humans (optionally) contribute on the edges."
The next version of autoresearch, he said, should be "asynchronously massively collaborative for agents. The goal is not to emulate a single PhD student, it's to emulate a research community of them."
Critics surfaced quickly. Some pointed out that autoresearch resembles AutoML approaches — automated hyperparameter search, neural architecture search — that Google, Microsoft and other labs have used for years. Karpathy pushed back hard. "Neural architecture search as it existed then is such a weak version of this that it's in its own category of totally useless by comparison," he wrote. "This is an actual LLM writing arbitrary code, learning from previous experiments, with access to the internet. It's not even close."
That counterargument is worth taking seriously. The key distinction is that autoresearch's agent reads research papers, generates hypotheses, and writes arbitrary code changes — it is not sampling from a predefined search space. Whether that distinction survives at frontier scale, where the training codebase involves distributed systems, custom hardware kernels, and years of accumulated engineering debt, is an open question. The 630-line toy is a proof of concept, not a production system.
In the No Priors conversation, Karpathy also spent several minutes discussing what makes AI agents feel like collaborators rather than tools, and what the next generation of persistent, loop-based agents actually requires. It is worth reading for anyone building in that space.
The Mint headline is not the story. The story is that one of the most credentialed ML researchers alive stopped writing code in December and has spent the three months since then stress-testing the ceiling of what agents can do — including building a system that runs scientific experiments overnight and hands back results in the morning. The nervousness is that the ceiling keeps moving up faster than he can probe it.
For the full interview: No Priors podcast, episode published March 21, 2026, available on YouTube. The autoresearch repository is at https://github.com/karpathy/autoresearch. Fortune's earlier coverage of the autoresearch release is at https://fortune.com/2026/03/17/andrej-karpathy-loop-autonomous-ai-agents-future/.

