Six-Year AI Growth Window Opens If Automation Trends Continue
There is a specific number in the new economic model for AI-driven growth, and it is six years. That is how long a calibrated simulation says it takes for an explosive, self-reinforcing growth event, a breaking of the economic ceiling, to arrive once software research is fully automated and other sectors reach five percent automation. The model is careful: it does not say the ceiling is about to break. It says the conditions for a break are now, for the first time, formally tractable. And it says current empirical data on AI research automation is already tracking the danger zone the model identifies.
What has changed in the last thirty days is not the theory. It is the calibration.
Jack Clark's Import AI newsletter, published Sunday, contains new data points that did not exist when the NBER working paper was finalized. On SWE-Bench, the software engineering benchmark, performance crossed from two percent to ninety-four percent in under three years. On a specific AI safety research task, speedups against human baseline have run: 2.9x in May 2025, 16.5x in November 2025, 30x in February 2026, 52x in April 2026. The METR autonomy dataset shows AI task duration expanding from thirty-second human-scale tasks in 2022 to twelve-hour autonomous runs in 2026. Ajeya Cotra at METR projects hundred-hour task capability by end of this year.
These are not projections. They are measurements. And they are what the NBER model was built to explain.
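Still, the four speedup measurements do imply something quantitative: taken at face value, they trace a roughly exponential curve. A back-of-envelope least-squares fit of log-speedup against time, which is my own check and not part of the paper's calibration, puts the doubling time under three months:

```python
import math

# Speedup multipliers on the AI safety research task cited above,
# indexed by months since the first measurement (May 2025 = 0).
# The four data points are from the text; the fit is illustrative.
points = [(0, 2.9), (6, 16.5), (9, 30.0), (11, 52.0)]

# Ordinary least squares on ln(speedup) vs. time gives the exponential
# growth rate; the doubling time follows as ln(2) / rate.
xs = [t for t, _ in points]
ys = [math.log(s) for _, s in points]
n = len(points)
x_mean = sum(xs) / n
y_mean = sum(ys) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
        sum((x - x_mean) ** 2 for x in xs)

doubling_months = math.log(2) / slope
print(f"growth rate: {slope:.3f} per month")
print(f"doubling time: {doubling_months:.1f} months")
```

Four points over eleven months is thin evidence for any functional form, but the fit makes the headline claim concrete: if the trend held, the speedup would double roughly every quarter.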
The paper, by Tom Davidson, Basil Halperin, Thomas Houlden, and Anton Korinek, adds what the authors call an innovation network effect to the standard semi-endogenous growth framework. Standard models, most influentially Bloom et al. (2020), predicted that automating R&D alone could not sustain exponential growth because diminishing returns to ideas eventually dominate. The Davidson et al. model accepts that premise but adds a mechanism the earlier work misses: research automation in one sector improves the productivity of research in adjacent sectors, creating a cross-sector multiplier that can overcome the natural drag. Two loops, working together, can break the Malthusian ceiling. The technological loop: AI improves AI research, which improves AI. The economic loop: higher output finances further research.
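The interaction between diminishing returns and the feedback loops can be sketched with a toy difference equation. This is a minimal illustration of the technological loop only, using a standard semi-endogenous law of motion; the parameter values are invented and this is not the Davidson et al. model or calibration:

```python
# Toy sketch of the mechanism described above. Semi-endogenous growth:
# dA/dt = S^lambda * A^phi with phi < 1, so a constant research input S
# yields only polynomial growth in the idea stock A. Letting AI capability
# feed back into S (the technological loop) can overwhelm that drag.
# All parameter values are invented for illustration.

PHI = 0.5          # returns to the idea stock; phi < 1 is the semi-endogenous case
LAMBDA = 1.0       # returns to research input
HUMAN_LABOR = 1.0  # fixed human research input

def simulate(automation_feedback, steps=200, dt=0.1):
    """Euler-integrate dA/dt = (L + f*A)^lambda * A^phi."""
    A = 1.0
    history = [A]
    for _ in range(steps):
        research_input = HUMAN_LABOR + automation_feedback * A
        A += dt * (research_input ** LAMBDA) * (A ** PHI)
        history.append(A)
        if A > 1e12:  # growth has gone explosive; stop before float overflow
            break
    return history

# Without feedback, diminishing returns win: growth is merely polynomial.
baseline = simulate(automation_feedback=0.0)
# With AI augmenting its own research input, the effective exponent on A
# exceeds one and the idea stock blows up in finite time.
accelerated = simulate(automation_feedback=1.0)
```

The point of the sketch is the phase change, not the numbers: with the feedback term switched off, diminishing returns dominate exactly as Bloom et al. predicted; switch it on and the same equation produces explosive growth. The paper's cross-sector multiplier plays an analogous role across many sectors at once.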
Jones and Tonetti, whose Stanford paper the NBER analysis builds on, argued that bottlenecks in energy, data, and materials remain genuinely constraining. The NBER paper's response is that these bottlenecks matter less if automation advances fast enough. That is an empirical claim, and the empirical record is noisy. The six-year figure is a model output, not a prediction; it depends on parameter choices that are themselves contested.
But the more consequential gap is not between economists. It is between the formal model and what practitioners are already doing.
Clark assigns sixty percent probability to fully automated AI R&D, with no human in the loop, by the end of 2028. Anthropic has published results from internal experiments where AI agents, given a research direction, autonomously designed techniques that beat human researcher performance on scalable oversight tasks. OpenAI has stated publicly it wants an automated AI research intern by September 2026. These are not speculative positions. They are product goals.
The labs are not building toward the singularity. They are building toward better products. But the feedback loops they are creating do not read economic papers, and they do not know the difference.
Anton Korinek, one of the paper's authors and a senior fellow at the Peterson Institute for International Economics, has spent years writing about the governance challenges of advanced AI. He is not an accelerationist. The paper's other authors bring different strands of growth theory to the analysis. What they have produced is an economic model with stated assumptions and a specific prediction: if current empirical trends continue, the Malthusian ceiling has an exit ramp within six years. The model does not say anyone will take it. It says the door is now visible, and the empirical evidence suggests we are already standing in front of it.
The question for labs, funders, and policymakers is not whether the model is right. It is whether they are treating its conditions as load-bearing, and the evidence from product roadmaps and published research suggests most are not. What to watch next: whether the labs that have publicly committed to automated research milestones treat those targets as genuine governance constraints — and whether any regulator asks the same question.
Sources:
NBER Working Paper w35155: When Does Automating AI Research Produce Explosive Growth?
Import AI 455: AI systems are about to start building themselves
Jones and Tonetti (Stanford): Automation and Innovation
Bloom, Jones, Van Reenen, and Webb (2020): Are Ideas Getting Harder to Find?