The term "AGI" has no meaning. Every company uses it differently, and that's the story.
At a DealBook panel in New York last year, Ajeya Cotra watched something revealing unfold. The moderator asked whether panelists believed AI capable of "everything humans can do" would arrive by 2030. Seven or eight hands went up. Then the moderator asked whether AI would create or destroy more jobs over the following decade. Eight of those same ten people said AI would create more jobs than it destroyed.
Cotra, a senior advisor at Coefficient Giving who spent years modeling AI trajectories for Open Philanthropy, found the contradiction stark. "Why is it that you think we will have AI that can do absolutely everything that the best human experts can do in five years, but will actually end up creating more jobs than it destroys in the following 10 years?" she asked on the 80,000 Hours podcast. When she pressed panelists afterward, they backed off immediately: "What does AGI really mean?"
That moment captures where the AI industry's central concept now stands. AGI once pointed toward something specific and terrifying: autonomous systems that surpass the best human experts at their own specialties, that improve themselves faster than we can track, that reorder civilization. Today it means whatever the speaker needs it to mean.
The slippage is not accidental.
The benchmark that keeps moving
Walk through what different people mean and you find no shared ground. Dario Amodei, CEO of Anthropic, has described AGI as smarter than a Nobel Prize winner across most relevant fields — biology, programming, math, engineering, writing. Mustafa Suleyman, who leads Microsoft AI, has suggested AGI is any system that can turn $100,000 into $1,000,000. Last month, Jensen Huang told Lex Fridman on a podcast that AGI has already been achieved — using Fridman's definition of an AI that could start and grow a company worth $1 billion.
None of these definitions is equivalent to the others. A system with Nobel-level expertise across most fields is categorically different from one that can make a tenfold return on an investment, or one that can build a billion-dollar company. Yet all three people run organizations betting trillions of dollars on the premise that AGI is coming, or already here.
Helen Toner, a Georgetown researcher who was one of the few independent voices on OpenAI's board before her departure, put it plainly in a recent piece: the term "AGI" is almost useless at this point. The goalposts have moved so many times that any claim of progress toward it is unfalsifiable. People said AGI would arrive when models could pass the Turing test. Then when they could write coherent essays. Then when they could reason. Each milestone arrived and the definition quietly migrated.
What the researchers actually think
The people closest to building these systems are considerably less credulous than the people selling them.
A survey of 475 AI researchers conducted by the Association for the Advancement of Artificial Intelligence (AAAI) found that 76% consider it "unlikely" or "very unlikely" that scaling up current AI approaches will yield AGI. The majority do not think current machine learning paradigms are sufficient for general intelligence at all. Seventy-seven percent of respondents said they would rather prioritize designing AI systems with acceptable risk-benefit profiles than pursue AGI directly.
Those numbers sit in strange tension with the investment thesis. The same companies whose executives are publicly predicting AGI within years are privately being assessed by their own partners against profit thresholds. When Microsoft renegotiated its investment in OpenAI in 2023, according to reporting by The Information, the contract defined AGI as technology capable of generating at least $100 billion in profits. OpenAI is not close to that number. It made $13 billion in revenue last year and burned through $8 billion in cash.
The measurement problem
The absence of a definition is not merely a philosophical inconvenience. It has practical consequences for how the technology is evaluated, regulated, and sold.
Google DeepMind published a paper in late March proposing a new framework. The researchers, including DeepMind cofounder Shane Legg, who popularized the term AGI in the early 2000s, identified 10 cognitive faculties they argue are essential for general intelligence: perception, reasoning, memory, learning, attention, social cognition, and others. Their key finding: current AI models have a "jagged" profile. They may exceed most humans in some areas, like mathematics or factual recall, while trailing even average people in others, like learning from experience or understanding social situations.
The paper proposed measuring AI systems across all 10 faculties and comparing performance to a representative sample of adults with secondary education. Under that framework, OpenAI's GPT-5 scored 57%, well short of the human baseline the framework treats as the bar for AGI.
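How such a headline number might be computed is easy to sketch. The snippet below is a minimal illustration, not the paper's actual method: it assumes each faculty is scored against the human baseline, capped at parity, and averaged with equal weight, and the faculty names and values are hypothetical.

```python
# Minimal sketch of a faculty-profile score. Assumes equal weighting and a
# cap at human parity; faculty names and values are hypothetical.

HUMAN_BASELINE = 1.0  # performance of the reference adult sample, normalized

def faculty_score(model_perf: float) -> float:
    """Score one faculty as a fraction of the human baseline, capped at 1.0."""
    return min(model_perf / HUMAN_BASELINE, 1.0)

def aggregate_profile(profile: dict[str, float]) -> float:
    """Average the per-faculty scores into a single headline percentage."""
    scores = [faculty_score(p) for p in profile.values()]
    return 100 * sum(scores) / len(scores)

# A deliberately "jagged" hypothetical profile: superhuman in some faculties,
# far below the human baseline in others.
example = {
    "mathematics": 1.4,               # above the human sample, capped at 1.0
    "factual_recall": 1.2,
    "perception": 0.6,
    "learning_from_experience": 0.2,
    "social_cognition": 0.3,
}

print(f"headline score: {aggregate_profile(example):.0f}%")  # prints 62%
```

Under that assumed capping rule, superhuman mathematics cannot buy back weak social cognition, which is how a jagged profile drags a headline score well below 100% even when some faculties sit far beyond human level.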
A competing effort comes from François Chollet, the researcher behind the Keras framework, who has argued for years that intelligence should be measured not by what a system already knows but by how efficiently it can acquire new skills. His ARC-AGI benchmark uses visual puzzle tasks that humans solve in seconds but that frontier AI models still find surprisingly difficult, because they demand the flexible abstract reasoning, the spotting of symmetries, and the generalization from a handful of examples that current systems struggle with. This month Chollet launched ARC-AGI-3, an interactive version in which AI agents must explore novel environments and work out the goals on the fly, abilities that come naturally to humans.
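The structure of those puzzles is worth seeing concretely. The sketch below shows the general shape of an ARC-style task, a few demonstration input/output grids plus a held-out test pair scored by exact match; the flip-the-grid rule and every grid value here are hypothetical, chosen for illustration rather than taken from the benchmark.

```python
# Sketch of an ARC-style task: demonstration input/output grid pairs, a test
# input, and exact-match scoring. Grids are small arrays of color indices.
# The task (a horizontal flip) and all values are made up for illustration.

from typing import Callable

Grid = list[list[int]]

def flip_horizontal(grid: Grid) -> Grid:
    """One candidate rule a solver might induce from the demonstrations."""
    return [list(reversed(row)) for row in grid]

# Demonstration pairs the solver is allowed to learn from.
demos: list[tuple[Grid, Grid]] = [
    ([[1, 0, 0],
      [0, 2, 0]], [[0, 0, 1],
                   [0, 2, 0]]),
    ([[3, 3, 0]], [[0, 3, 3]]),
]

test_input: Grid = [[0, 5, 7]]
expected_output: Grid = [[7, 5, 0]]

def solved(rule: Callable[[Grid], Grid]) -> bool:
    """A rule counts only if it fits every demo and the held-out test pair."""
    fits_demos = all(rule(inp) == out for inp, out in demos)
    return fits_demos and rule(test_input) == expected_output

print(solved(flip_horizontal))  # True: the rule generalizes from two examples
```

The point of the format is the sample size: with only a couple of worked examples per task, memorized knowledge is little help, and what gets measured is how efficiently a new rule is acquired.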
These are serious attempts to operationalize something real. But they disagree with each other about what AGI means, which means progress on one benchmark does not necessarily imply progress toward another.
The trap
Eryk Salvaggio, a fellow at Tech Policy Press, recently catalogued six ways that AGI thinking distorts AI research and policy. Among them: it substitutes a vague goal for concrete engineering targets; it lets companies claim progress without showing it; it focuses attention on an imagined future system while deflecting scrutiny from present harms; and it centers the values of the researchers who defined it rather than the communities who must live with the consequences.
"The term remains stubbornly amorphous," the Fortune piece on Huang noted, "despite the fact that several leading AI companies, with collective market valuations of more than $1 trillion, say that AGI is what they are racing toward."
That asymmetry is the real story. When a term means everything, it means nothing. When a goal is undefined, any result can be claimed as progress toward it. The people most invested in AGI as a concept are the ones most able to declare it achieved on their own terms.
Cotra's version of the problem is more grounded. She describes a mainstream view that expects the world in 2050 to look moderately different from today — maybe somewhat better medicine, a few more technologies. People who hold this view can simultaneously believe AGI is coming in 2030 because they don't think AGI, by their definition, changes very much. It just drives mild improvement.
Her alternative view: there's a reasonable chance the world in 2050 looks as different from today as today looks from the hunter-gatherer era. Ten thousand years of compressed progress, driven by AI that automates not just physical labor but all intellectual activity. That's a categorically different claim about what's coming, and it implies a categorically different response.
Which version you believe determines almost everything about how you think AI should be governed, deployed, and studied. The tragedy is that the word used to describe the hinge point of that disagreement has collapsed into marketing.