The gap between what AI could do to the labor market and what it has actually done just got a number: 64 percentage points.
Anthropic economists Maxim Massenkoff and Peter McCrory published a landmark labor market paper in March 2026 that calculated theoretical AI task coverage across occupations. For computer and math workers, that coverage hit 94 percent. The theory said most of what programmers do, AI could do. But when McCrory looked at what Claude actually covers in observed professional use, the figure dropped to roughly a third.
"I was somewhat surprised that the gap between sort of coding in general, which as we point out had something like 94 percent theoretical exposure, but then based on actual adoption, it was closer to 30 percent of the tasks across all the jobs in that pocket of the economy," McCrory told Fortune on April 7 — his most direct public comment on the finding since the paper dropped.
The 64-point gap is the story. Everyone leads with the 94 percent number because it makes for a striking headline. The more interesting question — why the reality is so far behind the theory — is harder to answer and rarely asked.
The paper, published to Anthropic's research blog on March 5, 2026, used a two-stage methodology the authors call "observed exposure." First, they measured which tasks are theoretically possible with current language models. Second, they checked which of those tasks actually show up in Claude usage data from Anthropic's Economic Index. The gap between the two is not a measure of AI capability — it's a measure of AI adoption. And adoption is lagging.
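In code terms, the two-stage calculation reduces to a pair of set comparisons per occupation: which tasks a model could do, and which of those show up in real traffic. The sketch below is a toy illustration; the occupation label, task names, and resulting percentages are invented, not drawn from the paper or from the Economic Index data.

```python
# Toy sketch of the two-stage "observed exposure" idea.
# All task names and figures are invented for illustration.

occupation_tasks = {
    "example_software_role": {
        "write code", "debug", "review changes", "write docs",
        "plan architecture", "mentor juniors",
    },
}

# Stage 1: tasks current language models could theoretically perform.
theoretically_possible = {
    "write code", "debug", "review changes", "write docs", "plan architecture",
}

# Stage 2: of those, the tasks that actually appear in observed usage data.
observed_in_usage = {"write code", "debug"}

for occupation, tasks in occupation_tasks.items():
    theoretical = len(tasks & theoretically_possible) / len(tasks)
    observed = len(tasks & observed_in_usage) / len(tasks)
    gap_pp = (theoretical - observed) * 100
    print(f"{occupation}: theoretical {theoretical:.0%}, "
          f"observed {observed:.0%}, gap {gap_pp:.0f} pp")
```

The useful property of the split is that the first number can be estimated from model capabilities alone, while the second requires usage data; the gap between them reads directly as adoption, not capability.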
Computer programmers sit at the top of the theoretical exposure list at 75 percent. Customer service representatives and data entry keyers are close behind at 67 percent. Office and administrative workers overall face 90 percent theoretical exposure. But actual task coverage, the paper notes, falls far short across every category.
The demographic portrait of the most-exposed workers is counterintuitive. They are not the people most observers would expect to be first in line for displacement. Workers in the most exposed occupations earn 47 percent more on average than unexposed workers. They are 4.5 times more likely to hold a graduate degree. They are 16 percentage points more likely to be female. The workers most exposed to AI are, in other words, already privileged by the existing economy — which is itself a finding about where AI adoption has concentrated.
So where are the job losses? The paper finds no systematic increase in unemployment for highly exposed workers since ChatGPT's release in late 2022. "The average change in the gap since the release of ChatGPT is small and insignificant," the authors write. McCrory and Massenkoff do flag one suggestive signal: a 14 percent drop in the job-finding rate for workers aged 22 to 25 in exposed occupations. Young people entering the market are having a harder time getting hired into these roles. The effect is described as "just barely statistically significant" — real enough to note, too thin to declare a trend.
A comparison with the Bureau of Labor Statistics' independently produced employment projections points in the same direction: for every 10-percentage-point increase in AI coverage, BLS's projected growth for an occupation drops by 0.6 percentage points. The paper acknowledges the relationship is "slight" and not yet visible in unemployment data.
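As a rough sense of scale, the sketch below applies that reported slope to hypothetical coverage changes; the 0.6-per-10-point figure is from the paper as described above, and everything else is illustrative arithmetic rather than the paper's own analysis.

```python
# Illustrative arithmetic only: the slope (0.6-percentage-point lower projected
# growth per 10-percentage-point increase in AI coverage) is as reported;
# the coverage changes plugged in below are hypothetical.

SLOPE = -0.6 / 10  # pp of projected employment growth per pp of coverage

def implied_growth_shift(coverage_change_pp: float) -> float:
    """Change in BLS-projected employment growth, in percentage points."""
    return SLOPE * coverage_change_pp

print(implied_growth_shift(10))       # about -0.6, the relationship as stated
print(implied_growth_shift(94 - 30))  # roughly -3.8 across the theory-vs-adoption gap
```

Even stretched across the full theory-versus-adoption gap, the implied difference in projected growth is under four percentage points, which is consistent with the paper's description of the relationship as "slight."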
The methodological disagreement between competing estimates of AI-driven job loss burst into public view on the same day McCrory's comments went live. Goldman Sachs published research claiming AI is displacing roughly 16,000 jobs per month, a figure Fortune reported alongside the Anthropic paper. The figure also rests on a fundamentally different methodology: Goldman counted net job losses attributable to AI using help-wanted advertising and displacement signals, while Anthropic's team used Current Population Survey unemployment data, the gold standard for labor force measurement. Neither approach is wrong, but they measure different things. Job postings capture hiring intent; the CPS captures actual employment status. One moves faster. One is more accurate.
Alex Imas, an economist at the University of Chicago who has studied AI and labor markets, offered a blunt appraisal to MIT Technology Review: "Exposure alone is a completely meaningless tool for predicting displacement." The distinction matters. Coverage estimates tell you what AI could theoretically do. Displacement requires price elasticity — demand for human labor falling because AI makes something cheaper. That signal has not arrived in the data yet, whatever the theoretical exposure numbers suggest.
Block, the payments company led by Jack Dorsey, cut nearly half its workforce in early 2026 and cited AI as a factor. The move made for striking copy. Marc Benioff, Salesforce's CEO, called it "AI washing" — per Fortune, using AI as cover for what were partly ordinary cost-cutting decisions. Both characterizations may be partially true. That uncertainty is itself the honest answer: the labor market effects of AI are real but diffuse, lagging, and hard to attribute precisely. They are not zero. They are not the cataclysm that 94 percent coverage implies.
The durable lesson from story_7651: attribution errors in AI labor reporting are persistent. The wire has repeatedly misidentified Erik Hurst, a University of Chicago macroeconomist at the Becker Friedman Institute and not an Anthropic employee, as Anthropic's chief economist. The correct authors are Massenkoff and McCrory. When the wire gets attribution wrong on a research paper, the error propagates to every outlet that cites it. Checking author affiliations against the actual paper, not the wire summary, is not optional.