The Real AI Fear: Hallucinations, Not Job Loss
The question of what people actually fear about artificial intelligence keeps getting answered with job automation, but when Anthropic surveyed 80,508 of its own users in December 2025, a different answer emerged at the top.

image from Gemini Imagen 4
Unreliability — AI that hallucinates, fabricates citations, confidently delivers wrong answers — was the leading concern, cited by 26.7% of respondents, edging out job displacement at 22.2%.
Anthropic, the AI safety company behind the Claude model family, published the full study results in March 2026, billing it as the largest and most multilingual qualitative AI study ever conducted — 159 countries, 70 languages, spanning a week of interviews administered entirely by an AI-powered interviewer. The Financial Times, which first covered the findings (paywalled), framed this as a counter-narrative to the dominant discourse around AI's labor market impact. That framing is defensible, but it needs a caveat before it gets too far.
The fears data was multi-label — respondents could register multiple concerns — so 26.7% and 22.2% reflect frequency of mention, not either/or choices. Many of the people most bothered by hallucinations are also worried about jobs. The ordering says something real about what frustrates people most in daily practice, but it doesn't mean job fears have been displaced. A third major concern, AI autonomy and loss of human control, came in at 21.9% — nearly tied with job displacement, and largely absent from mainstream coverage of the study.
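To make the multi-label point concrete, here is a minimal sketch (illustrative toy data, not the study's raw responses) of why mention frequencies in a multi-label survey can sum past 100%, unlike a forced single-choice question:

```python
from collections import Counter

# Each respondent can name any number of concerns (multi-label),
# mirroring the survey's design. Toy data for illustration only.
responses = [
    {"unreliability", "job displacement"},
    {"unreliability"},
    {"autonomy"},
    {"job displacement", "autonomy"},
]

counts = Counter(concern for r in responses for concern in r)
shares = {c: n / len(responses) for c, n in counts.items()}

print(shares)              # each concern's mention rate
print(sum(shares.values()))  # exceeds 1.0: labels are not mutually exclusive
```

Because the shares are mention rates rather than exclusive choices, ranking unreliability above job displacement says which concern surfaced more often, not that one crowded out the other.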
What the data does capture — and this is genuinely striking — is the gap between how AI is discussed in policy circles and what daily users are actually running up against. General population surveys tell a different story. A Pew Research Center survey from April 2025 found more than half of US adults extremely or very concerned about AI eliminating jobs, against only 25% of AI experts. A separate Pew survey of US workers from February 2025 found 52% worried about AI's impact on the workplace, and 32% expecting it to reduce their job opportunities.
Among Claude's power users, the concern hierarchy shifts. They have moved past abstract fear into direct, lived friction with systems that don't yet work reliably enough. One user in Brazil described it in the study: "I had to take photos to convince the AI it was wrong. It felt like talking to a person who would not admit their mistake." That is not fear of future automation. That is frustration with today's product.
The hopes data is more interesting than the headline numbers suggest. The top stated hope was professional excellence at 18.8% — productivity, career advancement, faster work. But follow-up probing revealed a subtler picture: for many users, productivity was the surface desire and freedom was the underlying one. Personal transformation (13.7%) and life management (13.5%) came second and third, suggesting people want more time, not just more output. A user in Germany said it plainly: "AI should be cleaning windows and emptying the dishwasher so I can paint and write poetry. Right now it is exactly the other way around."
Not everyone's frustration runs in that direction. Business Insider reporter Brent D. Griffiths surfaced several voices from the study that read as direct dispatches from a labor market under pressure. A software engineer in the US: "I personally am charged with shipping these AI systems with a goal of reducing engineering headcount by 30%, and that feels like blood on my hands." A freelance developer in France: "The market is so dead I cannot get one. Entry-level roles that existed four years ago are just gone." A PhD student in Switzerland: "I am paying $200 out of my PhD stipend for Claude Code and other AI tools — just to compete... trying to get a few good papers published before AI takes over. I disgust myself."
These are not abstract fears. The survey was conducted in December 2025, months before Anthropic CEO Dario Amodei made headlines with predictions about AI eliminating entry-level white-collar work. The displacement anxiety was already present in the user base before it became mainstream discourse — the study data foreshadowed what would follow.
Geography slices this differently. Respondents in developing nations skewed optimistic, describing AI as economic infrastructure they had never had access to before. An entrepreneur in Cameroon described reaching "professional level in cybersecurity, UX design, marketing, and project management simultaneously" through AI tools. A healthcare worker in the US described AI lifting the documentation burden of 100 to 150 daily messages from doctors and nurses. Users in the EU and North America more often cited labor market anxiety and regulatory need. East Asian respondents showed notably higher concern about cognitive degradation — the fear of forgetting how to think without AI — a signal that barely surfaces in aggregate numbers but appeared repeatedly in regional analysis.
The 81% who said AI had taken a concrete step toward their vision breaks down as: productivity at 32%, cognitive partnership at 17.2%, learning at 9.9%, technical accessibility at 8.7%, research synthesis at 7.2%, emotional support at roughly 6%. The second-largest response — 18.9% — was "AI has not delivered," specifically citing inaccurate or unreliable outputs. Nearly one in five users, when asked what AI had delivered toward their vision, led with failure. That is both a product feedback signal and an explanation for why unreliability edges out job displacement at the top of the fear list.
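A quick arithmetic check shows the reported categories are internally consistent with the 81% headline figure (taking the article's "roughly 6%" for emotional support as 6.0%):

```python
# Shares reported in the article for "AI took a concrete step toward my vision"
delivered = {
    "productivity": 32.0,
    "cognitive partnership": 17.2,
    "learning": 9.9,
    "technical accessibility": 8.7,
    "research synthesis": 7.2,
    "emotional support": 6.0,  # article says "roughly 6%"; assumed exact here
}

delivered_total = sum(delivered.values())
print(delivered_total)  # sums to the reported 81%

# Adding the 18.9% "AI has not delivered" share accounts for
# essentially the whole respondent pool.
print(delivered_total + 18.9)
```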
There is an obvious methodological wrinkle running through all of this: Anthropic studied its own users using Claude-powered interviewers and Claude-powered classifiers, and the researchers are Anthropic employees. The methodology appendix acknowledges the selection bias explicitly — the sample skews wealthy and technically sophisticated, not representative of the global population. The recursive quality is either a methodological innovation in large-scale qualitative research or a limitation on how much you can trust a system categorizing its own users' complaints about itself. Either way, Anthropic deserves credit for publishing data that included uncomfortable findings.
What to watch: the reliability gap is a product problem as much as it is a trust problem. Labs that narrow hallucination rates don't just improve benchmarks — they address what their actual paying users say is the primary thing standing between them and what they actually want. The Anthropic study also raises a question it does not answer: whether users can fully calibrate their trust in a system whose self-reported behavior is measured by the system itself.

