The educated skeptic is the signal in the AI shopping trust gap
Most people who use AI to shop do not trust it to buy anything.
That is the paradox at the center of two fresh surveys on consumer attitudes toward AI in retail. According to EY's ResultsSense research, published May 6, 74% of UK consumers have used AI in the past six months. Only 14% say they are comfortable relying on fully autonomous, agent-led AI systems to make purchases. A Dunnhumby survey of 3,000 grocery shoppers, published April 22, found 48% of respondents described automated retail processes like agentic commerce as intrusive. According to Bakery and Snacks' coverage of the same data, 49% opposed fully agentic AI shopping outright and 29% said agentic shopping excited them.
The gap between experimentation and completion is not a technology problem. It is a design problem, and the companies building the infrastructure for agentic commerce are running out of patience with it.
The finding that troubles the standard "consumers are afraid of new tech" framing is the educated-skeptic result. ResultsSense found that skepticism about AI agents was often highest among educated white-collar workers who use AI regularly. "Familiarity sharpens concerns rather than reducing them," the report noted. They have tried the tools. They use them often. They are the cohort most likely to be building or selling these systems. And they are not ready to let one complete a transaction on their behalf.
Matthew Ringelheim, EY UK's AI leader, put it directly: adoption is "rapidly advancing, but trust is not keeping pace with technological capability." The 14% who say they are comfortable with fully autonomous AI are the ceiling the industry is currently hitting — and they are concentrated among the users who understand these systems best.
Michael Schuh, Head of Retail Media at Dunnhumby, made the complementary point in the company's press release: the irony is that the personalization consumers say they want is exactly what makes agentic commerce feel intrusive. The same AI that makes recommendations better also makes the transaction feel like it is happening without your input.
For the companies that have spent the past year building agentic commerce infrastructure — Visa's Intelligent Commerce Connect, Stripe's Link-based agent wallet, the FIDO Alliance's work on delegated identity for AI agents — this is the wall they keep hitting. Their pitch is that the trust problem is an engineering problem: scoped tokens, approval-before-spend flows, one-time-use virtual cards, and webhook-level authorization controls can give consumers the granular visibility they need. The surveys suggest the engineering solution has not closed the gap.
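To make the engineering pitch concrete, here is a minimal sketch of what an approval-before-spend flow with a scoped, one-time-use token could look like. This is purely illustrative: it does not reflect the actual APIs of Visa Intelligent Commerce, Stripe, or the FIDO Alliance's delegated-identity work, and every name and threshold in it is a hypothetical stand-in.

```python
from dataclasses import dataclass, field
import secrets

@dataclass
class ScopedToken:
    """Hypothetical one-time-use token bound to a user-set mandate."""
    token: str = field(default_factory=lambda: secrets.token_hex(16))
    merchant_category: str = "groceries"   # agent may only buy in this category
    spend_cap_cents: int = 5000            # hard per-transaction ceiling
    approval_threshold_cents: int = 2000   # above this, ask the user first
    used: bool = False

def authorize(token: ScopedToken, category: str, amount_cents: int,
              user_approves=lambda tok, amt: False) -> str:
    """Decide whether an agent-initiated purchase may proceed."""
    if token.used:
        return "declined: token already spent"
    if category != token.merchant_category:
        return "declined: out-of-scope merchant category"
    if amount_cents > token.spend_cap_cents:
        return "declined: exceeds spend cap"
    if amount_cents > token.approval_threshold_cents and not user_approves(token, amount_cents):
        return "declined: user approval required"
    token.used = True  # one-time use: burn the token on success
    return "approved"

t = ScopedToken()
print(authorize(t, "groceries", 1500))  # approved: in scope, under threshold
print(authorize(t, "groceries", 1500))  # declined: token already spent
```

The point of the sketch is the shape of the control surface, not the implementation: the user sets the scope once, the agent transacts only inside it, and anything above a threshold routes back to a human.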
The usage-to-completion drop is consistent across surveys. Bain & Company found 72% of US consumers had used AI in some form, but only 24% felt comfortable using it to complete a purchase, and only 10% had actually done so — a figure that Bain's survey instrument suggests skewed toward repeat, low-ticket purchases where the consumer had already decided what to buy. The Bain figure is scoped to AI use in shopping contexts, not general task assistance, which matters because the usage pool is self-selected toward people already comfortable transacting online. The 14% comfort figure from ResultsSense and the 48% intrusion figure from Dunnhumby are not outlier data points. They are the same pattern seen from different survey populations.
The educated-skeptic cohort is the part that complicates the usual narrative. These are not late adopters or technology-averse demographics. They are the early majority — the people who have already integrated AI into their work and personal routines, and who have formed a clearer view than most of what the tools actually do. That they are drawing the line at autonomous checkout suggests the problem is not ignorance. It is a genuine assessment of the current capability boundary.
What the industry is essentially being told by its most sophisticated users is this: the completion rate will not rise because interfaces get better. It will rise when the system demonstrates, consistently and verifiably, that it can be trusted to act within boundaries the user actually set. Better UX is table stakes. The actual ask is accountability — a way to hold the AI to what it was told to do, and a way to undo it if it does not.
The behavioral split is documented. Consumers are comfortable using AI to find things. They are substantially less comfortable letting it buy things. The educated, frequent users are not the obstacle. They are the signal.
What to watch next: whether the infrastructure being built now — approval-before-spend flows, scoped tokens, webhook-level rollback controls — can move the needle with the cohort that has already decided AI is useful but not trustworthy. Better UX is the minimum entry point. The actual test is whether the system can prove, at the transaction level, that it stayed within what it was told to do.
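One hypothetical reading of "prove, at the transaction level" is an auditable log that binds each agent purchase to a fingerprint of the mandate it was made under, so compliance can be checked after the fact. The sketch below is an assumption about what such a mechanism could look like, not a description of any shipping product; the mandate fields and function names are invented for illustration.

```python
import hashlib
import json

def mandate_fingerprint(mandate: dict) -> str:
    """Stable hash of the user's instructions, so the record cannot be reinterpreted later."""
    return hashlib.sha256(json.dumps(mandate, sort_keys=True).encode()).hexdigest()

def record_transaction(log: list, mandate: dict, item: str, amount_cents: int) -> dict:
    """Append an agent purchase to the log, tagged with the mandate it ran under."""
    entry = {
        "item": item,
        "amount_cents": amount_cents,
        "mandate_hash": mandate_fingerprint(mandate),
        "within_mandate": (item in mandate["allowed_items"]
                           and amount_cents <= mandate["cap_cents"]),
    }
    log.append(entry)
    return entry

def audit(log: list, mandate: dict) -> list:
    """Return every logged transaction that violated the mandate it was made under."""
    h = mandate_fingerprint(mandate)
    return [e for e in log if e["mandate_hash"] == h and not e["within_mandate"]]

mandate = {"allowed_items": ["milk", "bread"], "cap_cents": 1000}
log = []
record_transaction(log, mandate, "milk", 300)   # within mandate
record_transaction(log, mandate, "wine", 900)   # out of scope
print(len(audit(log, mandate)))  # 1 violation found
```

Whether the real infrastructure settles on hashes, signed receipts, or something else entirely is an open question; the survey data suggests the skeptical cohort will want some form of this verifiability before handing over the checkout.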