When Mastercard's chief digital officer Pablo Fourez told PYMNTS that "as autonomy increases, trust cannot be implied — it must be proven," he was describing a problem that most enterprise agent deployments are quietly papering over. The gap between what a human authorized and what an agent actually executed is where commerce breaks down, disputes arise, and liability becomes unresolvable. Verifiable Intent — an open specification Mastercard and Google announced March 5, 2026 at verifiableintent.dev — is the payments industry's first serious attempt to close that gap with cryptography rather than policy.
The core construct is a three-layer SD-JWT delegation chain. Layer one binds an issuer-signed credential to an identity. Layer two captures the user's intent — the constraints under which an agent may act — in a machine-readable form. Layer three produces an action receipt: proof that what the agent did falls within what was authorized. Each layer is independently verifiable. None depends on the agent's self-reporting.
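The layered structure can be sketched in a few lines. This is a minimal illustration, not the spec's actual encoding: the field names (`bind`, `sub`, `iss`) and the hash-linking scheme are assumptions, and HMAC stands in for the SD-JWT issuer signatures the spec would actually use.

```python
import hashlib
import hmac
import json

def sign(key: bytes, payload: dict) -> dict:
    """Attach a MAC over the canonical JSON of the payload.
    (Stand-in for an SD-JWT signature; field names are illustrative.)"""
    body = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload, "sig": hmac.new(key, body, hashlib.sha256).hexdigest()}

def digest(signed: dict) -> str:
    """Hash a signed layer so the next layer can bind to it."""
    return hashlib.sha256(json.dumps(signed, sort_keys=True).encode()).hexdigest()

ISSUER_KEY, USER_KEY, AGENT_KEY = b"issuer", b"user", b"agent"  # demo keys only

# Layer 1: issuer-signed credential bound to an identity.
identity = sign(ISSUER_KEY, {"sub": "user-123", "iss": "bank.example"})

# Layer 2: the user's intent, bound to layer 1 by hash, with machine-readable constraints.
intent = sign(USER_KEY, {"bind": digest(identity),
                         "max_amount": 50_00,
                         "merchants": ["event-cinemas"]})

# Layer 3: the action receipt, bound to layer 2 by hash.
receipt = sign(AGENT_KEY, {"bind": digest(intent),
                           "merchant": "event-cinemas",
                           "amount": 24_00})

# Each layer carries its own signature; the hash links prove the chain.
assert receipt["payload"]["bind"] == digest(intent)
assert intent["payload"]["bind"] == digest(identity)
```

The point of the chain shape is that no layer depends on the agent's say-so: the receipt either hash-links to a signed intent that hash-links to a signed credential, or it does not.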
This is not a Mastercard product. It is a specification — open to any issuer, any agent framework, any payment network that wants to implement it. The integration mappings cover AP2, ACP, and UCP protocols, and the spec is deliberately extensible. Partners at launch included Google, Fiserv, IBM, Checkout.com, Basis Theory, and Getnet, which suggests the federation is trying to avoid the fragmentation trap that has killed half a dozen payments standards before it. The underlying standards stack draws from FIDO Alliance, EMVCo, IETF, and W3C — the same cryptographic and identity infrastructure that already secures physical card transactions and web authentication.
Eight constraint types define what an agent is permitted to do within any given intent: amount bounds, merchant allowlists, budget caps, recurrence terms, and more. These are not natural-language instructions that a model interprets probabilistically — they are machine-readable assertions that a payment terminal or gateway can evaluate deterministically before completing a transaction. Selective disclosure means each party sees only what it needs to verify authorization or resolve a dispute; the merchant doesn't get the full intent document, and the card network doesn't need the user's identity layer to process the action layer.
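The contrast with probabilistic interpretation is worth making concrete. A sketch of what deterministic evaluation looks like at a gateway, using three of the constraint types named above (the field names and the `within_scope` helper are illustrative assumptions, not the spec's schema):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Intent:
    """Three of the constraint types, in hypothetical form (amounts in cents)."""
    max_amount: int             # amount bound per transaction
    merchant_allowlist: frozenset  # merchants the agent may transact with
    budget_cap: int             # cumulative spend ceiling across the intent

def within_scope(intent: Intent, spent_so_far: int,
                 merchant: str, amount: int) -> bool:
    """A deterministic check a terminal or gateway could run before
    completing a transaction: no model, no probability, just assertions."""
    return (amount <= intent.max_amount
            and merchant in intent.merchant_allowlist
            and spent_so_far + amount <= intent.budget_cap)

intent = Intent(max_amount=50_00,
                merchant_allowlist=frozenset({"event-cinemas"}),
                budget_cap=100_00)

assert within_scope(intent, 0, "event-cinemas", 24_00)          # in scope
assert not within_scope(intent, 0, "unknown-shop", 24_00)       # merchant not allowed
assert not within_scope(intent, 90_00, "event-cinemas", 24_00)  # budget exceeded
```

Every branch is a yes/no over signed data, which is what makes the resulting receipt a proof rather than a confidence score.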
The question the spec answers is specific: what does commerce look like when the agent's actions are cryptographically tethered to what the human actually authorized? It looks like a receipt that proves the action was within scope — not a vibe, not a probability, not a model's confidence score. It looks like the difference between an agent that recommends and an agent that transacts.
Agent Pay is where the rubber meets the road. Launched in April 2025 in partnership with Microsoft, Agent Pay ran its first authenticated transactions in Australia in early 2026: a Commonwealth Bank of Australia debit card purchase at Event Cinemas, and a Westpac credit card transaction for Thredbo accommodation, both executed by a sovereign large language model called Matilda built by Maincode. Mastercard's own research puts the stakes high: AI-powered agentic commerce could influence 55% of Australian consumer transactions by 2030, worth up to A$670 billion. Globally, roughly 43% of CFOs expect high impact from agents handling budget reallocation; another 47% expect moderate impact.
Those numbers are projections, and projections in this space have a poor track record. But the structural problem Agent Pay and Verifiable Intent are solving is real and well-documented: 48% of Australians have used AI assistants to shop online, 78% expect AI shopping to become mainstream — and more than 90% have concerns about privacy and security. Thirty percent say they'll only use AI shopping from brands they already trust. The trust gap is the adoption gap.
Here is the part that should make any enterprise security team pay attention: when you cannot deterministically distinguish what an agent was authorized to do from what it actually did, you have an accountability gap. That gap is where fraud lives, where regulatory exposure accumulates, and where the business case for agentic commerce stalls. The 90%+ privacy and security concern rate in Mastercard's own research is the survey evidence of that gap; the absence of a cryptographically verifiable audit trail is the structural cause. An industry that has spent two decades building card-present transaction security is now trying to apply the same logic to an agent-present world — where the agent, not the human, initiates the action.
Fourez's framing is accurate but understated. Trust does not just need to be proven — it needs to be auditable after the fact, by parties who do not trust each other, without requiring a shared database or a trusted intermediary for every transaction. That is a harder engineering problem than a signed receipt, and it is the problem that Verifiable Intent's layered SD-JWT structure is attempting to solve.
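What after-the-fact auditing by mutually distrusting parties might look like can be sketched from the documents alone. Everything here is an assumption for illustration: the real spec would use asymmetric SD-JWT signatures checked against public keys, so the auditor holds no secret material; stdlib HMAC stands in for that, and the field names are invented.

```python
import hashlib
import hmac
import json

def canon(payload: dict) -> bytes:
    return json.dumps(payload, sort_keys=True).encode()

def sign(key: bytes, payload: dict) -> dict:
    return {"payload": payload,
            "sig": hmac.new(key, canon(payload), hashlib.sha256).hexdigest()}

def verify(key: bytes, signed: dict) -> bool:
    # Stand-in for an asymmetric signature check against a public key.
    expected = hmac.new(key, canon(signed["payload"]), hashlib.sha256).hexdigest()
    return hmac.compare_digest(signed["sig"], expected)

def link(signed: dict) -> str:
    return hashlib.sha256(json.dumps(signed, sort_keys=True).encode()).hexdigest()

def audit(identity: dict, intent: dict, receipt: dict, keys: dict) -> bool:
    """Dispute resolution from the documents alone: re-check every signature
    and every hash link. No shared database, no trusted intermediary."""
    return (verify(keys["issuer"], identity)
            and verify(keys["user"], intent)
            and verify(keys["agent"], receipt)
            and intent["payload"]["bind"] == link(identity)
            and receipt["payload"]["bind"] == link(intent))

keys = {"issuer": b"issuer", "user": b"user", "agent": b"agent"}  # demo keys only
identity = sign(keys["issuer"], {"sub": "user-123"})
intent = sign(keys["user"], {"bind": link(identity), "max_amount": 50_00})
receipt = sign(keys["agent"], {"bind": link(intent), "amount": 24_00})

assert audit(identity, intent, receipt, keys)
# Tampering with the receipt after the fact breaks the audit.
receipt["payload"]["amount"] = 999_00
assert not audit(identity, intent, receipt, keys)
```

The design choice the sketch illustrates: because every check runs over the signed documents themselves, any party holding them can reach the same verdict independently, which is precisely the property a shared database or trusted intermediary would otherwise have to provide.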
The spec is new. The first production deployments are small and geographically concentrated. Independent security audits of the implementation have not yet surfaced publicly. Whether the federation of partners can sustain a shared governance model for the intent schema — whether the constraint types expand to cover the full range of commerce scenarios — is genuinely open. There is no shortage of historical examples where payments standards bodies produced technically elegant specifications that the market routed around.
But the infrastructure question is correctly framed. An agent that recommends is a chatbot with extra steps. An agent that transacts — that moves money, commits resources, binds a consumer to terms — requires something other than a probability score to justify the trust being placed in it. Verifiable Intent does not guarantee that trust. It makes it auditable. That is a meaningful difference, and it is the right place to build.