The web has a standard for this. AI agents do not.
That is the argument Mark Nottingham, an IETF veteran who helped architect modern HTTP, laid out in a blog post last week: browsers operate through public standards — HTTP, cookies, robots.txt — decided in open forums where both users and sites have a voice. AI agents, by contrast, run on proprietary platforms with no equivalent framework. When your browser stores a cookie, both sides know the rules. When an AI agent books a flight on your behalf, neither the airline nor you have any way to verify what the agent was authorized to agree to, what data it shared, or what commitments it made.
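The browser-era rules Nottingham points to are legible to both sides. robots.txt, for instance, is a public file in which a site states its crawl policy in terms any operator can read and any observer can audit (the bot names and paths below are illustrative, not a recommendation):

```
# robots.txt: a public, mutually visible crawl policy
User-agent: CCBot
Disallow: /

User-agent: ClaudeBot
Disallow: /private/

User-agent: *
Allow: /
```

Nothing comparable exists for what an agent may agree to, share, or purchase on a user's behalf; that is the gap the rest of this piece is about.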
The standards community's first concrete response arrived in March, when an IETF working group chartered in early 2026 published its first draft for giving AI crawlers a cryptographic identity. The spec lets a website verify that an HTTP request was signed by a bot operated by, say, Anthropic, rather than someone spoofing Claude's user agent string. It is public, auditable, and genuinely useful. In March 2025, AI crawlers generated more than 50 billion requests per day across Cloudflare's network, according to the research outlet No Hacks, and crawl reach varies dramatically across major bots: Googlebot reaches 1.70 times as many unique URLs as ClaudeBot, nearly three times as many as Meta-ExternalAgent, and 714 times as many as CCBot. The identity problem Nottingham describes is not abstract.
What the draft does not address is the harder question: what the crawler is allowed to do once it is inside.
The spec explicitly limits its scope to "operator identity" and states it "does not address agent behavior or data handling obligations." Knowing that ClaudeBot is operated by Anthropic tells you who to call if something goes wrong. It tells you nothing about what ClaudeBot will do with the data it collects, what actions it's authorized to take on a user's behalf, or what constraints it operates under when it acts as an intermediary between a human and a third-party service.
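The identity check the draft does provide can be sketched in a few lines. This is a simplified illustration, not the spec: the actual draft builds on HTTP Message Signatures (RFC 9421) with asymmetric keys published by the operator, whereas this sketch substitutes an HMAC shared secret to stay self-contained, and the operator names and secret are hypothetical.

```python
import base64
import hashlib
import hmac

# Hypothetical directory of crawler operators and their verification keys.
# In the real draft, this would be an asymmetric public key fetched from
# the operator; an HMAC secret stands in here for simplicity.
OPERATOR_KEYS = {
    "claudebot.anthropic.example": b"demo-shared-secret",
}

def signature_base(method: str, authority: str, path: str) -> bytes:
    # Canonical string over the covered request components,
    # loosely modeled on RFC 9421's signature base.
    return "\n".join([
        f'"@method": {method}',
        f'"@authority": {authority}',
        f'"@path": {path}',
    ]).encode()

def verify_request(method, authority, path, operator, signature_b64) -> bool:
    """Return True only if the signature matches the claimed operator's key."""
    key = OPERATOR_KEYS.get(operator)
    if key is None:
        return False  # unknown operator: identity cannot be established
    expected = hmac.new(key, signature_base(method, authority, path),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, base64.b64decode(signature_b64))

# The bot signs its request...
sig = base64.b64encode(
    hmac.new(OPERATOR_KEYS["claudebot.anthropic.example"],
             signature_base("GET", "news.example", "/article"),
             hashlib.sha256).digest()
).decode()

# ...and the site can verify who sent it. A spoofed operator name fails.
print(verify_request("GET", "news.example", "/article",
                     "claudebot.anthropic.example", sig))  # True
print(verify_request("GET", "news.example", "/article",
                     "spoofed.example", sig))              # False
```

Note what the check proves and what it does not: the site learns that the request really came from the claimed operator, but nothing in the verified message says what the operator will do with the page it fetches. That is exactly the scope limit the draft declares.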
Nottingham's point is that the agent trust problem is not primarily about spoofed crawlers. It is about the absence of a shared, publicly governed framework for what an AI agent can and cannot commit a user to. The working group was chartered to work toward standards-track specifications for authentication techniques and bot information mechanisms by April 2026; the current draft achieves the first of those, partially. Behavioral constraints, data handling obligations, and the broader "collective bargaining" model Nottingham describes — where both sides of an agent-site interaction have transparent, collectively governed rules — are not in the document.
The alternatives that exist today — proprietary platform policies, terms of service negotiations between AI companies and websites, TEE (trusted execution environment) attestation — each address pieces of this. None of them are collective. TEE attestation, which uses secure hardware to cryptographically verify what code ran in an isolated environment, has attracted attention as a technical solution. But Nottingham's critique is that even perfect TEE attestation for a single agent-platform interaction does not solve the collective action problem: if agents can ask for intrusive permissions, the world becomes one where they constantly do exactly that, and every negotiation is bilateral and proprietary.
That dynamic — each AI company separately negotiating with each website, each device maker, each service provider — is the world the current standards work does not prevent. The draft expires September 3, 2026, which means the working group has roughly five months to either advance the spec or explain why it cannot. Several people working in the space say privately they expect the scope to expand beyond pure crawler identity — that the community knows the current draft is insufficient and the September deadline is the forcing function.
Whether that expansion happens in time, and whether it produces something publicly governed rather than vendor-controlled, is the open question the next version will answer or fail to. What's already visible is the shape of what is missing. A standard that tells you who signed the request does not tell you what they were authorized to do when they made it. Those are two different problems, and solving the first while leaving the second to proprietary negotiation is, as Nottingham notes, a world where the power imbalance that web browsers spent two decades correcting gets rebuilt — just faster, with more agents.