Persona, the identity verification vendor Anthropic has hired to verify some Claude users, retains your face, device fingerprints, government ID number, and selfie analytics for up to three years after a single check runs, according to Malwarebytes. It has been caught running facial recognition against watchlists. In October 2025, a breach at Discord exposed roughly 70,000 government IDs submitted through a Persona-powered age verification process, Decrypt reported. Persona's investors include Founders Fund, co-founded by Peter Thiel, who also chairs Palantir, Engadget reported.
Anthropic announced the new verification requirement this week, framing it as fraud protection: some Claude users are now being asked to upload a government-issued photo ID and a live selfie. The company's help documentation confirms it has partnered with Persona Identities to handle the process. Anthropic says the requirement applies to a small number of accounts where it sees activity indicating potentially fraudulent or abusive behavior. The company declined to specify which use cases trigger it.
What Anthropic's announcement does not say is how long that data is kept. According to security research published by Malwarebytes in February, Persona's standard retention schedule covers IP addresses, browser and device fingerprints, government ID numbers, phone numbers, names, faces, and selfie analytics for up to three years after an identity check runs. Anthropic did not respond to questions about whether it has any contractual limits on what Persona can do with collected data once verification is complete.
Anthropic is not the first AI company to require government ID from certain users. OpenAI already asks some users to verify their identity with a government-issued ID through Persona, according to OpenAI's own help documentation, so the claim that no competitor does this does not hold.
The three-year retention figure comes from reporting by Malwarebytes on a February 2026 incident in which researchers found nearly 2,500 Persona files exposed on a public endpoint, including documentation showing the company had run facial recognition checks against watchlists. In one documented case, Persona's systems screened identities against lists of politically exposed persons, according to State of Surveillance.
Persona's backers also deserve scrutiny. One of its major investors is Founders Fund, the venture firm co-founded by Peter Thiel, who also co-founded Palantir, the defense intelligence and surveillance software company that has won significant government contracts in the US and allied countries.
The timing is awkward. In February, Anthropic gained millions of new users after OpenAI announced a deal to deploy its AI on Pentagon classified networks. Anthropic publicly turned down that arrangement, Decrypt reported. The users who switched were, in many cases, explicitly motivated by concerns about how their data would be handled in a defense intelligence context. Those same users are now being asked to hand over their passport or driver's license to a vendor backed by the same investor network as Palantir, under a retention schedule that outlasts most employment relationships.
This is not necessarily a story about bad actors. Identity verification vendors operate in a regulatory gray zone. There is no federal law in the US that specifically limits how long a private company can retain biometric data like face images, and the EU's GDPR permits retention periods defined by necessity rather than hard limits. What a company says it needs and what it actually keeps are often different things, and the gap is not always visible to the user at the moment of consent.
What to watch: whether any state legislature takes up the question Congress has so far avoided, namely whether AI companies should face the same restrictions as banks and casinos on collecting and storing government ID data. Several states have considered biometric privacy laws with varying definitions of what requires consent and what can be retained. If even one state passes a law with specific retention limits for AI platforms, it could reshape how the entire industry handles identity verification, because compliance costs make a unified approach more attractive than a patchwork of state rules.
Anthropic said the verification requirement is voluntary for most users. Whether that holds as the company scales Claude access, and whether the privacy-conscious audience that chose Anthropic in February has anywhere else to go, are questions the next few months should answer.