The Atlantic showed ChatGPT can make fake IDs by the hundred
Aarian Marshall wanted to know how far she could get with no special tools. Her answer: more than 100 fraudulent-looking images, including fake driver's licenses, passports, opioid prescriptions, bank alerts and social-media posts, all generated in one afternoon using ChatGPT. The Atlantic published the demonstration on May 2, and it is the most concrete evidence yet that the gap between what AI can produce and what institutions can verify has narrowed to the width of a screenshot.
OpenAI released ChatGPT Images 2.0 on April 21, calling the new model significantly better at rendering dense, small text inside images, according to its launch post. Text rendering was the previous ceiling on document-class fakes. AI image generators had already mastered faces, layouts and color grading. But a blurry prescription label or a poorly lettered ID card still looked generated. That ceiling is lower now.
The trust model that institutions inherited was never built on perfect documents. It was built on production scarcity: forging a convincing ID required access, equipment and skill. Governments issued documents. Hospitals controlled prescriptions. Banks sent alerts. Employers asked for screenshots. The cost of forgery was the guardrail. ChatGPT Images 2.0 attacks that from the cheap side. It does not need to access a government database. It only needs to produce an image that forces the next human reviewer or automated system to decide whether to believe it.
The verification layer is falling behind the image layer. OpenAI can embed provenance metadata in generated images, but The Atlantic noted that metadata disappears when an image is uploaded to social media or captured as a screenshot. OpenAI says it balances creative freedom with usage policies. Chase gave the harder answer: the industry needs an ecosystem-wide effort, including from AI companies, to strengthen guardrails and stop these crimes at the source.
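The fragility of metadata-based provenance is easy to demonstrate. Below is a minimal sketch using the Pillow imaging library: a "provenance" text chunk (an illustrative stand-in for C2PA-style credentials, not the actual format any vendor uses) survives a faithful copy of the file but vanishes the moment the pixels are re-encoded, which is exactly what a screenshot or a platform's upload pipeline does.

```python
from io import BytesIO

from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Build an image carrying a provenance tag in its PNG metadata.
# The key and value are illustrative, not a real credential format.
img = Image.new("RGB", (64, 64), "white")
meta = PngInfo()
meta.add_text("provenance", "generated-by:example-model")
buf = BytesIO()
img.save(buf, format="PNG", pnginfo=meta)

# A faithful copy of the file keeps the tag...
tagged = Image.open(BytesIO(buf.getvalue()))
print(tagged.text.get("provenance"))  # generated-by:example-model

# ...but re-encoding the pixels (what a screenshot or social-media
# upload effectively does) writes a fresh file with no such chunk.
reencoded_buf = BytesIO()
Image.open(BytesIO(buf.getvalue())).convert("RGB").save(
    reencoded_buf, format="PNG"
)
reencoded = Image.open(BytesIO(reencoded_buf.getvalue()))
print(reencoded.text.get("provenance"))  # None
```

The asymmetry is the point: embedding provenance costs the generator nothing, and stripping it costs the fraudster nothing either.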
The official numbers are a floor, not a ceiling. The FBI's Internet Crime Complaint Center logged 22,364 AI-related complaints and $893 million in adjusted losses in 2025, the first year it tracked AI as a separate crime descriptor. Those are reported losses, not total losses. The professional fraud community is not ready for the handoff. The Association of Certified Fraud Examiners and SAS Institute's 2026 Anti-Fraud report found 77% of anti-fraud professionals reported an increase in deepfake social engineering. Only 7% said their organizations were more than moderately prepared to detect or prevent AI-fueled fraud. Budget was the leading barrier, cited by 84% of respondents. Physical biometrics adoption has risen from 34% of organizations in 2022 to 45% now.
Mason Wilder, research director at the Association of Certified Fraud Examiners, put it to The Atlantic directly: the applications of this technology are limited only by a fraudster's imagination.
The caveat is real: The Atlantic's test is proof-of-concept reporting, not a catalog of confirmed crimes committed with Images 2.0. Some verification systems, including barcode and QR-code checks embedded in physical IDs, catch fakes because the underlying code is wrong or missing. The preparedness numbers cover AI fraud broadly, not document forgery alone. India downloaded ChatGPT roughly 5 million times during launch week versus about 2 million in the United States, while global daily active users rose only about 1%. That does not prove misuse; it shows a document-capable tool spread fastest in a market where digital identity, banking access and fraud detection are already uneven.
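Why barcode checks catch image-only fakes can be sketched abstractly: when the code on an ID carries a cryptographically signed payload, the printed fields are bound to an issuer's key, so reproducing the visual layout without that key yields a code that fails verification. Everything below (the field names, the HMAC scheme, the key) is illustrative; real ID barcodes such as AAMVA's PDF417 format have their own structures and not all include signatures.

```python
import hashlib
import hmac
import json

# Hypothetical issuer key. In a real system this would be a secret
# (or the public half of a signing keypair) held by the issuer.
ISSUER_KEY = b"demo-issuer-secret"


def sign_payload(fields: dict) -> str:
    """Produce an issuer signature over the document's fields."""
    blob = json.dumps(fields, sort_keys=True).encode()
    return hmac.new(ISSUER_KEY, blob, hashlib.sha256).hexdigest()


def verify_barcode(fields: dict, signature: str) -> bool:
    """Check that the printed fields match the signed barcode payload."""
    return hmac.compare_digest(sign_payload(fields), signature)


genuine = {"name": "A. Sample", "dob": "1990-01-01", "id": "D1234567"}
sig = sign_payload(genuine)
print(verify_barcode(genuine, sig))  # True

# A generated image can copy the card's look, but altered fields
# no longer match the signature, and a fabricated barcode has none.
forged = dict(genuine, name="B. Forger")
print(verify_barcode(forged, sig))  # False
```

This is also why such checks only help where a scanner is actually used: a human glancing at a plausible image never exercises the code at all.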
That does not make the demonstration harmless. It narrows what has to be watched next. If the old constraint was production cost, the new constraint is verification cost. The next failure will not necessarily look like a spectacular deepfake. It will look like a pharmacy, lender, hiring team or marketplace support desk facing more plausible documents than its checks were built to handle. The fake-ID machine is running. The harder question is who now pays to inspect everything it can make.