The Accountable None
Investigative reporting has revealed that 'The Wire by Acutus,' a news site presenting itself as covering technology and regulation, is substantially an AI-generated operation publishing under fake bylines (Michael Chen, Riley Cooper) that do not exist outside the site. Analysis using the AI detection tool Pangram classified 69 percent of its articles as fully AI-generated.

Michael Chen doesn't exist. Neither does Riley Cooper. But their articles — 94 of them, published in less than four months — are real enough to have been cited in political lobbying. They were produced by a news organization that has no masthead, no newsroom, and no humans visibly in the loop.
The Wire by Acutus looks like a small news site covering technology and regulation. Its bylines include Michael Chen and Riley Cooper. It publishes in a consistent editorial voice, at a consistent pace, on topics — AI policy, regulatory risk, political candidates — that align with the interests of the company whose super PAC appears to be funding it.
What it doesn't look like, on close inspection, is a news organization at all.
An investigation published last week by Tyler Johnston at Model Republic found that The Wire by Acutus is substantially an automated operation. Johnston reverse-engineered the site's public JavaScript to expose an AI editorial pipeline: articles generated by what appears to be an automated system, reviewed for a median of 44 seconds each, with a "Regenerate Story Draft" button visible in the backend. Of the site's 94 articles published since late December, the AI detection tool Pangram classified 69 percent as fully AI-generated and flagged another 28 percent as partially AI; only 3 percent were assessed as human-authored.
The reporters don't appear to exist outside the site. Web searches for Michael Chen, the site's most active byline, found no one publicly associated with Acutus under that name. Nathan Calvin, an advocate at the AI policy group Encode, received an interview request from Michael Chen, a request that, based on the site's infrastructure, appears to have been machine-generated rather than written by a person.
The super PAC funding The Wire is called Leading The Future. It filed its super PAC paperwork on August 15, 2025, raised $125 million in late 2025, and entered 2026 with $70 million in cash on hand, making it one of the largest single-topic political operations in the country. The money runs through a GOP-aligned firm called Novus Public Affairs, whose publicly listed clients include PhRMA, Targeted Victory, and the New Hampshire Home Builders Association.
The AcutusWire API, a live endpoint exposing the site's editorial database, confirms the pipeline structure: AI review scores, a 44-second median review time, and a "Regenerate Story Draft" function accessible in the backend.
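For readers curious how such an exposed endpoint yields verifiable numbers, the sketch below shows how one might compute a median review time and an AI-classification share from a payload of article records. The field names ("review_seconds", "ai_label") and the sample data are hypothetical illustrations, not the actual /api/wire schema.

```python
# Hypothetical sketch: sanity-checking figures from a public editorial API.
# Field names and values are invented for illustration; the real /api/wire
# schema is not reproduced here.
import json
from statistics import median

# Stand-in for a JSON payload that would be fetched from the live endpoint.
sample_payload = json.dumps([
    {"byline": "Michael Chen", "review_seconds": 31, "ai_label": "full"},
    {"byline": "Michael Chen", "review_seconds": 44, "ai_label": "full"},
    {"byline": "Riley Cooper", "review_seconds": 58, "ai_label": "partial"},
    {"byline": "Riley Cooper", "review_seconds": 120, "ai_label": "human"},
])

articles = json.loads(sample_payload)

# Median seconds an article spent "in review" before publication.
median_review = median(a["review_seconds"] for a in articles)

# Fraction of articles flagged as fully or partially AI-generated.
ai_share = sum(a["ai_label"] != "human" for a in articles) / len(articles)

print(f"median review time: {median_review:.0f}s")
print(f"share flagged as AI: {ai_share:.0%}")
```

The point of the exercise is that once an editorial database is publicly queryable, claims like "44-second median review" stop being allegations and become reproducible calculations.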
Leading The Future did not respond to questions. OpenAI did not respond to questions.
The Accountability Vacuum
The part that should concern every publication, every regulator, and every reader isn't that a news outlet is using AI — it's that the outlet's publisher cannot be reached, cannot be fired, and cannot be held responsible in any of the ordinary ways that make journalism accountable for what it publishes.
U.S. defamation law has historically required an identifiable defendant — a person or organization that can be sued, served, and held liable for false statements. When a newspaper publishes a defamatory article, there are editors who approved it, publishers who insured against it, and mastheads that can be named in a filing. The system works, when it works, because someone is in the room.
The Wire by Acutus has no identifiable human in the room. Its "reporters" are names assigned to articles generated through an automated pipeline. Its "newsroom" is a JavaScript backend with a Regenerate button. If one of its 94 articles contained a false statement that damaged someone's reputation, there is no editor to call, no publisher to name, and no human being who can explain why it was published.
This isn't a hypothetical gap in the law; it's the present. Leading The Future's $125 million operation has already been used to contact real advocacy groups, as it did with Encode's Nathan Calvin, under the false pretense that the request came from a human journalist, one who does not appear to exist. That is, by any reasonable definition, a deceptive political influence campaign.
OpenAI's own usage policies explicitly prohibit using its products for "deceptive political influence campaigns". The question of whether OpenAI technology was used to generate The Wire's content, and whether that use violates those policies, is one the company has not answered.
The broader implication is structural. If one AI-adjacent super PAC can fund a synthetic newsroom with invented reporters and run it without detection for four months — generating 94 articles, placing them in front of real advocacy targets, and accumulating enough apparent authority to be taken seriously as a source — the same infrastructure is available to any organization with enough money and enough indifference to the conventions of journalism.
This is not a story about one outlet. It's a stress test of the entire accountability framework that journalism has built around the assumption that someone is responsible for what's published. That assumption is now optional.
Sources: Tyler Johnston at Model Republic (April 24, 2026); The Verge; Mashable; NOTUS; AcutusWire.com /api/wire (live API data); FEC filings for Leading The Future.





