AcutusWire Published 94 Articles, 42 Flagged for Revision but Released Anyway
AcutusWire published 94 articles in four months, according to its own API. Of those, 42 were flagged by its own automated review system as not ready to publish — and all 42 went out anyway. The median time a human spent reviewing each piece: 44 seconds.
That is what a public API endpoint tells you about a fake journalism operation when it forgets to lock the door.
AcutusWire is an anonymously operated news site that launched December 29, 2025, and describes itself as expert-sourced independent reporting. It has no named editors, no bylines, no masthead. But it does have a publicly accessible endpoint at acutuswire.com/api/wire, which as of this week returns a JSON feed of every article the site has published — including the internal editorial metadata that its own automated review system generated. Model Republic first reported the site's AI-generated content and its connections to an influence operation tied to OpenAI. Futurism and Mashable corroborated those findings. What the API adds is the mechanical record of what happened after the editorial flags went up.
For each article, the endpoint includes fields from a five-check automated review: four scores out of 100 covering AP style compliance, quote accuracy, source verification, and an internal quality benchmark, plus an overall status. In 42 of the 94 articles in the feed, that status read "needs_revision" — flagged by the AI reviewer as not ready to publish. Each of those 42 was published anyway. The timestamps also record how long the human side of the process took, from first issue flagged to publication click: a median of 44 seconds.
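The flagged-but-published count is the kind of thing anyone with the feed can recompute. A minimal sketch of that check, assuming hypothetical field names (the actual keys returned by acutuswire.com/api/wire are not published here, so `review`, `status`, and the score fields below are illustrative only):

```python
# Hypothetical records mimicking the five-check review shape described
# above; the real /api/wire key names may differ.
articles = [
    {"id": 1, "review": {"ap_style": 88, "quote_accuracy": 91,
                         "source_verification": 76, "quality": 83,
                         "status": "needs_revision"}},
    {"id": 2, "review": {"ap_style": 95, "quote_accuracy": 97,
                         "source_verification": 92, "quality": 90,
                         "status": "approved"}},
]

# Every article in the feed is live, so any record whose own reviewer
# status is "needs_revision" went out despite the flag.
flagged = [a for a in articles if a["review"]["status"] == "needs_revision"]
print(f"{len(flagged)} of {len(articles)} flagged but published")
```

Run against the full 94-article feed, the same filter is what yields the 42.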
"It's a content sausage factory," said one engineer who reviewed the exposed data and spoke on condition of anonymity because they were not authorized to discuss client work. "The AI flags problems, a human rubber-stamps the fix, and it goes out the door in under a minute. Nobody's actually reading this."
The site behind those numbers runs on AI throughout. The public JavaScript code — visible to any visitor who opens their browser's developer tools — contains fields labeled AI Background Context, Question Prompts, and aiOriginalText. A separate field describes an AI interviewer that surfaces submitted interviews for review. The default text for that field, still visible in the code: "No interviews uploaded via the reporter API yet. Interviews submitted by the agent will appear here for review."
The pipeline, according to the code and the Model Republic investigation by Tyler Johnston: a human enters a topic and background context. The system generates a draft via AI. A separate tool runs a multi-pass editorial review scored across the five checks. The AI flags issues and proposes corrections. A human resolves them. The median time for that entire process, based on the API timestamps: 44 seconds. On 42 of those 94 articles, the reviewer's own overall status said the piece was not ready to publish. All 42 went live.
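The 44-second figure falls out of simple timestamp arithmetic. A sketch of that calculation, using invented ISO 8601 timestamps rather than actual AcutusWire data (the pairing of a first-flag time with a publish time is described in the reporting; the values below are illustrative):

```python
from datetime import datetime
from statistics import median

# Illustrative (first issue flagged, publish click) pairs, one per article.
events = [
    ("2026-01-10T14:02:11", "2026-01-10T14:02:55"),  # 44 s
    ("2026-01-11T09:30:00", "2026-01-11T09:30:39"),  # 39 s
    ("2026-01-12T17:45:05", "2026-01-12T17:45:57"),  # 52 s
]

def review_seconds(flagged_at: str, published_at: str) -> float:
    """Elapsed seconds from first flag to publication."""
    t0 = datetime.fromisoformat(flagged_at)
    t1 = datetime.fromisoformat(published_at)
    return (t1 - t0).total_seconds()

durations = [review_seconds(a, b) for a, b in events]
print(f"median review time: {median(durations):.0f} s")  # → 44 s
```

A median, unlike a mean, is not dragged upward by a handful of articles that sat in review for hours — which is what makes 44 seconds the damning number.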
The money trail
The question of who built this, and why, is where the OpenAI connection becomes legible.
Leading The Future is a super PAC founded in 2025 with a stated mission of supporting AI-friendly candidates in the 2026 midterm elections. Its principal donors include OpenAI president and co-founder Greg Brockman, who contributed $12.5 million alongside his wife Anna, and Andreessen Horowitz, according to Axios. Zac Moffatt, who co-founded Leading The Future, is also CEO of Targeted Victory — a Republican consulting firm whose client list includes Novus Public Affairs. Patrick Hynes, president of Novus, is the only person on X who has posted links to AcutusWire articles.
Hynes has a track record here. He co-founded the New Hampshire political outlet NH Journal in 2010 with a group of Republican operatives, left to run a pro-Scott Brown Senate campaign PAC, and is now listed on Novus's website as working for the New Hampshire Home Builders Association and PhRMA, the pharmaceutical industry's lobbying arm. PhRMA has spent record sums lobbying against pharmacy benefit managers — and AcutusWire ran a piece attacking PBMs in January, 10 days before Trump signed sweeping PBM reform into law. Multiple pieces target specific Republican Senate candidates in ways that align with Leading The Future's stated electoral strategy.
The chain is circumstantial. No document directly places Brockman or OpenAI in the control room of AcutusWire. Leading The Future is funded by OpenAI's president. Targeted Victory's CEO co-founded it. Novus lists Targeted Victory as a client. Hynes, running Novus, posted every AcutusWire article. The overlapping interests are not subtle.
The experts who talked to a bot
Nathan Calvin, vice president and general counsel at the advocacy group Encode, received an email in March from an address at acutuswire.com. The sender's name was Michael Chen. The email offered Calvin the chance to answer a Written Q&A for a story about an AI bill in Tennessee. When Johnston ran the message through Pangram, an AI content detector, it came back as fully AI-generated. Web searches for Michael Chen turned up nothing. Calvin responded anyway, declined to comment, and forwarded the exchange to Johnston.
Harvard Business School professor Joseph Fuller was less cautious. He answered AcutusWire's questions about skills-based hiring and was quoted in an article published eight days before a related House bill. He later wrote on LinkedIn that he thought he was speaking with independent journalists.
Fuller is now aware that he answered an AI.
The AcutusWire system offers only a written Q&A format — no phone calls, no video, no back-and-forth. That is not an accident. A written interview can be conducted by a language model without a human in the loop. The expert's entire contribution is mediated by software that extracts what it needs and discards the rest.
What OpenAI's policies say
OpenAI's usage policies, still live at openai.com/policies/usage-policies/, prohibit using its products for political campaigning or lobbying. The company has not responded to multiple requests for comment on whether its models were used to generate AcutusWire content, or whether it was aware of the site's existence.
The Model Republic investigation was published April 24. Futurism and Mashable corroborated its findings. OpenAI did not respond to either outlet by press time.
What the API cannot answer
The exposed endpoint cannot show who resolved the flagged issues on the 42 published articles, or whether they were resolved at all. The 44-second median may represent a human genuinely reviewing each flagged item. It may represent a human clicking approve without reading. It may represent no human at all.
The site is still live. The API is still returning data. The articles are still being published. The next time a real expert gets an email from a reporter at acutuswire.com, they will be talking to the same system that has now been publicly documented — one that generates articles automatically, flags its own quality problems, publishes anyway, and runs its human review process in under a minute.
Whether OpenAI knew that its money was funding that operation — or whether it simply looked the other way while a network of PACs and consultants ran an AI newsroom under a journalism brand — is the question the company's press team still has not answered.