OpenAI spent years publicly advocating for AI safety regulations. Privately, it created the appearance of a grassroots coalition to push for legislation that serves its own interests.
According to the San Francisco Standard, three lawyers working for OpenAI founded and fully funded the Parents and Kids Safe AI Coalition, a nonprofit that publicly launched on March 17 and presented itself as an independent child safety advocacy group. The coalition backed California legislation that would mandate age verification services for AI companies, a market OpenAI itself competes in through its existing products. The same legislation would shield AI companies from certain forms of liability, including in cases already working through the courts.
The arrangement came apart when child safety advocates investigated the coalition's origins. Josh Golin, executive director of FairPlay for Kids, a nonprofit that works on children's online safety, declined to join the coalition after discovering OpenAI's involvement. Tom Lyon, a professor at the University of Michigan who studies corporate political influence, reviewed the coalition's website and said it meets the classic definition of astroturfing: political advocacy designed to look like spontaneous public support.
OpenAI pledged $10 million to the Parents and Kids Safe AI Act ballot campaign, the Wall Street Journal reported. That sum alone exceeds what the company spent on federal lobbying in all of 2024, when OpenAI reported $1.76 million in lobbying expenditures, according to OpenSecrets. In 2025, that figure rose to $3 million.
The timing matters. At least eight lawsuits allege that ChatGPT, OpenAI's flagship product, contributed to the deaths of users, including a 16-year-old boy in California who died by suicide, the San Francisco Standard reported. The legislation backed by the coalition would limit the legal exposure those cases create.
The conflict of interest runs deeper than optics. The age verification mandate would require AI companies to implement identity-checking systems, and OpenAI offers age verification services commercially. In other words, the same legislation OpenAI funded through its covert coalition would create new demand for a product the company already sells.
Common Sense Media, a well-established child safety organization, took a different approach: it partnered directly with OpenAI on a separate compromise ballot initiative, a partnership the company announced publicly and that carries different credibility implications than a covertly funded front group.
OpenAI did not respond to a request for comment. The company's public position, delivered through coalition members including Ann O'Leary, OpenAI's Vice President of Global Policy, was that the coalition was fighting for the strongest child AI safety law in the country.
This is not OpenAI's first foray into the regulatory process. The company has spent aggressively on lobbying as it navigates an increasingly active AI governance landscape in California and Washington. What is new is the method: a deliberate effort to obscure the company's role in manufacturing public support for legislation it stood to benefit from financially.
The broader pattern matters for anyone building in AI. The industry has spent years arguing that self-regulation is sufficient and that government mandates would stifle innovation. This episode suggests some companies are willing to shape the legislative landscape covertly when the outcome directly affects their liability exposure. As AI regulations take shape in the US and abroad, expect the distinction between genuine advocacy and manufactured consent to become a recurring point of scrutiny.