Three Labs, One Lesson Plan: How OpenAI, Google, and Microsoft Are Writing the AI Curriculum
What does it mean when the companies that built the AI tell schools how to teach it?
The question sounds paranoid until you notice who just signed the same piece of legislation. OpenAI, Google, and Microsoft — rivals who compete on model capabilities, enterprise contracts, and safety philosophy — this week jointly endorsed the LIFT AI Act, a bipartisan Senate bill that would create NSF grants to develop AI literacy curriculum for K-12 schools, fund teacher training, and bankroll learning tools and evaluation methods. It is rare for three labs to align on anything policy-related. That their names sit jointly on the same curriculum definition is worth sitting with.
The pattern is documented. When tobacco companies funded youth smoking prevention programs, academic research using previously secret industry documents found the programs were designed to protect the industry, not reduce youth smoking. The WHO documented how ostensibly anti-smoking education programs became instruments of corporate citizenship theater while heading off stronger regulation. When oil companies underwrote climate education, investigative reporting showed they spent upwards of $40 million over two decades on K-12 programs with a pro-industry bent — framing the problem and its solutions in ways that tracked what was convenient for the donor. The mechanism is well described in the regulatory literature: the regulated using education to shape the terms of their own scrutiny, before independent critical frameworks can develop.
The LIFT AI Act defines AI literacy as age-appropriate knowledge to use AI effectively, critically interpret its outputs, solve problems in an AI-enabled world, and mitigate potential risks. That language is genuine. But the companies backing the bill would, in practice, have significant influence over which AI tools students learn to use — and the default answer to "what should students learn to use" tends to be the tools that already exist. The companies writing the definition help determine whose tools those are.
The bill is bipartisan — Schiff and Rounds are not natural allies on most technology issues, and the companion House bill comes from Representatives Tom Kean Jr. and Gabe Amo, also bipartisan. The American Federation of Teachers, not an organization that reflexively sides with tech companies, is on the endorsement list. None of that is nothing.
The CHATBOT Act, introduced by Senators Ted Cruz and Brian Schatz the same week, takes a different approach — requiring AI companies to offer parental controls for children under 13. Two bills, two messages about children and AI, both moving at the same time. What gets taught in schools about AI is part of an accelerating policy conversation, and it is now being shaped by the companies with the largest stake in the answer.
The bill is not yet law. The NSF grant structure it creates has no dollar figure attached — the press release and bill text describe the mechanism but not the appropriations. A bill without funding is a statement. What happens when Congress actually appropriates, and who ends up writing the curriculum that gets funded, remains to be seen.
What does not depend on appropriations is the signal: three companies with the most to gain from a world where AI is understood on their terms have decided the moment is now. The framing gets set before the debate begins.