Sam Altman's Policy Play
Sam Altman wants to be known as the man who redesigned the social contract for the age of artificial intelligence. The vehicle is a 13-page policy paper published this week under the title "Industrial Policy for the Intelligence Age," proposing robot taxes; a public wealth fund modeled on Alaska's Permanent Fund; automatic safety-net triggers that expand benefits when AI-displacement metrics hit preset thresholds; a shift of the tax base from payroll to capital gains and corporate income; and a 32-hour workweek pilot. The ambition is explicit: Altman compared the coming change to the Progressive Era and the New Deal.
The proposals are not new. That is the first thing worth knowing.
Soribel Feliz, who worked on Senate AI policy in 2023 and 2024, told Fortune she has handwritten notes from nine Senate forum sessions where every pillar of the paper was discussed. Lucia Velasco, former head of AI policy at the United Nations and now at the Inter-American Development Bank, put it more directly: OpenAI is the most interested party in how this conversation turns out, and the proposals it advances shape an environment in which OpenAI operates with significant freedom under constraints it has largely helped define. Anton Leicht of the Carnegie Endowment called the paper communications work designed to provide cover for regulatory nihilism.
The timing is not accidental. OpenAI is preparing for an IPO. It has closed a $122 billion funding round at an $852 billion valuation. It is also under genuine regulatory pressure in multiple jurisdictions. A policy paper that positions the company as a constructive architect of its own constraints is a different kind of document from one written by an adversarial regulator. The question is whether the paper is intended to lead that conversation or to occupy the space so that more restrictive alternatives cannot gain traction.
What the paper actually proposes is worth examining on its merits. The robot tax idea has been discussed in the academic economics literature for years. The universal basic income framing has been a staple of tech conference keynotes since at least 2016. The automatic stabilization triggers are technically interesting, but they require agreement on how to measure AI-driven displacement, a question with no consensus methodology yet. The wealth fund model is structurally interesting, but the paper's numbers are silent on contribution rates, fund scale, and what "distribute returns directly to citizens" means in practice.
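The trigger mechanism the paper gestures at can be sketched as a simple preset-threshold rule. Everything below is invented for illustration: the metric names, the thresholds, and the benefit multipliers are assumptions of this sketch, not figures from the paper, which specifies none of them.

```python
# Illustrative sketch of an automatic safety-net trigger.
# All metric names, thresholds, and multipliers are hypothetical;
# the paper does not define a methodology for measuring AI displacement.

from dataclasses import dataclass


@dataclass
class DisplacementReading:
    # Hypothetical composite metric: share of layoffs attributed
    # to automation in official filings (0.0 to 1.0).
    ai_attributed_layoff_share: float
    # Hypothetical metric: year-over-year decline in postings for
    # occupations classed as automation-exposed (0.0 to 1.0).
    exposed_occupation_posting_decline: float


def benefit_multiplier(reading: DisplacementReading) -> float:
    """Return a scale factor applied to baseline benefits.

    1.0 means no expansion; higher values expand benefits
    automatically, with no new legislation needed once the
    thresholds are preset.
    """
    multiplier = 1.0
    if reading.ai_attributed_layoff_share > 0.05:
        multiplier += 0.25  # first preset threshold crossed
    if reading.ai_attributed_layoff_share > 0.10:
        multiplier += 0.25  # second, more severe threshold
    if reading.exposed_occupation_posting_decline > 0.15:
        multiplier += 0.25  # labor-demand signal corroborates
    return multiplier


# Example: moderate displacement on one metric only.
print(benefit_multiplier(DisplacementReading(0.07, 0.08)))  # 1.25
```

The hard part is not the rule, which is trivial, but the inputs: whether a layoff is "AI-driven" is exactly the measurement question the paper leaves open.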
The cybersecurity material is where Altman was most candid. He told Axios that a major cyberattack enabled by near-future AI models is totally possible within the next year, and that AI models creating novel pathogens is no longer theoretical. These are not policy proposals. They are risk acknowledgments. The policy paper is the frame; underneath it is a company that knows it will be judged by what it has helped make possible.
Nathan Calvin of Encode AI raised a related concern that the paper does not address: OpenAI's past conduct in regulatory processes. He told Fortune the company used intimidation tactics against critics of California SB 53 and New York's RAISE Act, including implied threats related to Elon Musk's involvement. The paper proposes a collaborative relationship between government and AI companies on safety. The track record is more complicated.
For founders and engineers watching this space, the relevant question is not whether any single proposal in the paper will become law. It probably will not, in its current form. The relevant question is what the regulatory environment will look like when it solidifies, and who will have shaped it. Altman is making a bet that the most influential voice in AI policy will be the company that publishes first, proposes most loudly, and occupies the middle ground so thoroughly that the extremes become untenable. That is a different kind of competitive moat than compute or data. It is also one that is harder to replicate.
Primary sources: OpenAI policy paper "Industrial Policy for the Intelligence Age"; Fortune; Axios; The Next Web.