The Founder as Supervisor: What Happens When the AI Does the Work
What if the founder isn't the one doing the work?
That's the question quietly building inside a wave of policy moves, viral posts, and accelerator programs across China and beyond. In the past 18 months, a confluence of AI capabilities, updated company law, and municipal incentive programs has made it technically and legally possible for one person to operate a functioning company — with AI agents handling the execution layer while the human holds the vision and signs the checks.
The World Economic Forum called it the "Raphael model" in a May 2026 essay by Winston Ma: one founder, multiple specialized AI agents acting as scalable associates. The parallel is the High Renaissance painter who ran multiple workshops, with dozens of associates executing his designs while he held the creative vision. Today, the WEF argues, a single founder can direct multiple AI agent teams simultaneously: the founder as master creator, the AI as the scalable workshop.
China is building the most concrete version of this today. Its revised Company Law, effective July 2024, cancelled earlier restrictions and now allows a single citizen to establish multiple one-person limited liability companies (OPCs). Since October 2025, 23 major Chinese cities including Shenzhen, Shanghai, and Beijing have launched dedicated support frameworks for OPCs, offering free office space and computing-power subsidies. Guangdong province alone aims to create 100 AI-OPC communities by 2028. New OPC registrations have surged, turning the formula "individual + AI agents = company" into a live policy experiment running at city scale.
"China is like a giant Silicon Valley," Lin Zhang, an associate professor at the University of New Hampshire who researches China's digital economy, told Rest of World. "When new technology emerges, the entire bureaucratic system is mobilized to develop it." Duke Wang, co-founder of a startup accelerator in Hangzhou, put it more bluntly: "There are still too few AI talents in China. We need to get everyone to start moving."
The economics look striking in the right context. A Honghub research report published in April 2026, based on more than 1,500 surveys and 100 hours of interviews, found a 72x human-to-AI labor cost ratio in comparable operational roles: for every dollar spent on AI tooling, equivalent human developer labor costs approximately $72. The figure reflects Chinese wage conditions, since both developer labor and AI tooling are priced at local rates, so the ratio does not transfer one-to-one to other markets. Venture investors note that most OPCs will not grow into viable businesses, and the 72x ratio is a trajectory indicator rather than a universal constant; still, it points in the direction of travel globally as AI tooling costs fall. The Honghub study also found that 75% of solo founders in China now come from non-technical backgrounds, suggesting the barrier to entry is shifting from technical skill to operational judgment.
The West is running a parallel experiment, less policy-driven. In early 2026, lawyer Zack Shapiro published "The Claude-Native Law Firm" on X, a post describing how a two-person firm could outperform large legacy law firms by deeply integrating AI agents into its workflow. It received over 7 million views. Shapiro drew the contrast directly: "The difference between a firm playbook and an individual lawyer's encoded judgment is the difference between giving someone a recipe and teaching them how to cook." His AI, configured with custom "skills" files encoding his analytical frameworks and formats, applies his specific judgment automatically, something he argued no specialized legal AI product could match. That reaction (seven million viewers, many of them lawyers, surprised that this was possible) is the adjacent-field signal the Raphael model implies: when execution becomes cheap, the scarce thing is the judgment encoded in the human who owns it.
The governance question is where the model runs into open terrain. In a March 2026 analysis for Bloomberg Law, Winston Ma wrote that the legal infrastructure for AI acting as a quasi-trustee, making binding decisions on behalf of a human principal, is "still evolving." The OpenClaw agent framework, which now has over 355,000 GitHub stars and surpassed React as the most-starred non-aggregator project on GitHub in March 2026, raises what Ma described as "questions on AI agents acting as trustees": when an agent executes a contract, files a regulatory submission, or moves funds, who is legally responsible? The accountability gap between the human who owns the company and the AI that runs it remains an open question, one that existing agency law was not designed to answer. China's municipal OPC support programs are building around this ambiguity rather than waiting for it to resolve.
What happens next depends on which problem gets solved first: the technical capability for one person to run a company with AI agents, or the governance framework that determines who is liable when something goes wrong. China is running the experiment at city scale and treating the legal ambiguity as a feature. Western frameworks are still naming the problem. Whoever defines the governance standard for AI-operated companies will effectively write the rules for a structural model that is already underway.
For founders, operators, and the investors who back them, the second-order question is what becomes scarce when execution is cheap. If the AI handles everything from contracts to compliance to customer matching, what remains for the human at the center? The emerging answer, judgment, taste, accountability, relationships, sounds like a professional-services answer, but it is also a startup-infrastructure answer: the tools, templates, and legal wrappers that let one person be the responsible party while the agents do the work are themselves becoming a product category.