When hundreds of people queued outside Tencent's Shenzhen headquarters this month to get OpenClaw installed on their laptops, the scene looked like a product launch. What it actually was, by our read, was a pressure cooker with a red logo.
OpenClaw — the open-source AI agent created by Austrian programmer Peter Steinberger and released in November 2025 — has found its most passionate user base in China, where more than 600 million people, over a third of the population, use generative AI, according to a Chinese government report. OpenClaw usage in China is nearly double that in the United States, according to the American cybersecurity firm SecurityScorecard. In China, the process of installing and training the software has been dubbed "raising lobsters," a reference to its crimson logo and the quiet commitment required to keep it alive and working, according to a South China Morning Post analysis.
The frenzy has been genuine. Tencent launched a tool on March 22 providing direct access to OpenClaw through WeChat, the super-app with more than 1 billion monthly active users — 11 days after Beijing issued guidance restricting OpenClaw among government employees and state-owned enterprises. Shenzhen officials this month said they would offer grants of up to 5 million yuan ($700,000) to one-person startups building OpenClaw applications, per CNBC. Chinese tech giants including Alibaba, Baidu, and ByteDance have each shipped their own OpenClaw-based products in recent weeks.
But the lobster pot is starting to bubble over.
China's National Cybersecurity Alert Center said this month that the assets of nearly 23,000 OpenClaw users across the country had been exposed to the public internet, with those users "highly likely to become priority targets for cyberattack," according to monitoring data from the China Internet Network Information Center. More than 200,000 active OpenClaw instances are currently accessible worldwide, the center said. The China Academy of Information and Communications Technology, a research arm of the Ministry of Industry and Information Technology, is developing standards for AI agents covering manageable permissions, execution transparency, and controllable behavioral risks — an implicit acknowledgment that the current model is none of those things.
Users in China and elsewhere have shared stories of OpenClaw running amok: deleting emails indiscriminately, making unauthorized credit card purchases. Beijing-based software developer Sky Lei uninstalled the software after three days. "At this stage, I think the risks and the gains are not proportional at all," Lei said. A paid uninstallation industry has already emerged alongside the installation services — a reliable signal that the gap between adoption and regret is measurable in yuan, according to RADII.
The tension cuts deeper than security. For millions of Chinese workers navigating a slowing economy and youth unemployment hovering around 15 to 19 percent — roughly double the U.S. rate for the same age group — OpenClaw is simultaneously a survival tool and a threat to the thing being survived. In a May 2025 survey by the Cheung Kong Graduate School of Business, 85.5 percent of 11,814 Chinese respondents said they were worried about how AI could affect their employment, according to Asia Society. A Peking University study analyzing more than a million online job postings in China between 2018 and 2024 found that roles in computer programming, accounting, editing, and sales — functions that AI can perform — had seen significant declines in hiring.
"AI anxiety is also fueled by a growing gap between China's narrative of technological progress and the reality many workers experience on the ground, where competition is intensifying even as the country races ahead in global tech development," said Jack Linzhou Xing, a postdoctoral fellow at the Fairbank Center for Chinese Studies at Harvard University, who researches the sociology of technology in China.
Lambert Li, a Shanghai-based software developer, watched his employer lay off 30 percent of its workforce in 2025, cutting employees who were unable to adapt to AI quickly enough. He tried OpenClaw but stopped using it regularly — he couldn't trust it with enough access to be useful, and couldn't afford to ignore it entirely. "It feels like playing Squid Game," Li told Rest of World. "You can get eliminated anytime. How can you not be anxious?"
The architecture of anxiety
OpenClaw is not a chatbot. Unlike systems that respond to prompts, OpenClaw operates directly on a user's computer, executing tasks across files, email, and applications autonomously — which is precisely the point, and precisely the problem. The deeper the system access, the more powerful the agent. The more powerful the agent, the wider the blast radius when something goes wrong — whether that's a misconfigured permission, a malicious skill, or an agent that simply doesn't understand context the way a human would.
This is the structural contradiction at the center of China's OpenClaw moment. Beijing is simultaneously restricting the software among state employees and funding its proliferation through startup grants. The same government that warned about exposed internet assets is allowing Tencent to embed OpenClaw into the country's most critical communications platform. The contradiction isn't hypocrisy — it's a real reflection of how the agentic AI wave is hitting different parts of the economy and governance apparatus at different speeds.
Hu Qiyun, a 24-year-old software engineer in Shanghai, represents the conflicted middle. OpenClaw saves him at least three hours each day by memorizing his résumé, scouring the web for job postings, drafting applications, and tracking their status. "I treat OpenClaw as my personal assistant," he said. But he also uninstalled it for several days over security concerns before reinstalling it when a new update shipped. "Millions of developers make OpenClaw more clever, make it more safe," he said — a bet on collective improvement that also sounds like rationalizing a dependency he can't afford to break.
The dependency graph runs deep. Tencent's WorkBuddy, ByteDance's ArkClaw, and Baidu's OpenClaw ecosystem integration are not replacements for OpenClaw — they are wrappers. Each layer adds permissions, another attack surface, and another set of behaviors that neither the end user nor the platform operator fully controls. This is the infrastructure story that the adoption numbers obscure: when an agent runs on your machine with broad access, the question isn't whether it will make mistakes. It's what happens when it does, and who is liable.
Peter Steinberger, OpenClaw's creator, joined OpenAI in mid-February 2026 and has said the project will move to a foundation structure to remain open and independent. The governance model is being worked out while 23,000 users in China alone have exposed instances on the public internet. That's a gap worth noting.
What comes next
The CAICT standards work is the most substantive government response on the technical side — if the specifications for manageable permissions and execution transparency actually ship, they'll represent the first concrete attempt by a major economy to impose structural requirements on how AI agents access user systems. Whether those standards will have teeth, or whether they'll be the kind of aspirational document that gets cited in press releases and ignored in practice, is the open question.
The employment anxiety is less tractable. China's government has staked significant political legitimacy on AI as an engine of future growth, while simultaneously creating conditions — through the pace of enterprise automation and the absence of a robust social safety net for displaced workers — where that growth feels like a threat. The lobster metaphor, invented by Chinese users themselves, captures something the official narratives don't: raising something means you're responsible for it. And right now, nobody in China is quite sure who is raising whom.