At a conference in Tokyo on Monday, hundreds of people gathered to celebrate an AI tool. Many were dressed as lobsters. The occasion was ClawCon — a community-organized gathering for OpenClaw enthusiasts — and the creator of the tool, Peter Steinberger, was on stage demonstrating what he has built: a personal AI agent framework that connects to existing AI models through instant messaging apps, executing real-world tasks on behalf of its users. The demo: checking himself in for his flight to Tokyo.
The event would be a curiosity if OpenClaw were a curiosity. It is not. As of early March 2026, the project had accumulated 247,000 stars on GitHub, briefly surpassing React to become the most-starred software project on the platform Wikipedia. Jensen Huang, the chief executive of Nvidia, called it "the next ChatGPT." Steinberger has since been hired by OpenAI to work on the next generation of personal agents. And in China — where users have been particularly quick to embrace the tool's ability to organize emails, help with coding, and handle a range of digital tasks — OpenClaw's success has already prompted national cybersecurity authorities and Beijing's IT ministry to issue official warnings and, on March 11, move to restrict state agencies and state-owned enterprises from running the software on office computers Bloomberg. That is the story. Not the conference. The regulatory response to what it represents.
Steinberger built OpenClaw in November while experimenting with AI coding tools, trying to organize his own digital life. He describes the origin with a directness that contrasts with typical AI industry messaging. "What you have to know about OpenClaw is, like, it couldn't have come from those big companies," he told AFP in Tokyo. "Those companies would have worried too much about what could go wrong instead of just, like — I wanted to just show people I've been into the future." When pressed on security concerns, he does not dismiss them. "There are still some things we need to do to make it better," he said. He also noted that he deliberately did not simplify the installation process, wanting users to stop and understand what access they were granting an AI agent. "I purposefully didn't make it simpler so people would stop and read and understand: what is AI, that AI can make mistakes, what is prompt injection — some basics that you really should understand when you use that technology."
China's response suggests the government does not share that faith in user diligence. China's National Computer Network Emergency Response Technical Team, known as CNCERT, issued an official warning about OpenClaw's security posture, noting that the platform's default configurations and its privileged access to systems to facilitate autonomous task execution created attack surfaces that bad actors could exploit The Hacker News. The specific concerns were not abstract: prompt injection — the manipulation of an AI agent through instructions embedded in external content — is a documented attack vector in OpenClaw, not a theoretical risk. Researchers at PromptArmor found that the link preview feature in messaging apps like Telegram and Discord could be turned into a data exfiltration pathway when communicating with OpenClaw, allowing an attacker to capture sensitive information without any user interaction The Hacker News. The tool's popularity has also attracted malicious actors: researchers have found malicious skills uploaded to ClawHub, the community registry for OpenClaw extensions, designed to run arbitrary commands or deploy malware The Hacker News.
Cisco's security analysts put the tension plainly in February 2026: "From a capability perspective, OpenClaw is groundbreaking. This is everything personal AI assistant developers have always wanted to achieve. From a security perspective, it's an absolute nightmare" Security Boulevard, citing Cisco analysis originally published in Information Age. Both things are true.
The March 11 Bloomberg report that Chinese authorities moved to restrict state enterprises and government agencies from running OpenClaw on office computers — with the restrictions extending to the families of military personnel in some cases — is the clearest signal that OpenClaw has crossed from hobbyist tool to policy problem Bloomberg. The tool has reached enough penetration to register as a national security concern in one of the world's largest AI markets. "For critical sectors — such as finance and energy — such breaches could lead to the leakage of core business data, trade secrets, and code repositories, or even result in the complete paralysis of entire business systems," CNCERT warned The Hacker News.
What Steinberger has built is, in technical terms, a delivery mechanism: a framework that takes an AI model and gives it the ability to actually do things — book flights, send messages, manage files — rather than just respond to questions. It connects to Signal, Telegram, Discord, and WhatsApp, stores data locally in Markdown, and runs on the user's own machine. The infrastructure is real. The question it has already forced governments to confront is what happens when that capability lands on a government employee's desktop.
Steinberger frames 2026 as the year of general AI agents. His product does narrow task automation reliably. The gap between those two descriptions is where every regulator currently looking at OpenClaw is sitting. The lobster costumes are entertaining. The official warnings are what matters.