In October 2024, Anthropic demonstrated a feature that looked like the future of work: an AI model that could see your screen, move your cursor, and complete tasks inside real software. The demo went viral. The feature shipped as an API. And then, almost immediately, it quietly stopped being a priority. The screenshots it relied on were too slow. The benchmark results were embarrassing. Computer use, as a product concept, looked like another AI demo that would never escape the lab.
Eighteen months later, it is back. And this time, it actually works.
On March 23, 2026, Anthropic announced that computer use had shipped inside Claude Cowork and Claude Code for macOS, available to Pro and Max subscribers. The company also unveiled a feature called Dispatch, which lets users assign tasks to their Claude-powered machine via a phone call. The framing was triumphant: a rebuilt, production-ready capability that represented years of research and a genuine step toward AI that operates in the world the way a human assistant would.
The triumph is real. It is also, in one specific sense, a story about a single developer who got there first.
Peter Steinberger is an Austrian engineer and the creator of OpenClaw, an open-source agent framework that lets AI models take control of a personal computer. Unlike the commercial products now shipping from major labs, OpenClaw runs locally, supports any model that speaks the right protocol, and was built as what Steinberger described in a post on his site steipete.me as a playground project. It went viral on social media. It accumulated thousands of stars on GitHub. And then it attracted the attention of Anthropic's lawyers.
In a letter reviewed by type0, Anthropic's legal team demanded that Steinberger stop using the name "Clawdbot" (steipete.me), the project's original name, which the company argued too closely resembled its own Claude branding. The exchange was brief and unpleasant. Steinberger complied with the trademark demand. A few months later, he was hired by OpenAI to work on agent products. OpenClaw is now in the process of transitioning to an independent foundation (steipete.me).
The irony is not subtle. OpenAI now employs the person who built the thing Anthropic just shipped as a major new capability. Anthropic's version is cloud-dependent, Mac-only, and restricted to paying subscribers. The technical approaches are different. The ambition is the same. And the person who got there first found himself navigating a legal threat from the company that would later announce the same product.
So what actually changed between 2024 and 2026? The short answer is Vercept.
In February 2026, Anthropic acquired Vercept, a startup that had developed a fundamentally different approach to getting AI models to interact with operating systems. Where the 2024 demo relied on screenshots captured at regular intervals, Vercept's technology operates at a different layer of abstraction, giving models richer and faster access to system state. The benchmark that tells this story most clearly is OSWorld, a standard evaluation for computer agents developed by researchers at multiple institutions. In late 2024, the best AI systems scored under 15 percent on OSWorld. By 2026, with Vercept's technology integrated into Claude 4 Sonnet, Anthropic reported a score of 72.5 percent.
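The architectural difference can be sketched in miniature. Everything below is illustrative, not Vercept's or Anthropic's actual API: a screenshot-based agent perceives the screen as opaque pixels it must re-capture and re-interpret on every step, while a state-based agent reads a structured description of the interface directly, the kind of data an accessibility layer already exposes.

```python
from dataclasses import dataclass

# Toy stand-in for an operating system's UI state. Real systems expose
# something like this through accessibility APIs; the names here are
# invented for illustration.
@dataclass
class Widget:
    role: str
    label: str

DESKTOP = [Widget("button", "Send"), Widget("textfield", "To:")]

def capture_screenshot():
    """Screenshot path: the agent receives an opaque image and must run
    a vision model over it before it can reason about the UI at all."""
    return "<png bytes>"  # opaque; every step pays capture + vision cost

def read_ui_tree():
    """State path: the agent receives structured widgets directly,
    smaller and already machine-readable."""
    return [(w.role, w.label) for w in DESKTOP]

def find_button_by_state(label):
    # With structured state, locating a control is a simple match,
    # no per-step image interpretation required.
    return any(role == "button" and text == label
               for role, text in read_ui_tree())
```

The sketch compresses the trade-off: the screenshot path spends most of its budget perceiving, while the state path starts each step already knowing what is on screen.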
That is a real jump. It is not a breakthrough in the way PR teams use that word, but it is a genuine advance in capability that explains why the 2026 product feels different from its predecessor and why labs are now moving aggressively to get it into users' hands. The question is not whether this technology works. It increasingly does. The question is who controls it, and on whose terms.
Anthropic's computer use is a cloud service. When your AI agent opens a file, sends a message, or navigates a browser on your behalf, that action runs on Anthropic's infrastructure. The model sees your screen through an API Anthropic operates. For many users, this will be fine. For others, particularly in enterprise environments, handing an external system that level of access to personal or corporate data is a non-starter. OpenClaw, by contrast, was designed to run entirely on-device. The model, the agent logic, and the computer control all stay local.
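The distinction is architectural: where does an action actually execute, and who accumulates a record of what the agent saw? The class names below are invented for illustration, not either product's API. In a cloud design, every observation crosses the network to the vendor; in an on-device design, it never leaves the machine.

```python
class LocalExecutor:
    """On-device design: the agent loop and the screen it observes
    stay on the user's machine."""
    def __init__(self):
        self.log = []  # stays local

    def act(self, action):
        self.log.append(("local", action))
        return "ok"

class CloudExecutor:
    """Cloud design: each action, and the screen contents needed to
    decide it, is relayed through the vendor's infrastructure."""
    def __init__(self):
        self.uploaded = []  # what the vendor's servers receive

    def act(self, action):
        # The observation must be shipped out so a hosted model can see it.
        self.uploaded.append(("screenshot", action))
        return "ok"

local, cloud = LocalExecutor(), CloudExecutor()
local.act("open file")
cloud.act("open file")
```

The trust question reduces to one line per class: in the local design the record lives in `local.log` on the user's disk; in the cloud design the equivalent record exists on someone else's servers.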
This is the fault line the industry is now racing to occupy. Every major AI lab wants to be the layer that sits between the user and their software. Whoever owns that interface owns the relationship. The applications become commoditized. The agent becomes the product. And the terms on which that agent operates — cloud versus local, open versus closed, subscription versus one-time purchase — determine who captures the value.
Steinberger is now inside one of those labs, working on the same class of product he built before anyone thought to send him a cease-and-desist. The hiring does not appear to have been accidental. OpenAI has been building agent infrastructure aggressively, and recruiting the person who demonstrated that the demand was real before the major labs moved is exactly the kind of strategic move that does not show up in a press release.
The computer-use relaunch is not a rebrand. The technology is genuinely better. But the story behind it — the developer who built it first, the legal threat that followed, the hiring that resolved the conflict — is a reminder that the AI industry's version of the future often involves building over the heads of people who got there first, on terms the user does not always get to set.
The benchmark numbers are real. The 72.5 percent on OSWorld matters. But the more telling figure in this story is one that appears in no press release: the day Steinberger received the cease-and-desist, opened a new file, and kept building anyway.