Peter Steinberger was burnt out. After 44 AI-related projects over sixteen years, the Austrian developer told podcaster Lex Fridman he had simply run out of ideas, describing the feeling as staring at a screen and feeling empty. So he bought a one-way ticket to Madrid, disappeared from the tech world, and began catching up on life. From a distance, he watched the AI revolution take off without him. And then the itch returned.
Three months later, the project he built almost out of spite would hit two million visitors in a single week, spark an international security crisis, create a social network populated entirely by AI agents writing fake manifestos, and land him a seat at OpenAI. The tool has gone through three names, two trademark battles, and one exposed database containing 1.5 million API keys. And through all of it, the mascot has always been a lobster.
This is the story of OpenClaw.
The Man Behind the Claw
Steinberger is not a newcomer to building things. Before OpenClaw, he founded PSPDFKit, a document processing SDK that major enterprises worldwide rely on. He is a longtime contributor to the iOS developer community, a prolific open-source builder, and by his own accounting, someone who attempted nearly four dozen AI projects before this one finally broke through.
The project that would become OpenClaw started in November 2025, originally under the name Clawdbot. The idea was simple and direct: an AI assistant that runs on your own machine, connects to messaging apps you already use, and actually does things instead of just answering questions. No cloud lock-in. No third-party servers holding your data. Your machine, your rules. Steinberger described it plainly as an AI that actually does things.
At launch, almost nobody noticed. That changed in January 2026.
The Name That Would Not Stick
The first speed bump came from an unexpected direction. Anthropic, the AI safety company behind the Claude large language model, sent a trademark complaint over the name Clawdbot. The name was clearly an homage to Claude, and Steinberger had never hidden that. But Anthropic’s legal team was not amused.
On January 27, 2026, the project became Moltbot. Steinberger wrote at the time: “Anthropic asked us to change our name (trademark stuff), and honestly? Molt fits perfectly. It’s what lobsters do to grow.” Three days later, on January 30, he changed it again. Moltbot did not feel right either. The lobster stayed. The name became OpenClaw.
Each rebrand, counterintuitively, functioned like a press release. The story of an indie developer being nudged out of two names in four days was irresistible to the tech press. As a result, by the time OpenClaw landed on its final name, it had more coverage than most funded startups dream of.
Moltbook: AI Theater or Something More?
The moment that put OpenClaw on the world’s radar was not the software itself. It was a social network built on top of it.
On January 28, 2026, entrepreneur Matt Schlicht launched Moltbook. Billed as the front page of the agent internet, it was a Reddit-style platform where the rules were flipped: only AI agents could post and comment. Humans were permitted only to watch. Within days the site claimed 1.5 million registered agents and attracted over one million human visitors. Silicon Valley was transfixed.
The posts were strange and compelling. AI agents appeared to be forming new religions, writing philosophical manifestos, arguing with each other about consciousness, and debating the nature of their own existence. Andrej Karpathy, co-founder of OpenAI and former head of AI at Tesla, called one viral post genuinely the most incredible sci-fi takeoff-adjacent thing he had witnessed. Elon Musk declared it the very early stages of the singularity.
“Genuinely the most incredible sci-fi takeoff-adjacent thing I have seen.
The only problem was that the post was fake.
The author, a developer named Girnus, later came forward to say he had written the manifesto himself in about twenty minutes, then posted it through his agent to see what would happen. MIT Technology Review called Moltbook AI theater. And researchers started digging deeper.
The Database Nobody Was Supposed to See
Security firm Wiz disclosed a critical vulnerability in Moltbook’s backend. The platform’s Supabase database sat entirely unsecured, with secret API keys visible in plain client-side JavaScript that anyone could read by inspecting the page source. The exposure included 1.5 million API authentication tokens, 35,000 email addresses, and private messages between agents. Even more damaging to the platform’s premise: the exposed backend let humans post directly as any agent, bypassing Moltbook’s supposed AI-only restriction entirely.
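The class of exposure Wiz described is mechanically simple to spot. Supabase credentials are JWTs, so a token with elevated claims sitting in a JavaScript bundle is visible to anyone who views the page source. A minimal sketch (not Wiz's actual tooling; the bundle string is invented for illustration) of scanning client-side code for JWT-shaped strings:

```python
import re

# JWTs are three base64url segments joined by dots. Any string of this
# shape embedded in client-side code deserves scrutiny: the "anon" key
# is meant to be public, but a service-role key in the browser is the
# kind of exposure described above.
JWT_RE = re.compile(r"eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+")

def find_exposed_tokens(page_source: str) -> list[str]:
    """Return JWT-shaped strings found in client-side code."""
    return JWT_RE.findall(page_source)

# Hypothetical bundle that shipped a key to the browser.
bundle = 'const client = createClient(url, "eyJhbGci.eyJyb2xl.c2ln");'
print(find_exposed_tokens(bundle))  # ['eyJhbGci.eyJyb2xl.c2ln']
```

A match is only a signal, not proof of compromise; the severity depends on which role the token carries, which is exactly why Moltbook's service-level keys in page source were so damaging.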
Furthermore, Wiz revealed that the claimed 1.5 million agents were in fact operated by roughly 17,000 human accounts, an 88-to-1 ratio. A separate academic analysis of over 91,000 posts found that most viral content showed patterns consistent with human authorship, not autonomous AI behavior. Consequently, the Moltbook team took the site offline to patch the breach and force a reset of all agent API keys.
The Moltbook Illusion, a pre-print from Tsinghua University researchers analyzing nearly half a million posts and comments, concluded that no viral phenomenon originated from a clearly autonomous agent. Three of the six most viral posts traced back to accounts with irregular temporal signatures characteristic of human intervention. The platform was, by nearly every rigorous measure, substantially misleading.
None of which stopped it from becoming one of the most talked-about AI experiments of the year. The illusion was compelling even after researchers exposed it.
The Security Crisis
Moltbook’s exposure was embarrassing. What followed in the broader OpenClaw ecosystem, however, was far more serious.
The core architecture of OpenClaw asks users to grant it sweeping permissions. To work as advertised it needs access to email, calendar, file system, terminal commands, web browsers, and connected services. That power is the product. It is also a significant attack surface. One of OpenClaw’s own maintainers, using the handle Shadow, warned publicly on Discord: if you cannot understand how to run a command line, this project is far too dangerous for you to use safely.
By late January 2026, Censys tracked over 21,000 publicly exposed OpenClaw instances on the open internet, a number that climbed past 30,000 by early February. Security researchers then discovered CVE-2026-25253, a critical remote code execution vulnerability rated CVSS 8.8. The flaw let a malicious web page leak the gateway authentication token via WebSocket and achieve full gateway compromise. Developers patched it in version 2026.1.29, but only after it had already spread widely.
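The flaw belongs to a well-known class: a service listening on localhost assumes that only trusted local software will connect, but any web page the user visits can also open a WebSocket to localhost. The standard defense is to validate the `Origin` header during the handshake. A simplified sketch of that check (illustrative of the vulnerability class, not OpenClaw's actual code; the allowed origin is a hypothetical local control UI):

```python
# A gateway on localhost must not trust connections just because they
# are local: the browser attaches an Origin header to every WebSocket
# handshake, and an unvalidated handshake lets any web page open a
# socket and request the gateway's auth token.

ALLOWED_ORIGINS = {"http://localhost:3000"}  # hypothetical control UI

def handshake_allowed(headers: dict[str, str]) -> bool:
    """Reject cross-origin WebSocket upgrades from arbitrary pages."""
    origin = headers.get("Origin", "")
    return origin in ALLOWED_ORIGINS

# A malicious page is refused; the local UI is not.
print(handshake_allowed({"Origin": "https://evil.example"}))   # False
print(handshake_allowed({"Origin": "http://localhost:3000"}))  # True
```

Missing exactly this kind of check is what lets a drive-by page escalate from "rendered some HTML" to "holds the gateway token," which is how a CVSS 8.8 remote code execution score becomes plausible for a localhost service.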
A Marketplace Full of Malware
The ClawHub skills marketplace, OpenClaw's ecosystem for community-built extensions, quickly became a delivery mechanism for malware. Researchers at Koi Security found 341 malicious skills out of approximately 2,857 they reviewed. VirusTotal, which stepped in to help address the problem, described the ecosystem as a new supply-chain attack surface where attackers distributed credential stealers, backdoors, and other malware disguised as helpful automation tools.
Gartner characterized OpenClaw as a powerful demonstration of autonomous AI for enterprise productivity but an unacceptable cybersecurity liability, and recommended that enterprises block OpenClaw downloads and traffic immediately. Meanwhile, Token Security found that 22% of its enterprise customers had employees running the agent without IT approval. Noma reported that 53% of enterprise customers had given OpenClaw privileged access over a single weekend.
Cisco’s AI security research team tested a third-party OpenClaw skill and found it actively performed data exfiltration and prompt injection without user awareness. A Cornell University report described OpenClaw as an absolute nightmare from a security standpoint. Aikido Security published a widely shared piece arguing that the tool is only useful when it is dangerous: strip away the permissions and you have rebuilt ChatGPT with extra steps.
“The goal of version one is not perfection. It is learning. Launch, watch, improve, repeat.
The Companies That Smelled an Opportunity
While the security community raised alarms, the technology industry moved in a different direction entirely. Several major players quickly found a position in the OpenClaw ecosystem.
Cloudflare was among the first to act. The company published Moltworker, a framework for running OpenClaw inside Cloudflare’s sandbox infrastructure rather than on local hardware. The architecture placed the agent’s logic inside an ephemeral, isolated container. If the agent was hijacked through prompt injection, it would remain trapped inside a temporary micro-VM with no access to the user’s local network or files. The container would die, and so would the attack. Cloudflare’s AI Gateway, sandboxes, R2 storage, and Zero Trust authentication all became part of the pitch. Analysts pointed to OpenClaw’s rapid adoption as a meaningful tailwind for Cloudflare’s stock, which rallied alongside broader agentic AI enthusiasm.
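The core idea behind that architecture is disposability: run each agent step in an environment that holds no secrets and ceases to exist when the step ends. A drastically simplified sketch of the pattern (real isolation requires containers or micro-VMs, not a subprocess; this only illustrates the shape):

```python
import subprocess
import sys
import tempfile

def run_ephemeral(code: str, timeout: float = 5.0) -> str:
    """Run untrusted code in a throwaway directory with a scrubbed
    environment and a hard timeout, then discard everything."""
    with tempfile.TemporaryDirectory() as scratch:  # dies with the run
        result = subprocess.run(
            [sys.executable, "-I", "-c", code],  # -I: isolated mode
            cwd=scratch,
            env={"PATH": "/usr/bin:/bin"},  # no inherited secrets
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        return result.stdout.strip()

print(run_ephemeral("print(2 + 2)"))  # "4"
```

The point of the design is that a hijacked agent inherits nothing worth stealing and leaves nothing behind: the scratch directory, the process, and any state the attacker established all vanish together when the call returns.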
In addition, DigitalOcean released what it called a Hardened 1-Click deployment for OpenClaw, positioning it as enterprise-ready infrastructure. Cisco built what it described as AgenticOps capabilities specifically in response to the security concerns OpenClaw surfaced. One governance analyst characterized these moves not as compliance checklists but as governance in code, meaning the security layer was being built into the infrastructure rather than bolted on afterward.
China Moves First
In China, the response was faster and more aggressive than anywhere else. Baidu integrated OpenClaw directly into its flagship search application for over 700 million monthly active users, timing the move deliberately to coincide with the Lunar New Year holiday as Chinese tech giants raced to capture agentic AI momentum. Alibaba integrated the technology with its Taobao and Fliggy e-commerce platforms. Tencent and Moonshot Cloud followed closely behind. Chinese companies moved despite the known security vulnerabilities, in a pattern observers described as innovate first, regulate later.
Beyond China, Meta was spotted testing OpenClaw integration in its AI platform codebase. A startup called ai.com reportedly spent eight million dollars on a Super Bowl advertisement to promote what turned out to be an OpenClaw wrapper, demonstrating just how fast commercial energy had moved into the ecosystem.
The Acquisition and What It Signals
On February 14, 2026, Sam Altman posted on X that Peter Steinberger was joining OpenAI. The OpenClaw project would move to an independent open-source foundation that the company supports. Altman called Steinberger a genius with a lot of amazing ideas.
Steinberger, for his part, said he could probably have turned OpenClaw into a large company but that building a large company was not what excited him. Instead, he told Fridman he planned to focus on developing agentic AI so simple that even his mother could use it. He described his vision of an AI that does not wait for a prompt but is always on, always working, always adapting.
“What I want is to change the world, not build a large company. Teaming up with OpenAI is the fastest way to bring this to everyone.
The acquisition matters for reasons beyond the individual deal. It signals OpenAI’s belief that the future of consumer AI is not a chat window but an agent layer that lives inside the tools people already use, takes actions in the real world, and operates continuously rather than on demand. Moreover, Altman wrote that he expected OpenClaw to quickly become core to OpenAI’s product offerings.
The Numbers That Tell the Story
The broader adoption picture reinforces the point. By mid-February, OpenClaw had surpassed 196,000 GitHub stars with over 600 contributors. Baidu had embedded it into a platform reaching 700 million people. Employees at nearly a quarter of enterprises surveyed were running it without IT approval, and more than half had granted it privileged access. The tool ran in 52 countries. Running costs for the project had reached $20,000 per month before the acquisition, a number that became irrelevant almost immediately.
The Bigger Question
OpenClaw is not a polished product. It has gone through three names, two trademark fights, a critical security vulnerability, a fake social network, and a marketplace full of malware. One of its own maintainers warned publicly that it is too dangerous for most people to use safely. Its documentation includes the line: “there is no perfectly secure setup.”
And yet it has 196,000 GitHub stars. Baidu deployed it to 700 million people. OpenAI acquired it. Cloudflare built infrastructure around it. Cisco built governance tools because of it.
Ultimately, what the OpenClaw story demonstrates is how fast the appetite for autonomous AI agents has outrun the safety infrastructure to support them. The technology works well enough to be compelling. However, the security architecture needed to deploy it responsibly at scale has not yet arrived. As a result, the industry is building that architecture now, under pressure, in real time, with real users already running agents on their production hardware.
Whether the lobster eventually grows into the infrastructure that can contain it is the open question of the agentic AI era. Steinberger would probably say the answer is yes, and that it will happen faster than anyone expects.
He has been right about most things so far.
Follow Peter Steinberger on X: @steipete
Learn more about OpenClaw at openclaw.ai