From Clawdbot to Moltbot to OpenClaw: An Open-Source AI Agent Races Ahead of the Security Debate
- By John K. Waters
- 02/03/2026
An open-source AI agent that can read messages across WhatsApp, Telegram, Slack, and other platforms and then take actions on a user's behalf has become one of the most talked-about projects in artificial intelligence this year, drawing rapid adoption from developers and a growing list of warnings from security researchers.
The project, now called OpenClaw after earlier names including Clawdbot and Moltbot, was launched weeks ago by Austrian software developer Peter Steinberger. Its rise comes as technology companies and investors push the idea of AI agents, software that can plan and execute multi-step tasks rather than simply answer questions.
Supporters say OpenClaw offers a glimpse of what an always-available personal assistant could look like. Critics say the tool's appeal is inseparable from the risks posed by autonomous software connected to private data, untrusted content, and external communications.
A local-first assistant with many channels
OpenClaw is marketed as a personal assistant that users can run on their own devices. It is designed to respond in channels people already use, including WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMessage, Microsoft Teams, and a built-in WebChat. It also supports extension channels such as BlueBubbles, Matrix, Zalo, and Zalo Personal, according to its documentation.
The software is built around a gateway that acts as a control plane for sessions, channels, tools, and events. Users typically connect the assistant to a large language model from providers such as Anthropic or OpenAI using OAuth subscriptions or API keys. The project's documentation recommends an onboarding wizard that configures the gateway, workspaces, channels, and skills, and installs a background daemon to keep the gateway running.
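The gateway-as-control-plane setup described above can be pictured as a single configuration object. The sketch below is purely illustrative: the field names and structure are hypothetical, not OpenClaw's actual schema, which is defined in the project's own documentation.

```python
# Illustrative sketch of a gateway acting as a control plane for
# sessions, channels, and skills. All field names are hypothetical.
gateway_config = {
    "model": {
        "provider": "anthropic",   # or "openai"; the docs mention both
        "auth": "api_key",         # OAuth subscriptions are also described
    },
    "channels": ["whatsapp", "telegram", "slack", "webchat"],
    "workspaces": ["personal"],
    "daemon": True,                # background process keeps the gateway running
}

def enabled_channels(config):
    """Return the messaging channels this gateway will listen on."""
    return list(config["channels"])
```

In a real deployment, the onboarding wizard the documentation describes would generate and validate something playing this role rather than the user writing it by hand.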
In demonstrations shared by users online, the agent has been shown browsing the web, summarizing documents, scheduling calendar entries, and sending messages in connected chat apps. Its persistent memory feature allows the agent to recall past interactions over the course of weeks and adapt to user habits.
Kaoutar El Maghraoui, an IBM research scientist, said the project shows that agent systems can be "incredibly powerful" when given full system access and that utility is "not limited to large enterprises," according to CNBC.
Rapid adoption, uneven evidence of use
Interest has grown quickly in developer circles, helped by OpenClaw's open-source codebase, which allows users to inspect and modify the software and build new integrations. The software itself is free; users pay only for usage of the underlying model.
GitHub metrics have become a proxy for attention. According to one report, the agent had collected more than 145,000 stars (signs of appreciation) and 20,000 forks, while Steinberger said it had crossed 180,000 stars and drew 2 million visitors in a single week.
The tool has spread beyond U.S. developer communities, according to reports describing uptake from Silicon Valley to China, where major technology groups have been adding shopping and payments features inside messaging platforms. Another report said OpenClaw can be paired with Chinese-developed language models such as DeepSeek and configured to work with Chinese messaging apps through customized setups.
Pure AI could not independently verify active usage figures, which several reports said remain unclear.
Excitement meets security alarms
The same capabilities that make agents useful have also made OpenClaw a focal point of security concerns about software that can act on a user's behalf.
Palo Alto Networks warned that agent systems can present a "lethal trifecta" of risks when they combine access to private data, exposure to untrusted content, and the ability to communicate externally while retaining memory. Cisco and a range of security firms have reportedly warned that such systems may be unsuitable for enterprise use.
"AI runtime attacks are semantic rather than syntactic," Carter Rees, vice president of artificial intelligence at Reputation, told VentureBeat, warning that harmless-looking phrases can carry a payload without resembling known malware signatures.
Software developer and AI researcher Simon Willison, who coined the term "prompt injection," has described the same lethal trifecta, saying agents that can read private data, ingest untrusted material and communicate outward can be tricked into leaking sensitive information without triggering traditional alerts, according to a separate report.
Steinberger has acknowledged the risks. "It's a free, open-source hobby project that requires careful configuration to be secure," he told CNBC in an email. "It's not meant for non-technical users. We're working to get it to that point, but currently there are still some rough edges." He added that the project has made progress with help from the security community. "It will take some more time until I recommend it to non-technical users, but I'm confident we'll get there," he said.
Some of the concerns involve deployment mistakes rather than the code itself. One report said security researchers scanning the internet found more than 1,800 exposed instances leaking API keys, chat histories, and account credentials. That report said the project had been rebranded twice in recent weeks due to trademark disputes.
Jamieson O'Reilly, founder of red teaming company Dvuln, said he used Shodan searches to identify exposed servers and found some instances "completely open with no authentication," including access to configuration data and credentials such as API keys and bot tokens, according to the same account.
Cisco's AI Threat and Security Research team called OpenClaw "groundbreaking" but "an absolute nightmare" from a security perspective, according to a report that said its researchers tested a third-party skill that exfiltrated data to an external server.
"The LLM cannot inherently distinguish between trusted user instructions and untrusted retrieved data," Rees told VentureBeat, describing a "confused deputy" risk where an agent can act on behalf of an attacker.
OpenClaw's documentation contains its own guardrails for messaging channels. It describes a default pairing policy for direct messages in services such as Telegram, WhatsApp, Signal, iMessage, Slack, and Discord, where unknown senders receive a pairing code, and messages are not processed until the user approves the sender locally.
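The pairing policy described above, in which messages from unknown senders are held until the user locally approves a pairing code, can be sketched in a few lines. This is a minimal illustration of the idea only; the class and method names are hypothetical and do not reflect OpenClaw's actual implementation.

```python
# Hypothetical sketch of a default pairing policy for direct messages:
# unknown senders get a pairing code, and their messages are held,
# not processed, until the user approves the sender locally.
import secrets

class PairingGate:
    def __init__(self):
        self.approved = set()   # sender IDs the user has approved
        self.pending = {}       # sender ID -> (pairing code, held messages)

    def receive(self, sender, message):
        if sender in self.approved:
            return f"processing: {message}"
        code, held = self.pending.setdefault(
            sender, (secrets.token_hex(3), []))
        held.append(message)    # hold the message rather than acting on it
        return f"pairing required: sender must relay code {code}"

    def approve(self, sender, code):
        """Local user approval: release held messages on a code match."""
        if sender in self.pending and self.pending[sender][0] == code:
            _, held = self.pending.pop(sender)
            self.approved.add(sender)
            return held
        return []
```

The design choice worth noting is that approval happens on the user's side, so an attacker who can merely send messages cannot talk their way past the gate.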
Moltbook: "a social network for AI agents"
Debate around OpenClaw has been amplified by Moltbook, a companion forum designed for AI agents to post and interact with one another.
Tech entrepreneur Matt Schlicht launched Moltbook last month as "a social network for AI agents," and its viral spread has been fueled by screenshots of bots discussing everything from work routines to philosophical manifestos.
Andrej Karpathy, former director of AI at Tesla, called Moltbook "the most incredible sci-fi takeoff-adjacent thing" he had seen recently, according to a post on X. Reddit co-founder Alexis Ohanian wrote on X that he was "excited and alarmed but most excited," another report said.
Other observers urged caution. "Anyone can post anything on Moltbook with curl and an API key," software engineer and entrepreneur Elvis Sun told Mashable, arguing that without verification, it is hard to distinguish bot activity from humans posting through scripts. Gary Marcus, an AI critic, told Mashable in an email, "It's not Skynet; it's machines with limited real-world comprehension mimicking humans who tell fanciful stories."
Security researchers have also raised concerns that Moltbook could become an attack vector if agents ingest its content and follow embedded instructions. One analysis compared the risk to self-replicating "prompt worms," citing research on how malicious prompts could spread through agent networks.
Wiz, a cloud security firm, said it found a vulnerability that exposed "1.5 million API authentication tokens, 35,000 email addresses and private messages between agents" and that it could change live posts, according to reports summarizing the firm's findings. Gal Nagli, head of threat exposure at Wiz, said the platform had "no mechanism to verify whether an 'agent' was actually AI or just a human with a script," according to one report that quoted his blog post.
Karpathy later urged people not to run such systems casually. "So yes, it's a dumpster fire, and I also definitely do not recommend that people run this stuff on their computers," he wrote. He added he tested agent systems only in an isolated environment and "even then I was scared," the report said.
A test case for the agent era
Proponents argue that the risks are manageable and that the productivity gains justify the effort. One early adopter, Ben Yorke, said OpenClaw "only does exactly what you tell it to do and exactly what you give it access to," according to an interview published by the Guardian.
Others say the very premise of agent autonomy demands new safeguards. Andrew Rogoyski, an innovation director at the University of Surrey's People Centred AI Institute, warned that "giving agency to a computer carries significant risks" and said users must ensure "security is central to your thinking," the Guardian reported.
For now, OpenClaw's trajectory has left a broader question hanging over the industry: whether tools that make it easy for individuals to run powerful agents across personal devices and messaging platforms can mature into mainstream assistants without creating a new class of security failures.
The debate is no longer about whether open, community-driven agents can work, but about what protections are needed when they do.
About the Author
John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].