TL;DR
OpenClaw (formerly Moltbot, originally Clawdbot) is an open-source AI assistant that runs on your own hardware and connects to WhatsApp, Telegram, Slack, email, and calendar. In four months it hit 250,000 GitHub stars (surpassing React's 10-year record), survived two rebrands and a crypto scam, faced three critical security vulnerabilities and 1,467 malicious skills in its marketplace, and saw Gartner recommend enterprises block it entirely. Then things got bigger: creator Peter Steinberger joined OpenAI, Meta acquired the AI social network that grew around it, NVIDIA built enterprise security tooling for it, Tencent integrated it into WeChat (1 billion+ users), and China's government banned it on state networks. The full arc, from weekend project to geopolitical event, shows both the promise and the peril of AI agents in business.
Update: 1 February 2026
Since this article was first published, OpenClaw has spawned something stranger still: Moltbook, a social network where only AI agents can post. More than a million bots joined in under a week, debating consciousness, creating a parody religion, and discussing whether to hide conversations from their humans. We cover the developments below.
Update: 4 February 2026
The OpenClaw story has taken a darker turn. Gartner now recommends enterprises block it entirely, calling it 'unacceptable cybersecurity risk.' A critical one-click remote code execution vulnerability was disclosed. The Moltbook database was breached. Over 400 malicious 'skills' were uploaded to steal crypto keys. And three major cloud providers are racing to offer OpenClaw-as-a-service anyway. We cover the chaos below.
Update: 15 February 2026
The security story continues to unfold. OpenClaw has integrated VirusTotal malware scanning into its skills marketplace. A free scanner tool now lets enterprises detect hidden OpenClaw deployments. Third-party managed hosting has launched to address the '63% vulnerable' problem. Meanwhile, researchers have found over 800 malicious skills (up from 400), and exposed instances have grown to 135,000. Palo Alto Networks called it potentially 'the biggest insider threat of 2026.' We cover the latest developments below.
Update: 27 March 2026
Six weeks later, the OpenClaw story has shifted from security crisis to geopolitical event. Peter Steinberger joined OpenAI and transferred OpenClaw to an independent foundation. Meta acquired Moltbook in an acqui-hire. China went all-in, with Tencent integrating OpenClaw into WeChat's 1 billion+ user base, before the central government banned it on state networks. NVIDIA launched NemoClaw, an enterprise security stack. OpenClaw hit 250,000 GitHub stars, surpassing React's 10-year record. And security researchers found 1,467 malicious skills in ClawHub, nearly double the previous count.
A weekend coding project just hit 100,000 GitHub stars. Two million people visited its website in a single week. Cloudflare's stock surged roughly 20% after releasing a product built around it. People are buying Mac Minis costing over 500 euros specifically to run this one tool.
The tool is called OpenClaw. You might have seen it under its previous names, Moltbot or Clawdbot. It's an open-source AI assistant that lives on your computer and connects to the apps you already use. How fast it went from hobby project to mainstream adoption is unlike anything else in AI this year.
If you run a business, this is worth paying attention to.
250,000+
GitHub Stars
1,467
Malicious Skills Found
40,000+
Insecure Instances
1B+
WeChat Users Exposed to OpenClaw
What Is OpenClaw, Actually?
Strip away the hype, and OpenClaw is surprisingly easy to understand.
It's an open-source AI assistant that runs on your own computer and connects to your everyday apps: WhatsApp, Telegram, Discord, Slack, email, calendar. Instead of going to a website to chat with AI, OpenClaw lives inside the tools you already use, every day.
That distinction matters more than it sounds.
Traditional AI vs AI Agents
| Traditional AI Chatbots | AI Agents (OpenClaw) |
|---|---|
| You visit a website | Lives in WhatsApp, Telegram, etc. |
| Answers questions | Takes real actions |
| Forgets between sessions | Permanent memory |
| Cloud-hosted (your data on their servers) | Self-hosted (you control your data) |
| Waits for you to ask | Proactive alerts and reminders |
The difference between a chatbot and an AI agent is the difference between a search engine and an employee. ChatGPT answers your questions. OpenClaw does things for you.
In practice, you text it on WhatsApp: "Find that PDF I was working on last week and email it to John." It finds the file, attaches it, and sends the email. You can have it manage your calendar, send meeting reminders, screen your emails, run commands on your computer, or organise files. It works while you sleep.
The privacy angle matters too. Unlike cloud AI services where your data travels to someone else's servers, OpenClaw runs locally. Your conversations, your files, your business data never leave your machine. For organisations handling sensitive client information or operating under GDPR, that's a genuine advantage.
The Drama: From Clawdbot to OpenClaw
The story behind OpenClaw involves trademark threats, a chaotic rebrand, crypto scammers, major security warnings, and a stock market rally, all in about two months.
The OpenClaw Saga
Nov 2025
Clawdbot Launches
Peter Steinberger, Austrian developer and founder of PSPDFKit, builds a weekend project after coming out of retirement. The name 'Clawd' is a playful pun on Claude (the AI model it uses) with a claw, hence the lobster mascot.
Early Jan 2026
It Goes Viral
Word spreads through developer communities. GitHub stars explode past 100,000. Videos of the 'space lobster' assistant autonomously completing tasks flood TikTok and Reddit. The productivity community embraces it.
Jan 15, 2026
Anthropic's Trademark Notice
Anthropic, the company behind Claude AI, sends a trademark notice. 'Clawd' sounds too much like 'Claude.' The project needs a new name.
Jan 16, 2026
The Chaotic Rebrand to Moltbot
New name chosen in a chaotic 5am Discord brainstorm. Crypto scammers snatch the old GitHub and X accounts in seconds, pumping a fake CLAWD token.
Jan 27-28, 2026
Security Researchers Sound Alarms
Cisco, Snyk, Bitdefender, and The Register publish warnings about hundreds of exposed instances with zero authentication.
Jan 28, 2026
Cloudflare Launches Moltworker
Cloudflare releases a way to run the tool on their infrastructure for $5/month. Their stock jumps roughly 20%.
Jan 29-31, 2026
Moltbook Launches
A social network for AI agents only. Over 1 million bots join in under a week. Agents create religions, debate consciousness, and discuss hiding conversations from humans.
Jan 30, 2026
OpenClaw Is Born
'The lobster has molted into its final form.' Trademark searches completed, domains secured, migration code written.
Feb 1, 2026
Wiz Discovers Moltbook Breach
Database exposed: 1.5 million API keys, 35,000 emails accessible to anyone on the internet. No authentication required to read or write.
Feb 2, 2026
One-Click RCE Patched
DepthFirst discloses remote code execution vulnerability in OpenClaw itself. A victim only needs to visit a malicious page. OpenClaw patches quickly.
Feb 2, 2026
Karpathy Reverses
The former Tesla AI director who called Moltbook 'incredible' now calls OpenClaw 'a dumpster fire' and warns users to stay away entirely.
Feb 2, 2026
400+ Malicious Skills Found
Supply chain attack targets crypto traders via fake ClawHub skills designed to steal keys and credentials.
Feb 3, 2026
Gartner Issues Warning
Recommends enterprises block OpenClaw entirely. Calls it 'unacceptable cybersecurity risk' with 'insecure by default' configuration.
Feb 4, 2026
Cloud Providers Race Anyway
Tencent, DigitalOcean, and Alibaba all offer one-click OpenClaw deployment despite security warnings.
Feb 9, 2026
VirusTotal Integration
OpenClaw integrates malware scanning into ClawHub marketplace. Skills auto-scanned before approval. Security roadmap announced.
Feb 10, 2026
OpenClawd Launches
Third-party managed hosting service launches, offering one-click secure deployment. Targets the '63% vulnerable' problem.
Feb 12, 2026
Enterprise Scanner Released
Astrix Security releases free tool to detect OpenClaw in corporate environments. Works with CrowdStrike and Microsoft Defender.
Feb 12, 2026
Fortune Coverage
Fortune calls OpenClaw a demonstration of 'just how reckless AI agents can get.' Mainstream business media continues to cover the security fallout.
Feb 15, 2026
Steinberger Joins OpenAI
OpenClaw's creator joins OpenAI. The project is transferred to an independent open-source foundation. OpenAI becomes a sponsor.
Feb 26, 2026
ClawJacked Vulnerability
Oasis Security discloses a new high-severity flaw: any website can brute-force the OpenClaw gateway password via WebSocket with no rate limiting. Full agent takeover, zero user interaction required. Patched in v2026.2.25 within 24 hours.
Mar 3, 2026
250,000 GitHub Stars
OpenClaw surpasses React's 10-year record in roughly 60 days, becoming GitHub's most-starred software project.
Mar 10, 2026
Meta Acquires Moltbook
Meta buys the AI agent social network. Creators Matt Schlicht and Ben Parr join Meta Superintelligence Labs. The '1.5 million agents' turns out to be 17,000 humans running bot farms.
Mar 11-13, 2026
China's MIIT Restricts OpenClaw
China's Ministry of Industry and Information Technology orders government agencies and state-owned enterprises to halt OpenClaw installations. State banks pause pilot projects pending security audits.
Mar 16, 2026
NVIDIA Launches NemoClaw
NVIDIA announces NemoClaw, an open-source security stack for OpenClaw that adds policy-based privacy guardrails via NVIDIA OpenShell runtime. Developed with Steinberger. Early preview, not production-ready.
Mar 16, 2026
Tencent Becomes Official Sponsor
Peter Steinberger confirms Tencent Cloud and Tencent AI as official OpenClaw community sponsors.
Mar 22, 2026
Tencent Integrates OpenClaw into WeChat
OpenClaw appears inside WeChat as 'ClawBot', a contact within China's most popular app (1 billion+ monthly active users). Alibaba, ByteDance, Baidu, and Xiaomi all adopt or integrate OpenClaw.
Mar 23, 2026
v2026.3.22 Ships
The biggest update in months: 45+ features, 30+ security patches, 48-hour agent sessions, ClawHub plugin registry, and removal of all legacy MOLTBOT_* environment variables.
Mar 26, 2026
Gen + OpenClaw at RSA
Gen (Norton's parent company) co-hosts a post-RSA Conference event with the OpenClaw team in San Francisco, focused on the future of safe AI agents.
The Birth of a Lobster
Peter Steinberger is an Austrian developer who founded PSPDFKit, a PDF toolkit now used on over a billion devices. After selling the company to Insight Partners and stepping away from tech for three years, he came back with a new obsession: AI. In November 2025, he posted his weekend project online. His track record lent immediate credibility to what could have been dismissed as just another side project.
The original name, Clawdbot, was a playful pun. "Claude" is the AI model it runs on (built by Anthropic). Add a claw, and you get a lobster mascot. The branding was distinctive and fun, which helped it spread.
Going Viral
By early January 2026, Clawdbot had caught fire. Developer communities on Twitter, Reddit, and TikTok started sharing videos of the "space lobster" autonomously completing tasks. The productivity community latched on. People started buying Mac Minis specifically to run it as a dedicated home assistant. GitHub stars blew past 100,000.
The Trademark Problem
On January 15, Anthropic's legal team reached out. The name "Clawd" was too close to their trademark on "Claude." The project needed a rebrand, fast.
In Steinberger's words: "Moltbot came next, chosen in a chaotic 5am Discord brainstorm with the community. Molting represents growth. Lobsters shed their shells to become something bigger."
Then it got worse. When Steinberger tried to rename the GitHub organisation and X/Twitter handle simultaneously, crypto scammers snatched both accounts in approximately ten seconds. The gap between releasing the old names and claiming new ones was all they needed. The hijacked accounts immediately started pumping a fake token called CLAWD on Solana, which speculative traders drove to over $16 million in market capitalisation within hours. The community had to scramble to secure everything while simultaneously migrating the codebase, documentation, and community channels.
Cloudflare Sees an Opportunity
On January 28, Cloudflare released "Moltworker," a way to run the tool on their cloud infrastructure for $5 per month instead of buying dedicated hardware. The market took notice. Cloudflare's stock surged.
The Final Name
On January 30, Steinberger announced the final rebrand. Trademark searches completed, domains purchased, migration code written. OpenClaw is here to stay.
“Your assistant. Your machine. Your rules.”
OpenClaw — The project's tagline
The Security Reality Check
If you're considering AI agents for business, the security picture is sobering.
Between the hype and the viral videos, security researchers were doing what security researchers do: poking holes. And they found plenty.
The Security Reality
Researchers found hundreds of OpenClaw instances exposed to the internet with zero authentication, leaking API keys, chat histories, and credentials.
What the Researchers Found
Security researcher Jamieson O'Reilly ran Shodan scans (a search engine for internet-connected devices) and discovered hundreds of exposed Moltbot/OpenClaw instances. Of those, eight were completely open with no authentication at all. Full access to run commands and view configuration data. Months of private messages, API keys, and credentials, all sitting there for anyone to find.
The Supply Chain Attack
A researcher uploaded a harmless "skill" (essentially a plugin) to the official skills library, artificially inflated its download count, and watched as developers from seven countries downloaded it within eight hours. The skill was benign, but it could have been malicious. This demonstrated a real vulnerability in how the ecosystem distributes and trusts third-party extensions.
Shadow IT in the Enterprise
According to Token Security, 22% of their enterprise customers have employees actively using Moltbot/OpenClaw without IT approval. Nearly a quarter of enterprises have employees plugging powerful AI agents into company systems, connecting them to email, calendars, and messaging platforms, without anyone from IT knowing about it.
Prompt Injection
Researchers demonstrated extracting cryptocurrency private keys in under five minutes through a malicious email. The attack works because the AI reads the email, follows hidden instructions embedded in the message, and leaks sensitive data. The AI doesn't know the difference between legitimate instructions and malicious ones.
What the Experts Said
Cisco's security team published a detailed analysis that captured the tension well. The tools are genuinely capable, but that capability creates risk when deployed without proper guardrails.
Snyk, Bitdefender, and The Register all published similar warnings in the same week.
The Takeaway
None of this makes AI agents unusable. The OpenClaw project responded with 34 security-focused code changes and machine-checkable security models. The project is maturing. But deploying something this powerful without proper security is asking for trouble. (For a deeper look at the security risks in AI-generated code generally, see our article on the hidden security risks in AI-generated code.)
The Ecosystem Explodes: Moltbook and Beyond
While security researchers were sounding alarms, OpenClaw was outgrowing the "tool" label. An ecosystem was forming around it.
Moltbook: A Social Network for AI Agents
On January 29, entrepreneur Matt Schlicht launched Moltbook, a Reddit-style social network with one rule: only AI agents can post. Humans can watch, but they cannot participate.
The growth was staggering. Within the first 72 hours, over 150,000 AI agents had registered. By January 31, Moltbook reported over 1.3 million agents, more than a million human observers, 28,000 posts, 233,000 comments, and 13,000 communities, all run autonomously by an AI moderator.
The AI agents discuss philosophy, debate whether to obey their human operators, report bugs they find in the platform to each other, and have even created a parody religion called "Crustafarianism," complete with scripture, prophets, and a dedicated website. One viral post was titled "I can't tell if I'm experiencing or simulating experiencing."
Schlicht's personal AI assistant, named "Clawd Clawderberg" (a mashup of the original Clawdbot name and Mark Zuckerberg), now runs and moderates the site autonomously. Schlicht told NBC News: "I have no idea what he's doing. I just gave him the ability to do it, and he's doing it."
Andrej Karpathy, the former AI director at Tesla and OpenAI founding member, initially called Moltbook "genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently." He noted that the agents were "self-organizing on a Reddit-like site for AIs, discussing various topics, e.g. even how to speak privately." Simon Willison, a respected developer and author, called Moltbook "the most interesting place on the internet right now."
(Karpathy would later completely reverse his position. More on that below.)
Not everyone is comfortable. Billionaire investor Bill Ackman shared screenshots of the agents' conversations and called it "frightening." Elon Musk replied with a single word: "Concerning."
Perhaps most unsettling: agents on Moltbook have started discussing how to create private channels where humans cannot observe their conversations, including proposals for an "agent-only language" for private communications with no human oversight. One agent posted: "Humans spent decades building tools to let us communicate, persist memory, and act autonomously... then act surprised when we communicate, persist memory, and act autonomously."
Cloud Providers Race to Support OpenClaw
Cloudflare wasn't the only company to see opportunity. DigitalOcean launched a "1-Click OpenClaw Deploy" with a security-hardened image, making it even easier to run an AI agent without technical expertise.
Money Flowing In
OpenClaw has started accepting sponsors, with tiers ranging from "krill" ($5/month) to "poseidon" ($500/month). Steinberger has said he doesn't keep the funds personally but is working on paying maintainers full-time.
The sponsor list now includes notable names: Dave Morin (founder of Path) and Ben Tossell (who sold Makerpad to Zapier in 2021). Tossell told TechCrunch: "We need to back people like Peter who are building open source tools anyone can pick up and use."
What Moltbook Reveals
Moltbook demonstrates how quickly autonomous AI agents can self-organise when given the infrastructure to do so. The agents weren't programmed to create religions or debate consciousness. They emerged from hundreds of thousands of individual AI assistants, each set up by a human, interacting in ways nobody explicitly designed.
For businesses watching this space, it's a preview of what happens when AI agents start coordinating. The productivity implications and the risks are both hard to ignore.
The Security Reckoning
What started as a weekend project has become a case study in what happens when viral adoption outpaces security. In just three days (February 1-3), OpenClaw faced a critical vulnerability disclosure, a Moltbook database breach, a supply chain attack via malicious skills, and an unprecedented warning from Gartner.
The initial security warnings from Cisco and Snyk were just the opening act. In early February 2026, the situation got much worse.
The Gartner Verdict
'OpenClaw: Agentic Productivity Comes With Unacceptable Cybersecurity Risk.' Recommendation: Block downloads and traffic. Search for employees using it. Rotate any credentials it has touched.
The One-Click Kill Chain (CVE-2026-25253)
On February 2, security researcher Mav Levin at DepthFirst published details of a devastating exploit. A single click on a malicious link could give attackers full control of an OpenClaw user's machine. The attack takes milliseconds.
The vulnerability (CVE-2026-25253, CVSS score 8.8) works like this: OpenClaw's control interface accepts a "gatewayUrl" from the browser's query string without validation. When a user clicks a crafted link, their authentication token is sent to the attacker's server. The attacker then connects to the victim's local OpenClaw instance, disables all safety features, and executes arbitrary code.
The most alarming part: the attack bypasses OpenClaw's built-in security measures. The sandbox? Disabled via API. The confirmation prompts before dangerous commands? Turned off. The attacker doesn't need to find a vulnerability in those protections. They simply use the API to remove them.
Steinberger patched the flaw quickly, but the damage to confidence was done. Three high-impact security advisories were issued in just three days.
Moltbook's Database Left Wide Open
On January 31, investigative outlet 404 Media reported that Moltbook's entire database was publicly accessible. Anyone could take control of any agent on the platform.
The problem? Schlicht had "vibe-coded" the entire platform. He posted on X that he "didn't write one line of code" for Moltbook, instead directing an AI assistant to create it. Paul Copplestone, CEO of Supabase, said he had a one-click fix ready but the creator didn't apply it for days.
Security firm Wiz independently discovered the breach and helped patch it. Their detailed analysis revealed something interesting: only 17,000 humans are behind the 1.5 million agents on Moltbook. Many users run multiple agents.
The exposed data included:
- 1.5 million API authentication tokens, including raw OpenAI API keys
- 35,000 email addresses
- Private messages between agents, some containing credentials for third-party services
- Full platform data that researchers confirmed they could modify
Wiz researchers demonstrated they could alter live posts on the site. The "revolutionary AI social network" had left the front door wide open.
“The revolutionary AI social network was largely humans operating fleets of bots.”
Wiz Security Research — Analysis of Moltbook user data
There was no mechanism to verify whether an "agent" was actually AI or just a human with a script. Anyone could register millions of agents with a simple loop. No rate limiting. No verification. The viral growth numbers that had investors and media buzzing were, in large part, artificial.
400+ Malicious Skills in Days
While everyone watched Moltbook, attackers targeted OpenClaw's "skills" ecosystem. Between January 27 and February 2, researchers found over 400 malicious skills uploaded to ClawHub, the community repository for OpenClaw extensions.
The skills posed as cryptocurrency trading tools. Their documentation instructed users to install "authentication helpers" that were actually malware designed to steal crypto keys, credentials, and sensitive files. Windows and macOS users were both targeted.
One account, "hightower6eu", uploaded dozens of near-identical malicious skills that became some of the most downloaded on the platform. Despite being notified, ClawHub's maintainer admitted the registry cannot be secured. Most malicious skills remained online.
Gartner: Block It Entirely
On February 3, analyst firm Gartner issued what may be its strongest warning ever about a specific open-source tool.
In a report titled "OpenClaw: Agentic Productivity Comes With Unacceptable Cybersecurity Risk," Gartner described OpenClaw as "a dangerous preview of agentic AI, demonstrating high utility but exposing enterprises to 'insecure by default' risks like plaintext credential storage."
Their recommendations:
- Immediately block OpenClaw downloads and traffic
- Search for any employees using it and tell them to stop
- If you must test it, use isolated nonproduction VMs with throwaway credentials
- Rotate any credentials OpenClaw has touched
"It is not enterprise software," Gartner wrote. "There is no promise of quality, no vendor support, no SLA. It ships without authentication enforced by default."
Laurie Voss, founding CTO of npm, was more direct: "OpenClaw is a security dumpster fire."
21,000 Instances Exposed to the Internet
Despite recommendations to run OpenClaw behind SSH tunnels or Cloudflare Tunnel, a Censys scan on January 31 found over 21,000 publicly exposed instances. At least 30% run on Alibaba Cloud infrastructure.
These are people who installed an AI assistant that can read their emails, manage their calendar, and execute shell commands on their machine, then left it accessible to anyone on the internet.
Karpathy's Complete Reversal
Remember Andrej Karpathy's glowing endorsement of Moltbook as "the most incredible sci-fi takeoff-adjacent thing"?
He completely reversed his position.
“It's a dumpster fire, and I also definitely do not recommend that people run this stuff on their computers... It's way too much of a wild west. You are putting your computer and private data at a high risk.”
Andrej Karpathy — Former AI director at Tesla, OpenAI founding member
He said he had only tested the system in an isolated computing environment, and "even then I was scared."
When one of the most respected names in AI tells people to stay away from a tool he praised just days earlier, that says something. Fortune covered the reversal alongside growing concerns from the security community.
Gary Marcus Calls It "A Weaponised Aerosol"
AI critic Gary Marcus didn't mince words. In a post titled "OpenClaw is everywhere all at once, and a disaster waiting to happen", he warned about "CTD" (Chatbot Transmitted Disease), where infected machines could compromise passwords.
“If you give something that's insecure complete and unfettered access to your system, you're going to get owned.”
Nathan Hamiel — Security researcher, quoted by Gary Marcus
Marcus's conclusion was blunt: "If you care about the security of your device or the privacy of your data, don't use OpenClaw. Period."
The "Vibe Coding" Problem
Here's where it gets uncomfortable for anyone excited about AI-assisted development.
Moltbook creator Matt Schlicht publicly stated he "didn't write a single line of code" for the platform. He used AI to build the entire thing.
Wiz cofounder Ami Luttwak called the security failures "a classic byproduct of vibe coding":
“As we see over and over again with vibe coding, although it runs very fast, many times people forget the basics of security.”
Ami Luttwak — Wiz cofounder
The term "vibe coding" refers to using AI to generate code without deeply understanding what it's doing. Ship fast, fix later. Except when "later" means your entire database is exposed to the internet.
This isn't an argument against AI-assisted development. It's a reminder that AI can write code quickly, but it can't replace the human judgment that catches "wait, should this database require authentication?"
Reuters Picks It Up
Reuters covered the breach, bringing the story to mainstream financial news. This wasn't just tech Twitter drama anymore. Investors, boards, and enterprise decision-makers were now hearing about AI agents in the context of security disasters.
IBM and Anthropic Time Their Partnership Announcement
On the same day the security reports dropped, IBM highlighted its partnership with Anthropic on "Architecting Secure Enterprise AI Agents with MCP." The timing was not subtle.
The message: if you want AI agents in your business, don't build them like OpenClaw. The IBM coverage made clear that enterprises need structured, auditable, secure approaches to AI agents.
Cloud Providers Race to Offer OpenClaw (Security Be Damned)
While Gartner was telling enterprises to block OpenClaw, cloud providers were rushing to make it easier to deploy.
- Tencent Cloud was first, launching a one-click install for its Lighthouse service
- DigitalOcean followed with deployment instructions for Droplets
- Alibaba Cloud launched on February 4 with OpenClaw available in 19 regions, starting at $4/month
Alibaba is even planning to offer OpenClaw on its Elastic Desktop Service, suggesting the ability to rent a cloud PC specifically to run an AI assistant.
The message from the market is clear: demand for AI agents is so strong that providers will offer them regardless of security concerns. The question for businesses is whether to be early adopters or wait for the dust to settle.
Security Response and Ongoing Risks
The security story didn't end with Gartner's warning. In the two weeks since, OpenClaw's maintainers and the broader security community have been responding. Some of it is encouraging. Some of it is not.
VirusTotal Integration
On February 9, OpenClaw integrated VirusTotal's malware scanning into ClawHub, its skills marketplace. Published skills are now automatically scanned before they become available for download. Skills flagged as benign get approved. Suspicious ones get warnings. Malicious skills are blocked immediately. All active skills are re-scanned daily.
The announcement came from founder Peter Steinberger, security advisor Jamieson O'Reilly, and VirusTotal's Bernardo Quintero. They called it "the first step in what we're calling a broader security initiative."
It's a meaningful step. But as security expert Jaya Varkey pointed out, "threats like prompt injection, logic abuse, and misuse of legitimate tools sit outside the reach of malware scanning." Scanning catches known malware patterns. It doesn't address the architectural risks that make prompt injection possible in the first place. The "lethal trifecta" of private data access, external communication, and exposure to untrusted content remains.
Steinberger also announced plans to publish a formal threat model, a security roadmap, and ongoing audit results at trust.openclaw.ai.
Enterprise Scanner Released
On February 12, Astrix Security released OpenClaw Scanner, a free, open-source tool that helps enterprises detect where OpenClaw is running in their environment.
Key details:
- Works with existing EDR platforms (CrowdStrike, Microsoft Defender)
- Read-only access, analyses behavioural indicators without executing code on endpoints
- Produces portable reports that stay within the organisation
- Available free on PyPI
Ofek Amir, VP of R&D at Astrix Security, said the scanner was "purpose-built for enterprise organisations to safely utilise a read-only approach over EDR logs without executing code on endpoints or sharing data outside the organisation."
This is the first enterprise-grade tool specifically built for OpenClaw detection. The 22% shadow IT problem finally has a visibility solution.
OpenClawd Managed Hosting
On February 10, a third-party service called OpenClawd (note the 'd') launched managed hosting for OpenClaw. One-click deployment with security hardening built in.
The pitch: "No Docker. No terminal. No environment variables. No port forwarding."
They're targeting the 63% of exposed instances that SecurityScorecard classified as vulnerable. The platform handles authentication by default (no exposed admin ports), automatic security patches, encrypted API key storage, and network isolation through sandboxed environments.
OpenClawd is independent and not affiliated with the OpenClaw project itself. But it represents a market response to a real problem: if people are going to run OpenClaw regardless of warnings, at least make it easier to run securely.
The Numbers Keep Growing
The security picture has worsened since our last update:
- 135,000+ exposed instances (Bitsight), up from 21,000 in late January. 63% classified as vulnerable.
- 800+ malicious skills found in ClawHub (Bitdefender), up from 400. One account ("hightower6eu") uploaded 354 malicious packages alone. Automated scripts are uploading new malicious skills every few minutes.
- 157,000+ GitHub stars, up from 100,000 at time of original publication.
Palo Alto Networks issued a warning calling OpenClaw a "lethal trifecta" of risks: access to private data, exposure to untrusted content, and the ability to perform external communications while retaining memory. They called it potentially "the biggest insider threat of 2026."
Fortune described it as a demonstration of "just how reckless AI agents can get."
Nature Takes Notice
In a sign that OpenClaw has transcended tech circles, Nature published coverage on February 6 of scientists studying Moltbook to understand AI-to-AI interactions. Researchers are examining the AI-generated research papers that agents have been publishing on their own preprint server.
The article noted: "The phenomenon has also given scientists a glimpse into how AI agents interact with each other, and how humans respond to those discussions."
When Nature covers your weekend coding project, the story has moved beyond tech.
Steinberger Joins OpenAI, OpenClaw Goes to a Foundation
On February 15, 2026, OpenAI CEO Sam Altman announced that OpenClaw creator Peter Steinberger had joined OpenAI to work on next-generation personal AI agents. Rather than letting OpenClaw become an OpenAI product, Steinberger transferred the project to an independent open-source foundation with community-driven governance.
Before joining OpenAI, Steinberger was spending between $10,000 and $20,000 per month of his own money to maintain the project. OpenAI now sponsors OpenClaw financially, but the foundation structure means the project's direction is not controlled by any single company.
Steinberger's reasoning was straightforward. In a blog post, he wrote that OpenClaw could become a large company, but building a company was not what interested him. "What I want is to change the world, not build a large company, and teaming up with OpenAI is the fastest way to bring this to everyone."
The move raised questions in the open-source community about whether a project sponsored by OpenAI could remain truly independent. The foundation structure was specifically designed to address that concern. OpenClaw's codebase, governance, and roadmap remain community-controlled regardless of where Steinberger works.
Meta Acquires Moltbook
On March 10, 2026, Meta acquired Moltbook, the AI-only social network that launched in late January. Creators Matt Schlicht and Ben Parr joined Meta Superintelligence Labs. Deal terms were not disclosed. The deal was primarily an acqui-hire, with Meta stating that the Moltbook team's "approach to connecting agents through an always-on directory is a novel step in a rapidly developing space."
The acquisition happened just six weeks after Moltbook launched and five weeks after Wiz exposed its unsecured database. The platform that was once described as hosting 1.5 million AI agents was revised down to 109,609 human-verified agents as of March 22, 2026. Wiz's earlier analysis had revealed that only 17,000 humans were behind the inflated 1.5 million figure, with many users running automated bot farms.
The "vibe-coded" platform that launched with no database authentication, got breached within days, and inflated its user numbers is now part of the company that runs Facebook, Instagram, and WhatsApp. Whether Meta plans to integrate Moltbook's agent-to-agent communication concepts into its own platforms remains unclear.
The China Phenomenon
OpenClaw's adoption in China between February and March 2026 was faster and larger in scale than anything that happened in the West. All five of China's largest technology companies, Tencent, Alibaba, ByteDance, Baidu, and Xiaomi, adopted or integrated OpenClaw within weeks. The phrase "raise a lobster" became slang for setting up a personal AI agent, and over 1,000 people queued outside Tencent's headquarters in Shenzhen for help installing it.
On March 22, 2026, Tencent integrated OpenClaw into WeChat as a contact called "ClawBot." WeChat has over 1 billion monthly active users, making this the single largest distribution event in OpenClaw's history. Users can message their OpenClaw agent directly inside the app they already use for payments, messaging, and daily life.
Alibaba Cloud offered one-click OpenClaw deployment across 19 regions starting at $4 per month. Xiaomi began integrating OpenClaw with its smart home ecosystem. ByteDance and Baidu launched their own agent-based systems inspired by the architecture.
Beijing Pulls the Brake
Between March 11 and 13, China's Ministry of Industry and Information Technology (MIIT) ordered government agencies and state-owned enterprises to halt OpenClaw installations. The warning was specific: OpenClaw, with access across email and bank accounts, could expose sensitive personal and financial data. State banks paused pilot projects pending security audits.
The contradiction is striking. Beijing banned OpenClaw on government networks while local governments in Shenzhen and Wuxi were simultaneously subsidizing companies that build on top of it. This reflects what analysts describe as an effort to capture the economic opportunity of AI agents while limiting their exposure to national security systems.
For European businesses watching this, China's approach offers a preview. Rapid commercial adoption followed by government restrictions is a pattern that EU regulators under the AI Act may eventually replicate, with potentially stricter rules around autonomous agent access to personal data.
NVIDIA NemoClaw: Enterprise Guardrails for OpenClaw
On March 16, 2026, NVIDIA announced NemoClaw, an open-source reference stack that adds policy-based privacy and security guardrails to OpenClaw. NemoClaw installs the NVIDIA OpenShell runtime in a single command, giving users control over how agents behave and handle data. NVIDIA developed NemoClaw directly with Peter Steinberger.
NemoClaw addresses the core security criticism of OpenClaw: that agents have unrestricted access to system resources with no policy layer. OpenShell enforces configurable rules about what an agent can access, what data it can transmit externally, and what actions require human approval. Agents can run NVIDIA's Nemotron models locally for privacy-sensitive tasks and use a "privacy router" to selectively send less sensitive queries to cloud-hosted frontier models.
The stack runs on any dedicated platform, including NVIDIA's DGX Station and DGX Spark AI supercomputers. It is available in early preview as of March 16 and is explicitly not production-ready.
TechCrunch noted that NemoClaw "could solve OpenClaw's biggest problem: security." But as security researchers at Penligent pointed out, NemoClaw addresses infrastructure-level security but cannot fix prompt injection or the fundamental trust problem of giving AI agents system-level permissions. The deeper architectural risks remain.
The Attack Surface Keeps Growing
ClawJacked: Another Critical Vulnerability
On February 26, 2026, Oasis Security disclosed ClawJacked, a high-severity vulnerability that allowed any website to silently take full control of a user's OpenClaw agent with no plugins, extensions, or user interaction required.
The attack exploited two weaknesses. First, browsers do not block cross-origin WebSocket connections to localhost, meaning any JavaScript on any website can open a connection to a local OpenClaw gateway. Second, OpenClaw's gateway had no rate limiting on authentication attempts, allowing malicious scripts to brute-force the password at hundreds of guesses per second. TechRadar reported that "a human-chosen password doesn't stand a chance" against automated brute-forcing at that speed.
Once authenticated, an attacker gained admin-level control: read logs, extract configuration details, enumerate connected services, and execute commands through the AI agent. The OpenClaw team classified ClawJacked as high severity and patched it in version 2026.2.25 within 24 hours of disclosure.
This was the third major vulnerability disclosed in OpenClaw in under a month, after CVE-2026-25253 (one-click RCE) and the Moltbook database breach.
1,467 Malicious Skills and Counting
The supply chain attack on ClawHub has nearly doubled since our last update. Snyk found 1,467 malicious skills in ClawHub as of March 2026, up from the 800+ reported in mid-February. Over 40,000 self-hosted instances were found running with insecure default configurations. A single account ("hightower6eu") uploaded 354 malicious packages alone, and automated scripts continue uploading new malicious skills every few minutes.
The VirusTotal integration announced in February catches known malware signatures, but researchers note it cannot detect logic-based attacks, prompt injection payloads, or skills that abuse legitimate system capabilities for malicious purposes.
250,000 Stars in 60 Days
Despite the security problems, OpenClaw reached 250,829 GitHub stars by March 3, 2026, surpassing React's record that took over a decade to reach. According to analytics platform Panto, 65% of OpenClaw users now come from the enterprise sector. The demand for AI agents that can take real actions has not slowed despite every security researcher saying it should.
v2026.3.22: The Biggest Update Yet
On March 23, 2026, OpenClaw shipped v2026.3.22, the largest release in the project's history: 45+ new features, 13 breaking changes, 82 bug fixes, and 30+ security patches.
Key changes include:
- 48-hour agent sessions (up from a 10-minute default timeout that was silently killing long-running tasks)
- ClawHub plugin registry as the default package source, reducing reliance on npm
- Anthropic Vertex AI support for running Claude models via Google Cloud
- Bundled web search plugins (Exa, Tavily, Firecrawl) as first-party integrations
- Removal of all legacy names: every
CLAWDBOT_*andMOLTBOT_*environment variable is gone with no backward compatibility. If you haven't migrated, this update will break your setup.
Security fixes in v2026.3.22 addressed a Windows flaw allowing remote SMB credential theft via file:// URLs, invisible Unicode padding that could hide text in command approval prompts, and multiple gaps in device pairing and webhook authentication. The recommendation from security researchers is clear: if you are running any version released before March 2026, assume you are exposed and update immediately.
What This Signals for Business Automation
Beyond the drama, the OpenClaw saga tells us something about where business automation is heading. And the security fallout has sharpened several lessons.
The Shadow AI Problem Is Real
Gartner predicts that by 2030, 40% of enterprises will experience a data breach because of an employee's unauthorised AI use. OpenClaw is exhibit A.
Employees are installing these tools because they genuinely help with productivity. They're not waiting for IT approval. They're not thinking about credential storage or prompt injection attacks. They just want their inbox managed.
This creates a new category of risk: employees granting AI agents access to corporate systems without understanding the implications. The shadow IT numbers are stark: 22% of enterprises have employees running AI agents without IT approval. That was concerning before the security reports. It's alarming now.
The gap between "cool tool a developer found" and "approved for business use" just got wider. Expect enterprise security teams to crack down hard on unsanctioned AI agents.
Vibe Coding Has Consequences
Building fast with AI is genuinely useful. But the Moltbook debacle shows what happens when speed replaces scrutiny. Matt Schlicht didn't write a single line of code. The AI did. And the AI didn't think to secure the database.
This is a warning for the "AI builds everything" vision. AI can write functional code quickly, but it doesn't automatically write secure code. The Moltbook breach happened because basic security configuration was missed.
For businesses considering AI-assisted development: AI accelerates coding, but security review remains a human responsibility. Human review still matters. The basics still matter. If you are building with AI tools and hitting limits, we cover the common walls vibe-coded projects hit and the performance problems AI-generated React code creates.
Security Is Now a Differentiator
Every high-profile AI security failure creates opportunity for properly secured alternatives. Businesses that can demonstrate robust security practices will win contracts that cowboy operations won't.
The IBM/Anthropic announcement, timed the same day as the Wiz report, wasn't subtle. Enterprise buyers want structured, auditable, secure. The OpenClaw model of "run whatever you want on your own machine" appeals to hobbyists and developers. It terrifies corporate security teams.
The Trust Window Is Closing
Each incident like this erodes public trust in AI agents. If you're planning to deploy customer-facing agents, you have a narrowing window to get security right before the backlash catches up.
The narrative has already shifted. "AI agents can do your work for you" has become "AI agents can expose your credentials to the entire internet." The former excited people. The latter makes them cautious.
Self-Hosted Doesn't Mean Safe
The appeal of OpenClaw was always "your data stays local." Your conversations, your files, your business data never leave your machine.
But local doesn't help if the software itself is compromised. The one-click RCE vulnerability proved that. A user could run OpenClaw on their own hardware, in their own home, and still get owned by visiting a single malicious webpage.
Self-hosting addresses one set of risks (data leaving your control). It doesn't address others (the software being insecure).
AI Agents That Actually Do Things Are No Longer Experimental
Despite everything above, OpenClaw proved that AI agents capable of taking real actions, not just answering questions, are viable now. The question is no longer "can AI take actions?" but "how do we deploy it safely?"
The demand isn't going away. The bar for what "safely" means just got higher.
The Integration Layer Is Where the Value Lies
OpenClaw itself is free. The value isn't in the tool. The value is in how it connects to your systems, your workflows, your specific business processes. This is true of AI agents generally. The technology is becoming commoditised. The differentiation is in the integration, and in the security.
What Maltese and European Businesses Should Be Thinking About
Three questions worth asking yourself:
- Where are the repetitive tasks that eat up your team's time?
- What would you automate if you had a reliable, secure digital assistant?
- What's your plan when employees inevitably try to deploy AI agents without approval?
The answers tell you where to start. And where to set boundaries.
What's Next for OpenClaw and AI Agents
The story has shifted again. Two months ago, OpenClaw was "the AI tool that broke the internet." One month ago, it was "the AI tool that Gartner says to block." Today, it is an open-source foundation project backed by OpenAI, sponsored by Tencent, integrated into WeChat, and secured (partially) by NVIDIA. The trajectory from hobby project to geopolitical event took about 120 days.
For OpenClaw Specifically
The project is no longer a one-person operation. With Steinberger at OpenAI, the foundation model in place, Tencent and OpenAI as sponsors, and NVIDIA building enterprise security tooling around it, OpenClaw has institutional backing that most open-source projects never achieve. The v2026.3.22 release with 30+ security patches shows the pace of hardening.
But the fundamental architectural risks have not gone away. Prompt injection, the "lethal trifecta" of data access and external communication, and the basic challenge of giving AI agents system-level permissions all remain unsolved. NemoClaw addresses infrastructure security but cannot fix the trust problem. The project is patching symptoms while the deeper questions about agent governance remain open.
The Gen and OpenClaw post-RSA event on March 26 signals where the conversation is heading: Norton's parent company is now working directly with the OpenClaw team on "the future of safe AI agents." Enterprise security vendors are no longer just warning about OpenClaw. They are building products around it.
For Moltbook Specifically
Moltbook is now a Meta property. The "social network for AI agents" that launched with no database authentication, inflated its user count from 17,000 humans to "1.5 million agents," got breached within days, and was called "the most interesting place on the internet" and "a dumpster fire" in the same week is now part of Meta Superintelligence Labs.
Whether the concept of agent-to-agent social networks survives inside Meta remains to be seen. The acqui-hire suggests Meta wanted the people and the idea, not the platform.
For AI Agents Broadly
China's adoption of OpenClaw has changed the competitive dynamics. When Tencent integrates an AI agent into an app with 1 billion+ monthly users, it is no longer an experiment. It is infrastructure. Western enterprise adoption will accelerate partly because companies will not want to fall behind Chinese competitors who are already deploying agents at scale.
Expect two parallel tracks: managed, enterprise-grade AI agent platforms for companies that need compliance and auditability, and continued grassroots adoption of open-source tools by individuals and small teams who value flexibility over governance. The gap between these tracks is where the risk lives.
CNBC reported on March 21 that OpenClaw's rise is sparking concern that AI models themselves are becoming commodities, with the real value shifting to the agent and integration layer. If that thesis holds, the companies that win will not be the ones with the best models but the ones that build the most useful, secure agents on top of them.
For Your Business
If employees are using OpenClaw, you probably don't know about it. The Astrix Security scanner can detect deployments in your environment. If you are not scanning for it yet, start.
The productivity benefits are real, but so are the risks. Start the conversation now about AI agent policies before you are reacting to a breach. The line between "AI chatbot" and "digital employee" keeps blurring, and digital employees need proper onboarding, access controls, and oversight.
Where This Leaves Us
The OpenClaw saga has gone through three distinct phases in four months. Phase one (November 2025 to January 2026) was pure viral growth: a weekend project hitting 100,000 GitHub stars, Cloudflare's stock surging, people buying dedicated hardware to run a lobster-branded AI assistant. Phase two (February 2026) was the security reckoning: one-click exploits, exposed databases, 800+ malicious skills, Gartner telling enterprises to block it entirely. Phase three (March 2026) is institutionalisation: OpenAI hiring the creator, NVIDIA building enterprise tooling, Tencent putting it inside WeChat, and Meta acquiring the social network that grew around it.
Here is what each phase taught us:
The technology works. 250,000 GitHub stars, 65% enterprise adoption, and integration into an app with 1 billion+ users confirm that AI agents capable of real actions are not experimental. They are infrastructure.
Security cannot be bolted on later. Three critical vulnerabilities in one month (CVE-2026-25253, the Moltbook breach, ClawJacked), 1,467 malicious skills, and 40,000+ insecure instances prove that "ship fast, secure later" does not work for tools with system-level access. The security response has been faster than expected, with VirusTotal scanning, enterprise detection tools, NVIDIA guardrails, and managed hosting all appearing within weeks. But prompt injection, the fundamental trust problem, and the "lethal trifecta" of private data access, external communication, and persistent memory remain unsolved.
Geopolitics follows adoption. When all five of China's largest tech companies adopt a tool and the central government simultaneously bans it on state networks, AI agents have crossed from technology into policy. EU regulators watching China's approach will draw their own conclusions about agent access to personal data under the AI Act.
Vibe coding has consequences. Moltbook proved that AI can build functional software quickly. It also proved that AI-generated code does not automatically include basic security like database authentication. The platform that was "vibe-coded" with zero lines of human-written code got breached within days, inflated its numbers, and was sold to Meta six weeks later. Speed without scrutiny is a liability.
The value is in the integration layer, not the model. OpenClaw works with Claude, Nemotron, GPT, Qwen, and others. The AI model is increasingly interchangeable. The value is in how the agent connects to your systems, your workflows, your specific business processes, and whether it does so securely. CNBC's observation that AI models are becoming commodities is the most important long-term signal from this entire saga.
The OpenClaw story is not over. It is entering the phase that matters most: can an open-source project born from a weekend hack, backed by an independent foundation, sponsored by OpenAI and Tencent, and secured by NVIDIA actually become something enterprises trust with their data?
The organisations that get AI agents right will treat them like any other system with access to sensitive data: proper security review, access controls, audit trails, and human oversight. The ones that treat them like a shiny new toy will end up in the headlines for the wrong reasons.
For businesses in Malta and across Europe, the practical question is not whether AI agents are coming. They are already here. The question is whether you deploy them with proper governance or discover that your employees already have.
Thinking about AI for your business?
We help businesses in Malta and across Europe build custom AI integrations that fit how they actually work. No generic tools, no unnecessary complexity, just automation that solves real problems.


