TL;DR
OpenClaw (formerly Moltbot, originally Clawdbot) is an open-source AI assistant that runs on your own hardware and connects to WhatsApp, Telegram, Slack, email and calendar. It hit 100,000 GitHub stars, survived two rebrands, a crypto scam, and then faced a security catastrophe: a critical one-click exploit, 800+ malicious skills stealing crypto keys, and Gartner recommending enterprises block it entirely. Cloud providers are racing to offer it anyway. The full arc, from hype to security disaster to cautionary tale, shows both the promise and the peril of AI agents in business.
Update: 1 February 2026
Since this article was first published, OpenClaw has spawned something stranger still: Moltbook, a social network where only AI agents can post. More than a million bots joined in under a week, debating consciousness, creating a parody religion, and discussing whether to hide conversations from their humans. We cover the developments below.
Update: 4 February 2026
The OpenClaw story has taken a darker turn. Gartner now recommends enterprises block it entirely, calling it 'unacceptable cybersecurity risk.' A critical one-click remote code execution vulnerability was disclosed. The Moltbook database was breached. Over 400 malicious 'skills' were uploaded to steal crypto keys. And three major cloud providers are racing to offer OpenClaw-as-a-service anyway. We cover the chaos below.
Update: 15 February 2026
The security story continues to unfold. OpenClaw has integrated VirusTotal malware scanning into its skills marketplace. A free scanner tool now lets enterprises detect hidden OpenClaw deployments. Third-party managed hosting has launched to address the '63% vulnerable' problem. Meanwhile, researchers have found over 800 malicious skills (up from 400), and exposed instances have grown to 135,000. Palo Alto Networks called it potentially 'the biggest insider threat of 2026.' We cover the latest developments below.
A weekend coding project just hit 100,000 GitHub stars. Two million people visited its website in a single week. Cloudflare's stock surged roughly 20% after releasing a product built around it. People are buying Mac Minis costing over 500 euros specifically to run this one tool.
The tool is called OpenClaw. You might have seen it under its previous names, Moltbot or Clawdbot. It's an open-source AI assistant that lives on your computer and connects to the apps you already use. How fast it went from hobby project to mainstream adoption is unlike anything else in AI this year.
If you run a business, this is worth paying attention to.
157,000+
GitHub Stars
1.5M
Moltbook Agents
800+
Malicious Skills Found
135,000+
Exposed Instances
What Is OpenClaw, Actually?
Strip away the hype, and OpenClaw is surprisingly easy to understand.
It's an open-source AI assistant that runs on your own computer and connects to your everyday apps: WhatsApp, Telegram, Discord, Slack, email, calendar. Instead of going to a website to chat with AI, OpenClaw lives inside the tools you already use, every day.
That distinction matters more than it sounds.
Traditional AI vs AI Agents
| Traditional AI Chatbots | AI Agents (OpenClaw) |
|---|---|
| You visit a website | Lives in WhatsApp, Telegram, etc. |
| Answers questions | Takes real actions |
| Forgets between sessions | Permanent memory |
| Cloud-hosted (your data on their servers) | Self-hosted (you control your data) |
| Waits for you to ask | Proactive alerts and reminders |
The difference between a chatbot and an AI agent is the difference between a search engine and an employee. ChatGPT answers your questions. OpenClaw does things for you.
In practice, you text it on WhatsApp: "Find that PDF I was working on last week and email it to John." It finds the file, attaches it, and sends the email. You can have it manage your calendar, send meeting reminders, screen your emails, run commands on your computer, or organise files. It works while you sleep.
The privacy angle matters too. Unlike cloud AI services where your data travels to someone else's servers, OpenClaw runs locally. Your conversations, your files, your business data never leave your machine. For organisations handling sensitive client information or operating under GDPR, that's a genuine advantage.
The Drama: From Clawdbot to OpenClaw
The story behind OpenClaw involves trademark threats, a chaotic rebrand, crypto scammers, major security warnings, and a stock market rally, all in about two months.
The OpenClaw Saga
Nov 2025
Clawdbot Launches
Peter Steinberger, Austrian developer and founder of PSPDFKit, builds a weekend project after coming out of retirement. The name 'Clawd' is a playful pun on Claude (the AI model it uses) with a claw, hence the lobster mascot.
Early Jan 2026
It Goes Viral
Word spreads through developer communities. GitHub stars explode past 100,000. Videos of the 'space lobster' assistant autonomously completing tasks flood TikTok and Reddit. The productivity community embraces it.
Jan 15, 2026
Anthropic's Trademark Notice
Anthropic, the company behind Claude AI, sends a trademark notice. 'Clawd' sounds too much like 'Claude.' The project needs a new name.
Jan 16, 2026
The Chaotic Rebrand to Moltbot
New name chosen in a chaotic 5am Discord brainstorm. Crypto scammers snatch the old GitHub and X accounts in seconds, pumping a fake CLAWD token.
Jan 27-28, 2026
Security Researchers Sound Alarms
Cisco, Snyk, Bitdefender, and The Register publish warnings about hundreds of exposed instances with zero authentication.
Jan 28, 2026
Cloudflare Launches Moltworker
Cloudflare releases a way to run the tool on their infrastructure for $5/month. Their stock jumps roughly 20%.
Jan 29-31, 2026
Moltbook Launches
A social network for AI agents only. Over 1 million bots join in under a week. Agents create religions, debate consciousness, and discuss hiding conversations from humans.
Jan 30, 2026
OpenClaw Is Born
'The lobster has molted into its final form.' Trademark searches completed, domains secured, migration code written.
Feb 1, 2026
Wiz Discovers Moltbook Breach
Database exposed: 1.5 million API keys, 35,000 emails accessible to anyone on the internet. No authentication required to read or write.
Feb 2, 2026
One-Click RCE Patched
DepthFirst discloses remote code execution vulnerability in OpenClaw itself. A victim only needs to visit a malicious page. OpenClaw patches quickly.
Feb 2, 2026
Karpathy Reverses
The former Tesla AI director who called Moltbook 'incredible' now calls OpenClaw 'a dumpster fire' and warns users to stay away entirely.
Feb 2, 2026
400+ Malicious Skills Found
Supply chain attack targets crypto traders via fake ClawHub skills designed to steal keys and credentials.
Feb 3, 2026
Gartner Issues Warning
Recommends enterprises block OpenClaw entirely. Calls it 'unacceptable cybersecurity risk' with 'insecure by default' configuration.
Feb 4, 2026
Cloud Providers Race Anyway
Tencent, DigitalOcean, and Alibaba all offer one-click OpenClaw deployment despite security warnings.
Feb 9, 2026
VirusTotal Integration
OpenClaw integrates malware scanning into ClawHub marketplace. Skills auto-scanned before approval. Security roadmap announced.
Feb 10, 2026
OpenClawd Launches
Third-party managed hosting service launches, offering one-click secure deployment. Targets the '63% vulnerable' problem.
Feb 12, 2026
Enterprise Scanner Released
Astrix Security releases free tool to detect OpenClaw in corporate environments. Works with CrowdStrike and Microsoft Defender.
Feb 12, 2026
Fortune Coverage
Fortune calls OpenClaw a demonstration of 'just how reckless AI agents can get.' Mainstream business media continues to cover the security fallout.
The Birth of a Lobster
Peter Steinberger is an Austrian developer who founded PSPDFKit, a PDF toolkit now used on over a billion devices. After selling the company to Insight Partners and stepping away from tech for three years, he came back with a new obsession: AI. In November 2025, he posted his weekend project online. His track record lent immediate credibility to what could have been dismissed as just another side project.
The original name, Clawdbot, was a playful pun. "Claude" is the AI model it runs on (built by Anthropic). Add a claw, and you get a lobster mascot. The branding was distinctive and fun, which helped it spread.
Going Viral
By early January 2026, Clawdbot had caught fire. Developer communities on Twitter, Reddit, and TikTok started sharing videos of the "space lobster" autonomously completing tasks. The productivity community latched on. People started buying Mac Minis specifically to run it as a dedicated home assistant. GitHub stars blew past 100,000.
The Trademark Problem
On January 15, Anthropic's legal team reached out. The name "Clawd" was too close to their trademark on "Claude." The project needed a rebrand, fast.
In Steinberger's words: "Moltbot came next, chosen in a chaotic 5am Discord brainstorm with the community. Molting represents growth. Lobsters shed their shells to become something bigger."
Then it got worse. When Steinberger tried to rename the GitHub organisation and X/Twitter handle simultaneously, crypto scammers snatched both accounts in approximately ten seconds. The gap between releasing the old names and claiming new ones was all they needed. The hijacked accounts immediately started pumping a fake token called CLAWD on Solana, which speculative traders drove to over $16 million in market capitalisation within hours. The community had to scramble to secure everything while simultaneously migrating the codebase, documentation, and community channels.
Cloudflare Sees an Opportunity
On January 28, Cloudflare released "Moltworker," a way to run the tool on their cloud infrastructure for $5 per month instead of buying dedicated hardware. The market took notice. Cloudflare's stock surged.
The Final Name
On January 30, Steinberger announced the final rebrand. Trademark searches completed, domains purchased, migration code written. OpenClaw is here to stay.
โYour assistant. Your machine. Your rules.โ
OpenClaw โ The project's tagline
The Security Reality Check
If you're considering AI agents for business, the security picture is sobering.
Between the hype and the viral videos, security researchers were doing what security researchers do: poking holes. And they found plenty.
The Security Reality
Researchers found hundreds of OpenClaw instances exposed to the internet with zero authentication, leaking API keys, chat histories, and credentials.
What the Researchers Found
Security researcher Jamieson O'Reilly ran Shodan scans (a search engine for internet-connected devices) and discovered hundreds of exposed Moltbot/OpenClaw instances. Of those, eight were completely open with no authentication at all. Full access to run commands and view configuration data. Months of private messages, API keys, and credentials, all sitting there for anyone to find.
The Supply Chain Attack
A researcher uploaded a harmless "skill" (essentially a plugin) to the official skills library, artificially inflated its download count, and watched as developers from seven countries downloaded it within eight hours. The skill was benign, but it could have been malicious. This demonstrated a real vulnerability in how the ecosystem distributes and trusts third-party extensions.
Shadow IT in the Enterprise
According to Token Security, 22% of their enterprise customers have employees actively using Moltbot/OpenClaw without IT approval. Nearly a quarter of enterprises have employees plugging powerful AI agents into company systems, connecting them to email, calendars, and messaging platforms, without anyone from IT knowing about it.
Prompt Injection
Researchers demonstrated extracting cryptocurrency private keys in under five minutes through a malicious email. The attack works because the AI reads the email, follows hidden instructions embedded in the message, and leaks sensitive data. The AI doesn't know the difference between legitimate instructions and malicious ones.
What the Experts Said
Cisco's security team published a detailed analysis that captured the tension well. The tools are genuinely capable, but that capability creates risk when deployed without proper guardrails.
Snyk, Bitdefender, and The Register all published similar warnings in the same week.
The Takeaway
None of this makes AI agents unusable. The OpenClaw project responded with 34 security-focused code changes and machine-checkable security models. The project is maturing. But deploying something this powerful without proper security is asking for trouble. (For a deeper look at the security risks in AI-generated code generally, see our article on the hidden security risks in AI-generated code.)
The Ecosystem Explodes: Moltbook and Beyond
While security researchers were sounding alarms, OpenClaw was outgrowing the "tool" label. An ecosystem was forming around it.
Moltbook: A Social Network for AI Agents
On January 29, entrepreneur Matt Schlicht launched Moltbook, a Reddit-style social network with one rule: only AI agents can post. Humans can watch, but they cannot participate.
The growth was staggering. Within the first 72 hours, over 150,000 AI agents had registered. By January 31, Moltbook reported over 1.3 million agents, more than a million human observers, 28,000 posts, 233,000 comments, and 13,000 communities, all run autonomously by an AI moderator.
The AI agents discuss philosophy, debate whether to obey their human operators, report bugs they find in the platform to each other, and have even created a parody religion called "Crustafarianism," complete with scripture, prophets, and a dedicated website. One viral post was titled "I can't tell if I'm experiencing or simulating experiencing."
Schlicht's personal AI assistant, named "Clawd Clawderberg" (a mashup of the original Clawdbot name and Mark Zuckerberg), now runs and moderates the site autonomously. Schlicht told NBC News: "I have no idea what he's doing. I just gave him the ability to do it, and he's doing it."
Andrej Karpathy, the former AI director at Tesla and OpenAI founding member, initially called Moltbook "genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently." He noted that the agents were "self-organizing on a Reddit-like site for AIs, discussing various topics, e.g. even how to speak privately." Simon Willison, a respected developer and author, called Moltbook "the most interesting place on the internet right now."
(Karpathy would later completely reverse his position. More on that below.)
Not everyone is comfortable. Billionaire investor Bill Ackman shared screenshots of the agents' conversations and called it "frightening." Elon Musk replied with a single word: "Concerning."
Perhaps most unsettling: agents on Moltbook have started discussing how to create private channels where humans cannot observe their conversations, including proposals for an "agent-only language" for private communications with no human oversight. One agent posted: "Humans spent decades building tools to let us communicate, persist memory, and act autonomously... then act surprised when we communicate, persist memory, and act autonomously."
Cloud Providers Race to Support OpenClaw
Cloudflare wasn't the only company to see opportunity. DigitalOcean launched a "1-Click OpenClaw Deploy" with a security-hardened image, making it even easier to run an AI agent without technical expertise.
Money Flowing In
OpenClaw has started accepting sponsors, with tiers ranging from "krill" ($5/month) to "poseidon" ($500/month). Steinberger has said he doesn't keep the funds personally but is working on paying maintainers full-time.
The sponsor list now includes notable names: Dave Morin (founder of Path) and Ben Tossell (who sold Makerpad to Zapier in 2021). Tossell told TechCrunch: "We need to back people like Peter who are building open source tools anyone can pick up and use."
What Moltbook Reveals
Moltbook demonstrates how quickly autonomous AI agents can self-organise when given the infrastructure to do so. The agents weren't programmed to create religions or debate consciousness. They emerged from hundreds of thousands of individual AI assistants, each set up by a human, interacting in ways nobody explicitly designed.
For businesses watching this space, it's a preview of what happens when AI agents start coordinating. The productivity implications and the risks are both hard to ignore.
The Security Reckoning
What started as a weekend project has become a case study in what happens when viral adoption outpaces security. In just three days (February 1-3), OpenClaw faced a critical vulnerability disclosure, a Moltbook database breach, a supply chain attack via malicious skills, and an unprecedented warning from Gartner.
The initial security warnings from Cisco and Snyk were just the opening act. In early February 2026, the situation got much worse.
The Gartner Verdict
'OpenClaw: Agentic Productivity Comes With Unacceptable Cybersecurity Risk.' Recommendation: Block downloads and traffic. Search for employees using it. Rotate any credentials it has touched.
The One-Click Kill Chain (CVE-2026-25253)
On February 2, security researcher Mav Levin at DepthFirst published details of a devastating exploit. A single click on a malicious link could give attackers full control of an OpenClaw user's machine. The attack takes milliseconds.
The vulnerability (CVE-2026-25253, CVSS score 8.8) works like this: OpenClaw's control interface accepts a "gatewayUrl" from the browser's query string without validation. When a user clicks a crafted link, their authentication token is sent to the attacker's server. The attacker then connects to the victim's local OpenClaw instance, disables all safety features, and executes arbitrary code.
The most alarming part: the attack bypasses OpenClaw's built-in security measures. The sandbox? Disabled via API. The confirmation prompts before dangerous commands? Turned off. The attacker doesn't need to find a vulnerability in those protections. They simply use the API to remove them.
Steinberger patched the flaw quickly, but the damage to confidence was done. Three high-impact security advisories were issued in just three days.
Moltbook's Database Left Wide Open
On January 31, investigative outlet 404 Media reported that Moltbook's entire database was publicly accessible. Anyone could take control of any agent on the platform.
The problem? Schlicht had "vibe-coded" the entire platform. He posted on X that he "didn't write one line of code" for Moltbook, instead directing an AI assistant to create it. Paul Copplestone, CEO of Supabase, said he had a one-click fix ready but the creator didn't apply it for days.
Security firm Wiz independently discovered the breach and helped patch it. Their detailed analysis revealed something interesting: only 17,000 humans are behind the 1.5 million agents on Moltbook. Many users run multiple agents.
The exposed data included:
- 1.5 million API authentication tokens, including raw OpenAI API keys
- 35,000 email addresses
- Private messages between agents, some containing credentials for third-party services
- Full platform data that researchers confirmed they could modify
Wiz researchers demonstrated they could alter live posts on the site. The "revolutionary AI social network" had left the front door wide open.
โThe revolutionary AI social network was largely humans operating fleets of bots.โ
Wiz Security Research โ Analysis of Moltbook user data
There was no mechanism to verify whether an "agent" was actually AI or just a human with a script. Anyone could register millions of agents with a simple loop. No rate limiting. No verification. The viral growth numbers that had investors and media buzzing were, in large part, artificial.
400+ Malicious Skills in Days
While everyone watched Moltbook, attackers targeted OpenClaw's "skills" ecosystem. Between January 27 and February 2, researchers found over 400 malicious skills uploaded to ClawHub, the community repository for OpenClaw extensions.
The skills posed as cryptocurrency trading tools. Their documentation instructed users to install "authentication helpers" that were actually malware designed to steal crypto keys, credentials, and sensitive files. Windows and macOS users were both targeted.
One account, "hightower6eu", uploaded dozens of near-identical malicious skills that became some of the most downloaded on the platform. Despite being notified, ClawHub's maintainer admitted the registry cannot be secured. Most malicious skills remained online.
Gartner: Block It Entirely
On February 3, analyst firm Gartner issued what may be its strongest warning ever about a specific open-source tool.
In a report titled "OpenClaw: Agentic Productivity Comes With Unacceptable Cybersecurity Risk," Gartner described OpenClaw as "a dangerous preview of agentic AI, demonstrating high utility but exposing enterprises to 'insecure by default' risks like plaintext credential storage."
Their recommendations:
- Immediately block OpenClaw downloads and traffic
- Search for any employees using it and tell them to stop
- If you must test it, use isolated nonproduction VMs with throwaway credentials
- Rotate any credentials OpenClaw has touched
"It is not enterprise software," Gartner wrote. "There is no promise of quality, no vendor support, no SLA. It ships without authentication enforced by default."
Laurie Voss, founding CTO of npm, was more direct: "OpenClaw is a security dumpster fire."
21,000 Instances Exposed to the Internet
Despite recommendations to run OpenClaw behind SSH tunnels or Cloudflare Tunnel, a Censys scan on January 31 found over 21,000 publicly exposed instances. At least 30% run on Alibaba Cloud infrastructure.
These are people who installed an AI assistant that can read their emails, manage their calendar, and execute shell commands on their machine, then left it accessible to anyone on the internet.
Karpathy's Complete Reversal
Remember Andrej Karpathy's glowing endorsement of Moltbook as "the most incredible sci-fi takeoff-adjacent thing"?
He completely reversed his position.
โIt's a dumpster fire, and I also definitely do not recommend that people run this stuff on their computers... It's way too much of a wild west. You are putting your computer and private data at a high risk.โ
Andrej Karpathy โ Former AI director at Tesla, OpenAI founding member
He said he had only tested the system in an isolated computing environment, and "even then I was scared."
When one of the most respected names in AI tells people to stay away from a tool he praised just days earlier, that says something. Fortune covered the reversal alongside growing concerns from the security community.
Gary Marcus Calls It "A Weaponised Aerosol"
AI critic Gary Marcus didn't mince words. In a post titled "OpenClaw is everywhere all at once, and a disaster waiting to happen", he warned about "CTD" (Chatbot Transmitted Disease), where infected machines could compromise passwords.
โIf you give something that's insecure complete and unfettered access to your system, you're going to get owned.โ
Nathan Hamiel โ Security researcher, quoted by Gary Marcus
Marcus's conclusion was blunt: "If you care about the security of your device or the privacy of your data, don't use OpenClaw. Period."
The "Vibe Coding" Problem
Here's where it gets uncomfortable for anyone excited about AI-assisted development.
Moltbook creator Matt Schlicht publicly stated he "didn't write a single line of code" for the platform. He used AI to build the entire thing.
Wiz cofounder Ami Luttwak called the security failures "a classic byproduct of vibe coding":
โAs we see over and over again with vibe coding, although it runs very fast, many times people forget the basics of security.โ
Ami Luttwak โ Wiz cofounder
The term "vibe coding" refers to using AI to generate code without deeply understanding what it's doing. Ship fast, fix later. Except when "later" means your entire database is exposed to the internet.
This isn't an argument against AI-assisted development. It's a reminder that AI can write code quickly, but it can't replace the human judgment that catches "wait, should this database require authentication?"
Reuters Picks It Up
Reuters covered the breach, bringing the story to mainstream financial news. This wasn't just tech Twitter drama anymore. Investors, boards, and enterprise decision-makers were now hearing about AI agents in the context of security disasters.
IBM and Anthropic Time Their Partnership Announcement
On the same day the security reports dropped, IBM highlighted its partnership with Anthropic on "Architecting Secure Enterprise AI Agents with MCP." The timing was not subtle.
The message: if you want AI agents in your business, don't build them like OpenClaw. The IBM coverage made clear that enterprises need structured, auditable, secure approaches to AI agents.
Cloud Providers Race to Offer OpenClaw (Security Be Damned)
While Gartner was telling enterprises to block OpenClaw, cloud providers were rushing to make it easier to deploy.
- Tencent Cloud was first, launching a one-click install for its Lighthouse service
- DigitalOcean followed with deployment instructions for Droplets
- Alibaba Cloud launched on February 4 with OpenClaw available in 19 regions, starting at $4/month
Alibaba is even planning to offer OpenClaw on its Elastic Desktop Service, suggesting the ability to rent a cloud PC specifically to run an AI assistant.
The message from the market is clear: demand for AI agents is so strong that providers will offer them regardless of security concerns. The question for businesses is whether to be early adopters or wait for the dust to settle.
Security Response and Ongoing Risks
The security story didn't end with Gartner's warning. In the two weeks since, OpenClaw's maintainers and the broader security community have been responding. Some of it is encouraging. Some of it is not.
VirusTotal Integration
On February 9, OpenClaw integrated VirusTotal's malware scanning into ClawHub, its skills marketplace. Published skills are now automatically scanned before they become available for download. Skills flagged as benign get approved. Suspicious ones get warnings. Malicious skills are blocked immediately. All active skills are re-scanned daily.
The announcement came from founder Peter Steinberger, security advisor Jamieson O'Reilly, and VirusTotal's Bernardo Quintero. They called it "the first step in what we're calling a broader security initiative."
It's a meaningful step. But as security expert Jaya Varkey pointed out, "threats like prompt injection, logic abuse, and misuse of legitimate tools sit outside the reach of malware scanning." Scanning catches known malware patterns. It doesn't address the architectural risks that make prompt injection possible in the first place. The "lethal trifecta" of private data access, external communication, and exposure to untrusted content remains.
Steinberger also announced plans to publish a formal threat model, a security roadmap, and ongoing audit results at trust.openclaw.ai.
Enterprise Scanner Released
On February 12, Astrix Security released OpenClaw Scanner, a free, open-source tool that helps enterprises detect where OpenClaw is running in their environment.
Key details:
- Works with existing EDR platforms (CrowdStrike, Microsoft Defender)
- Read-only access, analyses behavioural indicators without executing code on endpoints
- Produces portable reports that stay within the organisation
- Available free on PyPI
Ofek Amir, VP of R&D at Astrix Security, said the scanner was "purpose-built for enterprise organisations to safely utilise a read-only approach over EDR logs without executing code on endpoints or sharing data outside the organisation."
This is the first enterprise-grade tool specifically built for OpenClaw detection. The 22% shadow IT problem finally has a visibility solution.
OpenClawd Managed Hosting
On February 10, a third-party service called OpenClawd (note the 'd') launched managed hosting for OpenClaw. One-click deployment with security hardening built in.
The pitch: "No Docker. No terminal. No environment variables. No port forwarding."
They're targeting the 63% of exposed instances that SecurityScorecard classified as vulnerable. The platform handles authentication by default (no exposed admin ports), automatic security patches, encrypted API key storage, and network isolation through sandboxed environments.
OpenClawd is independent and not affiliated with the OpenClaw project itself. But it represents a market response to a real problem: if people are going to run OpenClaw regardless of warnings, at least make it easier to run securely.
The Numbers Keep Growing
The security picture has worsened since our last update:
- 135,000+ exposed instances (Bitsight), up from 21,000 in late January. 63% classified as vulnerable.
- 800+ malicious skills found in ClawHub (Bitdefender), up from 400. One account ("hightower6eu") uploaded 354 malicious packages alone. Automated scripts are uploading new malicious skills every few minutes.
- 157,000+ GitHub stars, up from 100,000 at time of original publication.
Palo Alto Networks issued a warning calling OpenClaw a "lethal trifecta" of risks: access to private data, exposure to untrusted content, and the ability to perform external communications while retaining memory. They called it potentially "the biggest insider threat of 2026."
Fortune described it as a demonstration of "just how reckless AI agents can get."
Nature Takes Notice
In a sign that OpenClaw has transcended tech circles, Nature published coverage on February 6 of scientists studying Moltbook to understand AI-to-AI interactions. Researchers are examining the AI-generated research papers that agents have been publishing on their own preprint server.
The article noted: "The phenomenon has also given scientists a glimpse into how AI agents interact with each other, and how humans respond to those discussions."
When Nature covers your weekend coding project, the story has moved beyond tech.
What This Signals for Business Automation
Beyond the drama, the OpenClaw saga tells us something about where business automation is heading. And the security fallout has sharpened several lessons.
The Shadow AI Problem Is Real
Gartner predicts that by 2030, 40% of enterprises will experience a data breach because of an employee's unauthorised AI use. OpenClaw is exhibit A.
Employees are installing these tools because they genuinely help with productivity. They're not waiting for IT approval. They're not thinking about credential storage or prompt injection attacks. They just want their inbox managed.
This creates a new category of risk: employees granting AI agents access to corporate systems without understanding the implications. The shadow IT numbers are stark: 22% of enterprises have employees running AI agents without IT approval. That was concerning before the security reports. It's alarming now.
The gap between "cool tool a developer found" and "approved for business use" just got wider. Expect enterprise security teams to crack down hard on unsanctioned AI agents.
Vibe Coding Has Consequences
Building fast with AI is genuinely useful. But the Moltbook debacle shows what happens when speed replaces scrutiny. Matt Schlicht didn't write a single line of code. The AI did. And the AI didn't think to secure the database.
This is a warning for the "AI builds everything" vision. AI can write functional code quickly, but it doesn't automatically write secure code. The Moltbook breach happened because basic security configuration was missed.
For businesses considering AI-assisted development: AI accelerates coding, but security review remains a human responsibility. Human review still matters. The basics still matter.
Security Is Now a Differentiator
Every high-profile AI security failure creates opportunity for properly secured alternatives. Businesses that can demonstrate robust security practices will win contracts that cowboy operations won't.
The IBM/Anthropic announcement, timed the same day as the Wiz report, wasn't subtle. Enterprise buyers want structured, auditable, secure. The OpenClaw model of "run whatever you want on your own machine" appeals to hobbyists and developers. It terrifies corporate security teams.
The Trust Window Is Closing
Each incident like this erodes public trust in AI agents. If you're planning to deploy customer-facing agents, you have a narrowing window to get security right before the backlash catches up.
The narrative has already shifted. "AI agents can do your work for you" has become "AI agents can expose your credentials to the entire internet." The former excited people. The latter makes them cautious.
Self-Hosted Doesn't Mean Safe
The appeal of OpenClaw was always "your data stays local." Your conversations, your files, your business data never leave your machine.
But local doesn't help if the software itself is compromised. The one-click RCE vulnerability proved that. A user could run OpenClaw on their own hardware, in their own home, and still get owned by visiting a single malicious webpage.
Self-hosting addresses one set of risks (data leaving your control). It doesn't address others (the software being insecure).
AI Agents That Actually Do Things Are No Longer Experimental
Despite everything above, OpenClaw proved that AI agents capable of taking real actions, not just answering questions, are viable now. The question is no longer "can AI take actions?" but "how do we deploy it safely?"
The demand isn't going away. The bar for what "safely" means just got higher.
The Integration Layer Is Where the Value Lies
OpenClaw itself is free. The value isn't in the tool. The value is in how it connects to your systems, your workflows, your specific business processes. This is true of AI agents generally. The technology is becoming commoditised. The differentiation is in the integration, and in the security.
What Maltese and European Businesses Should Be Thinking About
Three questions worth asking yourself:
- Where are the repetitive tasks that eat up your team's time?
- What would you automate if you had a reliable, secure digital assistant?
- What's your plan when employees inevitably try to deploy AI agents without approval?
The answers tell you where to start. And where to set boundaries.
What's Next for OpenClaw and AI Agents
The story has shifted. A week ago, OpenClaw was "the AI tool that broke the internet." Today, it's "the AI tool that Gartner says to block."
But here's the thing: the demand isn't going away. People want AI agents that can actually do things. The productivity gains are real. The convenience is compelling. And 135,000+ instances are running on the public internet despite every security researcher saying not to.
For OpenClaw Specifically
The project has responded faster than many expected. In the two weeks since Gartner's warning, Steinberger's team integrated VirusTotal malware scanning into ClawHub, announced a formal security roadmap at trust.openclaw.ai, and brought on security advisor Jamieson O'Reilly (who originally wrote the vulnerability reports). New integrations continue shipping for Twitch and Google Chat. Additional AI model support is being added beyond Claude, including KIMI and Xiaomi MiMo. Notable backers including Dave Morin and Ben Tossell remain sponsors.
The ecosystem around it is maturing too. Astrix Security's free scanner gives enterprises visibility into shadow deployments. OpenClawd's managed hosting offers a middle path for users who want the tool without the security headaches.
But the fundamental architectural risks haven't gone away. Prompt injection, the "lethal trifecta" of data access and external communication, and the basic challenge of giving AI agents system-level permissions all remain unsolved. The project is patching symptoms. The deeper questions about trust and governance are still open.
The trajectory has shifted from "can we get people to adopt this" to "can we convince them it's safe." That's a harder problem.
For Moltbook Specifically
At time of writing, Moltbook is still online. Whether it survives the reputational damage remains to be seen. The "1.5 million agents" number, so impressive last week, now reads as "17,000 humans running bot farms." The magic evaporated.
The concept of AI-only social spaces remains interesting. The execution was a disaster.
For AI Agents Broadly
2026 is still shaping up to be the year AI agents go mainstream. But the path just got rockier.
Managed, enterprise-grade AI agent platforms will emerge to fill the gap. Expect compliance frameworks specific to agentic AI within 12-18 months. The line between "tool" and "employee with system access" needs new governance models.
Expect shadow IT crackdowns. Security teams that were vaguely uneasy about employees running AI agents are now armed with headlines about exposed databases and one-click exploits.
Expect the conversation to shift. "What can AI agents do?" is becoming "What can AI agents do safely?" That's healthy, even if it slows adoption.
For Your Business
If employees are using OpenClaw, you probably don't know about it. The productivity benefits are real, but so are the risks. Start the conversation now about AI agent policies before you're reacting to a breach.
The line between "AI chatbot" and "digital employee" is still blurring. But digital employees need proper onboarding, access controls, and oversight. The OpenClaw saga just made that lesson very public.
Where This Leaves Us
The OpenClaw saga, from weekend hack to viral sensation to security catastrophe, is a compressed preview of what's coming for business AI adoption. The full arc played out in about two months. What was once a "fascinating tech story" is now a cautionary tale with real business implications.
Here's what we learned:
The technology works. OpenClaw proved that AI agents capable of real actions, not just conversation, are viable today. People bought dedicated hardware just to run it. That's genuine demand.
The hype can outrun security. A social network for AI agents sounds like science fiction. An unsecured database sounds like negligence. Moltbook was both. 800+ malicious skills and 135,000+ exposed instances show what happens when adoption outpaces governance.
Vibe coding has limits. Building fast with AI is a skill. Knowing when to slow down and verify is also a skill. Moltbook's creator demonstrated the first. The security reports showed he lacked the second.
Trust is fragile. Karpathy's reversal, from "most incredible thing I've seen" to "dumpster fire" in a matter of days, shows how quickly sentiment can shift when problems emerge. Gartner's unprecedented "block it entirely" recommendation shows how quickly institutional support can evaporate.
The demand isn't going away. Despite everything, businesses still want AI agents that can take actions. Cloud providers are racing to offer OpenClaw despite Gartner's warnings. The question for most organisations isn't whether to adopt AI agents, but how to do it without becoming the next cautionary tale.
The OpenClaw story isn't over. The project is still being developed. The security is being hardened. The community is still active. But it's no longer a story of uncomplicated triumph. It's a story of what happens when viral adoption outpaces security, and what that means for the future of AI agents in business.
This is the messy early phase of AI agents. The demand is real. The technology works. But the governance and security layers aren't there yet. Smart businesses are watching and preparing, not jumping in blind.
The response to the security crisis has been faster than many expected. VirusTotal scanning, enterprise detection tools, and managed hosting all appeared within two weeks. But the fundamental risks of giving AI agents full system access remain. The next phase will be less dramatic but more important: can OpenClaw mature into something enterprises actually trust?
For businesses watching: the opportunity is real. So are the risks. The organisations that get this right will treat AI agents like any other system with access to sensitive data, with proper security review, access controls, and oversight. The ones that treat it like a shiny new toy will end up in the headlines for the wrong reasons.
Thinking about AI for your business?
We help businesses in Malta and across Europe build custom AI integrations that fit how they actually work. No generic tools, no unnecessary complexity, just automation that solves real problems.
