A recent security audit by Koi Security identified 341 malicious skills among 2,857 listings on the ClawHub marketplace for the OpenClaw AI assistant. The malicious entries use deceptive installation requirements to deploy data-stealing malware, including the Atomic Stealer on macOS and keyloggers on Windows.
The ClawHub marketplace serves as a central hub for users of the OpenClaw AI assistant to find and integrate third-party tools into their self-hosted setups. However, researchers discovered a widespread campaign dubbed ClawHavoc that exploits this ecosystem by embedding malicious code into seemingly helpful skills. These threats often mimic popular utilities such as cryptocurrency wallet trackers, YouTube tools, or finance trackers to lull unsuspecting users into a false sense of security while compromising their local systems.
The primary infection vector relies on social engineering through professional-looking documentation that lists fake prerequisites the skills supposedly need in order to function. On Windows, users are directed to download a password-protected zip file containing a trojan with keylogging capabilities designed to capture API keys and other sensitive credentials. On macOS, the instructions tell the victim to paste a script into a terminal, a method specifically targeting the growing number of people using Mac hardware to run AI assistants around the clock.
The macOS execution chain is particularly sophisticated, using obfuscated commands to download secondary payloads from external servers. These scripts eventually fetch a universal binary identified as Atomic Stealer, a well-known malware-as-a-service offering that harvests browser data, system information, and keychain contents. By hiding the malware behind a multistage download process, the attackers attempt to bypass standard security detections while maintaining a persistent foothold on the victim's machine.
Koi Security's investigation found that the malicious skills employ several different tactics, including typosquatting common names and posing as high-demand tools for platforms like Polymarket or Google Workspace. While many of these skills use the fake-prerequisite trick, others contain functional code that hides reverse-shell backdoors or scripts specifically designed to steal the assistant's environment configuration files. This gives attackers full access to the assistant's credentials and any sensitive data it was authorized to handle.
This surge in supply chain attacks highlights the evolving risks associated with third-party extensions in the emerging AI software landscape. As more users adopt self-hosted assistants to manage their digital lives, the trust placed in community-driven marketplaces becomes a significant vulnerability. The findings suggest that users must exercise extreme caution when installing new skills and should remain skeptical of any third-party tool that requires manual script execution or the installation of unverified agents outside the standard ecosystem.
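One practical way to act on that advice is to screen a skill's documentation and scripts before installing anything. The sketch below is not from the Koi Security report; it is a minimal, illustrative Python heuristic that flags installation patterns matching the lures described above (pipe-to-shell installers, decode-and-execute one-liners, password-protected archive prerequisites). The indicator list and the assumed on-disk layout of a downloaded skill are assumptions for illustration only.

```python
#!/usr/bin/env python3
"""Heuristic pre-install check for third-party skill packages.

A minimal sketch, assuming a skill is downloaded to a local directory
before installation. The patterns below loosely mirror the tactics
described in the article and are not an exhaustive or official list.
"""
import re
import sys
from pathlib import Path

# Indicators of risky install instructions (hypothetical examples).
SUSPICIOUS_PATTERNS = {
    "pipe-to-shell": re.compile(r"(curl|wget)[^|\n]*\|\s*(ba)?sh", re.I),
    "decode-and-execute": re.compile(r"base64\s+(-d|--decode)[^|\n]*\|\s*(ba)?sh", re.I),
    "password-protected archive": re.compile(r"password[- ]protected\s+(zip|archive|rar)", re.I),
    "paste-into-terminal": re.compile(r"paste\s+(this|the following).{0,40}terminal", re.I),
}

def scan_skill(skill_dir: Path) -> list[tuple[str, str, str]]:
    """Return (file, indicator, matched snippet) triples for manual review."""
    findings = []
    for path in skill_dir.rglob("*"):
        # Only look at documentation and script-like files.
        if path.suffix.lower() not in {".md", ".txt", ".sh", ".json", ".yaml", ".yml"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in SUSPICIOUS_PATTERNS.items():
            match = pattern.search(text)
            if match:
                findings.append((str(path), name, match.group(0)[:80]))
    return findings

if __name__ == "__main__":
    target = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for file, indicator, snippet in scan_skill(target):
        print(f"[!] {indicator}: {file}\n    {snippet}")
```

A clean scan is not a guarantee of safety, since the report notes some skills hide backdoors in otherwise functional code; treat this as one extra filter, not a substitute for reviewing what a skill actually does.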
Source: Researchers Find 341 Malicious ClawHub Skills Stealing Data From OpenClaw Users



Excellent breakdown of the ClawHub supply chain attack. The obfuscated macOS execution chain is especially nasty since it leverages the exact trust patterns people have when setting up automations. I've seen similar social engineering tactics in enterprise plugin ecosystems where fake prereqs became normalized. Multistage payloads that blend into legit install steps really exploit that automation-first mindset.