Clawdbot, renamed Moltbot after a trademark request, has come under fire for serious security flaws. The AI agent ships without mandatory authentication, leaving it open to prompt injection attacks and unauthorized shell access. Security researchers quickly validated these vulnerabilities and uncovered more, and commodity infostealers such as RedLine, Lumma, and Vidar are now exploiting them widely.
The security community’s investigation revealed alarming findings about Clawdbot’s security posture. SlowMist warned that numerous Clawdbot gateways were exposed to the internet, putting sensitive data such as API keys, OAuth tokens, and private chat histories at risk. Matvey Kukuy of Archestra AI demonstrated how easy it was to extract an SSH private key through email using prompt injection.
In a pattern Hudson Rock has dubbed “Cognitive Context Theft,” Clawdbot has become a prime target for infostealers because of the wealth of personal and psychological information it stores. Attackers can leverage this data for targeted social engineering, making it a significant threat to user privacy and security.
Much of the risk stems from Clawdbot’s insecure default settings. The agent, popular for its automation capabilities, spread rapidly before users understood its security implications. Many instances were deployed with port 18789 open to the public internet, making them easy targets for malicious actors.
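Checking whether a deployment is affected takes little more than a TCP probe. The sketch below, which assumes only the port number 18789 reported above (the function name and everything else is illustrative), tests whether a host accepts connections on the gateway port:

```python
import socket

def gateway_port_open(host: str, port: int = 18789, timeout: float = 3.0) -> bool:
    """Return True if the given TCP port accepts connections.

    Port 18789 is the Clawdbot gateway port reported above; the rest of
    this helper is an illustrative assumption, not Clawdbot's own tooling.
    """
    try:
        # create_connection raises OSError on refusal or timeout.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# A gateway that answers when probed from an arbitrary external host is
# exposed to the public internet; it should be firewalled or bound to
# localhost instead.
```

Running a probe like this against your own public IP from outside your network is a quick way to confirm whether a gateway is reachable before an attacker finds it the same way.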
Security researcher Jamieson O’Reilly discovered hundreds of exposed Clawdbot instances through a simple Shodan scan. Some of these instances had no authentication measures in place, allowing for full command execution. O’Reilly also demonstrated a supply chain attack on ClawdHub’s skills library, highlighting the risks associated with unvetted code.
Despite efforts to patch individual vulnerabilities, Clawdbot’s core architectural issues remain unresolved. The agent’s plaintext storage of sensitive information makes it an easy target for infostealers looking to extract valuable data. With the rapid adoption of AI agents in enterprise applications, the attack surface is expanding faster than security teams can keep up.
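The plaintext-storage problem is also easy to audit for yourself. The following sketch (the patterns and function name are assumptions for illustration; real infostealers match far more formats) walks a configuration directory and flags files containing credential-like strings of the kinds mentioned above, such as API keys, tokens, and private keys:

```python
import re
from pathlib import Path

# Illustrative patterns only; production secret scanners cover many more.
SECRET_PATTERNS = {
    "api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),
    "oauth_token": re.compile(r"(?i)(oauth|bearer)[_-]?token\s*[:=]\s*\S+"),
    "private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def scan_for_plaintext_secrets(root: Path) -> list[tuple[str, str]]:
    """Walk a directory tree and report files with credential-like content."""
    findings = []
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than fail the scan
        for label, pattern in SECRET_PATTERNS.items():
            if pattern.search(text):
                findings.append((str(path), label))
    return findings
```

Anything this kind of scan surfaces in an agent’s data directory is exactly what an infostealer would exfiltrate, and is a candidate for moving into an OS keychain or secrets manager.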
Security expert Itamar Golan emphasizes the need for a shift in mindset when it comes to securing AI agents. Organizations must treat these agents as production infrastructure rather than productivity tools to effectively mitigate risks. Golan suggests taking inventory of all deployed agents, enforcing least-privilege access, and building runtime visibility into agent activity.
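One concrete form the least-privilege advice can take is an allowlist gate in front of the agent’s shell access. This is a minimal sketch of the idea, not Golan’s or Moltbot’s implementation; the allowlist contents and function name are assumptions:

```python
import shlex

# Hypothetical allowlist: the agent may only run these read-only commands.
ALLOWED_COMMANDS = {"ls", "cat", "grep", "head"}

def enforce_least_privilege(command_line: str) -> bool:
    """Return True only if the agent's requested command is allowlisted.

    A real deployment would also log every decision (runtime visibility)
    and run the agent under a dedicated, unprivileged OS account.
    """
    try:
        argv = shlex.split(command_line)
    except ValueError:
        return False  # malformed quoting is rejected outright
    if not argv:
        return False
    return argv[0] in ALLOWED_COMMANDS
```

An explicit default-deny gate like this inverts the posture that got Clawdbot instances compromised: instead of full shell access with no authentication, the agent can do only what it was inventoried and approved to do.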
In conclusion, Clawdbot’s security vulnerabilities pose a significant threat to user data and privacy. Security teams must act swiftly to address these issues and implement robust security measures to protect against potential attacks. As the adoption of AI agents continues to grow, it is crucial for organizations to stay ahead of emerging threats and secure their systems accordingly.

