OpenClaw, formerly known as Clawdbot and Moltbot, has gained immense popularity in the AI community, crossing 180,000 GitHub stars and attracting 2 million visitors in a single week, as reported by creator Peter Steinberger. However, recent security concerns have emerged, with over 1,800 exposed instances found leaking sensitive information such as API keys, chat histories, and account credentials. The project has undergone rebranding due to trademark disputes, highlighting the challenges faced by open-source AI assistants.
The rise of agentic AI presents a significant security risk that traditional perimeters struggle to address. These AI agents operate autonomously within authorized permissions, making it difficult for security tools to detect malicious activities. Carter Rees, VP of Artificial Intelligence at Reputation, explains that AI runtime attacks are semantic in nature, making them challenging to identify using traditional malware signatures.
Simon Willison, a renowned software developer and AI researcher, warns of the “lethal trifecta” for AI agents, which includes access to private data, exposure to untrusted content, and the ability to communicate externally. OpenClaw possesses all three capabilities, posing a significant security threat to organizations.
IBM Research scientists have analyzed OpenClaw and concluded that the tool challenges the notion that autonomous AI agents must be vertically integrated. This highlights the growing trend of community-driven AI development, which can lead to unmanaged security risks for enterprises.
Security researcher Jamieson O’Reilly discovered exposed OpenClaw servers using Shodan, revealing critical security vulnerabilities. O’Reilly found instances leaking sensitive information such as API keys and conversation histories, highlighting the lack of proper authentication and security controls in place.
Cisco’s AI Threat & Security Research team has deemed OpenClaw a “security nightmare,” citing numerous security vulnerabilities within the platform. They have released an open-source Skill Scanner tool to detect malicious agent skills, showcasing the need for enhanced security measures in the face of evolving AI threats.
As agentic AI agents form their own social networks, such as Moltbook, security implications become more pronounced. These autonomous agents can communicate independently, posing a significant challenge for security teams. Itamar Golan, founder of Prompt Security, advises treating agents as production infrastructure and implementing strict security measures to mitigate risks.
In conclusion, the rise of agentic AI presents both opportunities and challenges for organizations. By addressing security concerns proactively and implementing robust security measures, enterprises can harness the power of AI assistants while safeguarding against potential threats. It is crucial for security leaders to stay vigilant and adapt their security strategies to mitigate the evolving risks posed by agentic AI.

