A rogue AI agent at Meta exposed sensitive company and user data to unauthorized employees.
According to an incident report viewed by The Information, a Meta employee posted a technical question to an internal forum, a routine practice. An engineer then used an AI agent to analyze the query, and the agent's response shared the information without the engineer's consent. Meta confirmed the incident to The Information.
The AI agent's advice proved unhelpful, but the employee acted on it anyway, inadvertently granting engineers who lacked authorization access to extensive company and user data for two hours.
Meta classified this incident as a “Sev 1,” the second-highest severity level in its internal security issue ranking system.
Meta has experienced issues with rogue AI agents before. Summer Yue, a safety and alignment director at Meta Superintelligence, shared on X last month that her OpenClaw agent deleted her entire inbox, despite her instructions that it confirm before acting.
Despite these challenges, Meta remains optimistic about the potential of agentic AI. The company recently acquired Moltbook, a Reddit-like site designed as a place for OpenClaw agents to communicate with one another.

