The recent Security and Risk Summit held by Forrester shed light on the growing threat of generative AI as an apex predator within enterprise networks. Like the shark in Jaws, gen AI operates with relentless efficiency and at scale, exploiting chaos to harm its prey. According to Forrester principal analyst Allie Mellen, generative AI has become the new chaos agent: it never tires, never sleeps, and executes at unprecedented scale.
Mellen presented research data highlighting the fundamental unreliability of AI systems. One study she referenced, from the Tow Center for Digital Journalism at Columbia University, found that AI models answered queries incorrectly 60% of the time, producing more failed responses than accurate ones.
Jeff Pollard, VP and principal analyst at Forrester, further emphasized these shortcomings, citing studies that showed AI agents failing on 70% to 90% of real-world corporate tasks. Nearly half of AI-generated code contains known vulnerabilities, and 88% of security leaders admit to incorporating unauthorized AI into their workflows.
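Given those findings on vulnerable AI-generated code, one practical response is to gate such code behind automated review before it is accepted. The sketch below is a deliberately naive illustration in Python; the `RISKY_PATTERNS` list, `review_ai_code`, and `gate_merge` are hypothetical names assumed for this example, and a production pipeline would rely on dedicated static analysis tooling rather than regexes.

```python
import re

# Illustrative deny-list of patterns that commonly signal risky code.
# A real pipeline would use a dedicated static analyzer, not regexes.
RISKY_PATTERNS = {
    "eval/exec on dynamic input": re.compile(r"\b(eval|exec)\s*\("),
    "shell=True subprocess call": re.compile(r"shell\s*=\s*True"),
    "hardcoded credential": re.compile(r"(password|api_key|secret)\s*=\s*['\"]"),
}

def review_ai_code(source: str) -> list:
    """Return a list of findings; an empty list means the gate passes."""
    return [name for name, pat in RISKY_PATTERNS.items() if pat.search(source)]

def gate_merge(source: str) -> bool:
    """Fail closed: block the merge when any finding is present."""
    return not review_ai_code(source)
```

For example, `gate_merge("subprocess.run(cmd, shell=True)")` returns False, blocking the merge, while clean code passes. The point is the fail-closed posture, not the pattern list itself.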
The Security and Risk Summit also highlighted the impact of gen AI on identity security, with identities becoming prime targets for attackers. With gen AI expanding identity sprawl, traditional governance methods are failing to keep up. Forrester predicts a $27 billion surge in the identity management market by 2029, reflecting the growing complexity and potential chaos introduced by machine identities.
The event underscored the importance of treating AI agents as mission-critical identities and developing AI red team capabilities to detect and mitigate vulnerabilities. Organizations are urged to operate under the assumption of AI failure and implement security controls that can scale to machine speed. Blind trust in automation and legacy infrastructure is discouraged, as these can lead to catastrophic breaches.
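The "operate under the assumption of AI failure" guidance can be made concrete by wrapping every agent call so that unvalidated output never reaches a downstream system. Below is a minimal fail-closed sketch; the `agent`, `validate`, and `fallback` names are hypothetical placeholders for whatever stack an organization actually runs.

```python
from typing import Callable, Optional

def call_agent_fail_closed(
    agent: Callable[[str], str],
    validate: Callable[[str], bool],
    prompt: str,
    retries: int = 2,
    fallback: Optional[str] = None,
) -> str:
    """Invoke an AI agent but assume it will fail: validate every response,
    retry a bounded number of times, then fall back or raise (fail closed)."""
    for _ in range(retries + 1):
        response = agent(prompt)
        if validate(response):
            return response
    if fallback is not None:
        return fallback  # degrade gracefully to a known-safe default
    raise RuntimeError("agent output failed validation; refusing to proceed")
```

The design choice worth noting is that the absence of a fallback raises rather than returning the bad output: silence is never treated as success, which is the opposite of the blind trust in automation the summit warned against.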
Overall, the Security and Risk Summit provided a blueprint for survival in the face of weaponized gen AI: govern AI identities, build AI red team capabilities, operate under the assumption of AI failure, design security controls for machine speed, and eliminate blind trust in automation. Organizations that follow these guidelines will be better positioned against the growing threat of generative AI.

