AI Agents and Cybersecurity: Navigating the New Era of Threats
This article is part of VentureBeat’s special issue, “The cyber resilience playbook: Navigating the new era of threats.” Read more from this special issue here.
As enterprises delve deeper into the world of generative AI and agentic systems, the security implications become more pronounced. The introduction of AI agents into workflows raises significant concerns for cybersecurity, particularly in terms of access to sensitive data and documents.
Nicole Carignan, VP of strategic cyber AI at Darktrace, highlights the potential risks associated with multi-agent systems, emphasizing the need for robust security measures from the outset. The interconnected nature of these systems introduces new attack vectors and vulnerabilities that could have far-reaching consequences if not adequately secured.
Why AI agents pose such a high security risk
The proliferation of AI agents, capable of autonomous actions on behalf of users, presents a unique challenge for enterprise security professionals. These agents require access to data to perform their tasks effectively, raising concerns about data privacy and security. With agents assuming tasks traditionally carried out by human employees, questions around accuracy, accountability, and compliance come to the forefront.
Chris Betz, CISO of AWS, underscores the significance of retrieval-augmented generation (RAG) and agentic use cases in the realm of security. Organizations must carefully consider the implications of default sharing settings within their systems to prevent inadvertent data exposure.
AI agent vulnerabilities
While the advent of generative AI has heightened awareness of potential vulnerabilities, the integration of AI agents introduces additional security risks. Attacks such as data poisoning, prompt injection, and social engineering could exploit vulnerabilities within multi-agent systems, necessitating a proactive approach to safeguarding data.
Enterprises must closely monitor and control the data access permissions granted to AI agents to uphold robust data security measures. Betz highlights the parallels between security issues affecting human employees and AI agents, emphasizing the need for stringent access controls.
Give agents an identity
One potential solution lies in assigning unique access identities to AI agents. Jason Clinton, CISO of Anthropic, advocates for recording the identity of both the agent and the human responsible for the agent request. By mirroring the identity management practices applied to human employees, organizations can enhance accountability and control over agent actions.
By implementing tailored access controls and identity verification mechanisms for AI agents, enterprises can mitigate the risks associated with data access and manipulation. This approach prompts a reevaluation of information access protocols and workflow structures within organizations.
The old-fashioned audit isn’t enough
Traditional audits may fall short in addressing the nuanced security challenges posed by AI agents. Don Schuerman, CTO of Pega, advocates for platforms that provide visibility into agent activities, enabling users to track and monitor agent actions in real-time. Pega’s AgentX product offers users a comprehensive view of agent workflows, enhancing transparency and accountability.
While audits, timelines, and identity verification mechanisms serve as initial steps towards securing AI agents, ongoing innovation and experimentation in AI security are essential. As enterprises embrace the potential of AI agents, tailored solutions and best practices will continue to evolve to meet the dynamic cybersecurity landscape.