AI agents are becoming increasingly prevalent in enterprise systems, and they present a new and complex challenge for security teams. Because these agents hold broader access to tools, data, and integrations than the software that preceded them, they are a prime target for cyberattacks, and the absence of a standardized framework for governing them has left many organizations exposed to breaches.
At a recent VentureBeat AI Impact Series event, industry experts stressed the urgency of addressing the security implications of AI agents. Spiros Xanthos, founder and CEO of Resolve AI, emphasized the need for a comprehensive framework to mitigate the risks of autonomous agents, arguing that security frameworks designed for human interactions cannot protect against the unique threats these agents pose.
Jon Aniano, SVP of product and CRM applications at Zendesk, pointed out that the widespread adoption of Model Context Protocol (MCP) servers has further complicated the security landscape. MCP servers make it easy to wire agents to tools and data, but they are often “extremely permissive,” leaving organizations open to breaches.
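To illustrate the problem, a deny-by-default posture can be retrofitted onto an MCP-style integration by gating every tool call against an explicit allowlist. The sketch below is illustrative only; the tool names and the dispatch helper are hypothetical, not part of the MCP specification.

```python
# Illustrative sketch: gate agent tool calls against an explicit allowlist.
# Tool names and the handler registry are hypothetical, not drawn from the MCP spec.

ALLOWED_TOOLS = {
    "search_tickets",    # read-only lookup
    "summarize_thread",  # read-only summarization
}
# Deliberately absent: "delete_ticket", "export_customer_data", etc.

def dispatch(tool_name: str, args: dict, handlers: dict):
    """Invoke a tool only if it appears on the allowlist; deny by default."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"Tool '{tool_name}' is not permitted for this agent")
    return handlers[tool_name](**args)

handlers = {
    "search_tickets": lambda query: f"results for {query!r}",
    "summarize_thread": lambda thread_id: f"summary of {thread_id}",
}

print(dispatch("search_tickets", {"query": "login failure"}, handlers))
# dispatch("delete_ticket", {"ticket_id": 42}, handlers)  # -> PermissionError
```

The point of the deny-by-default posture is that a newly added or misconfigured tool stays unreachable until someone deliberately opts it in.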
The challenge lies in determining who is accountable when an AI agent misauthenticates a user or carries out the wrong action. As AI becomes more involved in user interactions, clear guidelines and guardrails are essential to prevent unauthorized access and data breaches. Zendesk has implemented strict access controls and scope limits to mitigate these risks, but the industry as a whole still lacks concrete standards for agent interactions.
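In practice, a scope limit of the kind Zendesk describes can bind every agent action to the authenticated end user, so a misauthenticated session cannot reach another customer's records. A minimal sketch, with hypothetical session and record types:

```python
from dataclasses import dataclass

# Minimal sketch: scope every agent action to the authenticated user.
# The Session and Record types here are hypothetical.

@dataclass(frozen=True)
class Session:
    user_id: str
    scopes: frozenset  # e.g. {"tickets:read"}

@dataclass(frozen=True)
class Record:
    owner_id: str
    body: str

def read_record(session: Session, record: Record) -> str:
    """Allow a read only when the scope is granted AND the record
    belongs to the authenticated user; both checks must pass."""
    if "tickets:read" not in session.scopes:
        raise PermissionError("missing scope: tickets:read")
    if record.owner_id != session.user_id:
        raise PermissionError("record belongs to a different user")
    return record.body

alice = Session(user_id="alice", scopes=frozenset({"tickets:read"}))
print(read_record(alice, Record(owner_id="alice", body="printer on fire")))
# read_record(alice, Record(owner_id="bob", body="private"))  # -> PermissionError
```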
Looking ahead, Xanthos suggested that AI agents may eventually be granted more permissions than humans for certain tasks. Before that can happen, though, organizations need enough confidence in agent security to let autonomous agents operate independently. Resolve AI is exploring standing authorization for low-risk tasks, with the goal of gradually expanding agents' capabilities in a controlled manner.
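A standing-authorization policy along those lines can be expressed as a risk-tiered table: low-risk actions are pre-approved, and everything else escalates to a human. The action names and tiers below are assumptions for illustration, not Resolve AI's implementation:

```python
from enum import Enum

class Risk(Enum):
    LOW = "low"    # standing authorization
    HIGH = "high"  # requires human approval

# Hypothetical mapping of agent actions to risk tiers.
ACTION_RISK = {
    "restart_service": Risk.LOW,
    "rotate_log_level": Risk.LOW,
    "drop_database": Risk.HIGH,
}

def authorize(action: str, human_approved: bool = False) -> bool:
    """Grant low-risk actions automatically; escalate everything else."""
    risk = ACTION_RISK.get(action, Risk.HIGH)  # unknown actions default to HIGH
    if risk is Risk.LOW:
        return True
    return human_approved

assert authorize("restart_service")                       # standing approval
assert not authorize("drop_database")                     # blocked
assert authorize("drop_database", human_approved=True)    # human sign-off
```

Expanding agent autonomy then becomes a matter of deliberately promoting actions from HIGH to LOW as confidence grows, rather than loosening everything at once.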
In the meantime, security teams can take interim measures within their existing tooling. Fine-grained access controls in tools like Splunk can restrict what data agents reach, while declaratively defined API calls and human review processes help ensure that agent actions are sanctioned and monitored. By continuously evaluating and expanding access controls, organizations can strengthen their defenses as agents take on more responsibility.
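A declarative approach can be as simple as a manifest of permitted endpoints and parameters that each agent call is validated against before execution, with anything undeclared routed to human review. The manifest format here is an assumption for illustration, not a standard:

```python
# Sketch of a declarative call manifest: the agent may only issue calls
# matching a declared endpoint and parameter set. The format is
# hypothetical, not tied to any specific product.

MANIFEST = {
    ("GET", "/tickets"): {"status", "assignee"},       # allowed query params
    ("POST", "/tickets/reply"): {"ticket_id", "body"},
}

review_queue = []

def execute(method: str, path: str, params: dict):
    """Run declared calls; route anything undeclared to human review."""
    allowed = MANIFEST.get((method, path))
    if allowed is None or not set(params) <= allowed:
        review_queue.append((method, path, params))
        return "queued for human review"
    return f"executed {method} {path} with {params}"

print(execute("GET", "/tickets", {"status": "open"}))   # declared -> runs
print(execute("DELETE", "/tickets", {"id": 7}))         # undeclared -> review
print(review_queue)
```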
The evolving landscape of AI agents presents a unique set of challenges for security teams. By developing a comprehensive governance framework, enforcing strict access controls, and continuously monitoring agent actions, organizations can mitigate the risks that autonomous AI systems introduce.