In a hospital exam room, a doctor observes a medical transcription agent as it updates electronic health records, suggests prescription options, and reveals patient history in real-time. Meanwhile, on a manufacturing line, a computer vision agent conducts quality control at speeds beyond human capability. Both agents create non-human identities that most enterprises struggle to inventory, manage, or revoke quickly enough.
This identity governance issue is the primary hurdle preventing agentic AI from moving beyond pilot projects. It’s not about the models’ capabilities or computational power.
Cisco President Jeetu Patel, speaking to VentureBeat at RSAC 2026, revealed that while 85% of enterprises are testing agent pilots, only 5% have fully implemented them. This 80-point gap highlights a trust issue. The crucial questions for any CISO are: which agents have access to sensitive systems in production, and who is responsible if they act beyond their scope? IANS Research discovered that many businesses lack sufficiently mature role-based access controls for today’s human identities, and agents further complicate this. The 2026 IBM X-Force Threat Intelligence Index reported a 44% increase in attacks on public-facing applications, driven by missing authentication controls and AI-enabled vulnerability discovery.
Why the Trust Gap Is Architectural, Not Just a Tooling Problem
In an interview with VentureBeat, Michael Dickman, SVP and GM of Cisco’s Campus Networking business, laid out a trust framework that is rarely articulated so clearly to security and networking leaders. Dickman, who previously served at Gigamon and Aruba Networks, explained that networks detect system-to-system communications that other telemetry sources might miss. “It’s the difference between knowing and guessing,” he stated. The network captures actual data exchanges, forming the basis for the cross-domain correlation essential for enforcing agent policies at “machine speed.”
The Trust Prerequisite That Most AI Strategies Skip
Dickman highlighted that agentic AI disrupts a traditional pattern where productivity is prioritized, and security is added later. “Trust isn’t something that comes after business productivity,” he told VentureBeat. “It’s a fundamental requirement from the start.”
When agents autonomously update records, adjust configurations, or process transactions, the impact of a compromised identity can be extensive. “Now more than ever, the question is who has the right to do what,” Dickman noted, pointing to the complexity introduced by autonomous agents.
He outlined four conditions for closing the trust gap. Secure delegation means defining each agent’s permissions and keeping a human accountable for its actions. Cultural readiness means confronting alert fatigue: teams have traditionally coped by suppressing alerts, whereas agents can now evaluate every one. Token economics weighs the computational cost of each agent action, arguing for hybrid architectures in which AI handles reasoning and traditional deterministic tools execute tasks. Finally, human judgment remains irreplaceable, as demonstrated by an AI tool that produced 60 pages of filler for a product document and needed human refinement.
What the Network Sees That Endpoints Miss
Enterprise data today is often proprietary, internal, and scattered across various tools, platforms, and stacks. Each team creates its own perspective, never seeing the complete picture. “It’s the difference between knowing and guessing,” Dickman reiterated. Networks capture real data communications, growing more valuable as IoT and physical AI expand. These agents generate sensitive data that requires stringent access controls.
“All these require the trust we started with, as this data is highly sensitive,” Dickman emphasized.
Why Siloed Agent Data Misses the Signal
“It’s not just about aggregation but creating knowledge from the network,” Dickman explained. This insight reveals the strategic challenge of sequencing, not capability. “The real power comes from cross-domain views and correlation,” he added, noting the common pitfall of isolated silos producing useful automation without cross-domain insights.
Practitioners like Kayne McGladrey pointed out that organizations often clone human user profiles for agents, leading to permission sprawl. Carter Rees highlighted broken access control as a significant vulnerability, while Etay Maor called for an HR-like view of agents, including onboarding and offboarding processes.
Agentic AI Trust Gap Assessment
This matrix evaluates platforms against the five trust gaps Dickman identified, with enforcement approaches reflecting Cisco’s framework.
| Trust gap | Current control failure | What network-layer enforcement changes | Recommended action |
| --- | --- | --- | --- |
| Agent identity governance | IAM built for human users cannot inventory, scope, or revoke agent identities at machine speed | Agentic IAM registers each agent with defined permissions, an accountable human owner, and a policy-governed access scope | Audit every agent identity in production. Assign a human owner. Define permitted actions before expanding the scope |
| Blast radius containment | Host-based agents and perimeter controls can be bypassed; flat segments give compromised agents lateral movement | Microsegmentation enforces least-privileged access at the network layer, limiting blast radius independent of host-level controls | Implement microsegmentation for every agent-accessible system. Start with the highest-sensitivity data (PHI, financial records) |
| Cross-domain visibility | Siloed observability tools create fragmented views; Team A’s agent data never correlates with Team B’s security telemetry | Network telemetry captures actual system-to-system communications, feeding a unified data fabric for cross-domain correlation | Unify network, security, and application telemetry into a shared data fabric before deploying production agents |
| Governance-to-enforcement pipeline | No formal process connecting business intent to agent policy to network enforcement | Policy-to-enforcement pipeline translates governance decisions into machine-speed network rules | Establish a formal pipeline from business-intent definition to automated network policy enforcement |
| Cultural and workflow readiness | Organizations automate existing workflows rather than redesigning for agent-scale processing | Network-generated behavioral data reveals actual usage patterns, informing workflow redesign | Run a 30-day telemetry capture before designing agent workflows. Build around observed data, not assumptions |
A Broken Ankle and a Microsegmentation Lesson
Dickman shared a personal story to illustrate his framework. A family member’s broken ankle led him to observe a medical transcription agent in a hospital. The agent updated records, suggested prescriptions, and revealed history, all while the doctor approved decisions. The security implications became personal with a loved one’s data involved.
“Governance should be done slowly, but enforcement and implementation must be rapid,” he advised, emphasizing the need for machine-speed action.
Agentic IAM begins by registering each agent with defined actions and human accountability. “Here’s my set of agents and the responsible human,” Dickman said. “If something goes wrong, there’s a contact person.”
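A minimal sketch of what such an agent registry might look like. Everything here is illustrative, not Cisco's API: the `AgentIdentity` class, its field names, and the example agent are all assumptions used to show the pattern of scoped permissions, a named human owner, and fast revocation.

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """Hypothetical record for one non-human identity in an agent registry."""
    agent_id: str
    owner_email: str                                 # accountable human contact
    permitted_actions: set = field(default_factory=set)
    revoked: bool = False

    def is_allowed(self, action: str) -> bool:
        # Deny everything once revoked; otherwise enforce the explicit allow-list
        return not self.revoked and action in self.permitted_actions

# Register a transcription agent with a narrow scope and a named owner
scribe = AgentIdentity(
    agent_id="med-transcribe-01",
    owner_email="clinical-it@example.com",
    permitted_actions={"read_chart", "append_note"},
)

assert scribe.is_allowed("append_note")
assert not scribe.is_allowed("write_prescription")   # outside defined scope

scribe.revoked = True                                # machine-speed revocation
assert not scribe.is_allowed("append_note")
```

The key design point is that scope is an explicit allow-list tied to a responsible human, so "who has the right to do what" is answerable per agent.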
This identity layer supports microsegmentation, enforcing least-privileged access and limiting potential damage. “Microsegmentation ensures least-privileged access,” he noted, highlighting its independence from host-level controls.
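As a rough illustration of that least-privilege idea (the segment names and the policy table are hypothetical, not a real Cisco configuration), a microsegmentation policy reduces to a default-deny lookup keyed on source segment, destination segment, and port:

```python
# Default-deny segment policy: a flow is allowed only if its
# (source segment, destination segment, port) tuple is explicitly listed.
ALLOWED_FLOWS = {
    ("agent-transcription", "ehr-db", 5432),    # agent may reach the EHR database
    ("agent-transcription", "audit-log", 514),  # and emit audit events
}

def flow_permitted(src_segment: str, dst_segment: str, port: int) -> bool:
    return (src_segment, dst_segment, port) in ALLOWED_FLOWS

assert flow_permitted("agent-transcription", "ehr-db", 5432)
# Lateral movement toward an unrelated segment is dropped by default
assert not flow_permitted("agent-transcription", "finance-db", 5432)
```

Because the deny-by-default check runs at the network layer, a compromised agent host cannot widen its own blast radius by tampering with host-level controls.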
If this model works for a medical transcription agent, it can scale to less sensitive enterprise applications.
Five Priorities Before Agents Reach Production
1. Force cross-functional alignment now. Set clear expectations for agentic AI across business, IT, and security leaders. Human coordination often lags behind technology, creating bottlenecks.
2. Get IAM and PAM governance production-ready for agents. Dickman urged maturing identity and access management and privileged access management before scaling agents. “This unlocks trust,” he explained, pointing to governance and policy as the foundation.
3. Adopt a platform approach to networking infrastructure. A platform strategy promotes data sharing across domains, enabling cross-domain correlation and addressing trust gaps.
4. Design hybrid architectures from the start. Combine agentic AI for reasoning with traditional deterministic tools for execution. This approach balances intelligence with predictability and cost-efficiency.
5. Make the first use cases bulletproof on trust. Select high-value use cases and implement them with best practices from the outset. Successful deployments build confidence and facilitate future expansions.
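Point 4’s split between probabilistic reasoning and deterministic execution can be sketched as a guardrail pattern: the AI component only *proposes* actions as structured data, and a deterministic gate decides whether a traditional tool runs. The action names, the validator, and `execute_proposal` below are illustrative assumptions, not a product feature.

```python
# Hypothetical hybrid loop: an AI layer emits a structured proposal,
# and only a deterministic allow-list decides whether it executes.
SAFE_ACTIONS = {
    "restart_service": lambda target: f"restarted {target}",
    "clear_cache": lambda target: f"cleared cache on {target}",
}

def execute_proposal(proposal: dict) -> str:
    action = proposal.get("action")
    if action not in SAFE_ACTIONS:               # deterministic gate
        return f"rejected: {action!r} is not an approved action"
    return SAFE_ACTIONS[action](proposal.get("target", ""))

# An agent's reasoning step would hand off proposals like these:
print(execute_proposal({"action": "restart_service", "target": "web-01"}))
print(execute_proposal({"action": "delete_database", "target": "prod"}))
```

The reasoning layer stays flexible while execution stays predictable and cheap, since every approved action maps to a fixed, auditable tool call.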
“You can guarantee trust to the organization, unleashing speed,” Dickman stated.
The central theme of the discussion is that the 85% of enterprises stuck in pilot mode aren’t waiting for better models. They’re waiting for the identity governance, cross-domain visibility, and policy enforcement needed for secure production deployment. Whether organizations build on Cisco’s platform or their own stack, Dickman’s framework makes those three capabilities the prerequisites. Organizations that address them first will deploy agents more rapidly, with each new agent benefiting from the established trust framework. Those that hesitate will fall behind, because theoretical trust doesn’t lead to deployment.

