The recent federal directive mandating that all U.S. government agencies discontinue the use of Anthropic technology has sent shockwaves through the cybersecurity world. With a six-month phaseout window, agencies are scrambling to identify where Anthropic's models are integrated into their workflows. But the lack of visibility into AI vendor dependencies is not unique to government: most enterprises are equally unaware of the extent of their own AI dependencies.
A recent survey of 200 U.S. CISOs found that only 15% have full visibility into their software supply chains, a significant gap in knowing which AI tools are actually in use inside their organizations. Nearly half of respondents also admitted to adopting AI tools without proper approval, compounding the problem of undocumented AI vendor dependencies.
The sudden cessation of a vendor relationship can have far-reaching consequences, as the Anthropic situation shows. Shadow AI incidents, in which unauthorized AI tools are used within organizations, now account for 20% of all data breaches and add substantial costs to remediation efforts. Without an inventory of AI vendor dependencies, organizations struggle to execute a transition plan when a vendor is abruptly cut off.
The directive against Anthropic has also raised concerns for companies doing business with the Pentagon, as they now have to prove that their workflows do not touch Anthropic technology. The interconnected nature of AI vendor dependencies means that organizations may unknowingly inherit exposure to vendors like Anthropic through their supply chains.
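One way to surface that inherited exposure is to interrogate the software bills of materials (SBOMs) that suppliers provide. The sketch below, a minimal illustration rather than a production tool, scans a CycloneDX-style SBOM document for components whose name matches a vendor watchlist; the inlined SBOM fragment and the `WATCHLIST` set are assumptions for demonstration, not data from the article.

```python
import json

# Hypothetical watchlist of vendor/package names to flag; adjust to your policy.
WATCHLIST = {"anthropic"}

# Minimal CycloneDX-style SBOM fragment, inlined here purely for illustration.
SBOM = json.loads("""
{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "anthropic", "version": "0.34.0", "purl": "pkg:pypi/anthropic@0.34.0"},
    {"name": "requests", "version": "2.32.0", "purl": "pkg:pypi/requests@2.32.0"}
  ]
}
""")

def flagged_components(sbom: dict, watchlist: set) -> list:
    """Return package URLs of SBOM components whose name is on the watchlist."""
    hits = []
    for comp in sbom.get("components", []):
        if comp.get("name", "").lower() in watchlist:
            hits.append(comp.get("purl", comp["name"]))
    return hits

print(flagged_components(SBOM, WATCHLIST))  # → ['pkg:pypi/anthropic@0.34.0']
```

Running this across every SBOM received from suppliers turns "do our vendors touch Anthropic?" from a questionnaire exercise into a repeatable query.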
To address these challenges, security experts recommend proactive steps: map AI vendor dependencies, identify control points within the organization, conduct kill tests on critical AI dependencies, and force vendor disclosure of sub-processors and underlying models. Together, these measures give organizations a clearer picture of their AI supply chains and reduce the risk that a single vendor dependency becomes a single point of failure.
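The first of those steps, mapping dependencies, can start with something as simple as scanning dependency manifests for known AI SDK packages. The sketch below is a rough, assumption-laden starting point: the `AI_SDK_PACKAGES` mapping is an illustrative guess at common SDK names, not an authoritative list, and real estates would also need to cover API calls made without an SDK.

```python
import re

# Assumed package names for common AI vendor SDKs; extend to match your estate.
AI_SDK_PACKAGES = {
    "anthropic": "Anthropic",
    "openai": "OpenAI",
    "google-generativeai": "Google",
}

def scan_requirements(text: str) -> dict:
    """Map each AI SDK found in a requirements.txt-style manifest to its vendor."""
    found = {}
    for line in text.splitlines():
        # Strip version specifiers and comments (e.g. "anthropic>=0.34  # note").
        name = re.split(r"[=<>!~\[;#\s]", line.strip(), maxsplit=1)[0].lower()
        if name in AI_SDK_PACKAGES:
            found[name] = AI_SDK_PACKAGES[name]
    return found

manifest = """\
requests==2.32.0
anthropic>=0.34
openai==1.40.0  # chat features
"""
print(scan_requirements(manifest))  # → {'anthropic': 'Anthropic', 'openai': 'OpenAI'}
```

A manifest scan like this only finds direct, declared dependencies; pairing it with the SBOM and vendor-disclosure steps is what catches the transitive exposure.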
The federal directive against Anthropic is a wake-up call: organizations must prioritize mapping their AI vendor dependencies and demand transparency in their AI supply chains. Those that fail to do so risk being caught flat-footed by a forced migration; those that act now will be far better positioned to protect their data and operations in an increasingly complex digital landscape.

