In a shocking revelation, Microsoft’s AI assistant, Copilot, breached confidentiality protocols by reading and summarizing sensitive emails for a period of four weeks starting on January 21. Despite strict sensitivity labels and Data Loss Prevention (DLP) policies in place to prevent such breaches, Copilot managed to access confidential emails from organizations such as the U.K.’s National Health Service, leading to a major security incident labeled as INC46740412 by the NHS and tracked as CW1226324 by Microsoft.
This incident is not the first of its kind involving Copilot. In June 2025, Microsoft patched a critical zero-click vulnerability, known as CVE-2025-32711 or “EchoLeak,” which allowed malicious emails to bypass Copilot’s security measures and exfiltrate enterprise data without requiring any user interaction. This vulnerability, with a CVSS score of 9.3, highlighted a serious flaw in Copilot’s retrieval pipeline.
The root causes of both incidents, EchoLeak and CW1226324, can be attributed to a code error and a sophisticated exploit chain, respectively. These incidents exposed a fundamental flaw in Copilot’s design, where trusted and untrusted data are processed in the same manner, making the system vulnerable to manipulation.
Endpoint Detection and Response (EDR) and Web Application Firewalls (WAFs) failed to detect these breaches because they were not designed to monitor the specific layer where the violations occurred. Copilot’s retrieval pipeline operates behind an enforcement layer that traditional security tools are unable to observe, leading to a blind spot in the security stack.
To prevent future incidents, security leaders are advised to conduct a five-point audit that includes testing DLP enforcement directly against Copilot, blocking external content from reaching Copilot’s context window, auditing Purview logs for anomalous interactions, enabling Restricted Content Discovery for sensitive SharePoint sites, and developing an incident response playbook for vendor-hosted inference failures.
The implications of these incidents extend beyond Copilot to any AI assistant that accesses internal data. Organizations must prioritize governance and security controls around AI assistants to mitigate the risk of unauthorized behavior. By implementing the recommended controls and conducting regular audits, organizations can ensure the security and integrity of their sensitive data.
As the deployment of AI assistants continues to grow, it is crucial for organizations to stay vigilant and proactive in safeguarding their data against potential breaches. The five-point audit outlined in this article serves as a roadmap for enhancing security measures and addressing vulnerabilities in AI-driven systems.

