CX platforms are transforming customer experience by processing billions of unstructured interactions every year. From survey forms to social media feeds, these platforms use AI engines to automate workflows that touch systems such as payroll, CRM, and payments. Yet a significant security gap remains: few organizations verify the integrity of the data being fed into these AI engines, allowing attackers to cause widespread damage through legitimate access paths alone.
The Salesloft/Drift breach of August 2025 is a stark example. Attackers compromised Salesloft's GitHub environment, stole Drift chatbot OAuth tokens, and used them to access Salesforce environments at more than 700 organizations, including Cloudflare, Palo Alto Networks, and Zscaler. They then scanned the stolen data for sensitive material such as AWS keys, Snowflake tokens, and plaintext passwords, all without deploying any malware.
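Defenders can run the same kind of credential hunt the attackers did, but proactively, against their own exported CX data before it crosses a trust boundary. The sketch below is a minimal, illustrative pattern scanner; the regexes (especially the Snowflake one) are assumptions for demonstration, not authoritative detection rules.

```python
import re

# Illustrative credential patterns; real secret scanners ship far more
# (and more precise) rules. The Snowflake pattern here is a placeholder.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "password_assignment": re.compile(r"(?i)password\s*[:=]\s*\S+"),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_string) pairs found in a text blob."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits
```

Running this over survey exports or chatbot transcripts before they leave the platform is one cheap way to learn what an attacker would find in them.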
Despite the prevalence of data loss prevention (DLP) programs, only 6% of organizations dedicate resources to monitoring and securing the data flowing into AI engines. This lack of oversight leaves them vulnerable to attacks that exploit legitimate access routes rather than traditional malware-based intrusions. Cloud intrusions surged 136% in the first half of 2025, underscoring the urgency of closing this gap.
Experience management platforms like Qualtrics, which process billions of interactions annually, are no longer just 'survey tools' but components wired into critical systems such as HRIS, CRM, and compensation engines. As AI becomes more deeply embedded in these workflows, organizations must treat input integrity as a first-class control to prevent data breaches and unauthorized access.
Security leaders have identified six key blind spots that exist between the security stack and the AI engine in CX platforms:
1. DLP tools struggle to detect unstructured sentiment data leaving through standard API calls.
2. Zombie API tokens from past campaigns remain active, posing a security risk.
3. Public input channels lack bot mitigation, allowing fraudulent data to reach the AI engine undetected.
4. Compromised CX platforms enable lateral movement through approved API calls.
5. Non-technical users often hold admin privileges that go unchecked.
6. Open-text feedback containing sensitive information lands in the database before PII is masked, creating a window of exposure.
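Blind spot #6 is the easiest to illustrate: masking must happen in the ingestion path, not as a batch job after the raw text is already at rest. The sketch below is a minimal assumption-laden example; the regexes, field names, and in-memory "database" are illustrative, not any platform's actual API.

```python
import re

# Illustrative PII patterns: email addresses and US-style phone numbers.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE_RE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def mask_pii(feedback: str) -> str:
    """Redact email addresses and phone numbers from free text."""
    feedback = EMAIL_RE.sub("[EMAIL]", feedback)
    feedback = PHONE_RE.sub("[PHONE]", feedback)
    return feedback

def store_feedback(record: dict, db: list) -> None:
    # Mask before the write, so raw PII never reaches storage.
    record = {**record, "comment": mask_pii(record["comment"])}
    db.append(record)
```

The design point is ordering, not the specific patterns: any record that reaches `db` has already been redacted, so a later compromise of the datastore yields masked text.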
To address these vulnerabilities, organizations must implement continuous monitoring of user activity, configurations, and data access within experience management platforms. Security teams are exploring solutions like extending SSPM tools, API security gateways, and CASB-style access controls to enhance security measures in CX platforms.
By bridging the gap between security posture management and the CX layer, organizations gain real-time visibility into threats and can enforce policies that protect sensitive data. Security teams must prioritize the security of AI-driven workflows to prevent costly breaches and preserve the integrity of business decisions built on AI-generated insights.

