A Vercel employee adopted an AI tool, and an employee at that AI vendor fell victim to an infostealer attack. Together, those two events gave attackers unauthorized access to Vercel's production environments through an unreviewed OAuth grant.
Vercel, the cloud platform behind Next.js, which is downloaded from npm millions of times each week, announced on Sunday that attackers had compromised its internal systems. The company engaged Mandiant to investigate and alerted law enforcement; the investigation is ongoing. On Monday, Vercel said an audit conducted with GitHub, Microsoft, npm, and Socket confirmed that Next.js, Turbopack, AI SDK, and all Vercel-published npm packages were unaffected. Vercel also announced that new environment variables will now default to "sensitive."
The breach originated through Context.ai. OX Security reported that a Vercel employee installed the Context.ai browser extension and signed in using a corporate Google Workspace account, permitting broad OAuth access. When Context.ai was breached, the attacker gained the employee’s Workspace access, moved into Vercel’s environments, and escalated privileges using environment variables not marked as “sensitive.” Vercel noted that variables marked as sensitive are protected from being read, while those not marked were accessible in plaintext, providing an escalation path.
CEO Guillermo Rauch described the breach as highly sophisticated and likely AI-accelerated. Before Rauch's statement, Jaime Blasco, CTO of Nudge Security, had identified a second OAuth grant linked to Context.ai's Chrome extension, matching the client ID in Vercel's published IoCs to Context.ai's Google account. The Hacker News reported that Google removed the Context.ai extension from the Chrome Web Store on March 27; according to The Hacker News and Nudge Security, that extension's OAuth grant allowed read access to users' Google Drive files.
Patient zero: A Roblox cheat and a Lumma Stealer infection
Hudson Rock released forensic evidence on Monday, linking the breach to a February 2026 Lumma Stealer infection on a Context.ai employee’s device. Browser history revealed downloads of Roblox auto-farm scripts and game exploit executors. Stolen credentials included Google Workspace logins, Supabase keys, Datadog tokens, Authkit credentials, and access to the support@context.ai account. Hudson Rock identified the affected user as a core member of “context-inc,” Context.ai’s tenant on Vercel’s platform, with administrative access to production environment variable dashboards.
Context.ai issued a bulletin on Sunday (updated Monday) stating the breach affected its deprecated AI Office Suite consumer product, not its enterprise Bedrock offering. Context.ai detected unauthorized AWS access in March, engaged CrowdStrike to investigate, and shut down the environment. The updated bulletin revealed a broader scope than initially thought: OAuth tokens for consumer users were also compromised, and one of them provided access to Vercel's Google Workspace.
Dwell time is the number security directors should focus on. Nearly a month passed between Context.ai detecting the breach in March and Vercel disclosing it on Sunday. Trend Micro's analysis suggests the intrusion may have begun as early as June 2024, which, if accurate, would extend the dwell time to roughly 22 months. VentureBeat could not independently reconcile that date with Hudson Rock's February 2026 infection timeline; Trend Micro did not respond to a request for comment before publication.
Where detection goes blind
Security directors can compare their detection systems against the four-step kill chain exploited in this breach.
| Kill Chain Step | Incident Detail | Detection Responsibility | Common Coverage | Coverage Gap |
| --- | --- | --- | --- | --- |
| 1. Infostealer on device | Context.ai employee downloaded Roblox cheats; Lumma Stealer harvested credentials. | Endpoint EDR; credential monitoring. | Low. Device likely under-monitored; many organizations do no stealer-log monitoring. | Organizations often lack infostealer intelligence feeds or correlation of stealer logs with employee emails. |
| 2. AWS compromise at Context.ai | Attacker used stolen credentials to access Context.ai's AWS. Detected in March. | Context.ai cloud security; AWS CloudTrail. | Partial. AWS access was halted, but OAuth token exfiltration was missed. | Initial investigation missed OAuth token exfiltration; scope was underestimated until Vercel's disclosure. |
| 3. OAuth token theft into Vercel | Compromised OAuth token accessed a Vercel employee's Google Workspace. Broad permissions had been granted via the Chrome extension. | Google Workspace audit logs; OAuth app monitoring; CASB. | Very low. Third-party OAuth token usage patterns are rarely monitored. | No workflow intercepted the grant; no anomaly detection for OAuth token use from compromised parties. |
| 4. Lateral movement into Vercel production | Attacker accessed non-sensitive environment variables and harvested customer credentials. | Vercel platform audit logs; behavioral analytics. | Moderate. Detection came post-exfiltration. | Env var access by a compromised account did not trigger real-time alerts. |
Confirmed vs. claimed details
Vercel’s bulletin confirms unauthorized internal access, limited customer impact, and two IoCs tied to Context.ai’s Google Workspace OAuth apps. Rauch confirmed Next.js, Turbopack, and Vercel’s open-source projects remain unaffected.
Meanwhile, someone using the ShinyHunters alias claimed on BreachForums to possess Vercel’s internal database, employee accounts, and GitHub and NPM tokens, with a $2 million price tag. Austin Larsen, Google Threat Intelligence principal threat analyst, assessed the claim as likely false. ShinyHunters-associated actors have denied involvement. These claims remain unverified.
Six governance failures exposed by the Vercel breach
1. AI tool OAuth scopes go unchecked. Context.ai’s bulletin notes a Vercel employee granted “Allow All” permissions with a corporate account. Most security teams lack an inventory of AI tools their employees have granted OAuth access to.
CrowdStrike CTO Elia Zaitsev bluntly stated at RSAC 2026: “Don’t give an agent access to everything just because you’re lazy. Give it access to only what it needs to get the job done.” Jeff Pollard, Forrester VP and principal analyst, told Cybersecurity Dive this attack highlights third-party risk management concerns and AI tool permissions.
2. Environment variable classification is critical. Vercel differentiates between “sensitive” variables (protected from being read back) and those accessible in plaintext. Attackers exploited the latter. A single developer-facing toggle determined the blast radius. Vercel now defaults new environment variables to sensitive.
“Modern controls are deployed, but if legacy tokens or keys aren’t retired, the system quietly favors them,” Merritt Baer, Enkrypt AI CSO, told VentureBeat.
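An audit for this control can be approximated in code. The sketch below mirrors the general shape of a project environment-variable listing, where each variable carries a `type` field and only non-"sensitive" types can be read back. The helper function, sample data, and the specific type names other than "sensitive" are illustrative assumptions, not Vercel's official tooling:

```python
# Hypothetical audit sketch: flag project environment variables that are
# not classified as "sensitive" and could therefore be read back in
# plaintext by anyone (or any attacker) with dashboard or API access.
# The response shape and the type names "plain"/"encrypted" are assumptions.

READABLE_TYPES = {"plain", "encrypted"}  # "sensitive" values cannot be read back

def flag_readable_env_vars(env_response: dict) -> list[str]:
    """Return keys of variables a compromised account could read in plaintext."""
    flagged = []
    for var in env_response.get("envs", []):
        if var.get("type") in READABLE_TYPES:
            flagged.append(var["key"])
    return flagged

sample = {
    "envs": [
        {"key": "DATABASE_URL", "type": "encrypted"},
        {"key": "STRIPE_SECRET_KEY", "type": "sensitive"},
        {"key": "PUBLIC_ANALYTICS_ID", "type": "plain"},
    ]
}
print(flag_readable_env_vars(sample))  # -> ['DATABASE_URL', 'PUBLIC_ANALYTICS_ID']
```

Running a report like this per project, and requiring sign-off for every flagged key, turns the "sensitive" toggle from a developer convenience into an enforced policy.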
3. Infostealer-to-SaaS-to-supply-chain escalation chains lack detection. Hudson Rock’s report shows a kill chain crossing four organizational boundaries. No single detection layer covers this. Context.ai’s updated bulletin acknowledged the scope exceeded initial CrowdStrike-led investigation findings.
4. Dwell time between detection and notification exceeds attacker timelines. Context.ai detected the AWS breach in March. Vercel disclosed on Sunday. CISOs should ask vendors: what is your notification window after unauthorized access affecting downstream customers?
5. Third-party AI tools are shadow IT. Vercel’s bulletin describes Context.ai as “a small, third-party AI tool.” Grip Security’s March 2026 analysis of 23,000 SaaS environments found a 490% year-over-year rise in AI-related attacks. Vercel learned this lesson the hard way.
6. AI-accelerated attackers compress response times. Rauch’s AI acceleration assessment comes from IR team observations. CrowdStrike’s 2026 Global Threat Report sets the average eCrime breakout time at 29 minutes, 65% faster than 2024.
Security director action plan
| Attack Surface | Failure | Recommended Action | Responsibility |
| --- | --- | --- | --- |
| OAuth governance | Context.ai had broad “Allow All” permissions. No approval workflow intercepted the grant. | Inventory all AI tool OAuth grants. Revoke excessive scopes. Check for Vercel’s IoCs. | Identity / IAM |
| Env var classification | Non-sensitive variables remained accessible, providing an escalation path. | Default to non-readable. Require security sign-off to downgrade any variable’s accessibility. | Platform engineering + security |
| Infostealer-to-supply-chain | Kill chain involved Lumma Stealer, Context.ai’s AWS, OAuth tokens, Vercel’s Workspace, and production environments. | Correlate infostealer intel feeds with employee domains. Automate credential rotation upon appearance in stealer logs. | Threat intel + SOC |
| Vendor notification lag | Nearly a month passed between Context.ai’s detection and Vercel’s disclosure. | Mandate 72-hour notification clauses in contracts involving OAuth or identity integration. | Third-party risk / legal |
| Shadow AI adoption | One employee’s unauthorized AI tool became a breach vector for many organizations. | Expand shadow IT discovery to AI agent platforms. Treat unauthorized adoption as a security event. | Security ops + procurement |
| Lateral movement speed | Rauch suspects AI acceleration compressed the access-to-escalation window. | Reduce detection-to-containment SLAs below the 29-minute eCrime average. | SOC + IR team |
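The stealer-log correlation recommended above reduces to a small matching routine once a feed is in place. The record shape, field names, and domains below are hypothetical; commercial infostealer-intelligence feeds each have their own schemas:

```python
# Hypothetical correlation sketch: match infostealer-log records against
# corporate email domains and emit the accounts that need immediate
# credential rotation. Record fields ("email", "source", "seen") are
# illustrative assumptions about a feed's schema.

def accounts_to_rotate(stealer_records: list[dict], corp_domains: set[str]) -> set[str]:
    """Return corporate accounts whose credentials appear in stealer logs."""
    hits = set()
    for rec in stealer_records:
        email = rec.get("email", "").lower()
        domain = email.rpartition("@")[2]  # text after the last '@'
        if domain in corp_domains:
            hits.add(email)
    return hits

logs = [
    {"email": "dev@context.ai", "source": "LummaC2", "seen": "2026-02-11"},
    {"email": "gamer@example.com", "source": "LummaC2", "seen": "2026-02-11"},
]
print(accounts_to_rotate(logs, {"context.ai"}))  # -> {'dev@context.ai'}
```

Wiring the output of a check like this into automated rotation is what closes the gap between "credentials appeared in a stealer log in February" and "tokens were still valid in March."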
Run both IoC checks today
Search your Google Workspace admin console (Security > API Controls > Manage Third-Party App Access) for two OAuth App IDs.
The first is 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com, linked to Context.ai’s Office Suite.
The second is 110671459871-f3cq3okebd3jcg1lllmroqejdbka8cqq.apps.googleusercontent.com, linked to Context.ai’s Chrome extension, granting Google Drive read access.
If either has interacted with your environment, you are within the blast radius, regardless of Vercel’s forthcoming disclosures.
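Teams with many users can script this check rather than clicking through the admin console. The sketch below contains only the matching logic; the `tokens_by_user` mapping is hypothetical sample data, and in practice it would be populated per user from the Admin SDK Directory API's token listing:

```python
# Sketch of an automated sweep for the two published IoC client IDs.
# In production, tokens_by_user would be built by calling the Admin SDK
# Directory API's token listing for each Workspace user; here it is
# supplied directly so the matching logic stays self-contained.

IOC_CLIENT_IDS = {
    "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com",
    "110671459871-f3cq3okebd3jcg1lllmroqejdbka8cqq.apps.googleusercontent.com",
}

def find_ioc_grants(user_tokens: dict[str, list[dict]]) -> list[tuple[str, str]]:
    """Return (user, clientId) pairs matching either Context.ai IoC."""
    matches = []
    for user, tokens in user_tokens.items():
        for tok in tokens:
            if tok.get("clientId") in IOC_CLIENT_IDS:
                matches.append((user, tok["clientId"]))
    return matches

# Hypothetical sample: one affected user, one clean user.
tokens_by_user = {
    "alice@example.com": [
        {"clientId": "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com"}
    ],
    "bob@example.com": [
        {"clientId": "some-other-app.apps.googleusercontent.com"}
    ],
}
print(find_ioc_grants(tokens_by_user))
```

Any non-empty result means an account in your tenant authorized one of the compromised Context.ai apps and should be treated as in scope.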
Implications for security directors
Setting aside the Vercel name, this incident demonstrates a significant vulnerability in AI agent OAuth integrations that most enterprise security systems cannot detect, scope, or contain. A simple Roblox cheat download in February led to infrastructure access by April, crossing four organizational boundaries, two cloud providers, and one identity perimeter, all without needing a zero-day exploit.
In many enterprises, employees have linked AI tools to corporate Google Workspace, Microsoft 365, or Slack with broad OAuth scopes, often without security teams’ awareness. The Vercel breach illustrates the risks when attackers exploit this exposure first.