On 19 April 2026, Vercel disclosed a security incident that originated from a single employee’s use of a consumer AI tool. The employee had installed Context.ai, an AI productivity and analytics extension, and connected it to their Vercel enterprise Google Workspace account with “Allow All” OAuth permissions. An attacker who had already compromised Context.ai used that OAuth access to take over the employee’s Workspace account, pivot into Vercel’s internal systems, and access customer environment variables that were stored unencrypted. A threat actor linked to the ShinyHunters group is now offering the stolen data on BreachForums for approximately USD 2 million.
This is the most detailed publicly documented case of a shadow AI tool becoming the entry point for an enterprise supply-chain breach. The kill chain runs from a consumer AI extension through OAuth token theft to customer data exfiltration, and according to Hudson Rock’s analysis it started with a Context.ai employee downloading Roblox game cheats infected with Lumma Stealer malware.
The kill chain
OX Security’s reconstruction and Trend Micro’s analysis document the chain in detail:
February 2026. A Context.ai employee’s machine was infected with Lumma Stealer, an information-stealing malware. Hudson Rock traced the infection to the employee searching for and downloading Roblox “auto-farm” scripts and executors, which are common distribution vectors for infostealers. The malware harvested credentials for Google Workspace, Supabase, Datadog, Authkit, and the [email protected] administrative account.
March 2026. The attacker used the stolen credentials to access Context.ai’s AWS environment and to compromise OAuth tokens belonging to Context.ai’s consumer users. Context.ai identified and blocked the AWS intrusion, but the OAuth token theft had already occurred. The Context.ai Chrome extension (ID: omddlmnhcofjbnbflmjginpjjblphbgk), which OX Security confirmed used OAuth2 Google App login, was removed from the Chrome Web Store on 27 March 2026.
March to April 2026. At least one Vercel employee had signed up for the Context.ai AI Office Suite using their Vercel enterprise Google Workspace account and granted “Allow All” permissions. Context.ai’s security advisory confirmed that the compromised OAuth token gave the attacker access to Vercel’s Google Workspace. From there, the attacker pivoted into Vercel environments and accessed customer environment variables not marked as “sensitive.”
19 April 2026. Vercel published its security bulletin. A threat actor claiming to represent ShinyHunters listed the stolen data on BreachForums, claiming access to customer API keys, source code, and database data. ShinyHunters told BleepingComputer they were not involved, and the post was later removed.
23 April 2026. TechCrunch reported that Vercel had uncovered evidence of malicious activity predating the April incident, suggesting the breach may be broader and longer-running than initially disclosed. Vercel CEO Guillermo Rauch confirmed on X that the hackers had been active “beyond that startup’s compromise.”
What makes this a shadow AI incident
The Vercel bulletin and Context.ai’s advisory both confirm that the breach vector was a consumer AI tool connected to an enterprise identity with overly broad permissions. This is textbook shadow AI: an employee adopted a productivity tool outside of IT-sanctioned channels, connected it to corporate infrastructure, and created an attack path that bypassed the organisation’s other security controls.
David Lindner, CISO of Contrast Security, told Dark Reading: “No exploit. No zero-day. Just an unsanctioned AI tool, an overpermissioned OAuth grant, and a gaming cheat download. Vercel is now working with Mandiant on a breach that a threat actor is selling for $2 million. Your employees are doing the same things on their machines right now. The question is whether you know about it.”
That quote captures the governance problem precisely. The attacker did not need to find a vulnerability in Vercel’s systems. They needed one employee to grant one AI tool broad access to one enterprise identity, and that was enough to traverse from a compromised startup into a major cloud platform provider.
The OAuth permissions problem
OAuth tokens are designed to allow third-party applications to access user data without sharing passwords. When an employee grants “Allow All” permissions to a consumer app, that app receives a token that can read, write, and manage data across the employee’s entire Google Workspace, including emails, documents, calendar, and contacts.
OX Security’s analysis noted that the Context.ai onboarding process asked users to link a Google account and grant the app read access to their entire Google Drive. If the user connected with an enterprise account, that meant the app could read every document in the enterprise Drive visible to that user.
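The breadth of a grant is visible in the OAuth scope strings themselves. The sketch below uses real Google API scope identifiers; the broad/narrow classification and the policy check are illustrative, not part of any product described above:

```python
# Real Google OAuth scope identifiers. The broad/narrow split reflects
# what each scope exposes: "broad" scopes cover everything the user can
# see, the "narrow" Drive scope covers only files the app itself touched.
BROAD_SCOPES = {
    "https://www.googleapis.com/auth/drive",           # full read/write to all Drive files
    "https://www.googleapis.com/auth/drive.readonly",  # read every file the user can see
    "https://mail.google.com/",                        # full Gmail access
}
NARROW_DRIVE_SCOPE = "https://www.googleapis.com/auth/drive.file"  # app-created files only

def flags_broad_grant(requested_scopes):
    """Return the requested scopes that expose the whole account."""
    return sorted(set(requested_scopes) & BROAD_SCOPES)

# An "Allow All"-style consent for a productivity extension might request:
requested = [
    "https://www.googleapis.com/auth/drive.readonly",
    "https://www.googleapis.com/auth/userinfo.email",
]
print(flags_broad_grant(requested))  # → ['https://www.googleapis.com/auth/drive.readonly']
```

A consent screen requesting only `drive.file` would leave an attacker with nothing but the files the app itself had opened; `drive.readonly` hands over the entire visible Drive.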
Most organisations do not audit which third-party apps their employees have authorised through OAuth. Google Workspace administrators can view and revoke third-party app access, but few do so proactively. The result is a growing inventory of consumer apps with enterprise-grade access that IT security teams do not monitor, do not control, and may not know exist.
Vercel published the compromised OAuth client ID in its bulletin: 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com. Any Google Workspace administrator can search for this identifier in their admin console to determine whether employees in their organisation were also using Context.ai.
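The admin-console search can also be scripted. A minimal sketch, assuming token records have been exported from the Admin SDK Directory API (its `tokens.list` endpoint returns entries with a `clientId` field; the record shape here is a simplified assumption):

```python
# Flag any user who granted access to the compromised Context.ai OAuth
# client. Input: token records exported from Google Workspace, assumed
# to be dicts carrying "userKey" and "clientId" fields.
COMPROMISED_CLIENT_ID = (
    "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com"
)

def affected_users(token_records):
    """Return a sorted list of users who authorised the compromised client."""
    return sorted(
        {rec["userKey"] for rec in token_records
         if rec.get("clientId") == COMPROMISED_CLIENT_ID}
    )

# Hypothetical export for illustration:
records = [
    {"userKey": "alice@example.com", "clientId": COMPROMISED_CLIENT_ID},
    {"userKey": "bob@example.com", "clientId": "some-other-app.apps.googleusercontent.com"},
]
print(affected_users(records))  # → ['alice@example.com']
```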
The pattern SAW readers already know
This incident validates the structural argument SAW has been building across several articles. When Microsoft Copilot was deployed across M365 tenancies, it surfaced permissions sprawl that was already there. When SaaS vendors embedded AI features by default, they processed data that organisations did not know was accessible. When Anthropic’s Mythos found 17-year-old vulnerabilities in operating systems, it exposed technical debt that had accumulated silently for decades.
Context.ai is the same pattern at a different layer. The AI tool did not create a vulnerability. It inherited permissions that a human had granted, and those permissions became the attack path. The difference is that the previous examples were about risk exposure. This one ended in an actual breach, with customer data on a criminal marketplace.
What IT and security teams should do
Audit third-party OAuth access in Google Workspace and Microsoft 365. Both platforms allow administrators to view which third-party apps employees have authorised. Search specifically for the Context.ai OAuth client ID published in Vercel’s bulletin. Then review every other AI-related app in the list. The tools employees installed without IT approval are the tools an attacker will use to move laterally.
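Reviewing “every other AI-related app in the list” is easier with an inventory ranked by how many employees granted each app access. A sketch, again assuming exported token records (the `displayText` app-name field comes from the Directory API Tokens resource; the record shape is simplified):

```python
from collections import Counter

def apps_by_user_count(token_records):
    """Rank third-party apps by number of distinct users who authorised them."""
    counts = Counter()
    seen = set()
    for rec in token_records:
        key = (rec.get("displayText", "unknown"), rec["userKey"])
        if key not in seen:        # count each user once per app
            seen.add(key)
            counts[key[0]] += 1
    return counts.most_common()

# Hypothetical export for illustration:
records = [
    {"displayText": "Context.ai", "userKey": "alice@example.com"},
    {"displayText": "Context.ai", "userKey": "carol@example.com"},
    {"displayText": "Some CRM",   "userKey": "bob@example.com"},
]
print(apps_by_user_count(records))  # → [('Context.ai', 2), ('Some CRM', 1)]
```

Apps near the top of the list with broad scopes and no procurement record are the shadow AI inventory.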
Block or restrict consumer AI apps from connecting to enterprise identity. Google Workspace administrators can configure app access control to block unapproved third-party apps or limit OAuth scopes. Microsoft Entra ID provides equivalent controls for M365. The default should be deny, not allow.
Add AI tools to the vendor risk assessment process. Consumer AI tools are not subject to the same due diligence as enterprise software procurement. They should be. Any AI tool that requests access to enterprise identity, email, calendar, or file storage should be assessed for security practices, data handling, and OAuth scope before employees are permitted to use it.
Encrypt all environment variables and secrets. Vercel’s bulletin distinguishes between “sensitive” (encrypted, not readable) and “non-sensitive” (plaintext, accessible) environment variables. The attacker accessed the non-sensitive ones. The distinction was a design choice that reduced friction but increased exposure. Any system that stores API keys, tokens, or credentials should default to encryption regardless of classification.
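The “default to encryption” rule can be enforced mechanically by refusing to store anything secret-shaped as plaintext. A minimal sketch; the name hints and value patterns below are illustrative examples, not Vercel’s classification logic:

```python
import re

# Common plaintext-secret shapes; illustrative, not exhaustive.
SECRET_PATTERNS = [
    re.compile(r"^AKIA[0-9A-Z]{16}$"),            # AWS access key ID
    re.compile(r"^sk-[A-Za-z0-9_-]{20,}$"),       # "sk-" style API keys
    re.compile(r"^gh[pousr]_[A-Za-z0-9]{36,}$"),  # GitHub token prefixes
]
SECRET_NAME_HINTS = ("KEY", "TOKEN", "SECRET", "PASSWORD")

def must_encrypt(name, value):
    """Flag env vars that should never be stored as 'non-sensitive' plaintext."""
    if any(hint in name.upper() for hint in SECRET_NAME_HINTS):
        return True
    return any(p.match(value) for p in SECRET_PATTERNS)

print(must_encrypt("STRIPE_API_KEY", "sk-live_placeholder"))  # → True (name hint)
print(must_encrypt("PUBLIC_APP_NAME", "myapp"))               # → False
```

A check like this at write time turns the sensitive/non-sensitive classification from an honour system into a guardrail.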
Hunt for the specific IOC. Vercel published the compromised OAuth client ID for a reason: to allow other organisations to check whether their employees were also affected. OX Security noted that Context.ai may have had “hundreds of users across many organizations.” Every Google Workspace administrator should search for this identifier today.
Run targeted awareness training on AI tool permissions. The training does not need to be complex. Employees need to understand one concept: connecting an AI tool to your work account with “Allow All” permissions gives that tool, and anyone who compromises it, access to everything your account can see. That single message, reinforced with the Vercel example, is more effective than generic security awareness.
This will not be the last one
Vercel is a well-resourced technology company with a security team, Mandiant engagement, and coordinated disclosure. Most organisations that grant consumer AI tools broad OAuth access do not have those resources. The Vercel breach was caught, documented, and published. The next one may not be.
OX Security concluded their analysis with a blunt assessment: “Rotating keys, enabling 2FA and auditing your 3rd party connections has moved from being a recommendation to a survival tactic in 2026.”
The cost of shadow AI now has a price tag attached to it: one employee, one consumer AI tool, one “Allow All” OAuth grant, and USD 2 million worth of stolen data on a criminal marketplace.
Sources
- Vercel, “Vercel April 2026 security incident,” security bulletin, 19-23 April 2026 (kill chain, OAuth client ID IOC, remediation steps, npm package confirmation). vercel.com
- TechCrunch (Zack Whittaker), “App host Vercel says it was hacked and customer data stolen,” 20 April 2026 (ShinyHunters $2M listing, Context.ai “Allow All” permissions, scope confirmation). techcrunch.com
- TechCrunch (Zack Whittaker), “Vercel says some of its customers’ data was stolen prior to its recent hack,” 23 April 2026 (expanded scope, pre-April compromise evidence). techcrunch.com
- OX Security, “Vercel Breached via Context AI Supply Chain Attack,” 19 April 2026 (Chrome extension ID, OAuth2 login flow, onboarding permissions, Context.ai product detail). ox.security
- The Hacker News, “Vercel Breach Tied to Context AI Hack Exposes Limited Customer Credentials,” 22 April 2026 (Context.ai advisory detail, Hudson Rock Lumma Stealer finding, Guillermo Rauch quote). thehackernews.com
- Dark Reading, “Vercel Employee’s AI Tool Access Led to Data Breach,” 21 April 2026 (David Lindner CISO quote, Context.ai advisory detail, OAuth token analysis). darkreading.com
- BleepingComputer, “Vercel confirms breach as hackers claim to be selling stolen data,” 19 April 2026 (ShinyHunters denial, Rauch X post, Linear screenshot, BreachForums detail). bleepingcomputer.com
- Help Net Security (Zeljka Zorz), “Vercel breached via compromised third-party AI tool,” 20 April 2026 (Hudson Rock Lumma Stealer analysis, Roblox cheat vector, Context.ai corporate credentials). helpnetsecurity.com
- Trend Micro, “The Vercel Breach: OAuth Supply Chain Attack,” 19 April 2026 (technical reconstruction, kill chain timeline). trendmicro.com