Most organizations are rightly nervous about employees adopting unapproved AI tools. Shadow AI use, where employees upload sensitive data to ChatGPT, Claude, or a dozen other LLM chatbots, is a legitimate concern. But it's not the biggest one.
When an employee connects an AI app to Google Workspace, Microsoft 365, Salesforce, or any other core platform, they're creating a persistent, programmatic bridge between your environment and a third party.
That bridge doesn't go away when the employee stops using the app. And if that third party gets compromised, the bridge becomes a direct pathway into your systems.
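These grants are visible and auditable if you go looking for them. As a minimal sketch, assuming a Google Workspace tenant, a service account with domain-wide delegation, and the google-api-python-client library, the snippet below lists the third-party OAuth grants attached to a single user's account; the function name, file paths, and email addresses are illustrative, not part of any vendor's tooling.

```python
# Minimal sketch: enumerate third-party OAuth grants for a Workspace user.
# Assumes domain-wide delegated service-account credentials with the
# https://www.googleapis.com/auth/admin.directory.user.security scope.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.directory.user.security"]

def list_third_party_grants(user_email: str, key_file: str, admin_email: str):
    """Return the OAuth app grants currently attached to a user."""
    creds = service_account.Credentials.from_service_account_file(
        key_file, scopes=SCOPES, subject=admin_email  # impersonate an admin
    )
    directory = build("admin", "directory_v1", credentials=creds)
    resp = directory.tokens().list(userKey=user_email).execute()
    return resp.get("items", [])

if __name__ == "__main__":
    # Illustrative values only.
    grants = list_third_party_grants(
        user_email="employee@example.com",
        key_file="service-account.json",
        admin_email="admin@example.com",
    )
    for grant in grants:
        # Each grant persists until explicitly revoked, regardless of
        # whether the employee still uses the connected app.
        print(grant.get("displayText"), grant.get("clientId"), grant.get("scopes"))
```

Revocation goes through the same Directory API tokens resource (tokens().delete with the user and client ID), and that revocation step is exactly what tends to be skipped once an employee stops using the app.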
We just saw this scenario play out with the Vercel breach. A Vercel employee trialed Context.ai's AI app and granted it OAuth access to their Google Workspace account. When Context.ai was breached, Vercel got caught in the fallout.
The AI scramble is a force multiplier for shadow SaaS
Shadow IT is not a new problem. Most organizations run heavily (or exclusively) on SaaS, accessed in the browser, with hundreds of apps per enterprise. Unmanaged, self-adopted apps have been a thorn in the side of security teams for some time. But the AI scramble is a force multiplier.
There are different kinds of shadow IT to be aware of in the context of AI apps:
Shadow apps: Apps that employees have signed up for and use for business purposes without business approval, whether signed up for with a corporate account or a personal one.
Shadow tenants: Apps that employees are accessing with personal accounts, essentially creating shadow tenants outside of your organization's control — even if you've approved the app itself.
Shadow extensions: Many AI apps come with an extension counterpart, alongside countless third-party extensions that are either untrustworthy or downright malicious. Browser extensions add another angle to the equation because they have visibility beyond any single application, into activity across the browser.