The real shadow AI risk is thousands of employees pasting sensitive data into personal AI accounts every day with no logging, no oversight, and no awareness they have done anything wrong. The threat is behavioural, not technical. And most compliance programmes are not built for it.

That framing comes from Aaron Warner, CEO of security firm ProCircular, writing in InformationWeek on 5 March 2026. His argument: in 2026, shadow AI is less a technology problem and more a behavioural compliance problem at scale.

How this differs from shadow IT

Traditional shadow IT required some technical capability. A developer building an unauthorised workaround understood they were going around IT policy. They knew the rules, even if they were breaking them.

Shadow AI lowers that bar to near zero. It requires nothing more than a browser and the impulse to finish a task faster. When an employee pastes customer records or internal financial data into a personal ChatGPT account, that data may persist on third-party servers, be used in model training, or become subject to legal discovery orders.

According to WalkMe’s 2025 AI in the Workplace Survey (Propeller Insights, July 2025), 78% of employees report using AI tools not approved by their employer. That figure has held broadly consistent across multiple independent surveys.

The mechanism of harm has changed. Shadow IT moved data around inside and between organisations. Shadow AI sends it outside entirely, to third-party model infrastructure with its own retention policies, training data practices, and legal exposure.

The scale is already documented

Netskope’s 2026 Cloud and Threat Report found that 47% of generative AI users in the workplace are using personal accounts rather than organisation-managed tools. That is down from 78% the prior year, suggesting governance policies are starting to have some effect, but it still represents roughly half of the active GenAI user base operating outside IT visibility. Netskope also found that the average organisation experiences 223 data policy violations involving generative AI applications every month. Organisations in the top quartile for AI adoption see up to 2,100 incidents monthly.

Healthcare is a useful case study in concentrated risk. A February 2026 clinician survey by Wolters Kluwer found that 41% of healthcare respondents are aware of colleagues using unapproved AI tools at work, while 17% admit doing so themselves. In a sector where a single mishandled patient record triggers a notifiable breach, that combination of awareness and continued use should alarm compliance officers.

The e-discovery dimension

A January 2026 court ruling adds a different kind of regulatory exposure. On 5 January, the US District Court for the Southern District of New York upheld orders requiring OpenAI to produce a sample of 20 million de-identified ChatGPT conversation logs as part of consolidated AI copyright litigation.

The ruling establishes that AI conversation logs are discoverable electronically stored information. Courts will weigh privacy interests against relevance, and de-identification is not a complete shield. For organisations whose employees are using personal AI accounts to process sensitive business data, this creates a category of legal exposure that did not exist before generative AI became routine.

IBM’s 2025 Cost of a Data Breach Report found that breaches linked to shadow AI added as much as USD 670,000 to the average breach cost, and 65% of shadow AI-related breaches exposed customer personally identifiable information.

What compliance teams should do

The evidence is consistent across multiple sources: blanket bans do not work. The employees who use AI tools most effectively tend to be high performers, and prohibition drives usage underground while reducing visibility. The better approach is structured engagement.

A practical framework starts with visibility. Organisations need to know what AI tools employees are actually using, which data types are flowing into them, and whether personal or enterprise accounts are in use. Network-level monitoring for AI traffic, combined with a formal AI tool inventory, gives compliance teams a baseline most currently lack. The AI governance framework guide covers how to structure that inventory and the controls around it.
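To make that baseline concrete, here is a minimal sketch of what network-level visibility can look like in practice: scanning an exported web proxy log for traffic to known GenAI endpoints and summarising who is using which tool. The log format, column names, file name, and domain list are illustrative assumptions, not a reference to any particular monitoring product.

```python
# Sketch only: summarise GenAI usage from a CSV proxy export.
# Assumes columns named "user" and "host"; adjust to your environment.
import csv
from collections import Counter

# Illustrative watch list of GenAI service domains (extend as needed).
GENAI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "chatgpt.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
}

def summarise_genai_traffic(log_path: str) -> Counter:
    """Count requests per (user, tool) pair from a proxy log export."""
    usage = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            tool = GENAI_DOMAINS.get(row.get("host", "").lower())
            if tool:
                usage[(row.get("user", "unknown"), tool)] += 1
    return usage

if __name__ == "__main__":
    for (user, tool), count in summarise_genai_traffic("proxy_export.csv").most_common(20):
        print(f"{user:<30} {tool:<12} {count} requests")
```

Even a rough report like this tells a compliance team which tools dominate, whether traffic is going to consumer domains rather than enterprise tenants, and where to focus the formal inventory.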

The next layer is a clear data handling policy. Rather than blanket restrictions, a policy that distinguishes between public information, internal business data, and regulated or sensitive data gives employees practical guidance they can apply in the moment. This should be paired with a designated list of approved enterprise AI tools that meet the organisation’s data retention, confidentiality, and vendor review requirements.
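One way to make that policy operational rather than aspirational is to encode the tiers as a lookup that a DLP hook or internal tooling can consult before data leaves the organisation. The sketch below assumes three tiers and a hypothetical approved-tool list purely for illustration; the tier names, tool names, and mappings are assumptions, not a standard.

```python
# Minimal sketch of a three-tier data handling policy as a lookup table.
# Tier names, tool names, and limits are illustrative assumptions.
from enum import Enum

class DataTier(Enum):
    PUBLIC = 1      # already published or cleared for release
    INTERNAL = 2    # business data not intended for external parties
    REGULATED = 3   # customer PII, health, financial, or other sensitive data

# Approved enterprise tools and the highest tier each may receive (assumed).
APPROVED_TOOLS = {
    "enterprise-copilot": DataTier.INTERNAL,
    "approved-llm-gateway": DataTier.REGULATED,
}

def is_permitted(tool: str, tier: DataTier) -> bool:
    """Return True if the named tool is approved for data at this tier."""
    max_tier = APPROVED_TOOLS.get(tool)
    if max_tier is None:
        return False  # unapproved tools, including personal accounts, get nothing
    return tier.value <= max_tier.value

print(is_permitted("personal-chatgpt", DataTier.INTERNAL))      # False
print(is_permitted("enterprise-copilot", DataTier.REGULATED))   # False
print(is_permitted("approved-llm-gateway", DataTier.REGULATED)) # True
```

The point is less the code than the design choice: the default answer for any tool not on the approved list is no, which is exactly the distinction employees using personal accounts currently never encounter.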

Training closes the loop. The core message employees need is that pasting data into an AI tool is functionally equivalent to emailing it to an external party. Most employees using personal ChatGPT accounts for work tasks do not make that connection. Once they do, behaviour changes.

The regulatory pressure building

Warner frames the current moment as “a regulatory disaster waiting to happen.” That assessment reflects where enforcement is heading, not where it is today. In most jurisdictions, shadow AI data leaks sit in a grey zone: they may constitute a notifiable breach under privacy legislation, trigger sector-specific obligations in financial services or healthcare, or generate reputational and litigation exposure when they surface.

That grey zone is narrowing. Regulators across the US, EU, and Australia are developing frameworks that treat AI governance as an element of broader data protection compliance. ASIC’s review of AI practices across 23 Australian financial services licensees found the same pattern throughout: adoption consistently outrunning governance. Shadow AI is where that gap is most acute.

Organisations that have built observability and guardrails around employee-driven AI before enforcement catches up will be in a materially better position than those responding reactively after a breach investigation. The lesson from shadow IT was that trying to suppress employee behaviour never worked as well as channelling it. Shadow AI is the same problem, with higher stakes and faster consequences.

For context on what the forensics picture looks like when shadow AI becomes an incident, see Shadow AI Just Became a Forensics Problem.


Related reading: The Real Cost of Shadow AI: What the 2026 Data Shows | What Is an AI Governance Framework?


Stay across AI governance developments in Australia and globally. Subscribe to the Shadow AI Watch newsletter.


Sources