New research from Ponemon, IBM, and Reco puts concrete dollar figures on the cost of unmanaged AI usage, with insider incidents now averaging US $19.5 million per year.

Shadow AI has been discussed as a risk category for over a year. The usual framing covers data exposure, compliance gaps, and reputational damage, all legitimate concerns. But the 2026 research cycle has added something that was largely missing from earlier conversations: specific cost data.

How much does insider risk cost now?

The DTEX/Ponemon 2026 Insider Risk Report, based on a survey of more than 1,000 IT and security professionals, found that the average annual cost of insider incidents has reached US $19.5 million per organisation.

That figure is up 20% in two years. The report names shadow AI as a key driver of the increase. As employees adopt AI tools outside sanctioned channels, the surface area for data spillage, compliance failures, and operational risk expands.

The report also found that only 18% of organisations have properly built AI governance into their risk programs. The remaining 82% are managing AI risk through general security controls or, in many cases, not managing it at all.

What shadow AI adds to breach costs

The IBM Cost of a Data Breach 2025 report provides a more specific lens. Organisations with high levels of shadow AI, meaning significant use of unapproved AI tools across the workforce, paid an average of US $670,000 more per breach than those without.

The average cost of an AI-associated breach reached US $4.63 million per incident. That figure includes detection, containment, notification, lost business, and regulatory response.

The cost premium exists because shadow AI breaches are harder to detect, harder to scope, and harder to contain. When an employee pastes client data into an unsanctioned AI tool, the organisation typically has no logging, no audit trail, and no way to determine what was exposed.

CyberCX confirmed this pattern in their 2026 Threat Report. Their forensics team was engaged for AI data spill incidents for the first time in 2025. In many cases, affected organisations could not identify or quantify the data spillage because they had no enterprise AI licensing, no DLP controls, and no network logging.

Where the costs concentrate

Insider risk costs are not evenly distributed. The Ponemon research identifies several areas where shadow AI drives disproportionate spending.

Containment takes longer when there is no visibility. If an organisation does not know which AI tools are in use, tracing a data exposure incident requires manual investigation across every potential platform. That extends the response timeline and the associated costs.

Regulatory penalties add a second layer. The EU AI Act, effective August 2026, carries fines of up to EUR 15 million or 3% of global annual turnover, whichever is higher. The Australian Privacy Act amendments from December 2026 include civil penalties of up to AUD 50 million. Both require organisations to demonstrate governance over AI usage. Shadow AI makes that demonstration impossible.
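The "whichever is higher" structure means the effective EU cap scales with company size. A minimal worked sketch, using the figures above (the turnover amounts are hypothetical examples, not from the research):

```python
# EU AI Act fine cap for this class of violation, per the figures above:
# EUR 15 million or 3% of global annual turnover, whichever is higher.

def eu_ai_act_fine_cap(global_turnover_eur: float) -> float:
    """Return the maximum fine in EUR for a given global annual turnover."""
    return max(15_000_000, 0.03 * global_turnover_eur)

# A EUR 200 million company is bound by the fixed floor:
print(eu_ai_act_fine_cap(200_000_000))    # 15000000
# A EUR 2 billion company is bound by the turnover percentage:
print(eu_ai_act_fine_cap(2_000_000_000))  # 60000000.0
```

For any organisation with more than EUR 500 million in turnover, the percentage term dominates, which is why the headline EUR 15 million figure understates the exposure for large enterprises.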

Business disruption is the third cost category. Reco research found an average of 269 shadow AI tools per 1,000 employees. When a breach originates from an unsanctioned tool, the organisation faces a triage problem: how many other unsanctioned tools might present the same exposure? Without an inventory, the answer is unknown.
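Building even a rough inventory does not require specialist tooling. A minimal sketch, assuming outbound proxy or DNS logs are available and a curated list of known GenAI domains; the log format and domain names below are illustrative, and a real deployment would use a maintained CASB or threat-intelligence category feed:

```python
from collections import Counter

# Hypothetical list of known GenAI service domains (illustrative only).
KNOWN_AI_DOMAINS = {"chat.example-ai.com", "api.example-llm.io"}

def shadow_ai_inventory(proxy_log_lines):
    """Count requests to known AI domains from simple 'user domain' log lines."""
    hits = Counter()
    for line in proxy_log_lines:
        user, domain = line.split()
        if domain in KNOWN_AI_DOMAINS:
            hits[domain] += 1
    return hits

logs = [
    "alice chat.example-ai.com",
    "bob intranet.example.org",
    "alice chat.example-ai.com",
    "carol api.example-llm.io",
]
inventory = shadow_ai_inventory(logs)
for domain, count in inventory.most_common():
    print(domain, count)
```

Even this crude a count turns the triage question from "unknown" into a ranked list of tools to investigate first.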

The governance gap in numbers

WalkMe and IDC found that 78% of employees use unapproved AI tools at work. Reco found that small businesses with 11 to 50 employees face the highest risk, with 27% of employees using unsanctioned tools. An Okta survey of Australian security leaders found that 41% say nobody in their organisation owns AI security risk. A further 35% named shadow AI as their top AI security blind spot.

The Netskope Cloud and Threat Report added further scale: 72% of enterprise GenAI use is shadow IT, and organisations average 223 AI-related data incidents per month.

These are not projections. They are measurements of current organisational behaviour. And they explain why the cost figures are climbing as fast as they are.

What the cost data means for governance decisions

The traditional approach to AI governance has been policy-led. Write an acceptable use policy, communicate it to staff, and trust that compliance follows.

The cost data suggests that approach is insufficient. When 78% of employees use unapproved tools, the gap between policy and practice is too wide for a document to bridge.

The organisations paying forensics bills and breach premiums share a common characteristic: they lacked visibility into AI usage at the moment it occurred. They had no monitoring, no audit trail, and no way to catch sensitive data before it left the organisation.

At US $19.5 million per year for insider incidents and $670,000 in additional breach costs for shadow AI, the financial case for AI visibility tools is no longer abstract. The cost of monitoring is a fraction of the cost of not monitoring, and for most organisations that gap is already widening.
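The break-even arithmetic is straightforward. A minimal sketch using the figures reported above; the monitoring spend and expected breach count are labeled assumptions for illustration, not figures from the research:

```python
# Figures from the research cited in the article:
SHADOW_AI_BREACH_PREMIUM = 670_000  # US$ extra per breach (IBM)

# Assumptions for illustration only (not from the research):
ANNUAL_MONITORING_COST = 250_000    # hypothetical AI-visibility tooling spend
EXPECTED_BREACHES_PER_YEAR = 1      # hypothetical

premium_avoided = SHADOW_AI_BREACH_PREMIUM * EXPECTED_BREACHES_PER_YEAR
print(f"Monitoring spend:        ${ANNUAL_MONITORING_COST:,}")
print(f"Breach premium avoided:  ${premium_avoided:,}")
print(f"Monitoring pays for itself: {premium_avoided > ANNUAL_MONITORING_COST}")
```

Under these assumptions the shadow AI premium alone covers the tooling spend more than twice over, before counting any share of the US $19.5 million insider-incident figure.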


Sources: DTEX/Ponemon 2026 Insider Risk Report, IBM Cost of a Data Breach 2025, CyberCX 2026 Threat Report, Reco State of Shadow AI Report 2025, Okta Australia AI Governance Survey, Netskope Cloud and Threat Report 2026, WalkMe/IDC AI in the Workplace Survey 2025