CyberCX confirmed their incident response team was called in for AI data spill cases in 2025. That makes shadow AI a forensics category, not just a risk register entry.

The CyberCX 2026 Threat Report, released 3 March, is based on more than 100 serious incidents their Digital Forensics and Incident Response (DFIR) team handled in 2025. Buried in the findings is a significant marker for the AI governance industry.

For the first time, CyberCX was engaged for AI data spill incidents. Staff at multiple organisations uploaded sensitive corporate data to public AI tools. These were real forensics engagements requiring professional response teams. Not survey findings. Not hypothetical risk scenarios.

What CyberCX found when they got there

The detail that matters most: affected organisations had no enterprise licensing for AI platforms, no data loss prevention (DLP) controls, and no network logging that covered AI tool usage.

Hamish Krebs, Global Executive Director of Digital Forensics and Incident Response at CyberCX, noted that 2025 was the first year the DFIR team was engaged for these types of incidents. He attributed it to the continued surge in AI adoption across organisations.

The result was predictable. Without logging or monitoring, these organisations could not identify what data had been shared or how much. The forensics team was called in after the fact, and even they could not fully quantify the damage. These were not minor oversights that a conversation with an employee could resolve. Organisations paid for professional incident response and still could not determine the scope of the breach.
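
The baseline visibility these organisations lacked is not exotic. As a rough sketch (not anything from the CyberCX report), here is the kind of check a web proxy or secure gateway log makes possible. The log layout, the column names, the file name and the domain watchlist are all illustrative assumptions:

```python
# Minimal sketch: flag requests to public AI tools in a web proxy log.
# The CSV layout ("user" and "host" columns), the file name and the
# domain watchlist are illustrative assumptions, not from the report.
import csv
from collections import Counter

# Hypothetical watchlist of public AI tool domains.
AI_DOMAINS = {
    "chatgpt.com",
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def scan_proxy_log(path: str) -> Counter:
    """Count requests per (user, AI domain) in an assumed CSV proxy log."""
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("host", "").lower()
            if host in AI_DOMAINS:
                hits[(row.get("user", "unknown"), host)] += 1
    return hits

if __name__ == "__main__":
    # "proxy.csv" is a placeholder for whatever log export exists.
    for (user, host), count in scan_proxy_log("proxy.csv").most_common(10):
        print(f"{user} -> {host}: {count} requests")
```

A script this small answers the first question responders ask: who sent traffic to which AI tool, and how often. The organisations in the report could not answer even that.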

Why this matters beyond the incidents themselves

Shadow AI has been a talking point for 18 months. Every industry report has cited it as a growing risk. But until now, the evidence has been statistical. Survey-based. Modelled.

The CyberCX findings are different: a forensics firm reporting on engagements its own DFIR team worked.

It signals a shift in how organisations will need to think about AI risk. Shadow AI is now an incident category with real response costs, legal exposure, and reputational risk.

The DTEX/Ponemon 2026 Insider Risk Report puts the average cost of insider incidents at US $19.5 million per year, up 20% in two years. Shadow AI was named as a key driver. Only 18% of organisations have properly integrated AI governance into their risk programs (DTEX/Ponemon, 2026).

The gap between enterprise advice and reality

CyberCX recommends layered defences: data cleansing and labelling programs, network-level DLP, endpoint-level DLP, and data-level DLP solutions. Their report describes these layers as designed to complement each other.
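
To make the network-level layer concrete, here is a minimal sketch of the underlying idea: inspect an outbound payload for sensitive patterns before it reaches a public AI tool. The patterns and the block/allow logic are illustrative assumptions, not CyberCX's design:

```python
# Minimal sketch of network-level DLP: inspect an outbound payload
# for sensitive patterns before it leaves for a public AI tool.
# Patterns and the block/allow decision are illustrative assumptions.
import re

# Hypothetical detection rules; real DLP products ship far broader
# detection (classifiers, document fingerprints, exact data matching).
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "tax_file_number": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),
    "api_key": re.compile(r"\b[A-Za-z0-9]{32,}\b"),
    "confidential_label": re.compile(r"(?i)\bconfidential\b"),
}

def check_outbound(payload: str) -> list[str]:
    """Return the names of any sensitive patterns found in the payload."""
    return [name for name, rx in PATTERNS.items() if rx.search(payload)]

if __name__ == "__main__":
    sample = "Summarise this CONFIDENTIAL ledger, card 4111 1111 1111 1111"
    findings = check_outbound(sample)
    if findings:
        print(f"BLOCK: matched {', '.join(findings)}")
    else:
        print("ALLOW")
```

A production DLP deployment does this inline on live traffic, with far richer detection than a handful of regexes, which is exactly where the cost and complexity come from.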

That is solid advice for organisations with security teams, dedicated budgets, and infrastructure maturity. CyberCX is transparent about this. Their report notes their customer base is skewed towards medium and large organisations.

But the research consistently shows that smaller organisations face disproportionate risk. Reco's data puts small businesses with 11 to 50 employees at the highest shadow AI usage rate, with 27% of employees using unsanctioned tools. Across all sizes, Reco found an average of 269 shadow AI tools per 1,000 employees.

The IBM Cost of a Data Breach 2025 report found that organisations with high levels of shadow AI paid US $670,000 more per breach than those without. AI-associated breaches averaged US $4.63 million per incident.

The mismatch is clear. The organisations most exposed to shadow AI risk are the least equipped to deploy enterprise-grade layered defences.

What the Australian landscape looks like

An Okta survey of Australian security and technology leaders found that 41% say nobody in their organisation owns AI security risk. In the same survey, 35% named shadow AI as their top AI security blind spot.

If four in ten organisations have no clear ownership of AI risk, the CyberCX findings start to look less like outlier incidents and more like early indicators of a broader pattern.

Meanwhile, compliance deadlines are approaching. The EU AI Act's obligations for high-risk systems apply from August 2026, with fines of up to EUR 15 million or 3% of global turnover for most violations. The Australian Privacy Act amendments require privacy policies to disclose automated decision-making, including AI, from December 2026, with civil penalties running to AUD 50 million for serious interferences with privacy.

What this signals for the industry

Three things stand out from the CyberCX report.

First, shadow AI has crossed from theoretical risk to documented incident. The forensics engagement model means organisations are now paying response costs, not just reading about potential risks.

Second, the visibility gap is the core problem. Organisations without monitoring could not scope the damage even with professional help. Every recommendation, from CyberCX and across the industry, points back to the same foundation: you cannot govern what you cannot see.

Third, the timeline is compressing. Between regulatory deadlines, rising insider risk costs, and the first confirmed forensics engagements, the window between “we should probably look at this” and “we needed this six months ago” is closing.

The CyberCX report uses a telling phrase in its AI section heading: “the more immediate AI risk might be internal.” Based on what their DFIR team found in 2025, that is hard to argue with.


Sources: CyberCX 2026 Threat Report, DTEX/Ponemon 2026 Insider Risk Report, IBM Cost of a Data Breach 2025, Reco State of Shadow AI Report, Okta Australia AI Governance Survey