Kordia’s 2026 NZ Business Cyber Security Report is the first major regional survey to rank shadow AI above external threats. Published 9 March, the report found 43% of businesses say their own employees are their biggest AI risk. Shadow AI has crossed into insider threat territory. Governance frameworks have not kept pace.


What the Kordia Survey Found

Kordia is a New Zealand state-owned enterprise. Through its subsidiary Aura Information Security, it provides cyber advisory and incident response services across ANZ. The annual New Zealand Business Cyber Security Report has run for ten years, surveying nearly 250 businesses with 50 or more employees. It carries weight with boards in the region.

The 2026 edition shifts the conversation from external attackers to internal AI misuse.

The top finding: 43% of surveyed businesses said employees accidentally exposing data through AI-driven processes is their biggest cyber risk, making it “the top concern by quite a margin” according to Patrick Sharp, General Manager of Aura Information Security. External AI-powered attacks ranked lower.

The year-on-year trend reinforces this. In 2025, AI-related cyber incidents accounted for 6% of reported attacks. In 2026, that figure reached 14%, more than doubling in twelve months. The proportion of businesses naming improper AI use as a top-three cyber security challenge rose from 16% to 24% over the same period (Kordia, March 2026).

Sharp describes the core behaviour directly: staff are routinely sharing confidential data with AI platforms, including client records, commercial terms, internal pricing, and source code, without any organisational guidance on the risks involved. The same data they would never paste into a Google search goes straight into a public AI tool without a second thought.

The traditional external threat picture shows a different trajectory. The proportion of businesses reporting a cyber-attack dropped from 59% to 44%, consistent with New Zealand’s National Cyber Security Centre data showing incidents declining from 7,122 to 5,995. But the financial impact within that smaller pool got worse. Financial extortion rose from 14% to 19% of incidents, and the NCSC reported NZ$12.4 million in direct financial losses in Q3 2025 alone, up 118% from the previous quarter (NCSC Cyber Threat Report 2025). Among businesses that received a ransom demand, 42% paid it.

Kordia also flagged the “sanctioned but ungoverned” problem. Many organisations have responded to shadow AI by rolling out approved AI tools. According to the report, these deployments frequently lack sufficient security governance and practices. The line between sanctioned and shadow AI becomes difficult to enforce when authorised tools carry no guardrails.


Two Lenses on the Same Week

The Kordia findings did not emerge in isolation. The same week, two other data points landed that add texture to what the survey is measuring.

CyberCX, the ANZ-based digital forensics and incident response firm, published its 2026 Threat Report confirming that its forensics team was called in to manage AI data spill incidents for the first time, suggesting shadow AI has moved from a theoretical liability to an evidenced one. Where survey data shows how businesses perceive the risk, forensics data shows what the incident looks like once it materialises. The two sources are measuring different things but describing the same problem.

Netskope’s global Cloud and Threat Report adds a third signal. Personal AI account usage dropped from 78% to 47% as organisations pushed sanctioned tools, but the number of data policy violations doubled over the same period. The reduction in unsanctioned tool use produced more detectable violations, not fewer incidents. Organisations discovering more violations is partly a function of having more governance to detect them, not necessarily evidence that the underlying behaviour has changed.

Read together, the three reports from the same week form a consistent picture: the problem is larger than survey awareness figures suggest, it has reached the forensics stage in the region, and increasing governance visibility surfaces more exposure rather than less.


A Third Category of Insider Threat

Traditional insider threat frameworks divide risk into two categories: malicious insiders who deliberately steal or damage, and negligent insiders who make mistakes. Shadow AI fits neither cleanly.

The employee who pastes a client contract into an AI platform before a negotiation is not malicious. They are not negligent in the conventional sense either. They are optimising for the task in front of them, using the fastest tool available, with no understanding that sensitive data has now left organisational control. The behaviour is productivity-driven, not risk-driven, and it produces data loss outcomes that are functionally identical to the negligent category.

This matters for how organisations design their response. Malicious insider programmes focus on detection and deterrence. Negligent insider programmes focus on training and consequence. Productivity-driven AI exposure requires a third response architecture: policy that is specific enough to be actionable, visibility at the point of data entry rather than after the fact, and accountability structures that sit with AI programme owners rather than individual employees.
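The "visibility at the point of data entry" piece can be made concrete. A minimal sketch, assuming a hypothetical gateway that inspects outbound AI prompts: pattern names, data classes, and functions below are illustrative only, and a production deployment would use a proper DLP engine rather than a handful of regexes.

```python
import re

# Hypothetical patterns for the data classes Kordia flags: client records,
# commercial terms, pricing, and source code. Illustrative only; a real
# deployment would rely on a DLP engine with tuned classifiers.
SENSITIVE_PATTERNS = {
    "client_record": re.compile(r"\b(client|customer)\s+(id|number)\s*[:#]?\s*\d+", re.I),
    "commercial_terms": re.compile(r"\b(contract value|payment terms|margin)\b", re.I),
    "pricing": re.compile(r"\$\s?\d[\d,]*(\.\d{2})?\s*(per|/)\s*(unit|user|seat)", re.I),
    "source_code": re.compile(r"\b(def |class |import |function\s*\()"),
}

def classify_prompt(text: str) -> list[str]:
    """Return the sensitive-data classes detected in an outbound AI prompt."""
    return [label for label, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def gate_prompt(text: str) -> tuple[bool, list[str]]:
    """Allow the prompt only if no sensitive class is detected; otherwise
    block it and report the classes, so follow-up sits with the AI
    programme owner rather than the individual employee."""
    hits = classify_prompt(text)
    return (len(hits) == 0, hits)
```

The design point is that the check happens before the data leaves organisational control, not in a log review afterwards, which is the distinction the third response architecture turns on.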

The Kordia data suggests governance frameworks have not caught up with this third category. When 43% of businesses identify employees as their biggest AI cyber risk but governance frameworks remain immature, the gap is structural, not a matter of employee behaviour alone.

Insider threat playbooks in particular need updating. An employee pasting commercial terms into an AI tool before a bid submission is a data exfiltration event in terms of outcome, regardless of intent. It warrants detection, investigation, and response procedures. Current playbooks, built around disgruntled employees and credential theft, are not calibrated for this scenario.
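One way to express that playbook update is outcome-first routing: the event shape and playbook names below are hypothetical, and real SIEM or DLP events will differ, but the sketch shows intent informing the follow-up rather than the classification.

```python
from dataclasses import dataclass

# Hypothetical event shape; real SIEM/DLP events will differ.
@dataclass
class AIDataEvent:
    destination: str          # e.g. "public_ai_tool"
    data_classes: list[str]   # e.g. ["commercial_terms"]
    intent: str               # "malicious" | "negligent" | "productivity"

def route_playbook(event: AIDataEvent) -> str:
    # Outcome-first routing: sensitive data leaving control via a public AI
    # tool is handled as exfiltration regardless of intent. Intent only
    # shapes the follow-up: deterrence, training, or programme-owner review.
    if event.destination == "public_ai_tool" and event.data_classes:
        return "data_exfiltration_response"
    return "standard_triage"
```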


What Governance Teams Should Watch

Two regulatory deadlines make this more than a policy exercise.

The EU AI Act’s first enforcement provisions apply from 2 August 2026. For businesses operating in or exporting to EU markets, those provisions include requirements to demonstrate that AI systems are used within documented governance structures. Shadow AI, by definition, sits outside those structures. Organisations that cannot account for which AI tools their workforce uses, what data has been processed through them, and under what policies will face a documentation gap at the point of audit.
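Closing that documentation gap starts with an inventory. The record structure below is a sketch under stated assumptions, not a schema the EU AI Act prescribes; the field names are illustrative. The point is that every tool in use is attributable to an owner, a policy, and the data it is permitted to process.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical inventory record; the EU AI Act does not prescribe a schema.
@dataclass
class AIToolRecord:
    name: str
    vendor: str
    sanctioned: bool                  # approved deployment vs discovered shadow use
    owner: str                        # accountable AI programme owner, not end user
    policy_ref: str                   # ID of the governing acceptable-use policy
    permitted_data: list[str] = field(default_factory=list)
    discovered: date = field(default_factory=date.today)

class AIInventory:
    def __init__(self) -> None:
        self._records: dict[str, AIToolRecord] = {}

    def register(self, record: AIToolRecord) -> None:
        self._records[record.name] = record

    def shadow_tools(self) -> list[AIToolRecord]:
        """Tools in use without sanction: the documentation gap an audit finds."""
        return [r for r in self._records.values() if not r.sanctioned]
```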

Australia’s Privacy Act amendments take effect 10 December 2026. The reforms extend obligations around automated decision-making and data security in ways that make AI usage oversight a compliance requirement rather than a governance preference. Full details are in the AI compliance deadlines guide.

Beyond regulatory timelines, insurance and audit implications are becoming concrete. Insurers writing cyber policies are beginning to ask about AI usage governance specifically. Organisations that can demonstrate an AI usage monitoring programme, documented acceptable-use policies with specific examples, and a process for identifying shadow AI deployment, are in a better position on both counts.
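"Documented acceptable-use policies with specific examples" can be as simple as a declarative matrix an auditor or insurer can inspect. The tiers, data classes, and decisions below are invented for illustration; the design choice worth noting is the default-deny fallback, which is how uncatalogued (shadow) usage surfaces for review.

```python
# Hypothetical acceptable-use matrix: (tool tier, data class) -> decision.
# Specific examples ("client records may not go to public AI tools") are
# what make a policy actionable rather than aspirational.
POLICY = {
    ("public_ai", "client_records"): "block",
    ("public_ai", "commercial_terms"): "block",
    ("public_ai", "marketing_copy"): "allow",
    ("sanctioned_ai", "client_records"): "review",   # sanctioned but still governed
    ("sanctioned_ai", "marketing_copy"): "allow",
}

def decide(tool_tier: str, data_class: str) -> str:
    # Default-deny: anything not explicitly covered goes to review,
    # which is how shadow AI use surfaces for the governance team.
    return POLICY.get((tool_tier, data_class), "review")
```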

The Kordia report notes that a third of NZ businesses would consider paying a ransom demand. That figure reflects how unprepared many organisations feel when an incident occurs. Building the governance infrastructure before the incident is less about compliance and more about having options when things go wrong.


New Zealand as a Regional Signal

New Zealand is a useful leading indicator for this dynamic. Survey data from a ten-year longitudinal study, conducted across businesses of meaningful scale, provides a cleaner signal than single-vendor research or single-incident reporting.

The 43% figure from Kordia is not an outlier. It aligns with the direction of global research: Metomic found 68% of organisations had experienced AI data leaks while only 23% had formal AI security policies in place (Metomic, 2025). The Kordia finding is the regional version of that gap.

If directors in New Zealand are already naming shadow AI as their biggest AI cyber risk ahead of all external threats, the same shift is underway in Australia, the UK, and elsewhere. The governance and regulatory apparatus is moving in the same direction. The businesses that treat this as an insider threat problem requiring its own programme, rather than a variation on existing IT risk management, will be better positioned when enforcement arrives.


Stay across AI governance and compliance developments. Subscribe to the Shadow AI Watch newsletter.


Sources