Grant Thornton calls it the “AI proof gap.” Three in four boards have approved major AI investments. But when the firm surveyed 950 senior US business leaders across multiple industries in early 2026, 78% said they lack confidence their organisation could pass an independent AI governance audit within 90 days. Among organisations still piloting AI, only 7% are “very confident” they could pass; among those with fully integrated AI, that very-confident share reaches 74%.

“Companies are making tremendous investments into AI and yet, we’re not seeing that correlate with an increase in AI accountability,” said Tom Puthiyamadam, managing partner of Advisory Services at Grant Thornton. “Our report found that while most organisations have implemented AI solutions, many teams cannot measure its impact or respond effectively when initiatives fail.”

The question SAW keeps returning to, across the EU AI Act, the NAIC insurance pilot, the Connecticut AG memo, and the Australian Privacy Act deadlines, is the same one Grant Thornton has now put a number on: can you prove your AI governance works? For most organisations, the answer is no.

What the survey actually measured

The AI Impact Survey collected responses from nearly 1,000 senior business leaders (the full report confirms n = 950) across multiple US industries between 13 February and 18 March 2026. It is self-reported, US-only, and produced by an advisory firm with a commercial interest in AI governance services. Those caveats matter and should inform how the data is used. But the sample size, the seniority of respondents, and the specificity of the questions make the findings usable for governance analysis. Axios covered the survey on the day of release, confirming the core numbers independently.

The methodology is disclosed. Grant Thornton states explicitly that the results “represent respondents’ perceptions and experiences at the time of the survey and do not constitute an assurance, audit, attestation or determination of regulatory compliance.” References to an “audit” in the survey are conceptual, not literal. The question was whether leaders felt they could demonstrate compliance, not whether an actual audit had occurred. That distinction matters, but the finding is still sharp: most leaders who have approved AI spend do not believe they could prove their governance is working.

The numbers that matter for governance teams

78% lack audit confidence. The headline finding. More than three-quarters of senior leaders said they could not confidently demonstrate AI governance readiness to an independent assessor within 90 days. The gap between piloting firms (7% “very confident”) and fully integrated firms (74%) shows that governance confidence is built through experience, not through policy documents.

Only 12% say their workforce is AI-ready. Despite heavy AI investment, only one in eight organisations believes its people can use AI effectively. This aligns with WalkMe’s 2026 State of Digital Adoption survey finding that 80% of enterprise workers bypass or avoid sanctioned AI tools.

46% say AI underperforms because controls are not working. Leaders attributed underperformance to governance failure rather than technological inadequacy. This is the most significant finding for compliance teams: the constraint is the infrastructure around AI, not the AI itself.

75% of boards have approved major AI investments, but only 52% have set governance expectations. Nearly half of boards that signed off on AI spend did so without setting clear governance expectations. Only 54% have integrated AI risk and opportunity into ongoing board or committee oversight. The investment and the oversight are disconnected.

51% say strategy is the biggest driver of AI ROI, but only 22% have a fully developed AI strategy. The gap between recognising what drives returns and actually building it is a recurring pattern across every finding in the survey.

Nearly three in four organisations are piloting, scaling, or running autonomous AI, but only one in five has tested a response plan for AI failures. Agentic AI is already in production in most organisations. Incident response planning has not caught up. When something goes wrong, most organisations will be improvising.

The C-suite alignment problem

The Grant Thornton data exposes a structural misalignment between the three executive roles most involved in AI deployment. CIOs and CTOs own the technology. COOs discover the governance gaps when AI touches operations. CFOs control the budget but are not always funding the governance layer.

“Most governance models weren’t designed for AI,” Puthiyamadam told Insurance Business Magazine. “Centralised review bodies become overwhelmed, creating bottlenecks that slow execution without reducing risk. The fix is to set policy and risk criteria centrally, then delegate assessments to trained reviewers at the division or regional level, aligning the depth of review to the level of risk.”

Sumeet Mahajan, lead partner for AI and Data at Grant Thornton Advisory Services, framed the breadth-versus-depth problem: “Organisations are expanding AI across more pilots, use cases and functions, but without consistent measurement, feedback loops or clarity on where value is created. You have to apply discipline, set measurement targets, build governance infrastructure and curtail initiatives that do not deliver results.”

Why this matters for Australian organisations

The Grant Thornton survey is US-only. But the “proof gap” it describes is jurisdiction-agnostic.

Australian organisations facing the December 2026 automated decision-making transparency requirement under the Privacy and Other Legislation Amendment Act 2024 will need to demonstrate how AI-assisted decisions are made, who is accountable, and what safeguards are in place. That is an evidence question, not a policy question. A governance framework document that nobody has tested, with controls nobody has audited, will not survive a regulatory inquiry.

ASIC’s REP 798 examination of 23 Australian lenders already showed the pattern Grant Thornton is now quantifying at scale: organisations deploy AI faster than they build the governance to support it. The NAIC insurance pilot (covered in a separate SAW article) is building a template that other sector regulators will follow. The Connecticut Attorney General’s February 2026 memorandum established that existing laws already regulate AI, meaning the enforcement tools are in place before the governance is.

Navrina Singh, CEO of AI governance platform Credo AI, told Axios that governance is becoming “a competitive moat” for organisations that build it early. The Grant Thornton data supports that framing: the 4x revenue growth multiplier for integrated firms tracks alongside the 74% governance audit confidence in the same cohort. The organisations that can prove their AI works are the same ones generating returns from it.

The data also gives Australian boards a benchmark to test against. If 78% of senior US leaders, surveyed across organisations that have already made major AI investments, do not believe they could pass a governance audit, the odds that an Australian mid-market firm with fewer resources and less AI maturity could do better are low.

What the proof gap actually looks like in practice

The gap is not between having a policy and not having one. Most organisations have something on paper. The gap is between policy and evidence.

An organisation with an AI acceptable use policy but no inventory of which AI tools are in production has a policy gap. An organisation with an inventory but no documented risk assessments for each tool has an assessment gap. An organisation with risk assessments but no incident response testing has a readiness gap. An organisation with all three but no board-level reporting on AI governance metrics has an oversight gap. The “proof gap” is the accumulation of all these gaps, and Grant Thornton’s 78% figure suggests most organisations have multiple layers missing.
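The layered gaps above can be sketched as an ordered checklist, where each layer only makes sense if the layers beneath it exist. This is an illustrative sketch, not Grant Thornton's framework; the layer names and example inputs are assumptions chosen to mirror the paragraph above.

```python
# Illustrative only: these layer names are assumptions drawn from the
# article's description, not a prescribed standard. Each layer depends
# on the one before it, so gaps accumulate rather than stand alone.
GOVERNANCE_LAYERS = [
    "acceptable_use_policy",
    "ai_inventory",
    "risk_assessments",
    "tested_incident_response",
    "board_reporting",
]

def proof_gaps(evidence: set[str]) -> list[str]:
    """Return every layer lacking supporting evidence, in dependency order."""
    return [layer for layer in GOVERNANCE_LAYERS if layer not in evidence]

# An organisation with a policy and an inventory, but nothing deeper:
missing = proof_gaps({"acceptable_use_policy", "ai_inventory"})
# missing == ["risk_assessments", "tested_incident_response", "board_reporting"]
```

Running the check against an organisation's actual evidence set turns the abstract "proof gap" into a concrete work programme: the returned list is the remediation backlog, in order.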

What boards and governance teams should do

Run the 90-day test. Ask the question Grant Thornton asked: if an independent auditor, regulator, insurer, or customer asked to see your AI governance in 90 days, what could you produce? Map the evidence you have against the evidence you would need. The gaps become the work programme.

Build an AI inventory that can survive scrutiny. Not a spreadsheet of tools someone listed once. A living register that identifies every AI system in production, its risk classification, its data inputs, its decision scope, who owns it, and when it was last assessed. If the inventory does not exist, every governance control built on top of it is a fiction.
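A "living register" of the kind described above can be as simple as a structured record per system with a reassessment clock. The field names and the 180-day reassessment window below are assumptions for illustration, not a regulatory requirement.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical register entry; field names and the 180-day window are
# illustrative assumptions, not a prescribed schema.
@dataclass
class AISystemRecord:
    name: str
    risk_class: str          # e.g. "low" / "limited" / "high"
    data_inputs: list[str]
    decision_scope: str      # what the system decides or recommends
    owner: str
    last_assessed: date

    def is_stale(self, today: date, max_age_days: int = 180) -> bool:
        """True if the record is overdue for reassessment."""
        return today - self.last_assessed > timedelta(days=max_age_days)

record = AISystemRecord(
    name="claims-triage-model",
    risk_class="high",
    data_inputs=["claim_text", "policy_history"],
    decision_scope="routes claims to fast-track or manual review",
    owner="COO / Claims Operations",
    last_assessed=date(2026, 1, 15),
)
record.is_stale(today=date(2026, 9, 1))  # overdue: assessed more than 180 days ago
```

The staleness check is the point: a register entry nobody has revisited is exactly the "spreadsheet someone listed once" the paragraph warns against.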

Set governance expectations at the board before approving AI spend. Grant Thornton found that 75% of boards approved AI investments while only 52% set governance expectations. The governance conversation should precede the spending conversation, not follow it.

Test your AI incident response plan. Nearly three in four organisations are running autonomous AI. Only one in five has tested a failure response plan. A tabletop exercise that simulates an AI-generated output causing harm, a regulatory inquiry, or a data breach involving AI-processed data takes a day to run and months to recover from not running.

Align CIO, COO, and CFO on governance ownership. The Grant Thornton data shows these three roles are describing different organisations. Until they agree on who owns AI governance, who funds it, and who reports on it, the proof gap will keep widening.

The compounding problem

Grant Thornton’s full report makes a point that deserves direct attention. Each ungoverned AI initiative does not just create one gap. It creates conditions that make the next initiative harder to govern, harder to measure, and harder to defend. The proof gap does not grow linearly. It compounds. Organisations that wait until the first audit, enforcement action, or incident to address governance will find the gap has grown larger than anything a remediation project can close quickly. The organisations closing the gap now are building governance alongside deployment, and the Grant Thornton data shows the difference is already measurable in revenue, compliance confidence, and operational resilience.

Sources

  • Grant Thornton, “2026 AI Impact Survey Report” (full report; n = 950; methodology and detailed findings). grantthornton.com
  • Grant Thornton, “A widening ‘AI proof gap’ is emerging, but well-governed AI is showing results,” press release, 13 April 2026 (Puthiyamadam and Mahajan quotes). grantthornton.com
  • Business Wire, “Grant Thornton survey: A widening ‘AI proof gap’ is emerging,” 13 April 2026 (wire distribution, methodology disclosure). businesswire.com
  • Axios, “The work AI boom is outrunning oversight,” 13 April 2026 (independent validation, Navrina Singh quote). axios.com
  • Insurance Business Magazine, “Widening ‘AI proof gap’ exposes weak governance behind board-level enthusiasm,” April 2026 (Puthiyamadam and Mahajan extended quotes). insurancebusinessmag.com