Australian financial services firms are deploying AI faster than they are building the controls to manage it. That is the core finding of ASIC Report 798, released in October 2024. The regulator reviewed 624 AI use cases across 23 licensees and found the same pattern repeated: adoption outrunning governance. The compliance implications extend well beyond financial services.


Why This Report Matters Beyond Finance

ASIC’s review is the first of its kind from an Australian financial regulator. It examined how banks, insurers, credit providers, and financial advice firms were using AI as at December 2023, with follow-up meetings in June 2024.

The findings are relevant to any regulated industry. The competitive pressure to deploy AI before governance catches up is not specific to financial services.

Digital innovation, including AI, is estimated to contribute $315 billion to Australia’s GDP by 2030 (Department of Industry, Science and Resources). The value of AI is not in dispute. What ASIC examined was whether organisations are building safeguards fast enough to match deployment, and across 23 licensees, most were not.


The Scale of What Was Found

The 23 licensees in the review reported 624 AI use cases impacting consumers, directly or indirectly. That figure reflects how deeply embedded AI has become in everyday financial services operations.

The more significant data point is the age profile of those use cases. According to ASIC Report 798, 57% of all use cases were less than two years old or still in development at the time of the review. AI adoption in Australian finance has accelerated sharply, and the acceleration is continuing: 61% of licensees told ASIC they planned to increase their AI use within the next 12 months.

Generative AI is the sharpest illustration. While it represented just 5% of use cases currently in production, it accounted for 22% of all use cases in development. Some 92% of generative AI use cases reviewed were deployed in 2022 or 2023, or were still being built. This is a technology going from near-zero to mainstream deployment inside 24 months.

Governance frameworks have never moved at the speed of product deployment, and AI is not changing that.


What the Governance Gap Looks Like in Practice

ASIC found that governance maturity sat on a spectrum. At one end, a small group of licensees had built strategic, centralised AI governance frameworks. These organisations had clearly articulated AI strategies, AI-specific policies covering the full deployment lifecycle, dedicated oversight committees, and regular board reporting on AI risk. They understood what they had deployed, who was accountable, and how they would respond if something went wrong.

At the other end, some licensees carried AI risk without a strategy, risk classification, or any clear picture of what was running in their systems.

Between those two poles sat the majority: organisations that had made some changes, adapted existing frameworks, and considered the issue, but had not fully operationalised their response to AI risk.

Only 12 of the 23 licensees had AI policy documents that referenced fairness, discrimination risk, or algorithmic bias. Only 10 had documented requirements about disclosing AI use to affected consumers (ASIC Report 798, October 2024). No licensee had implemented specific contestability arrangements: a formal process through which consumers can challenge decisions made by AI.
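
What a contestability arrangement demands in practice is concrete: enough of a record of each AI-assisted decision to disclose it, reproduce it, and route a challenge to a human reviewer. The sketch below is a minimal illustration of that record; it is not drawn from the report, and every field name is hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    """Minimum record needed to disclose, reproduce, and contest an
    AI-assisted decision. All field names here are illustrative."""
    decision_id: str
    model_id: str           # which system produced the decision
    model_version: str      # exact version, so the decision can be re-run
    inputs: dict            # the features the model actually saw
    outcome: str            # e.g. "credit_declined"
    explanation: str        # human-readable reasons given to the consumer
    ai_use_disclosed: bool  # was the consumer told AI was involved?
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def open_challenge(record: AIDecisionRecord, grounds: str) -> dict:
    """Package a consumer challenge for human review. A real process
    would route this to a reviewer with authority to overturn the outcome."""
    return {
        "decision_id": record.decision_id,
        "model_version": record.model_version,
        "grounds": grounds,
        "status": "pending_human_review",
    }
```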


The Black Box Problem

The most direct illustration of governance failure in the report involves a credit scoring model that was described internally as a “black box with no ability to explain the variables in the scorecard or the impact they are having on an applicant’s score.”

An internal review, completed approximately ten months after deployment, found the model had been built with “limited understanding” of the third-party platform used, had “incomplete model documentation with missing critical elements,” and suffered from “poor governance and a lack of a monitoring process.”

The model was being used to inform credit decisions. It had the potential to result in consumers being refused credit or offered less than they would otherwise have received.

The licensee eventually replaced the model with a simpler, explainable version. But the replacement came after months of operation. And despite identifying the problem, the same licensee told ASIC it planned to expand its AI use, citing competitive pressure and the risk of being “left behind.”
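
The difference between the two models is easy to show in code. In an additive scorecard, the contribution of each variable to an applicant’s score is directly inspectable, which is exactly what the black-box model could not provide. A minimal sketch follows, with invented weights and features rather than anything from the licensee’s actual model:

```python
# Illustrative scorecard: weights and features are invented, not taken
# from any model described in ASIC Report 798.
WEIGHTS = {
    "income_band": 1.8,
    "months_at_address": 0.4,
    "prior_defaults": -3.1,
    "credit_utilisation": -1.2,
}
BASELINE = 50.0

def score_with_explanation(applicant: dict) -> tuple[float, dict]:
    """Return the score and each variable's contribution to it.

    Because the model is additive, the explanation is exact: the score
    is the baseline plus the listed contributions, nothing hidden."""
    contributions = {
        feature: WEIGHTS[feature] * applicant[feature] for feature in WEIGHTS
    }
    return BASELINE + sum(contributions.values()), contributions

score, why = score_with_explanation({
    "income_band": 4,
    "months_at_address": 18,
    "prior_defaults": 1,
    "credit_utilisation": 0.6,
})
print(f"score={score:.1f}")
for feature, contribution in sorted(why.items(), key=lambda kv: kv[1]):
    print(f"  {feature}: {contribution:+.1f}")
```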

This dynamic, identifying a governance failure while simultaneously expanding the thing being governed, is precisely what ASIC means by the governance gap.


Third-Party Models: A Compounding Risk

A significant share of the AI in Australian financial services is not built in-house. ASIC found that 30% of all use cases involved models developed by third parties. For four licensees, every single model was third-party. For 13 of the 23 licensees, at least half their models came from external vendors (ASIC Report 798, October 2024).

This creates a compounding governance problem. Organisations that struggle to oversee their own models face an additional layer of opacity when those models are built and maintained by vendors who may be unwilling or unable to explain how they work.

One licensee in the review could not say what AI technique was used in every one of its models; the vendor declined to disclose that detail, citing intellectual property concerns. The licensee had no documented process for independently validating, monitoring, or reviewing those third-party systems.

Better-practice licensees took a different approach. They required proof of independent validation from suppliers before deployment, established service level agreements covering performance monitoring and model change notifications, and applied the same governance expectations to third-party models as to internally developed ones. Accountability for model behaviour does not transfer to the vendor simply because the vendor built the model.
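
One way to operationalise that expectation is to gate deployment on documented evidence. The sketch below encodes the better-practice preconditions ASIC describes as a simple pre-deployment check; the structure and field names are assumptions for illustration, not a format the report prescribes.

```python
from dataclasses import dataclass

@dataclass
class VendorModelDossier:
    """Evidence a licensee holds about a third-party model before deployment.

    Fields mirror the better practices ASIC describes, but this structure
    is an illustrative assumption, not a prescribed format."""
    model_id: str
    vendor: str
    technique_disclosed: bool                   # vendor explained what the model is
    independent_validation_report: str | None   # reference to the validation report
    sla_covers_performance_monitoring: bool
    sla_requires_change_notification: bool

def deployment_gaps(dossier: VendorModelDossier) -> list[str]:
    """Return the list of unmet preconditions; an empty list means deployable."""
    gaps = []
    if not dossier.technique_disclosed:
        gaps.append("vendor has not disclosed the AI technique used")
    if dossier.independent_validation_report is None:
        gaps.append("no independent validation report on file")
    if not dossier.sla_covers_performance_monitoring:
        gaps.append("SLA does not cover performance monitoring")
    if not dossier.sla_requires_change_notification:
        gaps.append("SLA does not require model change notification")
    return gaps
```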


What Regulators Expect Right Now

A common response to governance gaps is to wait for specific regulation to arrive before acting. That position does not hold in Australia, where the regulatory framework for financial services is technology neutral. Existing obligations apply to AI use right now, regardless of whether AI-specific legislation has passed: the duty to provide services efficiently, honestly and fairly; prohibitions on unconscionable conduct; and requirements for adequate risk management systems.

Directors and officers are specifically named. ASIC states that duties of reasonable care and diligence extend to the adoption, deployment and use of AI. Boards should understand what AI is running inside their organisations, the extent to which they rely on AI-generated information, and the reasonably foreseeable risks associated with it.

The Australian Government’s mandatory AI guardrails consultation, covering high-risk AI in domains including financial services, was underway at the time of the report’s release. ASIC indicated it had contributed to that process and would continue monitoring licensees as any new obligations came into effect. As of December 2025, the Government opted not to proceed with mandatory guardrails, instead relying on existing technology-neutral laws and a new AUD$30 million AI Safety Institute to monitor risks and advise on gaps. That decision means existing obligations carry more weight, not less.


The Pattern That Keeps Repeating

Of the 14 licensees planning to increase AI use, 13 were also planning to uplift their governance frameworks in parallel. Only one had built the governance infrastructure before ramping up deployment (ASIC Report 798, October 2024).

Running governance upgrades alongside deployment upgrades is not the same as governance leading deployment. It means controls are always chasing use, never ahead of it. When something goes wrong, and the report documents several instances where things had already gone wrong, the response is reactive rather than systematic.

The more mature licensees understood this. They had built governance first, deployed carefully, and were in a better position to expand further because the foundations were already in place.


What Governance Leadership Actually Requires

The organisations managing this best maintained an accurate AI inventory and knew who was accountable for each model. Risk appetite was addressed explicitly in their AI strategies, not just deployment plans. Algorithmic bias was tested for rather than assumed absent, and governance standards applied equally to third-party models. When policies changed, they were applied to existing deployments as well as new ones.
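
Testing for bias rather than assuming it absent can start with something as simple as comparing outcome rates across groups. One common screening heuristic, the four-fifths rule borrowed from US employment law, flags any group whose favourable-outcome rate falls below 80% of the best-performing group’s rate. A minimal sketch with invented numbers follows; the report does not mandate this or any particular test.

```python
def disparate_impact_gaps(approvals_by_group: dict[str, tuple[int, int]],
                          threshold: float = 0.8) -> dict[str, float]:
    """Flag groups whose approval rate falls below `threshold` times the
    best-performing group's rate (the 'four-fifths rule' heuristic).

    approvals_by_group maps group -> (approved, total). This is a
    screening test only; a real bias review would go much further."""
    rates = {g: approved / total
             for g, (approved, total) in approvals_by_group.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()
            if rate / best < threshold}

# Illustrative numbers, not data from the report.
flags = disparate_impact_gaps({
    "group_a": (640, 1000),
    "group_b": (430, 1000),
})
print(flags)  # {'group_b': 0.671875} -> below the 0.8 threshold, investigate
```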

Several licensees in the review had introduced new requirements around transparency, disclosure, or ethical principles but had not applied them retrospectively to models already in use. The result was a two-tier system: newer deployments with governance attached, older deployments operating under the original, less stringent conditions.


The Honest Assessment

The patterns ASIC documents are visible across every industry moving quickly to deploy AI at scale. Fast adoption, incremental controls, third-party opacity, and competitive pressure to keep moving are structural features of this moment, not failures unique to any single organisation or sector.

ASIC’s report documents the gap with data from 23 organisations and 624 real use cases, making it harder to dismiss than anecdote alone. And the gap is not abstract: it is already causing harm.

The organisations best positioned for the next phase of AI regulation are building governance infrastructure now. Catching up, once behind, is significantly harder than staying current.


Source: ASIC Report 798, “Beware the Gap: Governance Arrangements in the Face of AI Innovation,” October 2024. Available at asic.gov.au.

Related reading: What is an AI governance framework? | The EU AI Act: what Australian businesses need to know


Stay across AI governance developments in Australia and globally. Subscribe to the Shadow AI Watch newsletter.