Most organisations deploying AI do not have a framework to govern it. Nearly 74% of organisations report only moderate or limited coverage in their AI risk and governance frameworks, according to IBM’s 2025 report on AI risk governance. A 2025 AuditBoard study found just one in four organisations has fully operational AI governance, despite widespread awareness of new regulations. The gap is most pronounced in small and mid-sized businesses, where dedicated governance resources are rare. In Australia, ASIC has already flagged this governance gap as a systemic risk in REP 798.

This guide explains what an AI governance framework actually is, what goes inside one, and how to build it, structured around the maturity spectrum ASIC laid out in REP 798. It is written for Australian businesses of all sizes, including SMEs that do not have a dedicated risk or compliance function.


What an AI Governance Framework Is (And Isn’t)

An AI governance framework is the set of policies, roles, processes, and controls an organisation uses to manage the risks, accountability, and oversight of AI systems across their lifecycle. Unlike a policy document, it is the infrastructure through which responsible AI deployment is operationalised, covering everything from who approves a new use case to how model drift is monitored after deployment.

It sits across legal, risk, technology, and operations, and applies whether an organisation is building AI in-house or sourcing it from a third-party vendor. ISO/IEC 42001, the world’s first international standard for AI management systems, was published in late 2023 and adopted by Standards Australia as AS ISO/IEC 42001:2023 in February 2024. It defines the requirements for establishing, implementing, maintaining, and continually improving an AI management system, covering risk management, AI system impact assessment, lifecycle management, and third-party supplier oversight.


Why It Matters Now in Australia

A 2025 Pacific AI governance survey found that while 75% of organisations have established AI usage policies, only 36% have adopted a formal governance framework. McKinsey’s 2024 global AI survey reported that just 18% of organisations have an enterprise-wide council with the authority to make responsible AI governance decisions, and fewer than 25% of companies have board-approved, structured AI policies.

In Australia specifically, the Governance Institute of Australia’s 2025 AI Deployment and Governance Survey found that only 10% of respondents have advanced AI qualifications and 88% have struggled to integrate generative AI into legacy systems. ASIC’s REP 798 review of 23 financial services licensees across 624 AI use cases found governance arrangements “varied widely”, with some licensees adopting AI faster than their governance could keep up. ASIC warned there is potential for a “governance gap” as AI adoption outpaces governance under competitive pressure.


The ASIC REP 798 Maturity Spectrum

ASIC identified three broad categories of AI governance maturity forming a spectrum.

Latent

The least mature approach: organisations that have not considered AI-specific governance and risk at all. They carry AI risk without an AI strategy, dedicated policies, or any formal risk articulation. General frameworks (IT policies, codes of conduct) carry the load, but those frameworks were not designed for algorithmic bias, model opacity, or third-party model risk. ASIC found some licensees in this category had significant AI use but the lowest governance maturity, placing them in the highest-risk position.

Leveraged (Decentralised)

The middle ground. These organisations adapt existing risk and governance frameworks for AI on a business-unit-by-business-unit basis. AI risks are nominally covered by general policies, including privacy, IT security, and data quality, but accountability for AI is not assigned at the enterprise executive level. Some have ethics principles, but operationalisation varies, and ASIC noted that even where policies existed, they were not evolving fast enough to match AI usage and new model types.

Strategic (Centralised)

The most mature approach. A clear, organisation-wide AI strategy aligned to risk appetite and business objectives, with AI-specific policies covering the full lifecycle. Board-level reporting is in place, a cross-functional AI committee or council operates with executive authority, and third-party model governance is embedded. ASIC viewed these licensees as best positioned for expanding AI use, because their governance led deployment rather than chasing it.


The Six Core Components of a Framework

Drawing from ISO/IEC 42001, the NIST AI Risk Management Framework, and ASIC’s findings, a mature AI governance framework contains six interlocking components.

1. AI Asset Inventory

A complete register of every AI model, tool, and use case in the organisation, including third-party and shadow AI, is the foundation of any governance framework. ASIC found that 30% of AI use cases across the 23 licensees relied on third-party models, with 13 licensees sourcing at least half their models externally. Models that are not catalogued are effectively ungoverned.
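In practice, the register can be as simple as one structured record per model or use case. The sketch below (Python) shows a minimal set of fields such a record might carry; the field names and the example entry are illustrative assumptions, not drawn from ISO/IEC 42001 or REP 798.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIAssetRecord:
    """One row in an AI asset inventory (illustrative fields only)."""
    name: str                   # e.g. "Loan pre-screening model"
    owner: str                  # named, accountable business owner
    vendor: str | None          # None for models built in-house
    use_case: str               # plain-language description of what it does
    deployed: date              # when the system went live
    last_reviewed: date         # date of the most recent governance review
    risk_tier: str              # e.g. "low" / "medium" / "high"

# Hypothetical entry: a third-party chatbot sourced from a vendor
chatbot = AIAssetRecord(
    name="Customer service chatbot",
    owner="Head of Customer Operations",
    vendor="ExampleVendor Pty Ltd",
    use_case="Answers account queries; escalates complaints to staff",
    deployed=date(2024, 6, 1),
    last_reviewed=date(2025, 1, 15),
    risk_tier="medium",
)
```

Even a spreadsheet with these columns beats no register at all; the point is a single, current source of truth.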

2. Risk Tiering and Classification

Not all AI carries the same risk. A governance framework needs a risk classification system that sorts use cases into tiers, typically low, medium, and high, with escalating controls for each tier. The Australian Government’s Voluntary AI Safety Standard, released in August 2024, adopts a risk-based approach and provides a table of system attributes that elevate risk across technical architecture, purpose, context, data, and automation. The NIST AI RMF structures this through its four core functions: Govern, Map, Measure, and Manage.
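To make tiering repeatable rather than ad hoc, the elevating attributes can be scored and mapped to a tier. The function below is a hypothetical sketch: the three attributes echo the kinds the Voluntary AI Safety Standard flags, but the scoring and thresholds are our own, not the Standard’s.

```python
def risk_tier(fully_automated: bool,
              uses_personal_data: bool,
              affects_consumer_outcomes: bool) -> str:
    """Map risk-elevating attributes to a tier.
    Illustrative thresholds only; no standard prescribes this scoring."""
    score = sum([fully_automated, uses_personal_data, affects_consumer_outcomes])
    if score >= 2:
        return "high"    # e.g. mandatory human review, board-level reporting
    if score == 1:
        return "medium"  # e.g. periodic review, model-owner sign-off
    return "low"         # e.g. standard IT controls suffice

# A fully automated credit decision on personal data lands in the top tier
print(risk_tier(True, True, True))  # -> "high"
```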

3. Lifecycle Governance

AI governance applies across the full lifecycle, not just at the point of deployment. Policies should cover design, development, testing, deployment, monitoring, and decommissioning. Guardrail 4 of the Voluntary AI Safety Standard requires acceptance criteria, test plans, adversarial testing and red-teaming (for general-purpose AI), monitoring logs, and audit triggers. ISO/IEC 42001 similarly requires continual improvement processes across the AI system lifecycle.
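One way to make lifecycle coverage auditable is to define the artefacts each stage must produce before a system advances. The gate table below is a hypothetical sketch loosely modelled on the artefacts Guardrail 4 names; the stage names and required items are assumptions, not a prescribed structure.

```python
# Required artefacts per lifecycle stage (illustrative, not prescriptive).
LIFECYCLE_GATES = {
    "design":       ["use case approval", "risk tier assigned"],
    "development":  ["acceptance criteria", "test plan"],
    "testing":      ["test results", "adversarial testing report"],
    "deployment":   ["model owner sign-off", "rollback plan"],
    "monitoring":   ["monitoring logs", "audit triggers defined"],
    "decommission": ["data disposal record", "inventory updated"],
}

def can_advance(stage: str, artefacts: set[str]) -> bool:
    """A stage gate passes only when every required artefact exists."""
    return all(item in artefacts for item in LIFECYCLE_GATES[stage])

# A model with a test plan but no acceptance criteria fails the gate
print(can_advance("development", {"test plan"}))  # -> False
```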

4. Human Oversight Requirements

Every AI use case needs a defined level of human oversight. ASIC’s review found that most current deployments were cautious, with AI augmenting rather than replacing human decision-making. As automation increases, frameworks must specify where human review is mandatory, what escalation paths look like, and how override authority works. The Voluntary AI Safety Standard dedicates Guardrail 5 to meaningful human oversight and clarifies when humans must be “in the loop” or “over the loop”.
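A framework can remove ambiguity by binding the oversight mode to the risk tier. The mapping below is a hypothetical sketch: the “in the loop” and “over the loop” terms follow Guardrail 5, but which tier gets which mode is our assumption, not the Standard’s.

```python
# Oversight mode by risk tier (illustrative mapping).
OVERSIGHT_RULES = {
    "high":   "in_the_loop",    # a human approves each decision before it takes effect
    "medium": "over_the_loop",  # a human monitors outputs and can intervene or override
    "low":    "periodic_audit", # sampled review on a set cadence
}

def requires_pre_decision_review(risk_tier: str) -> bool:
    """High-tier use cases cannot act without prior human approval."""
    return OVERSIGHT_RULES[risk_tier] == "in_the_loop"

print(requires_pre_decision_review("high"))  # -> True
```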

5. Third-Party and Vendor Management

Vendor management is the most common governance gap. ASIC’s Finding 8 identified that many licensees relied heavily on third-party AI models but did not have appropriate governance to manage those risks, including limited visibility into training data and model changes. APRA’s CPS 230 Operational Risk Management standard, effective 1 July 2025, requires regulated entities to maintain a clear understanding of, and mitigation plans for, risks arising from material service providers, including technology and data services that may embed AI. The Voluntary AI Safety Standard treats procurement as a governance lever, prompting deployers to contract for supplier transparency, testing evidence, and monitoring responsibilities.
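Procurement questions can be held as a standing due-diligence checklist applied before any third-party model enters the inventory. The items below are a hypothetical sketch of the kinds of evidence a deployer might contract for; they paraphrase the themes above rather than quoting any standard.

```python
# Minimum evidence to request from an AI vendor (illustrative list).
VENDOR_DUE_DILIGENCE = [
    "training data sources and known limitations",
    "testing and evaluation evidence for the intended use case",
    "notification process for model updates or retraining",
    "data handling, residency, and retention terms",
    "agreed split of monitoring responsibilities (vendor vs deployer)",
]

def open_gaps(evidence_received: set[str]) -> list[str]:
    """Return the checklist items a vendor has not yet satisfied."""
    return [item for item in VENDOR_DUE_DILIGENCE if item not in evidence_received]

# A vendor that has supplied only testing evidence leaves four open gaps
gaps = open_gaps({"testing and evaluation evidence for the intended use case"})
print(len(gaps))  # -> 4
```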

6. Transparency and Accountability

A framework must define who is accountable (board, committee, model owner, business unit lead) and what gets disclosed (to consumers, regulators, and internal stakeholders). ASIC found only 10 of 23 licensees had documented requirements about disclosing AI use to consumers, and no licensees had implemented specific contestability arrangements for customers affected by AI decisions. Guardrails 6 and 7 of the Voluntary AI Safety Standard operationalise disclosure, transparency, and challenge mechanisms, including guidance on when to label AI use and how to explain AI-assisted decisions.
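Accountability is easiest to audit when it is recorded per use case, so “who answers for this model” is never ambiguous. The record below is a hypothetical sketch pairing each use case with a named owner, its disclosure status, and a contestability path; the fields are our own illustration, not a regulatory template.

```python
from dataclasses import dataclass

@dataclass
class AccountabilityRecord:
    """Who answers for an AI use case and what is disclosed (illustrative)."""
    use_case: str
    accountable_executive: str       # a named individual, not a team
    oversight_body: str              # e.g. "AI governance committee"
    disclosed_to_consumers: bool     # is AI use disclosed to affected customers?
    contestability_path: str | None  # how a customer challenges an AI-assisted decision

record = AccountabilityRecord(
    use_case="AI-assisted hardship assessments",
    accountable_executive="Chief Risk Officer",
    oversight_body="AI governance committee",
    disclosed_to_consumers=True,
    contestability_path="Internal dispute resolution, then external review",
)
```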


The Australian Regulatory Landscape

Australia does not yet have a dedicated AI Act. The “Safe and Responsible AI in Australia” consultation, including the September 2024 proposals paper on mandatory guardrails for AI in high-risk settings, canvassed a dedicated regime, but the December 2025 National AI Plan abandoned that approach in favour of relying on existing technology-neutral laws and sector regulators.

That does not mean enforcement risk is absent. Australian regulators are already using existing powers to pursue AI-related breaches.

| Regulator | Instrument / Guidance | AI Governance Relevance |
|---|---|---|
| ASIC | REP 798; Corporations Act obligations | AI governance maturity; consumer harm from AI mis-selling or unfair outcomes |
| APRA | CPS 230 Operational Risk Management | Operational risk, board accountability, and third-party AI service providers for banks, insurers, and super funds |
| OAIC | Privacy Act; ADM transparency reforms; OAIC AI guidance | Data governance, consent, automated decisions, and privacy-by-design for AI |
| ASD/ACSC | “Engaging with artificial intelligence” guidance (2024) | Secure use of AI tools, prompt injection, data exfiltration, and supply-chain security |
| DTA | “Policy for the responsible use of AI in government” (v2.0) | Common baseline for accountability, risk-tiering, and transparency for AI in the APS |

The OAIC’s Bunnings facial recognition determination in November 2024 is the clearest Australian enforcement signal to date. The Privacy Commissioner found Bunnings breached Australians’ privacy by using AI-powered facial recognition in stores without valid consent, adequate notice, or sufficient governance practices, in breach of multiple Australian Privacy Principles. The OAIC now has stronger enforcement powers, including infringement notices and higher civil penalties that can reach into the millions for serious or repeated interferences with privacy.


Where to Start

Organisations with formal AI governance are 3.2 times more likely to achieve positive returns on their AI investments and report fewer ethical incidents, according to Deloitte’s AI governance research. Gartner’s 2025 survey suggests entities with formal AI boards or oversight committees reduce compliance failures by around 46%.

The goal is movement along ASIC’s maturity spectrum, from latent to leveraged to strategic, rather than a perfect framework from day one. For SMEs, the starting point is the inventory: establishing what AI is running, who owns it, and when it was last reviewed. Risk tiers follow, then defined accountability structures. Lifecycle controls, vendor governance, and transparency mechanisms build on that foundation.

The Australian Voluntary AI Safety Standard provides a practical, deployer-focused checklist that aligns to ISO/IEC 42001 and the NIST AI RMF without requiring full certification maturity upfront. It is designed to be accessible for organisations without dedicated compliance staff, making it a practical entry point for SMEs. Each guardrail maps to a concrete artefact: a live AI inventory, a risk tiering matrix, a lifecycle policy, human oversight rules, third-party due diligence standards, and clear disclosure practices.


Related reading: ASIC’s AI governance review: what it found across 23 Australian lenders | EU AI Act: what Australian businesses need to know


Stay across AI governance developments in Australia and globally. Subscribe to the Shadow AI Watch newsletter.

