On 30 April 2026, APRA published an industry letter telling every bank, insurer and superannuation trustee that their AI risk controls are falling behind. The findings are blunt: governance, risk management, assurance and operational resilience practices are “not keeping pace with the scale, speed, and complexity” of AI adoption.

APRA Member Therese McCarthy Hockey framed the position directly: “What we’ve observed from our supervisory engagement is that while AI adoption is continuing apace, the systems and processes required to safely govern its use aren’t keeping up. Likewise, the speed at which entities can identify and patch vulnerabilities needs to operate much faster, commensurate with the AI-accelerated threat.”

This is a supervisory letter based on direct examination findings, published to every APRA-regulated entity simultaneously. The letter explicitly warns that where entities “fail to adequately identify, manage or control AI risks,” APRA will take supervisory action.

What APRA found

The review covered large banks, insurers and superannuation trustees. APRA found AI being used across all entities reviewed, with use cases extending well beyond chatbots into software engineering, claims triage, loan application processing, fraud and scam disruption, customer interaction, and insight generation. Entities are moving from experimentation into operationally embedded and customer-facing deployments.

But governance has not kept pace. APRA identified five systemic gaps across the institutions it reviewed.

Governance treated as business-as-usual. Most entities recognised that existing prudential standards apply to AI risk, but few had operationalised governance in practice. APRA observed a tendency to treat AI as “just another technology,” missing key differences such as adaptive model behaviour, probabilistic outputs, bias risks, and privacy considerations specific to predictive systems. The result was weak controls over post-deployment monitoring, model behaviour changes, and decommissioning of AI capabilities.

Board literacy gaps. Boards showed interest in AI’s commercial potential but many lacked the technical literacy required to effectively challenge management on AI-related risks. APRA noted a pattern of boards relying on vendor presentations rather than independent analysis. The regulator’s language was pointed: boards need to move from enthusiasm about AI’s upside to informed challenge on its risks.

Concentration and visibility risks. Some entities were heavily dependent on a single AI or cloud provider across multiple use cases, with few having demonstrated robust contingency planning or tested exit strategies. Where AI capabilities were embedded within broader software platforms or developer tools, entities had limited visibility over how models were trained, updated, or constrained. Upstream dependencies including foundation models, training data sources, and fourth-party service providers were frequently opaque.

Assurance models under pressure. Traditional point-in-time assurance approaches were proving insufficient for AI systems that learn, adapt, and degrade continuously. Internal audit and risk functions lacked the specialist skills and tools required to assess AI systems, particularly where agentic behaviour, automated decision-making, or AI-assisted code generation was involved.

Shadow AI managed by policy, not technology. APRA found entities relying primarily on “policy direction or detective, after-the-fact measures, rather than enforceable technical restrictions or robust preventative controls” to manage staff use of unapproved AI tools. This echoes SAW’s coverage of the Vercel/Context.ai breach, where a single employee’s use of an unsanctioned AI tool became the entry point for a supply-chain compromise.

The Mythos dimension

The letter explicitly names Anthropic’s Claude Mythos as an example of frontier AI that could “increase the probability, speed and scale of cyber attacks” against regulated entities. SAW covered the global regulatory response to Mythos on 21 April when ASIC and APRA confirmed they were monitoring the model. The industry letter takes that monitoring one step further: APRA is now telling regulated entities to factor Mythos-class capabilities into their threat modelling and patching velocity.

APRA’s letter also references the ASD’s advice on frontier AI models, directing entities to note current Australian Signals Directorate guidance. The ASD co-authored the Five Eyes joint guidance on agentic AI published on 1 May 2026, which SAW covers in a separate article. The APRA letter and the ASD guidance are companion documents: APRA tells regulated entities what the problem is, and ASD tells them how to address it technically.

What APRA expects

APRA is not introducing new prudential standards for AI. Instead, the letter makes clear that existing standards, particularly CPS 234 (Information Security), CPS 230 (Operational Risk Management), and CPS 220 (Risk Management), already apply to AI-related risks and must be operationalised accordingly.

The letter sets out minimum governance expectations: frameworks covering the full AI lifecycle from design through deployment to decommissioning; clear ownership and accountability at each stage; board-level reporting on AI risks, not just AI opportunities; risk appetite statements that explicitly address AI; and integration of AI risk into existing operational resilience and cyber security frameworks.

On vendor risk, APRA expects entities to have contractual provisions for audit rights, model update notifications, and incident reporting with AI providers. On assurance, APRA expects continuous monitoring rather than periodic point-in-time reviews, and specialist capability within internal audit teams to assess AI-specific risks.

McCarthy Hockey was explicit that this is a warning, not a final position: “While we are not proposing to introduce additional requirements at this stage, we expect to see a significant improvement in how entities are closing the gaps between the power of the technology they are using and their ability to monitor and control it.”

What this means beyond the prudential perimeter

APRA regulates banks, insurers, and super funds. But the letter’s findings apply to every organisation adopting AI. The governance gaps APRA identified (treating AI as just another technology, board literacy deficits, vendor concentration, weak post-deployment monitoring, and shadow AI managed by policy rather than enforcement) are the same gaps that SAW has documented across sectors throughout 2026.

The Accountants Daily analysis made the supply-chain point directly: when CPS 230 tightened operational risk standards, every supplier in the chain felt it. AI governance is following the same path. Accounting practices, law firms, technology consultancies, and managed service providers that supply APRA-regulated clients will face questions about their own AI controls. The practices that build governance architecture before those questions arrive will be positioned differently from those that do not.

What CISOs, CROs, and boards should do

Complete the AI inventory. APRA found entities that could not identify all AI systems in use. The CISO shadow AI runbook provides a practical starting point. The inventory must include AI embedded in third-party platforms, developer tools, and SaaS applications, not just standalone deployments.
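To make the inventory step concrete, here is a minimal sketch of consolidating discovery feeds (CMDB exports, SaaS admin reports, developer-tooling scans) into one deduplicated register with an accountable owner per asset. The class, field names, and sample data are hypothetical illustrations, not part of APRA's letter or any specific tool.

```python
# Illustrative sketch: merge AI-discovery feeds into one register so that
# AI embedded in SaaS platforms and developer tools is captured alongside
# standalone deployments. All names and categories are invented examples.
from dataclasses import dataclass


@dataclass(frozen=True)
class AIAsset:
    name: str
    category: str   # e.g. "standalone", "embedded-saas", "developer-tool"
    owner: str      # accountable business owner; empty string = unowned


def build_register(*feeds: list) -> dict:
    """Merge discovery feeds into a register keyed by asset name.

    The first feed to report an asset wins; later duplicates are dropped.
    """
    register = {}
    for feed in feeds:
        for asset in feed:
            register.setdefault(asset.name, asset)
    return register


def unowned(register: dict) -> list:
    """Assets with no accountable owner -- an ownership gap to remediate."""
    return [name for name, asset in register.items() if not asset.owner]
```

A usage pass over two overlapping feeds would surface both the deduplicated asset count and any entries lacking an owner, which maps directly onto APRA's expectation of clear ownership at each lifecycle stage.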

Move shadow AI controls from policy to enforcement. APRA’s finding that entities rely on “policy direction” rather than “enforceable technical restrictions” is a direct instruction to deploy preventative controls: OAuth app restrictions, browser extension management, data loss prevention rules, and API-level blocking of unsanctioned AI services.
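The policy-versus-enforcement distinction can be sketched as an egress decision rule of the kind a secure web gateway or proxy policy would apply: approved AI services pass, known unsanctioned services are blocked outright rather than merely logged, and unknown hosts fall through to detective monitoring. The domain names below are placeholders, and real deployments would enforce this at the gateway or DNS layer rather than in application code.

```python
# Minimal sketch of a preventative egress rule for AI services.
# Domain lists are illustrative placeholders, not a vendor assessment.
APPROVED_AI_DOMAINS = {
    "internal-llm.corp.example",
    "approved-copilot.example.com",
}
KNOWN_UNSANCTIONED_AI_DOMAINS = {
    "chat.unsanctioned-ai.example",
    "api.unsanctioned-ai.example",
}


def egress_decision(host: str) -> str:
    """Return the enforcement action for an outbound request to `host`."""
    host = host.lower().rstrip(".")
    if host in APPROVED_AI_DOMAINS:
        return "allow"
    if host in KNOWN_UNSANCTIONED_AI_DOMAINS:
        # Preventative control: the request never leaves the network,
        # instead of being discovered in logs after the fact.
        return "block"
    # Unknown host: permit but route to detective controls for review.
    return "allow-and-monitor"
```

The design point is the middle branch: APRA's criticism is of entities whose only answer to unsanctioned tools is a policy document and after-the-fact log review, where this rule stops the request at the boundary.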

Brief the board with substance, not slides. APRA explicitly criticised boards that rely on vendor presentations. Board briefings on AI should include independent threat analysis, documented risk appetite positions, and scenario-based assessment of how frontier AI models could be used against the institution’s specific technology stack.

Map vendor concentration. If the organisation depends on a single AI or cloud provider for multiple critical use cases, document the dependency, test exit strategies, and ensure contractual provisions cover audit rights, model change notifications, and incident reporting.
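The dependency-mapping step can be reduced to a simple grouping exercise: list each critical use case against its provider and flag any provider that exceeds a concentration threshold. Use case and provider names here are invented for illustration; the threshold is an assumed parameter, not an APRA figure.

```python
# Hypothetical sketch: flag single-provider concentration across AI use cases.
# Inputs and threshold are illustrative assumptions.
def concentration_report(use_cases: dict, threshold: int = 2) -> dict:
    """Group use cases by provider; return providers at or over `threshold`.

    `use_cases` maps a use-case name to the provider it depends on.
    """
    by_provider = {}
    for use_case, provider in use_cases.items():
        by_provider.setdefault(provider, []).append(use_case)
    return {
        provider: cases
        for provider, cases in by_provider.items()
        if len(cases) >= threshold
    }
```

Any provider surfaced by this report is a candidate for the contractual and contingency work APRA describes: documented exit strategies, audit rights, model change notifications, and incident reporting.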

Accelerate patching velocity. APRA’s reference to frontier AI models enhancing vulnerability discovery means patching cadence needs to match the speed at which AI can find and exploit weaknesses. The Mythos case, where a single model found a 17-year-old FreeBSD vulnerability autonomously, illustrates why monthly patch cycles are no longer sufficient for critical systems.

Prepare for supervisory follow-up. APRA’s letter states it will “apply its supervisory focus to entities’ AI adoption and manage the resulting risks.” That language signals thematic reviews, targeted examinations, and potentially formal requirements to remediate. Entities that can demonstrate progress against the letter’s findings will be in a stronger position when the supervisor arrives.

Sources

  • APRA, “APRA Letter to Industry on Artificial Intelligence (AI),” 30 April 2026 (full supervisory letter, governance expectations, risk management findings, Mythos reference, CPS 234/230/220 application). apra.gov.au
  • APRA, “APRA calls for a step-change in AI-related risk management and governance,” media release, 30 April 2026 (McCarthy Hockey quotes, key findings summary, $9.8T supervisory perimeter). apra.gov.au
  • IDM Magazine, “APRA Threatens Action Over AI Governance Failures,” 1 May 2026 (shadow AI policy vs enforcement finding, assurance gaps, vendor concentration, fourth-party opacity). idm.net.au
  • Insurance Business Magazine, “APRA pushes insurers to narrow AI risk oversight gap,” 1 May 2026 (board literacy gaps, concentration risk, industry AI adoption statistics). insurancebusinessmag.com
  • Mirage News, “APRA Urges Major Shift in AI Risk Management,” 30 April 2026 (full McCarthy Hockey statement, CFR engagement, ASD reference). miragenews.com
  • Accountants Daily, “APRA’s governance letter wasn’t written for accountants. Read it anyway,” 4 May 2026 (supply-chain implications, CPS 230 precedent, December 2026 Privacy Act ADM). accountantsdaily.com.au
  • GRC Report, “APRA Warns AI Risk Controls Are Falling Behind as Financial Sector Accelerates Adoption,” 30 April 2026 (five finding categories summary, McCarthy Hockey extended quotes). grcreport.com