The US published a Financial Services AI Risk Management Framework in February 2026, developed through a Treasury-led public-private collaboration involving 108 financial institutions and agencies including NIST. Singapore’s Monetary Authority published an equivalent in March. The UK and EU have published nothing comparable.

That gap is the central finding of “The Future of AI Governance and Compliance in Financial Services,” a report coordinated by compliance technology firm Zango AI and published 30 April 2026. The report draws on interviews with 27 C-suite and senior leaders across risk, compliance, and AI governance at UK and European financial institutions, plus four industry roundtables involving 60 additional senior practitioners. Contributors include executives from Santander, Standard Chartered, Lloyds Banking Group, Monzo, Revolut, Stripe, St James’s Place, Allica Bank, Commerzbank, and Ecommpay, alongside John Glen MP, member of the Treasury Committee.

Lord Clement-Jones, Liberal Democrat spokesperson on science and technology in the House of Lords and co-chair of the All-Party Parliamentary Group on AI, writes in the foreword: “What is immediately missing is the translation of high-level regulatory principles into day-to-day operational practice.”

Why the governance gap matters now

The report lands during a week in which three separate pressures converged on UK financial services AI governance.

The Bank of England is preparing to convene the Treasury, FCA, and National Cyber Security Centre to assess risks from Anthropic’s Mythos model. SAW covered the ASIC, APRA, and global regulatory response to Mythos on 21 April. The Bank of England convening is the next phase: a specific UK regulatory response to a frontier AI model that can autonomously discover and exploit software vulnerabilities at scale.

The EU AI Act’s Digital Omnibus trilogue collapsed on 28 April, leaving the original 2 August 2026 high-risk deadline intact. UK firms with EU operations or EU customers now face high-risk AI obligations three months away, with no delay to plan against.

Global fraud losses reached USD 579 billion in 2025, according to figures cited in the Zango report. The same research found that nine in ten financial professionals reported a rise in AI-enabled attacks over the period. The FBI’s 2025 IC3 report, which SAW covered on 24 April, logged USD 893 million in AI-related fraud complaints in the US alone.

The combination is acute: frontier AI models creating new attack surfaces, regulatory deadlines that have not moved, rising AI-enabled fraud, and no shared UK framework telling firms what “good” AI governance looks like in practice.

What the report found

The Zango report identifies three structural problems in how UK and European financial institutions are governing AI.

AI systems have changed faster than governance frameworks. The report describes a shift from AI tools that produced predictable, testable outputs to generative and agentic systems that produce context-dependent outputs and cannot be fully validated before deployment. Dean Nash, adviser to Zango and Global Chief Operating Officer (Legal) at Santander, framed the accountability challenge directly: “The kinds of AI systems now being adopted across financial services don’t behave the way the systems we built our governance frameworks around behaved. They make judgements, produce different outputs in different contexts, and cannot be fully tested in advance. This poses a significant accountability problem.”

Firms are solving the same governance problems independently. Without a shared standard, each institution is building its own AI governance framework from scratch. The result is inconsistent control standards across the sector, duplicated effort, and gaps where no firm has addressed a risk that others assumed was covered. Nash added: “Right now, most firms are trying to solve it alone, without a shared standard to work from.”

Some institutions cannot identify all AI tools in use. The report found that business and technology teams are deploying AI tools faster than risk and compliance functions can track them. Several firms acknowledged that they could not produce a complete inventory of AI systems in use across their organisation. This echoes a finding SAW has documented across sectors: the WalkMe data showing 80% of enterprise workers bypassing sanctioned AI tools, and the Vercel breach where a single unsanctioned AI tool became the entry point for a supply-chain compromise.

How the US and Singapore got ahead

The US Treasury published its Financial Services AI Risk Management Framework in February 2026. It was developed through a structured public-private collaboration with 108 financial institutions and federal agencies including NIST. The framework provides operational guidance that maps AI-specific risks to existing regulatory expectations, giving firms a reference point for model governance, data management, bias testing, and accountability.

The Monetary Authority of Singapore (MAS) published a comparable framework in March 2026, building on its existing Veritas initiative for responsible AI in finance.

The UK has the PRA’s supervisory statement on model risk management (SS1/23), the FCA and PRA’s expectations on operational resilience, and the ICO’s data protection framework. But none of these, individually or collectively, provides the kind of AI-specific, sector-level implementation guidance that the US and Singapore frameworks offer. The gap is in the translation of principles into operational practice, not in the principles themselves.

Ritesh Singhania, CEO of Zango, stated the problem plainly: “Compliance teams are trying to keep pace with AI systems their own colleagues have deployed, and with criminal networks scaling faster than anyone’s defences. Weak governance doesn’t just create individual risk. It creates systemic vulnerability across the entire sector.”

The JMLSG model

The report proposes a specific solution: practitioner-built, sector-specific implementation guidance modelled on the Joint Money Laundering Steering Group (JMLSG). The JMLSG is an industry-developed standard for financial crime compliance that carries government endorsement without being mandated by regulators. It provides detailed, practical guidance that firms use to demonstrate compliance with anti-money-laundering obligations. Regulators reference it in enforcement actions. Courts consider it when assessing whether a firm’s AML controls were adequate.

An AI equivalent would give UK financial institutions a shared baseline for AI governance, risk assessment, model validation, human oversight, bias testing, and incident response. It would not replace FCA, PRA, or ICO regulation. It would sit underneath those frameworks and translate principles into operational controls that firms can implement and regulators can assess.

The advantage of the JMLSG model is speed. Industry-led guidance can be developed faster than regulatory rulemaking. It can be updated iteratively as AI capabilities evolve. And it carries the legitimacy of being built by the firms that have to implement it, with regulator input rather than regulator authorship.

What this means for Australian financial institutions

Australia faces a similar gap. ASIC’s REP 798 examined AI governance maturity across 23 licensees and found significant weaknesses, as SAW covered in March. APRA’s expectations on technology risk and operational resilience apply to AI but are not AI-specific. The Australian Government has not published a financial services AI governance framework equivalent to the US Treasury’s.

The Zango report’s finding that UK firms are solving the same problems independently applies equally to Australian banks and insurers. If the UK builds a JMLSG-style AI governance standard and Australia does not, Australian financial institutions that operate in the UK or serve UK clients will need to map to the UK standard anyway. Waiting for APRA or ASIC to lead may mean Australian firms are the last to arrive at a baseline that their UK and US counterparts have already established.

What financial institutions should do

Build to the highest available standard now. The US Treasury framework is publicly available. NIST’s AI Risk Management Framework is freely accessible. Firms do not need to wait for a UK or Australian equivalent to begin aligning their AI governance practices with sector-level expectations. Building to the US framework provides a defensible baseline regardless of jurisdiction.

Complete the AI inventory. If the organisation cannot list every AI system in use, including systems deployed by business units without IT or compliance approval, the governance framework has a hole at its foundation. The Zango report confirms that some major financial institutions cannot do this. The CISO shadow AI runbook provides a practical starting point for discovery.

Engage with industry bodies on shared standards. UK Finance, AFME, the Australian Banking Association, and equivalent industry bodies are the natural conveners for JMLSG-style AI governance guidance. Firms that wait for these bodies to act without contributing to the process will inherit standards they had no part in shaping.

Map AI governance to existing regulatory expectations. The FCA, PRA, ASIC, and APRA already expect firms to manage model risk, operational resilience, data governance, and consumer outcomes. AI governance is not a new regulatory domain. It is the application of existing expectations to a new class of technology. Mapping current AI use cases to existing regulatory frameworks is the fastest way to identify gaps.

Vendor disclosure

Zango AI is a compliance technology vendor selling AI-powered regulatory tools to financial institutions. The report supports Zango’s commercial positioning in the AI governance market. SAW has used the report’s findings because the contributor list (Santander, Lloyds, Standard Chartered, Revolut, Monzo, Stripe, and a sitting MP) and the methodology (structured interviews plus roundtables) give the data credibility beyond a typical vendor survey. The USD 579 billion global fraud figure is cited in the report without a named primary source and should be treated as indicative. Readers should apply the same scrutiny to any vendor-sourced research.

Sources

  • FinTech Global, “Senior leaders warn of critical AI compliance shortfall,” 30 April 2026 (report overview, contributor list, Lord Clement-Jones foreword, JMLSG proposal). fintech.global
  • Financial IT, “UK Finance Firms Warn of No Shared AI Governance Standard as Regulators Scramble to Address Mythos Cyber Threat,” 30 April 2026 (Bank of England convening, fraud figure, Singhania and Nash quotes, John Glen MP). financialit.net
  • SecurityBrief UK, “UK financial firms lack shared AI governance standard,” 30 April 2026 (methodology detail, USD 579B fraud, 90% AI-enabled attack increase, Nash and Singhania quotes). securitybrief.co.uk
  • Financial Reporter, “Lack of AI governance leaves financial services vulnerable, senior leaders warn,” 30 April 2026 (Nash accountability quote, Singhania systemic risk quote, contributor list, Wellinghoff quote). financialreporter.co.uk
  • Finadium, “UK finance pros warn on lack of shared AI governance standard,” 30 April 2026 (Nash detailed governance architecture quote, JMLSG model, US/Singapore comparison). finadium.com
  • Zango AI, “The Future of AI Governance and Compliance in Financial Services,” report landing page, 2026 (research initiative overview). zango.ai