The UK has no AI Act. It has no single AI law of any kind. An anticipated AI Bill did not materialise in 2025, and the government has signalled it is unlikely before mid-2026 at the earliest. But UK businesses using AI are already regulated. The obligations are arriving through data protection reform, sector regulators, procurement requirements, and professional guidance, and they are landing on SMEs through supply chain pressure from larger customers who are themselves being asked to prove AI governance.
No AI Bill, But the Existing Framework Already Covers AI
AI minister Kanishka Narayan stated publicly that existing laws already apply to AI systems, citing data protection, competition, equality legislation, and online safety (Osborne Clarke, January 2026). The Financial Conduct Authority confirmed it will not introduce AI-specific rules, with Chief Executive Nikhil Rathi stating in December 2025 the regulator would rely on existing frameworks and intervene only in cases of “egregious failures” (BCLP, 2026). The government’s approach is principles-based and sector-specific: rather than one cross-economy AI law, each regulator applies its own existing powers to AI within its remit.
This means UK GDPR applies to any AI processing of personal data. The Equality Act 2010 applies to AI-driven discrimination in hiring, lending, and services. Consumer protection law applies to AI products that cause harm. Financial services firms face the FCA’s Consumer Duty and Senior Managers and Certification Regime. The absence of a single AI law does not mean the absence of AI regulation. It means the regulation is distributed across multiple frameworks that businesses need to track simultaneously.
For SMEs accustomed to a single compliance checklist, this distributed approach creates its own governance challenge: multiple guidance documents across multiple regulators, with overlapping deadlines and no single authority to satisfy.
The Data Use and Access Act Changes the Rules on Automated Decisions
The most significant AI-relevant legal change in the UK landed on 5 February 2026 when key provisions of the Data (Use and Access) Act 2025 came into force. The DUA Act, which received Royal Assent on 19 June 2025, amends but does not replace UK GDPR (GOV.UK; ICO).
The critical shift concerns automated decision-making. Under the previous UK GDPR Article 22, decisions based solely on automated processing that produced legal or similarly significant effects were broadly prohibited, with limited exceptions for contract necessity, legal authorisation, or explicit consent. The DUA Act flips the default. Automated decision-making involving non-special category data is now prima facie permitted, provided specific safeguards are implemented (Debevoise; Freshfields; Hogan Lovells).
The required safeguards include providing data subjects with information about decisions made about them, enabling them to make representations, enabling them to obtain human intervention, and enabling them to contest decisions (Legislation.gov.uk). Automated decision-making involving special category data, including health, race, and religion, remains restricted to explicit consent or legal requirement.
Littler characterised the change as “a significant shift in approach from the EU” that makes it “significantly easier for employers to use automated decision-making in their internal processes in the UK than in the EU.” Clifford Chance described it as “one of the boldest changes introduced by the DUAA.” Debevoise noted the practical implication: “Recruitment pre-screening: the use of an AI system for candidate pre-screening without subsequent human review is generally prohibited in the EU but would be generally permitted in the UK, provided safeguards are in place.”
For UK SMEs, this is a double-edged outcome. The law now permits more AI-driven automation, which is commercially useful. But the safeguard requirements, including information provision, human intervention mechanisms, and contestability, require documented processes that most SMEs do not yet have.
The ICO Has a Concrete Plan for AI Oversight
The Information Commissioner’s Office launched its AI and Biometrics Strategy in 2025, the regulator’s first dedicated strategy for AI oversight under UK data protection law. The strategy includes a detailed plan of action with specific commitments and timelines that UK businesses should be tracking (ICO Plan of Action).
The ICO will develop a statutory code of practice on AI and automated decision-making. Public consultation is expected in the first half of 2026, with the final code due by summer 2026 (ICO Technology guidance timeline). The code will provide “clear and practical guidance on transparency and explainability, bias and discrimination, and rights and redress” (ICO). As a statutory code, it carries legal weight: courts, tribunals, and the Commissioner must take it into account where relevant.
The ICO has committed to scrutinising automated decision-making in recruitment, stating it will “publish findings and regulatory expectations, holding employers to account if they fail to respect people’s information rights” (ICO Plan of Action). For any business using AI in hiring, whether a recruitment platform, an HR department using AI CV screening, or an agency using automated candidate ranking, this is a direct signal of forthcoming regulatory attention.
The ICO is also engaging with agentic AI specifically, planning a Tech Futures report examining “issues such as accountability and redress” for autonomous AI systems (ICO Plan of Action). The UK is one of the first jurisdictions where a data protection regulator has publicly committed to examining the governance implications of AI agents.
Financial Services Already Has 75 Per Cent AI Adoption
The FCA and Bank of England’s November 2024 survey found that 75 per cent of UK financial services firms already use AI, up from 58 per cent in 2022. Eighty-four per cent of firms have an individual accountable for their AI approach (FCA). The FCA launched an AI Lab, hosted an AI Sprint in January 2025, partnered with NVIDIA on a Supercharged Sandbox, and is now running two cohorts of AI Live Testing where firms develop, assess, and deploy AI in live UK financial markets with regulatory support.
In January 2026, Sheldon Mills announced a long-term review into AI and retail financial services, reporting to the FCA Board in summer 2026. The review will consider how emerging AI, including agents, could influence consumers, markets, and firms out to 2030 and beyond. Lloyds Bank’s 2025 survey found that one in three financial services customers already use AI weekly to manage their money (FCA, January 2026).
The FCA’s regulatory approach relies on existing frameworks: Consumer Duty, Senior Managers and Certification Regime, and operational resilience rules. PwC summarised the position: “Existing technology-agnostic rules apply to AI. AI already falls under multiple existing regulatory frameworks” (PwC, 2025). For SMEs that supply services to financial institutions, the FCA’s expectations flow down through procurement. A financial services firm that must demonstrate AI governance to the FCA will require evidence of the same from its technology suppliers.
Procurement Is the Mechanism That Pulls SMEs In
UK Procurement Policy Note PPN 02/24 (March 2024) and its updated version PPN 017 (February 2025) encourage suppliers to disclose AI use in tender preparation and service delivery. PPN 017 includes optional disclosure questions in Annex B to identify AI involvement, ensuring contracting authorities can assess its impact. These notes are advisory for central government bodies and optional for others, but they signal a direction that procurement professionals across the public and private sector are watching (Trowers & Hamlins, 2025).
Market Dojo’s January 2026 analysis of procurement regulations confirmed: “SMEs in supply chains will face indirect pressure from larger customers to meet these standards.” This applies across AI governance, data governance, and operational resilience. Procurement contracts now need “substantial updates to address AI Act requirements, data portability rights” and other compliance obligations, even for UK-only supply chains where EU AI Act obligations flow down through enterprise procurement (Market Dojo, 2026).
ISO 42001, the first certifiable AI management system standard, is entering procurement evaluation criteria. A March 2026 analysis noted that “a vendor’s AI governance posture is becoming part of the evaluation criteria alongside performance and price” and that “CIOs and chief procurement officers who have not yet added AI governance certification to their vendor evaluation criteria should consider doing so” (Shashi, March 2026). For UK SMEs selling to larger enterprises, this is the mechanism that converts “optional best practice” into “commercial requirement.”
UK SMEs Are Adopting AI Faster Than They Are Governing It
The British Chambers of Commerce surveyed over 1,500 business leaders in June and July 2025 and found 35 per cent of UK SMEs actively using AI, with 24 per cent planning to adopt in the future. Only 33 per cent reported no plans to use AI, down from 43 per cent the previous year (BCC, September 2025). YouGov’s survey of 1,000 SME decision-makers put the figure at 31 per cent currently using AI tools, with IT and telecoms (56 per cent) and media and marketing (53 per cent) leading adoption (YouGov, 2025).
SAP UK and Oxford Economics found that 44 per cent of UK organisations have already experienced data or intellectual property exposure from shadow AI use, and 43 per cent have experienced security vulnerabilities (SAP UK/Oxford Economics, October 2025). The techUK/ANS/YouGov survey of over 1,000 IT decision-makers identified the top barriers to responsible AI adoption: lack of expertise (35 per cent), high costs (30 per cent), and uncertainty around ROI (25 per cent). Stark disparities exist between large businesses, which worry about regulatory compliance and data security, and SMEs, which focus on cost and skills (techUK/ANS/YouGov, February 2025).
The gap between adoption and governance defines the UK SME landscape. SMEs are deploying AI tools across marketing, customer service, and operations without the documented policies, risk registers, or DPIAs that the DUA Act safeguards, ICO code of practice, and procurement requirements will increasingly demand.
What “Good Enough” Governance Looks Like for a UK SME
A UK technology solicitor writing in Enterprise Times in February 2026 put it plainly: “AI governance is no longer optional. It is foundational. European standards frequently influence commercial contracts, investor requirements and enterprise procurement processes. For businesses expanding overseas or servicing EU customers, those standards may apply directly.”
A detailed UK SME governance guide published in February 2026 mapped the core documents most SMEs are missing: an AI Use Policy governing what tools are permitted and what data may be entered, an AI Risk Register listing every AI system in use with its data processing and accountability, an Algorithmic Decision-Making Policy where automated decisions affect individuals, and an AI Incident Response Procedure establishing who is notified and when the ICO must be informed, which for a notifiable personal data breach is within 72 hours of becoming aware of it. Around those, existing policies need updating: privacy notices that disclose AI processing, employment policies addressing AI-assisted recruitment, and information security policies addressing AI-specific threat vectors (The Professor, February 2026).
The same guide noted: “When I work through the master table with leadership teams, the pattern is consistent. AI is being used. Governance is assumed. Documentation is missing. That assumption is where risk accumulates.”
For a 50 to 500-person UK business, the practical starting point is an AI risk register that makes current AI use visible, an acceptable use policy that answers employees’ daily questions about what data can go into which tools, and a plan to conduct DPIAs for any AI processing that could produce significant effects on individuals. A comprehensive governance programme can follow. The DUA Act’s safeguard requirements are conditions for lawful automated decision-making, not aspirational targets. Businesses relying on AI for hiring decisions, credit assessments, customer scoring, or service delivery need these safeguards documented and operational.
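The smallest useful version of that risk register can be sketched as a simple data structure. This is an illustrative sketch under our own assumptions, not the guide’s template; every field name, example entry, and the DPIA heuristic are hypothetical.

```python
from dataclasses import dataclass, asdict

# Hypothetical sketch of an AI risk register entry: every AI system in use,
# what it processes, and who is accountable. All names and example values
# are illustrative, not a compliance template.

@dataclass
class AIRegisterEntry:
    system: str                # tool or vendor name
    use_case: str              # what it is used for
    personal_data: bool        # processes personal data? (engages UK GDPR)
    special_category: bool     # health, race, religion etc. (stricter rules)
    automated_decisions: bool  # produces decisions with significant effects?
    owner: str                 # accountable individual
    dpia_required: bool = False

    def __post_init__(self):
        # Illustrative heuristic only: a DPIA is likely needed where personal
        # data feeds decisions with significant effects on individuals.
        self.dpia_required = self.personal_data and self.automated_decisions


register = [
    AIRegisterEntry("CV screening tool", "candidate pre-screening",
                    personal_data=True, special_category=False,
                    automated_decisions=True, owner="Head of HR"),
    AIRegisterEntry("Marketing copy assistant", "draft campaign text",
                    personal_data=False, special_category=False,
                    automated_decisions=False, owner="Marketing lead"),
]

for entry in register:
    print(asdict(entry))
```

A spreadsheet with the same columns does the same job; the value is not the tooling but making every AI system, its data processing, and its accountable owner visible in one place.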
The UK’s AI governance landscape may lack the EU’s single-law clarity, but the obligations are real, enforceable, and arriving through multiple channels simultaneously. The ICO’s statutory code is due by summer 2026, the FCA’s review will report to its Board in the same period, and procurement is already flowing requirements down through supply chains. Businesses that wait for a single AI law before acting will find themselves governed by a patchwork they did not prepare for.
Related reading: Does the EU AI Act apply to Australian businesses? | What is an AI governance framework? | What is shadow AI? | AI Usage Policy Template (free download)
Sources
- UK Data (Use and Access) Act 2025: GOV.UK guidance
- ICO: AI and Biometrics Strategy Plan of Action
- ICO: Guidance timeline for AI, ADM, and technology
- Osborne Clarke: Regulatory Outlook January 2026, artificial intelligence
- BCLP: AI regulation in financial services, FCA developments (December 2025)
- Littler: UK key provisions of DUA Act 2025 now in force (February 2026)
- Debevoise: UK’s new ADM rules and EU GDPR comparison (November 2025)
- Freshfields: UK data reforms, implications for AI and ADM
- Hogan Lovells: DUA Act data protection provisions in force (February 2026)
- Clifford Chance: Key aspects of DUA Act take effect (February 2026)
- FCA: AI in financial services
- FCA: Long-term review into AI and retail financial services (January 2026)
- PwC: FCA’s evolving approach to AI
- Trowers & Hamlins: The future of AI in public procurement
- Market Dojo: Procurement regulations in 2026 (January 2026)
- British Chambers of Commerce: SME AI adoption survey (September 2025)
- YouGov: UK SME AI adoption poll (2025)
- SAP UK / Oxford Economics: UK business AI and shadow AI exposure (October 2025)
- techUK / ANS / YouGov: AI adoption barriers survey (February 2025)
- Enterprise Times: AI governance as competitive advantage (February 2026)
- Slaughter & May: AI update for 2026
- King & Spalding: EU and UK AI Round-up December 2025