Shadow AI Watch covered the first wave of agentic AI regulation on 2 April 2026, with FINRA, EU data protection authorities, the UK ICO, and the EU AI Act Service Desk all publishing formal guidance treating agentic AI as a distinct category. That coverage focused on financial services and data protection. Three weeks on, a second wave has arrived: sharper, more specific, and aimed squarely at consumer-facing deployments.

The UK Competition and Markets Authority (CMA) published practical guidance for businesses deploying agentic AI on 9 March 2026, accompanied by a research paper on consumer impact. The cross-regulator Digital Regulation Cooperation Forum (DRCF) followed with a foresight paper, “The Future of Agentic AI,” on 31 March 2026. The ICO opened consultation on draft automated decision-making guidance the same day. The pattern is clear. UK regulators are not waiting for an AI Act. They are mapping agentic AI to existing consumer law, competition law, and data protection rules that already carry serious penalties.

What the CMA actually said

The CMA’s 9 March 2026 paper is short, direct, and operational. The starting point is that consumer law applies to AI agent interactions on the same basis as human ones. Liability sits with the deploying business whether the agent was built in-house or supplied by a third party.

That principle has teeth. Under the Digital Markets, Competition and Consumers Act 2024 (DMCCA), the CMA can impose fines of up to 10% of global annual turnover for consumer law breaches. The CMA’s guidance lays out four areas where it expects compliance:

Transparency. Consumers must not be misled about whether they are dealing with an AI agent, or about what the agent can and cannot do. Businesses must not overstate agent capabilities or disguise automated interactions as human.

Material disclosure. If AI agents generate recommendations, rankings, or comparisons, businesses must disclose limitations such as how much of the market is covered, how results are ranked, and whether commercial relationships influence outputs.

Statutory rights. Agents must be trained and configured to respect consumers’ statutory and contractual rights including pricing, refunds, cancellation, and accurate product information. The CMA’s guidance flags the Consumer Rights Act 2015 and the Consumer Contracts Regulations 2013 as binding on agentic deployments.

Monitoring and intervention. Businesses must check that agents are delivering correct results, behaving as intended, and complying with consumer law. Where agents produce non-compliant outcomes, businesses must act quickly. The CMA emphasises that this is “particularly important where agents may interact with large numbers of people and/or vulnerable consumers.”

The accompanying CMA blog post on AI and collusion, published 4 March 2026, goes further on the competition side. The CMA states that it has “invested heavily in our technical capabilities, including our ability to use AI and agentic systems to detect breaches of consumer and competition law at an unprecedented pace and scale,” meaning the regulator is using agentic AI to police agentic AI. It also offers a reward of up to GBP 250,000 to anyone who reports illegal cartel activity, including algorithmic collusion.

What the DRCF added

The DRCF’s Future of Agentic AI paper is a joint product of the CMA, FCA, ICO, and Ofcom. It does not set policy but signals where four major UK regulators are looking, and the signal is unambiguous. All four agree, in the paper’s words, that “AI agents do not fall outside existing UK regimes: obligations around transparency, fairness, safety, consumer protection and competition continue to apply as Agentic AI develops.”

The paper introduces a five-level autonomy spectrum, ranging from a reactive tool through to a fully autonomous actor that requires little human input. It catalogues risks the regulators consider material: dark patterns optimised for engagement at the cost of consumer outcomes, erosion of consumer agency through opaque delegation, prompt injection, data minimisation failures, and consumer rights challenges.
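For governance tooling, the five-level spectrum maps naturally onto a simple enumeration. The sketch below is illustrative only: the level names are paraphrases, not the DRCF's official labels, and the approval rule is a hypothetical deployer policy, not anything the paper prescribes.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Illustrative five-level autonomy spectrum.
    Labels are paraphrased, not the DRCF's official terminology."""
    REACTIVE_TOOL = 1      # responds only to direct user instruction
    ASSISTED = 2           # suggests actions, human executes
    SUPERVISED = 3         # executes actions, human approves first
    DELEGATED = 4          # executes within bounds, human reviews after
    FULLY_AUTONOMOUS = 5   # requires little human input

def requires_pre_action_review(level: AutonomyLevel) -> bool:
    # Hypothetical policy: consumer-facing agents at or above the
    # delegated level trip mandatory pre-action human review.
    return level >= AutonomyLevel.DELEGATED
```

Encoding the spectrum as an ordered type lets internal risk registers compare deployments consistently, which is useful given that the higher levels carry the greater regulatory exposure.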

Algorithmic collusion gets extended treatment. The DRCF cites experiments where LLM-based agents converged on supra-competitive prices in pricing, bidding, and financial markets, and where agents “divided markets” by dynamically adjusting resource allocation. The findings came from controlled settings rather than live deployments. The DRCF treats them as warning signals about deploying agents in pricing roles, not as evidence of current widespread harm. The distinction matters for businesses, because the regulator’s framing affects when intervention becomes likely.

The cross-regulator angle is what raises the stakes. As Lewis Silkin observed, a single agentic AI deployment, such as a retail assistant, can simultaneously trigger concerns under data protection law (ICO), financial regulation (FCA), online safety duties (Ofcom), and competition and consumer law (CMA), putting it under scrutiny from all four regulators at once. The DRCF paper makes that point explicitly.

How this connects to the EU AI Act

UK and EU approaches differ in form but converge in substance. The EU AI Act’s Article 5 prohibits AI practices involving harmful manipulation and exploitation of vulnerabilities, with administrative fines of up to EUR 35 million or 7% of global annual turnover for the most serious infringements. High-risk system violations are capped at EUR 15 million or 3%. Those penalty figures sit on top of national consumer protection enforcement.

As Reed Smith noted in its 9 April 2026 analysis, the practical concern across both jurisdictions is “less about generative output in isolation and more about systems that can initiate actions, shape decisions or interact at scale with limited human oversight.” For consumer-facing deployments, risk attaches to interaction design and choice architecture, including how consumers can understand, challenge, or override agent-led decisions.

The EU AI Act’s Article 50(2) transparency requirement for AI-generated content was due to apply from 2 August 2026, with the Digital Omnibus now likely to defer that date for systems placed on the market before then. UK regulators face no equivalent timeline pressure: the CMA’s DMCCA powers are already in force, and the ICO’s Code of Practice on AI and automated decision-making, expected in early June 2026, will operate alongside the EU transparency rules due to apply from 2 August 2026. The cumulative effect is that UK consumer-facing agentic deployments face active regulatory attention now, not in 2027.

Where boards should focus

The combination of CMA guidance, DRCF foresight, ICO consultation, and EU AI Act provisions creates a workable governance baseline for any organisation deploying consumer-facing agentic AI. Five areas matter most.

Disclose agent use. Customers must know they are dealing with an AI agent, not a human, where that distinction could affect their decisions. Concealment is the fastest path to CMA enforcement action under the DMCCA, and the same logic applies in EU jurisdictions under AI Act Article 50.
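One way to make disclosure non-optional is to wire it into the message path itself rather than leave it to prompt wording. A minimal sketch, assuming a chat-style deployment; the disclosure text and the `disclosed` audit flag are illustrative choices, not regulator-mandated wording.

```python
from dataclasses import dataclass

# Assumed wording for illustration; a real deployment would use
# legally reviewed disclosure copy.
AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant, not a human. "
    "You can ask to speak to a person at any time."
)

@dataclass
class AgentReply:
    text: str
    disclosed: bool  # audit flag: was the disclosure shown in this reply?

def first_reply(session_is_new: bool, body: str) -> AgentReply:
    """Prepend the disclosure to the opening message of every session,
    so the customer knows from the outset they are dealing with an agent."""
    if session_is_new:
        return AgentReply(text=f"{AI_DISCLOSURE}\n\n{body}", disclosed=True)
    return AgentReply(text=body, disclosed=False)
```

Recording the `disclosed` flag alongside each session gives the business evidence, if a regulator asks, that disclosure was actually delivered rather than merely configured.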

Build consumer-law constraints into agent objectives. Agentic systems pursue goals. If the goal is “maximise conversion” without constraints, the system will optimise for conversion at the expense of statutory consumer rights. The fix is to encode consumer law obligations into the agent’s objective function, not bolt them on as a review layer.
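The difference between encoding obligations into the objective and bolting them on afterwards can be made concrete. In the sketch below, compliance checks define the feasible set before any commercial score is compared, so a non-compliant action can never win no matter how well it converts. The specific checks and field names are invented for illustration.

```python
from typing import Callable, Optional

# Hard constraints encoding illustrative consumer-law obligations.
# An action failing any check is ineligible, regardless of its
# commercial score. (Checks and field names are assumptions.)
def honours_refund_rights(action: dict) -> bool:
    return action.get("blocks_refund") is not True

def price_is_accurate(action: dict) -> bool:
    return action.get("displayed_price") == action.get("charged_price")

CONSTRAINTS: list[Callable[[dict], bool]] = [
    honours_refund_rights,
    price_is_accurate,
]

def choose_action(candidates: list[dict]) -> Optional[dict]:
    """Maximise conversion score only over the compliant subset,
    so consumer-law limits are part of the objective, not a review layer."""
    compliant = [a for a in candidates
                 if all(check(a) for check in CONSTRAINTS)]
    if not compliant:
        return None  # escalate to a human rather than act non-compliantly
    return max(compliant, key=lambda a: a["conversion_score"])
```

The design choice worth noting is the `None` branch: when no compliant action exists, the agent stops rather than picking the least bad option, which is exactly the monitoring-and-intervention behaviour the CMA guidance calls for.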

Log decision chains. Regulators investigating an agentic AI incident will want to reconstruct the sequence of decisions the agent made. Logging needs to be detailed enough to support that reconstruction. Most current observability tooling for traditional automation is not adequate for agentic systems.
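What "detailed enough to support reconstruction" might look like in practice: each agent step gets an ordered, timestamped record of its inputs, output, and stated rationale, exported in an append-friendly format. This is a sketch under assumed field names, not a standard schema.

```python
import json
import time

class DecisionLog:
    """Append-only structured record of an agent's decision chain,
    detailed enough to reconstruct the sequence of steps afterwards.
    (Illustrative sketch; field names are assumptions, not a standard.)"""

    def __init__(self, session_id: str):
        self.session_id = session_id
        self.steps: list[dict] = []

    def record(self, step_type: str, inputs: dict,
               output: str, rationale: str) -> None:
        self.steps.append({
            "ts": time.time(),
            "session": self.session_id,
            "step": len(self.steps),     # ordering survives export
            "type": step_type,           # e.g. "tool_call", "recommendation"
            "inputs": inputs,
            "output": output,
            "rationale": rationale,      # the agent's stated reason, verbatim
        })

    def export(self) -> str:
        # JSON Lines: one step per line, easy to ship to an audit store
        return "\n".join(json.dumps(s) for s in self.steps)
```

The key properties for an investigation are ordering, immutability, and capturing the rationale at decision time; a log reconstructed after the fact from model traces rarely satisfies a regulator.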

Set escalation points for human review. The CMA’s guidance flags “regular human oversight” as essential. The DRCF’s autonomy spectrum makes clear that fully autonomous agents (level 5) carry the highest regulatory exposure. For consumer-facing deployments, defining the boundary between automated and human-reviewed decisions is now a board-level governance question.
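Defining that boundary ultimately reduces to an explicit, auditable escalation predicate. The sketch below shows the shape such a policy might take; every threshold and field name is a placeholder a deployer would set, not a regulatory figure.

```python
def needs_human_review(action: dict) -> bool:
    """Hypothetical escalation policy: route an agent action to a human
    before execution when it crosses any of these boundaries.
    Thresholds are deployer-chosen placeholders, not regulatory values."""
    if action.get("value_gbp", 0) > 500:        # high-value transaction
        return True
    if action.get("vulnerable_consumer"):       # CMA flags vulnerability
        return True
    if action.get("irreversible"):              # e.g. account closure
        return True
    if action.get("confidence", 1.0) < 0.8:     # agent itself is unsure
        return True
    return False
```

Keeping the predicate in one reviewable function, rather than scattered across prompts, is what makes "regular human oversight" something a board can actually sign off on and an auditor can test.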

Align UK and EU obligations into a single deployment governance model. Operating two parallel compliance frameworks for the same agentic system is expensive and error-prone. The substance of UK consumer law obligations and EU AI Act obligations on transparency, manipulation, and human oversight overlaps significantly. Building one governance framework that satisfies the stricter of the two is more efficient than maintaining separate ones.
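"Satisfy the stricter of the two" can be operationalised as a single merged configuration, where each parameter takes whichever regime's value is more demanding. The parameters and figures below are invented placeholders to show the merge logic, not actual legal requirements under either regime.

```python
# Illustrative "stricter of the two" merge for overlapping UK and EU
# obligations. All parameter names and values are placeholders.
UK_POLICY = {"disclose_ai": True,
             "human_review_threshold_gbp": 500,
             "log_retention_days": 365}
EU_POLICY = {"disclose_ai": True,
             "human_review_threshold_gbp": 1000,
             "log_retention_days": 730}

STRICTER = {
    # disclosure required if either regime requires it
    "disclose_ai": UK_POLICY["disclose_ai"] or EU_POLICY["disclose_ai"],
    # lower threshold = more human review = stricter
    "human_review_threshold_gbp": min(UK_POLICY["human_review_threshold_gbp"],
                                      EU_POLICY["human_review_threshold_gbp"]),
    # longer retention = stricter audit obligation
    "log_retention_days": max(UK_POLICY["log_retention_days"],
                              EU_POLICY["log_retention_days"]),
}
```

Note that "stricter" points in a different direction for each parameter (`min` for thresholds, `max` for retention), which is why the merge has to be decided obligation by obligation rather than mechanically.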

What this means for Australian organisations

Australia does not have a direct equivalent to the CMA’s DMCCA powers or the EU AI Act’s Article 5 prohibitions. It does have a regulatory architecture that already covers most of the substantive ground.

The ACCC enforces the Australian Consumer Law, which prohibits misleading and deceptive conduct. An agentic AI system that misleads consumers about whether they are dealing with a human, or that produces misleading recommendations, falls within the ACCC’s existing enforcement remit. ASIC has parallel powers in financial services. The OAIC will enforce the December 2026 automated decision-making transparency requirement, which applies to many of the same agentic deployments the CMA is scrutinising.

Australian organisations selling into UK or EU markets need the full UK and EU compliance posture regardless of Australian law. Those operating only domestically should still treat the UK CMA guidance as a useful template. The Connecticut Attorney General’s February 2026 memorandum took the same approach: existing laws already regulate AI, and no new legislation is needed to enforce them. Australian regulators have repeatedly signalled they will follow the same logic.

A second chapter, not a re-run

The first wave of agentic AI regulatory guidance was about defining the category; the second is about defining the obligations. UK regulators have now made it clear that agentic AI is a consumer-facing technology subject to the same consumer law, competition law, and data protection rules that govern every other commercial interaction. The CMA can fine up to 10% of global turnover, and the EU AI Act caps manipulation penalties at EUR 35 million or 7%. Boards that have been treating agentic AI as an innovation story rather than a compliance problem are operating on assumptions UK and EU regulators have now publicly retired.

Sources

  • Competition and Markets Authority, “Agentic AI and consumers” (guidance and research paper), 9 March 2026. gov.uk
  • Competition and Markets Authority, “AI and collusion: frontiers, opportunities and challenges,” 4 March 2026. competitionandmarkets.blog.gov.uk
  • Digital Regulation Cooperation Forum, “The Future of Agentic AI,” foresight paper, 31 March 2026 (covered in PPC Land). ppc.land
  • Reed Smith Viewpoints, “Regulators turn their attention to agentic AI,” 9 April 2026. reedsmith.com
  • Lewis Silkin, “The DRCF’s quiet warning to businesses on agentic AI,” 9 April 2026. lewissilkin.com
  • Lewis Silkin, “Agentic AI and consumer law: the CMA’s guidance for businesses,” 13 March 2026. lewissilkin.com
  • TLT LLP, “Agentic AI: CMA publishes guidance on consumer law and DMCCA risks,” April 2026. tlt.com