Regulators in four jurisdictions have now published formal guidance naming agentic AI as a distinct risk category. FINRA in the United States, data protection authorities in Spain and Turkey, and the UK’s ICO have each mapped agentic AI to existing regulatory obligations. The consensus: current rules already cover these systems. Businesses deploying AI agents do not get to wait for a dedicated “agentic AI act.” The obligations are live.
Four Regulators, One Message
Between December 2025 and March 2026, four regulators across three continents published guidance that explicitly names agentic AI and maps it to existing supervisory and data protection obligations.
FINRA’s 2026 Annual Regulatory Oversight Report, published 9 December 2025, devotes a dedicated section to generative AI and flags agentic AI as an emerging risk area. FINRA defines AI agents as “systems or programs that are capable of autonomously performing and completing tasks on behalf of a user” and states that firms deploying such systems must maintain supervisory controls compliant with Rule 3110 (Supervision) and Rule 4370 (Business Continuity Plans). This is the first time FINRA has named agentic AI in its annual examination priorities (Prokopiev Law, 18 March 2026).
The Spanish data protection authority (AEPD) published a 71-page guide on agentic AI and GDPR compliance in February 2026. The guide is non-binding but maps existing obligations around lawful basis, transparency, and automated decision-making to common agent architectures. The Turkish Personal Data Protection Authority (KVKK) followed on 12 March 2026 with its own Agentic AI Guide emphasising DPIAs, legal basis analysis, and strict data minimisation. The UK’s ICO published a Tech Futures report on 8 January 2026 examining accountability and redress for autonomous AI systems (Skadden, 4 March 2026).
Why “Agent” Changes the Compliance Equation
Traditional AI systems generate an output: a summary, a prediction, a classification. A human reads the output and decides what to do. Agentic AI systems generate actions: they plan multi-step tasks, interact with external tools and databases, and execute decisions with minimal or no human intervention between steps.
This distinction breaks assumptions embedded in most governance frameworks. Snell & Wilmer noted that FINRA’s report “draws a sharp line between traditional GenAI tools used for search, summarisation, or drafting and a new class of systems capable of initiating and completing multi-step operational tasks.” Once an AI system can take action rather than merely generate content, the supervisory, recordkeeping, and governance obligations shift materially.
FINRA identified six specific risks from agentic AI: agents acting autonomously without human validation; agents exceeding the user’s intended scope and authority; complicated multi-step reasoning that makes outcomes difficult to trace or audit; agents operating on sensitive data and unintentionally storing, disclosing, or misusing it; general-purpose agents lacking domain knowledge for industry-specific tasks; and misaligned reward functions that could result in outcomes harmful to investors, firms, or markets (FINRA, December 2025).
Existing Rules Already Cover Agents
The converging message from all four regulators is that agentic AI does not require new legislation to be governed. Existing frameworks apply.
In financial services, Snell & Wilmer’s analysis noted that FINRA expects firms to incorporate any system capable of taking operational steps into Rule 3110 (Supervision) and Rule 3120 (Supervisory Control) frameworks, which together require firms to establish supervisory policies and verify their effectiveness through annual testing. This includes defining authorised actions, required escalation points, and supervisory triggers. Rule 4511 and Exchange Act Rule 17a-4 require firms to reconstruct the full chain of activity when an automated system performs a sequence of actions. Outputs alone are insufficient; firms must preserve telemetry showing how the system reached its end state.
Under GDPR, the Spanish AEPD’s guidance underscores that increased autonomy does not alter controller responsibility. The organisation deploying the agent remains the data controller for GDPR purposes, regardless of how many steps the agent takes or how little human oversight occurs between them. DPIAs are required for high-impact use cases. Transparency obligations apply to any decision that affects individuals. The Turkish KVKK’s guidance aligns with this position, adding emphasis on data minimisation and the need for explicit legal basis analysis before deployment.
The UK ICO’s Tech Futures report, summarised by Skadden, emphasises that data protection obligations remain with the deployer and that agentic systems require careful analysis of accountability and redress mechanisms. Mayer Brown’s February 2026 analysis mapped these regulatory positions to a practical governance framework, noting that organisations should classify agentic systems separately in AI inventories, define action boundaries and human approval checkpoints, and adapt incident response plans for autonomous agents.
From Incident Category to Exam Priority
Shadow AI Watch’s earlier coverage documented how agentic AI has already become an incident category, with real-world security events tied to autonomous AI actions. The regulatory developments described here represent the other half of that picture: supervisors and data protection authorities are now building agentic AI into their examination and enforcement priorities.
FINRA’s report is particularly significant because it is an examination roadmap. When FINRA names agentic AI in its oversight report, examiners will ask about it during their next visit. Firms that cannot demonstrate supervisory controls over AI agents will face findings. The same dynamic applies in the EU: when the AEPD publishes 71 pages of agentic AI guidance, national enforcement teams have a reference point for what “compliance” looks like.
What Businesses Should Do
Classify agentic systems separately in AI inventories. An AI agent that can execute multi-step tasks carries different risk than a chatbot that generates text. The governance framework needs to distinguish between them, with agents subject to additional controls around scope, authority, and logging.
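One way to make that distinction concrete is in the inventory schema itself. The sketch below is purely illustrative, not any regulator's taxonomy: it records whether a system is generative or agentic, and reserves the scope, authority, and logging fields for agents. All field and class names are assumptions.

```python
# Hypothetical AI inventory entry distinguishing agentic systems
# from generative tools. Field names are illustrative only.
from dataclasses import dataclass, field
from enum import Enum

class SystemKind(Enum):
    GENERATIVE = "generative"   # produces content for human review
    AGENTIC = "agentic"         # can execute multi-step actions

@dataclass
class AIInventoryEntry:
    name: str
    kind: SystemKind
    owner: str
    # Agent-only controls; left empty for purely generative tools.
    permitted_actions: list[str] = field(default_factory=list)
    approval_checkpoints: list[str] = field(default_factory=list)
    full_chain_logging: bool = False

    def requires_agent_controls(self) -> bool:
        return self.kind is SystemKind.AGENTIC

chatbot = AIInventoryEntry("support-chatbot", SystemKind.GENERATIVE, "cx-team")
agent = AIInventoryEntry(
    "ops-agent", SystemKind.AGENTIC, "ops-team",
    permitted_actions=["read_records", "draft_report"],
    approval_checkpoints=["submit_filing"],
    full_chain_logging=True,
)
```

The point of the extra fields is that a blank `permitted_actions` list on an agentic entry becomes visible as a governance gap rather than a silent default.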
Define action boundaries and human approval checkpoints. Every agent should have a documented scope of permitted actions and clear escalation points where human approval is required. FINRA’s guidance specifically calls for “guardrails to constrain or restrict AI agent behaviours, actions or decisions.”
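A minimal sketch of such a guardrail, assuming a simple allowlist of permitted actions plus a subset that always escalates to a human approver (the action names are invented for illustration):

```python
# Illustrative action-boundary check: the agent may only execute
# allowlisted actions, and some actions always require a human
# approval checkpoint. Action names are hypothetical.
PERMITTED = {"read_account", "draft_email", "submit_order"}
REQUIRES_HUMAN_APPROVAL = {"submit_order"}

def gate_action(action: str, human_approved: bool = False) -> str:
    """Return 'execute', 'escalate', or 'block' for a proposed action."""
    if action not in PERMITTED:
        return "block"        # outside the documented scope of actions
    if action in REQUIRES_HUMAN_APPROVAL and not human_approved:
        return "escalate"     # checkpoint: pause until a human approves
    return "execute"
```

In practice the gate would sit between the agent's planner and its tool layer, so every proposed action passes through it before anything runs.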
Implement full-chain logging. Traditional output logging is insufficient for agentic systems. Businesses need to capture intermediate tool calls, data fetches, and decision pathways. Snell & Wilmer described this as “process reconstruction records”: audit trails that show not just what the agent concluded but how it got there.
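A process reconstruction record can be as simple as an append-only trail of timestamped steps. The sketch below assumes a step vocabulary of tool calls, data fetches, and decisions; the class and field names are illustrative, not drawn from any rule text:

```python
# Illustrative "process reconstruction record": every intermediate
# step (tool call, data fetch, decision) is appended to an audit
# trail so the full chain, not just the final output, survives.
import json
import time

class AgentAuditTrail:
    def __init__(self, run_id: str):
        self.run_id = run_id
        self.steps: list[dict] = []

    def record(self, step_type: str, detail: dict) -> None:
        self.steps.append({
            "run_id": self.run_id,
            "ts": time.time(),
            "type": step_type,   # e.g. "tool_call", "data_fetch", "decision"
            "detail": detail,
        })

    def export(self) -> str:
        # One JSON line per step, suitable for an immutable archive.
        return "\n".join(json.dumps(s) for s in self.steps)

trail = AgentAuditTrail("run-001")
trail.record("tool_call", {"tool": "crm_lookup", "args": {"id": 42}})
trail.record("decision", {"action": "draft_email", "basis": "crm result"})
```

Exporting one JSON line per step keeps the record machine-reconstructable: an examiner can replay the sequence in order rather than infer it from the final output.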
Adapt incident response and business continuity plans. FINRA’s inclusion of Rule 4370 (Business Continuity Plans) signals that regulators expect firms to plan for what happens when an agent fails, acts outside its scope, or is compromised. Incident response playbooks need agent-specific scenarios.
Update vendor contracts. If your organisation uses third-party AI agents or embeds agent capabilities from a vendor’s platform, the contract should cover agent behaviour specifically. Liability for actions taken by an autonomous system is a question that existing vendor agreements rarely address.
Organisations that have already treated agents as governed systems, with auditable logs, impact assessments, and clearly owned risk decisions, will be positioned for what comes next. Those still treating agentic AI as an experimental side project will find that regulators now expect otherwise.
Related reading: Agentic shadow AI is already an incident category | What is an AI governance framework? | COSO has spoken: generative AI now sits inside your internal control framework | AI compliance deadlines 2026
Sources
- FINRA: 2026 Regulatory Oversight Report press release (9 December 2025)
- Prokopiev Law: FINRA’s 2026 Regulatory Report Flags Agentic AI Risks (18 March 2026)
- Covington Inside Privacy: Spanish DPA Issues Detailed Guidance on Agentic AI and GDPR (6 March 2026)
- Skadden: UK Regulator to Agentic AI Developers and Deployers (4 March 2026)
- Mayer Brown: Governance of Agentic Artificial Intelligence Systems (23 February 2026)