Most financial services firms know AI is coming to trade surveillance. Most are not ready. Only 16% of compliance decision-makers say they have fully deployed AI in surveillance, while 69% believe AI adoption will drive new compliance risks in the next 12 months, according to eflow’s Global Trends in Market Abuse and Trade Surveillance Report 2026. The gap between awareness and action is the problem.

Note: The research cited in this article is from eflow’s Global Trends in Market Abuse and Trade Surveillance Report 2026, a vendor-commissioned survey of 300 senior compliance decision-makers. Shadow AI Watch has reviewed the methodology summary. As with all vendor-sponsored research, findings should be read in that context.


The Tension at the Centre

AI simultaneously creates new market abuse risks and offers the tools to detect them. That tension defines the current state of trade surveillance.

On the risk side, AI-driven manipulation techniques are becoming more accessible. Spoofing, layering, and coordinated trading schemes that once required significant capital and manual coordination can now be partially automated and scaled using AI tools.

On the capability side, AI offers genuine improvements over traditional rules-based surveillance, particularly in cutting false positive volumes. Earlier eflow research found that 43% of compliance professionals identify false positives as a significant challenge with their current surveillance technology.
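To illustrate why fixed-threshold rules over-alert, here is a minimal, hypothetical sketch (not any vendor's implementation; all thresholds, weights, and field names are invented). A single-feature rule flags every large order, while a toy multi-feature score only alerts when several signals coincide:

```python
# Hypothetical illustration of rules-based over-alerting.
# All thresholds, weights, and field names are invented for this sketch.

def rule_based_alert(order):
    """Classic rule: flag any order above a fixed size threshold."""
    return order["size"] > 10_000

def scored_alert(order, threshold=0.7):
    """Toy multi-feature score: size alone cannot trigger an alert;
    heavy cancellation and unusual timing must also contribute."""
    score = 0.0
    if order["size"] > 10_000:
        score += 0.4
    if order["cancel_ratio"] > 0.9:   # most of the order was cancelled
        score += 0.4
    if order["off_hours"]:            # placed outside normal trading hours
        score += 0.2
    return score >= threshold

# A large but otherwise ordinary institutional order:
benign = {"size": 50_000, "cancel_ratio": 0.05, "off_hours": False}
# A large order that was almost entirely cancelled, placed off-hours:
suspect = {"size": 50_000, "cancel_ratio": 0.97, "off_hours": True}

print(rule_based_alert(benign), rule_based_alert(suspect))  # True True
print(scored_alert(benign), scored_alert(suspect))          # False True
```

The fixed rule flags both orders; the score suppresses the benign one. Real AI surveillance models are far more sophisticated, but the false-positive mechanism this sketch shows is the same one the 43% figure describes.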


Where Firms Stand

The deployment picture is fragmented. Of the 300 respondents, 16% say AI is fully deployed across their surveillance function. A further 31% say they are rolling out AI in specific areas. Another 24% plan to deploy within the next 12 to 24 months. That leaves 29% either without a formal AI strategy for trade surveillance or with no plans to use AI at all.

That 29% is significant. Firms without an AI strategy in 2026 are not maintaining a neutral position. They are accepting both the continuing limitations of their existing surveillance and the capability gap relative to firms that have deployed AI tools.

Regulatory uncertainty is the most commonly cited barrier: 65% of respondents identify it as a major risk, with the figure higher among US firms (75%) than UK firms (63%). Geopolitical instability rates as a surveillance risk for 54% of respondents (eflow, 2026).


What the Regulatory Backdrop Looks Like

Market abuse enforcement across the UK, US, and EU has remained active through the period of AI regulatory uncertainty. The EU AI Act classifies trade surveillance tools as high-risk AI systems, which will require conformity assessments, transparency documentation, and human oversight provisions. The Act’s full application date is 2 August 2026.

The FCA ran a dedicated TechSprint on market abuse surveillance in 2025, bringing together vendors, compliance teams, and regulators to develop common standards. That collaborative model is directly relevant to the 50% of survey respondents who want closer regulator-compliance collaboration, and the 47% who want greater transparency on regulatory expectations (eflow, 2026).

For the governance infrastructure required to meet these obligations, the AI governance framework guide covers the core components. The ASIC AI governance analysis shows what structured oversight looks like across a financial institution.


What Deliberate Adoption Looks Like

The firms making progress on AI trade surveillance share a common approach: deploying AI in defined, well-documented use cases where the value is demonstrable and the outputs are auditable.

Starting with a risk-based selection of use cases matters because it produces defensible documentation for regulators. Explainability is equally non-negotiable: surveillance outputs that a compliance officer cannot explain to a regulator are not useful surveillance outputs. Finally, proactive engagement with regulators before deployment consistently produces a more constructive relationship when questions arise later.
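One practical way to keep outputs auditable is to attach the triggering evidence to every alert, so the reason an alert fired travels with it. A minimal sketch, with invented rule thresholds and field names (not a real surveillance system):

```python
# Hypothetical sketch: every alert carries the evidence that produced it,
# so a compliance officer can explain the output to a regulator.
# Thresholds and field names are invented for illustration.

def build_alert(order, rules):
    """Evaluate each (check, reason) pair; return an alert only if at
    least one check fires, with a plain-language reason per trigger."""
    reasons = [reason for check, reason in rules if check(order)]
    if not reasons:
        return None
    return {"order_id": order["id"], "reasons": reasons}

rules = [
    (lambda o: o["cancel_ratio"] > 0.9,
     "over 90% of order volume cancelled before execution"),
    (lambda o: o["layered_levels"] >= 4,
     "orders stacked at 4 or more price levels on one side of the book"),
]

order = {"id": "ORD-1", "cancel_ratio": 0.95, "layered_levels": 5}
alert = build_alert(order, rules)
print(alert["reasons"])
```

The design choice is the point, not the toy rules: whatever model generates the score, the alert record should carry human-readable evidence that survives an audit.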


The Cost of Waiting

The 29% without an AI strategy are not preserving optionality. They face a widening capability gap as other firms improve detection accuracy, while accepting the full burden of existing false positive rates and the limitations of rules-based systems against AI-assisted manipulation techniques.

The governance frameworks required to deploy AI in trade surveillance responsibly take time to build. Firms that begin that work now, even in limited and well-documented scope, will be better positioned as regulatory expectations firm up than firms that wait for certainty before starting.


Related reading: What Is an AI Governance Framework? | [When AI Adoption Outruns Governance: What ASIC Found Inside 23 Australian Lenders](/governance/asic-ai-governance-gap-financial-services/) | EU AI Act: What Australian Businesses Need to Know


Stay across AI governance and compliance developments. Subscribe to the Shadow AI Watch newsletter.


Sources