In 2025 alone, the US Federal Trade Commission filed at least a dozen enforcement actions against companies making false or misleading claims about AI products, from overstated accuracy rates to fabricated earnings promises. That is a pattern, not a regulatory announcement. The FTC is building an AI enforcement playbook from existing law, case by case, without waiting for Congress to pass AI-specific legislation.
FTC Chair Andrew Ferguson stated the approach explicitly: “Imposing comprehensive regulations at the incipiency of a potential technological revolution would be foolish. For now, we should limit ourselves to enforcing existing laws against illegal conduct when it involves AI no differently than when it does not.” Commissioner Mark Meador reinforced the case-by-case framing at the IAPP Global Summit in April 2026: “We’re approaching this as enforcers who are trying to spot harm, address it, prevent it from occurring, and remedy it for the injured consumers as much as we can.”
The legal basis is Section 5 of the FTC Act, which prohibits “unfair or deceptive acts or practices in or affecting commerce.” That provision does not mention artificial intelligence. It does not need to. The FTC’s position is that AI claims are evaluated under the same substantiation standards as any other product claim.
Three enforcement tracks
Analysis from Morgan Lewis, Alston & Bird, and AI Policy Desk identifies three distinct tracks in the FTC’s AI enforcement activity.
Track 1: Deceptive AI capability claims. The FTC has targeted companies that overstate what their AI can do. Workado settled in April 2025 after the FTC alleged it made false claims about its “AI Content Detector,” advertising 98% accuracy when independent testing found the model achieved only 74.5% on mixed content and 53.2% on non-academic AI-generated text. The model had been trained exclusively on academic abstracts written by Norwegian students. DoNotPay settled in January 2025 after the FTC alleged its “world’s first robot lawyer” was not properly trained or tested and produced ineffective legal documents. Evolv Technologies settled in December 2024 after the FTC alleged its AI-powered security scanners failed to reliably detect weapons in schools.
Track 2: AI-powered business opportunity fraud. The FTC has pursued schemes that use AI branding to sell deceptive business opportunities. FBA Machine (also known as Passive Scaling) generated over USD 15.9 million in consumer losses by falsely promising that consumers could earn passive income through AI-powered online storefronts. Air AI faced a federal court action alleging deceptive claims about business growth, earnings potential, and refund guarantees. These are not complex AI governance cases. They are straightforward consumer fraud cases where “AI” was the marketing hook.
Track 3: AI-generated content presented as authentic. Rytr LLC, which sold an AI writing assistant, faced FTC action for providing subscribers with the means to generate fake consumer reviews at scale. The FTC alleged that Rytr’s outputs contained specific, often material details that bore no relation to the user’s input, meaning subscribers could generate thousands of fabricated reviews and post them as if they were genuine consumer experiences. The FTC’s proposed order barred Rytr from selling any service dedicated to generating consumer reviews or testimonials.
In a notable development, the Trump-era FTC reopened and set aside the Rytr consent order in December 2025, determining that the original complaint “failed to satisfy” the relevant standard. This does not signal a retreat from AI enforcement. The FTC under Chair Ferguson has continued to bring new cases. It signals a refinement of the legal theory: the agency wants its AI cases to hold up on appeal, which means building stronger factual records and more precisely drafted complaints.
The Section 5 playbook
Morgan Lewis’s April 2026 analysis frames the broader enforcement landscape clearly: the FTC Act remains the primary federal tool for AI enforcement, alongside the SEC (targeting “AI washing” in investor disclosures), the DOJ (pursuing False Claims Act theories where AI tools are used in government-funded programmes), and antitrust enforcers (investigating algorithmic pricing and information-sharing facilitated by AI systems).
The FTC’s approach has three elements that compliance teams should understand.
Substantiation standards apply to AI claims. If a company claims its AI achieves a specific accuracy rate, detection rate, or performance metric, the FTC expects that claim to be substantiated by competent and reliable evidence. Training data limitations, testing methodology gaps, and performance variations across use cases are all areas the FTC has challenged.
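To make the substantiation point concrete: a headline accuracy figure only substantiates a claim for the kind of data it was measured on. The sketch below is a hypothetical Python illustration, not anything drawn from an FTC filing; it assumes a model exposing a simple `predict` method and labelled evaluation sets for each marketed use case, and shows the segment-level check the Workado allegations imply.

```python
# Hypothetical sketch: measure accuracy separately for each marketed use case,
# not just on data resembling the training distribution.
from sklearn.metrics import accuracy_score

def accuracy_by_segment(model, segments):
    """Return accuracy per labelled evaluation segment, e.g. per content type."""
    results = {}
    for name, (texts, labels) in segments.items():
        predictions = [model.predict(text) for text in texts]  # assumed interface
        results[name] = accuracy_score(labels, predictions)
    return results

# segments might look like:
#   {"academic_abstracts": (texts, labels),   # close to the training data
#    "mixed_web_content":  (texts, labels),   # what the marketing implies
#    "social_media_posts": (texts, labels)}   # a common deployment setting
# If the advertised figure holds only on the first segment, the claim is not
# substantiated for the scope the marketing covers.
```

A 98% figure measured on academic abstracts does not substantiate a 98% claim for general content, which is exactly the gap the Workado complaint alleged.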
Disclosure obligations are real. The FTC has taken the position that consumers and businesses have a right to know when they are interacting with AI-generated content, when decisions affecting them are made by automated systems, and when AI tools are collecting or using their data. Failure to disclose is treated as a deceptive omission under Section 5.
“AI” in the marketing does not create an exemption. As former FTC Chair Lina Khan stated during Operation AI Comply: “There is no AI exemption from the laws on the books.” The current FTC leadership has maintained this position. AI products are evaluated under the same consumer protection framework as any other product.
The state enforcement layer
The FTC is not acting alone. Morgan Lewis notes that state attorneys general are deploying unfair or deceptive acts or practices (UDAP) statutes to investigate AI-related conduct. These statutes often allow per-violation penalties, do not require proof of individual damages, and are structured to resist removal to federal court.
SAW has covered the Connecticut Attorney General’s AI enforcement activity and the broader tension between state AI laws and the White House’s preemption push. The practical effect for businesses is that even if federal AI legislation eventually preempts some state laws, state UDAP enforcement on AI claims operates under separate authority that preemption may not reach.
What this means for organisations deploying AI
Audit AI marketing claims for substantiation. Every claim about AI accuracy, performance, detection rates, or capabilities needs to be backed by evidence that matches the claim’s scope. If the AI was tested on academic text but marketed for general use, the claim is potentially deceptive. If the AI achieves a stated accuracy rate only under specific conditions, the marketing needs to say so.
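One way to operationalise this audit is to record each public claim alongside its supporting evidence and flag mismatches automatically. The Python sketch below is hypothetical (the `MarketingClaim` and `Evidence` structures and their fields are illustrative, not drawn from any FTC guidance); it checks the two failure modes described above: evidence scope that does not match claim scope, and a measured figure below the advertised one.

```python
from dataclasses import dataclass

@dataclass
class MarketingClaim:
    text: str              # e.g. "Detects AI content with 98% accuracy"
    claimed_metric: float  # the advertised figure, as a fraction
    claimed_scope: str     # the use case the marketing implies

@dataclass
class Evidence:
    measured_metric: float  # what testing actually found
    tested_scope: str       # the conditions it was measured under

def audit_claim(claim: MarketingClaim, evidence: Evidence) -> list[str]:
    """Return the reasons a claim fails substantiation; empty means it passes."""
    problems = []
    if evidence.tested_scope != claim.claimed_scope:
        problems.append(f"scope mismatch: tested on '{evidence.tested_scope}', "
                        f"marketed for '{claim.claimed_scope}'")
    if evidence.measured_metric < claim.claimed_metric:
        problems.append(f"metric gap: measured {evidence.measured_metric:.1%}, "
                        f"claimed {claim.claimed_metric:.1%}")
    return problems

# Using the Workado figures from this article as test values:
claim = MarketingClaim("98% accurate AI detection", 0.98, "mixed content")
evidence = Evidence(0.745, "mixed content")
print(audit_claim(claim, evidence))  # ['metric gap: measured 74.5%, claimed 98.0%']
```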
Disclose automated decision-making. If customers or employees are subject to decisions made by AI or automated systems, disclose that fact. This applies to hiring tools, credit decisions, insurance pricing, content moderation, and fraud detection. The FTC’s position is that non-disclosure is a deceptive omission.
Review AI-generated content workflows. If the organisation uses AI to generate customer reviews, testimonials, product descriptions, or marketing content, ensure that the AI-generated origin is disclosed. The Rytr case (and the broader FTC enforcement trend) makes clear that AI-generated content presented as human-written is deceptive.
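One way to make that disclosure systematic is to carry provenance through the content pipeline rather than relying on authors to remember it. The sketch below is a hypothetical Python illustration; the record fields and the disclosure wording are assumptions, since neither the Rytr order nor Section 5 prescribes a specific label.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ContentRecord:
    body: str
    ai_generated: bool             # set at creation time, not at publication
    model_name: str | None = None  # which system produced the draft, if any
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def render_for_publication(record: ContentRecord) -> str:
    """Never silently publish AI-generated text as if it were human-written."""
    if record.ai_generated:
        return f"{record.body}\n\n[Disclosure: generated with AI assistance]"
    return record.body
```

Note that reviews and testimonials are the stricter case: the Rytr order barred the review-generation service outright rather than requiring disclosure, so a labelling step alone does not address that track.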
Treat AI vendor claims with the same scepticism as any other vendor. If a vendor claims their AI product achieves 98% accuracy, ask for the testing methodology, the training data composition, and the performance data across relevant use cases. The FTC’s enforcement actions show that vendors’ accuracy claims are often based on narrow test conditions that do not reflect real-world deployment.
Monitor state-level enforcement. Federal enforcement gets the headlines. State AG actions set the precedents. Organisations operating in multiple US states need to track UDAP enforcement activity on AI claims in each jurisdiction where they operate or serve customers.
The enforcement gap that matters
The FBI’s 2025 IC3 report logged 22,364 AI-related complaints and USD 893 million in losses. The FTC brought a dozen AI cases in 2025: on the order of one enforcement action for every 1,800 reported complaints. The gap between reported AI harm and enforcement action is enormous. That gap is closing, not because AI-specific legislation is passing (it is not, in most US jurisdictions), but because existing law is being applied to AI-related conduct with increasing specificity and frequency.
Organisations that treat AI compliance as a future problem contingent on new legislation are misreading the enforcement landscape. The FTC does not need new law. Section 5 is enough, and the agency is using it.
Sources
- National Law Review (EINPresswire), “FTC Brings Dozen AI-Washing Enforcement Cases in 2025, Targeting Overstated AI Claims,” 2 April 2026 (12+ cases, AI-washing definition, enforcement pattern). natlawreview.com
- Morgan Lewis, “AI Enforcement Accelerates as Federal Policy Stalls and States Step In,” April 2026 (Section 5 framework, state UDAP enforcement, multi-agency landscape). morganlewis.com
- Holland & Knight, “FTC Evaluating Deceptive Artificial Intelligence Claims,” June 2025 (Chair Ferguson quote, Workado settlement detail, enforcement philosophy). hklaw.com
- IAPP, “IAPP Global Summit 2026: FTC Commissioner Meador stresses agency preference for ‘case-by-case’ enforcement,” April 2026 (Meador quote, enforcement approach, Rytr reversal context). iapp.org
- FTC, “FTC Announces Crackdown on Deceptive AI Claims and Schemes” (Operation AI Comply), 24 September 2024 (Khan “no AI exemption” quote, Rytr/DoNotPay/FBA Machine details). ftc.gov
- Lathrop GPM, “Transparency and AI: FTC Launches Enforcement Actions Against Businesses Promoting Deceptive AI Product Claims,” May 2025 (Evolv settlement detail, Rytr order detail, enforcement analysis). lathropgpm.com
- Alston & Bird, “AI Quarterly: A Review of AI Law, Policy & Practice, April 2026,” 15 April 2026 (quarterly enforcement roundup, DOJ/SEC/FTC coordination). alston.com