In 25 years of publishing annual reports, the FBI’s Internet Crime Complaint Center had never given artificial intelligence its own section. The 2025 report changes that. The numbers behind that section: 22,364 complaints with an AI-related nexus, and adjusted losses exceeding USD 893 million. Within those figures, AI-nexus investment scams alone accounted for USD 632 million. The total sits inside a broader IC3 picture of 1,008,597 complaints and USD 20.9 billion in total reported losses for 2025, a 26% increase from 2024.
The FBI is clear about the limitation of that USD 893 million figure. “AI-related complaints are determined via the complainants’ statements and keywords they may use throughout the complaint,” the Bureau told The Register. “It is possible the number could be higher.” Most victims do not know AI was involved in the scam that targeted them. The reported figure captures what victims recognise and describe. The real exposure is almost certainly larger.
What the IC3 report actually says about AI
The 2025 IC3 Annual Report documents AI involvement across multiple fraud categories rather than treating it as a standalone crime type. The FBI identifies four primary AI-enabled techniques: AI-generated text for phishing and social engineering, voice cloning for impersonation and “grandparent scams,” deepfake video for investment fraud endorsements, and synthetic social media profiles for building victim trust.
The scale of the broader crime categories that AI is now accelerating puts the USD 893 million in context. Investment fraud led all categories at USD 8.6 billion in losses across nearly 73,000 complaints (complaint volume up 52% from 2024). Business email compromise accounted for USD 3.0 billion. Tech support scams reached USD 2.1 billion. Cyber-enabled fraud, defined as crimes where technology is the primary weapon, covered 452,868 complaints and USD 17.7 billion in losses, representing 85% of all reported losses.
Within the AI section specifically, the FBI noted that investment scams with a confirmed AI component accounted for USD 632 million in losses. BEC with a confirmed AI component generated more than USD 30 million in losses, according to Abnormal Security’s analysis of the IC3 data (Abnormal is a BEC detection vendor; this figure is their reading of the report, not an FBI headline number). The gap between the BEC AI figure (USD 30 million) and total BEC losses (USD 3.0 billion) suggests that AI involvement in BEC is significantly underreported, which is consistent with the FBI’s own caveat about keyword-dependent counting.
Why “first time” matters more than the dollar figure
The three-year comparison tables in the 2025 report mark “AI Related” as “not captured in these years” for both 2023 and 2024. The decision to create a dedicated section is an institutional signal. It means the FBI considers AI-enabled fraud material enough to the overall crime landscape to track separately, report on publicly, and allocate analytical resources to. For enterprise risk frameworks, the signal matters as much as the number. AI-enabled fraud is now a formal FBI reporting category. Organisations that do not treat it as a formal risk category in their own frameworks are a step behind federal law enforcement.
The detection problem AI creates for defenders
The FBI’s report highlights a specific challenge that distinguishes AI-enabled fraud from previous generations of cybercrime. Historical detection methods, such as spotting poor grammar, unusual phrasing, or low-quality impersonation, are becoming obsolete. The report notes that AI-enabled synthetic content is becoming increasingly difficult to detect and easier to produce.
Abnormal Security’s analysis of the IC3 data makes the operational point sharper. AI is not creating new fraud models. It is making proven tactics more effective at scale. BEC campaigns that once depended on a single convincing email now unfold as sequences of initial outreach, follow-up, and reinforcement designed to align with real workflows and pressure recipients into action. Voice cloning has turned the “grandparent scam” from a crude impersonation into a precise replication of voice, cadence, and emotional register.
The downstream consequence for enterprise security is that detection tools built for pattern-matching against known fraud signatures will not catch AI-enhanced attacks that look and sound legitimate. The security stack needs to move from pattern detection to behavioural analysis, identifying anomalies in workflow, timing, and request context rather than anomalies in language quality.
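That shift can be made concrete. The sketch below scores an inbound request against a sender’s historical behaviour (timing, request type, approval chain) rather than its language quality. All names and the baseline schema are hypothetical illustrations, not any vendor’s API:

```python
from dataclasses import dataclass

@dataclass
class SenderBaseline:
    """Historical behaviour for one sender (hypothetical schema)."""
    usual_hours: range        # hours of day the sender normally emails
    usual_request_types: set  # e.g. {"invoice", "status_update"}
    usual_approvers: set      # people normally copied on payment requests

def anomaly_score(baseline: SenderBaseline, hour: int,
                  request_type: str, approvers: set) -> int:
    """Count behavioural anomalies instead of inspecting language quality."""
    score = 0
    if hour not in baseline.usual_hours:
        score += 1   # unusual timing
    if request_type not in baseline.usual_request_types:
        score += 1   # unusual request context
    if not approvers & baseline.usual_approvers:
        score += 1   # normal approval chain bypassed
    return score

baseline = SenderBaseline(range(8, 18), {"status_update"}, {"cfo@example.com"})
print(anomaly_score(baseline, hour=23, request_type="wire_transfer",
                    approvers=set()))  # → 3
```

A grammatically flawless, AI-generated message scores zero on language checks but still trips all three behavioural signals here, which is the point of the migration from pattern detection to behavioural analysis.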
What this means for AI governance, not just cybersecurity
The IC3 data has a governance dimension that goes beyond the CISO’s remit. Organisations deploying AI tools internally face a two-sided exposure: their employees may be targeted by AI-enabled fraud, and their own AI tools may create vulnerabilities that attackers can exploit.
Shadow AI amplifies both risks. Employees using unsanctioned AI tools may be more susceptible to AI-generated phishing because they are already accustomed to interacting with AI outputs and may be less likely to question AI-produced content. Conversely, unsanctioned AI tools that process sensitive data create attack surfaces that the security team cannot see, log, or monitor. SAW’s earlier analysis of AI security statistics found that organisations with structured AI governance reported 45% fewer AI-related security incidents and resolved breaches 70 days faster. The IC3 data now shows what the cost looks like when governance is absent.
The connection to the DOJ’s recent enforcement actions on AI-generated job advertisements is also worth noting. The same AI capabilities that enable convincing phishing emails enable convincing fraudulent job ads, synthetic voice calls, and deepfake video endorsements. The tools are general-purpose. The governance response needs to be equally broad.
What the numbers mean for Australian organisations
The IC3 data is US-only, but AI-enabled fraud does not respect borders. Australian organisations are targeted by the same techniques the FBI is documenting. The Australian Competition and Consumer Commission’s Scamwatch reported AUD 2.74 billion in total scam losses for 2024, and the trajectory is upward.
For Australian CISOs and CFOs, the IC3 report provides three specific data points to take to the board. First, USD 893 million is the reported floor for AI-enabled fraud in a single country in a single year, and Australia’s exposure, while smaller in absolute terms, is structurally identical. Second, the FBI’s own methodology acknowledges that AI is only counted when victims recognise and describe it, meaning the true figure is higher. Third, the 26% year-on-year increase in total cybercrime losses shows the trend is accelerating rather than stabilising.
What CISOs and compliance leads should do
Revise fraud scenarios to include AI-enabled techniques. Tabletop exercises, incident response plans, and fraud detection models should explicitly include AI-generated phishing, voice cloning, deepfake video, and synthetic identity scenarios. If the current fraud framework treats all email compromise as a single category, the AI layer needs to be modelled separately because the detection methods are different.
Update staff training for AI-enhanced social engineering. Training that teaches employees to spot poor grammar, suspicious formatting, or unfamiliar senders will not catch AI-generated content that is grammatically perfect, properly formatted, and sent from a compromised legitimate address. Training should shift toward behavioural red flags: urgency, unusual payment requests, changes to established workflows, and requests that bypass normal approval chains.
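As a minimal illustration of that training shift, the sketch below checks a message against behavioural red-flag categories rather than spelling or formatting. The phrase lists are illustrative examples only, suitable for something like a phishing-simulation scorer, not a production filter:

```python
# Illustrative red-flag phrases only; a real deployment would need a far
# richer taxonomy and should not rely on literal string matching alone.
RED_FLAGS = {
    "urgency": ("urgent", "immediately", "before end of day"),
    "payment_change": ("new bank account", "updated payment details"),
    "approval_bypass": ("keep this between us", "skip the approval"),
}

def flag_message(text: str) -> list:
    """Return the behavioural red-flag categories a message trips."""
    lowered = text.lower()
    return [cat for cat, phrases in RED_FLAGS.items()
            if any(p in lowered for p in phrases)]

msg = ("Urgent: please wire to our new bank account today "
       "and keep this between us.")
print(flag_message(msg))  # → ['urgency', 'payment_change', 'approval_bypass']
```

The design choice mirrors the training advice: none of these checks care whether the grammar is perfect, because an AI-generated message will pass any grammar check.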
Integrate AI-specific monitoring into incident response. Security operations centres that do not flag AI indicators in their triage workflows will miss the pattern. The IC3 data shows that AI is involved across multiple fraud categories. SOC playbooks should include AI-related keywords and behavioural indicators as part of standard triage.
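A triage-stage tag for AI-nexus tickets can be as simple as the sketch below. The indicator list is an illustrative assumption, not drawn from any FBI or vendor taxonomy, and the keyword approach deliberately mirrors the FBI’s own counting method, so analysts should treat the flag as a floor, not a ceiling:

```python
# Hypothetical AI-nexus indicators for SOC ticket triage (illustrative only).
AI_INDICATORS = ("deepfake", "voice clone", "cloned voice",
                 "ai-generated", "synthetic profile")

def tag_ai_nexus(ticket_text: str) -> bool:
    """Flag a ticket for AI-nexus review when the reporter's own wording
    mentions an AI technique. Like the FBI's keyword-dependent counting,
    this only catches cases the reporter recognised and described."""
    lowered = ticket_text.lower()
    return any(indicator in lowered for indicator in AI_INDICATORS)

print(tag_ai_nexus("Caller sounded exactly like our CEO; "
                   "suspected voice clone"))  # → True
```

Pairing a tag like this with the behavioural indicators in the playbook gives the SOC two independent signals: what reporters noticed, and what the workflow data shows.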
Connect AI governance to fraud prevention. Organisations that treat AI governance (acceptable use, tool inventory, data classification) as separate from fraud prevention (transaction monitoring, BEC detection, social engineering defence) are managing two halves of the same risk in different teams. The IC3 data confirms they are the same risk.
The FBI just created a baseline
The USD 893 million figure will be cited for the next twelve months as the benchmark for AI-enabled fraud. Next year’s IC3 report will show whether it grows, shrinks, or stays flat. Given the FBI’s own assessment that the number is conservative and the detection methods are keyword-dependent, the safe assumption is that the 2026 figure will be higher.
For enterprise risk teams, the practical response is to treat this as the first official measurement of a risk category that has been underweighted in most frameworks. The FBI has now given AI-enabled fraud a dedicated section, a complaint count, and a dollar figure. Organisations that still classify it as “emerging” are a year behind the FBI’s own assessment.
Sources
- FBI, “Cryptocurrency and AI Scams Bilk Americans of Billions,” press release, April 2026 (22,364 complaints, USD 893M losses, “first time” AI section). fbi.gov
- FBI IC3, “2025 IC3 Annual Report” (full PDF; total 1,008,597 complaints, USD 20.877B losses, AI section pp. 39-42). ic3.gov
- The Register, “FBI says cybercrime losses hit record $20.87B in 2025,” April 2026 (FBI statement on keyword-dependent AI counting methodology). theregister.com
- Abnormal Security, “$893M in Losses: What the 2025 IC3 Report Reveals About AI Cybercrime,” 14 April 2026 (BEC AI component USD 30M, attack sequence analysis). abnormal.ai
- National CIO Review, “FBI Cybercrime Report Reveals $20.8 Billion Lost in 2025,” April 2026 (cyber-enabled fraud 85% of losses). nationalcioreview.com
- ASIS Security Management Magazine, “FBI IC3 Report: Half of 2025 U.S. Fraud Losses Were Linked to Cryptocurrency Scams,” April 2026. asisonline.org
- Emerge IT Solutions, “The FBI’s 2025 Internet Crime Report,” April 2026 (AI investment scam USD 632M). emergeits.com