The internet is no longer majority human. Thales’ 2026 Bad Bot Report, now in its 13th edition, finds automated bot traffic hit 53% of all web traffic in 2025, with bad bots alone accounting for 40%.
The headline acceleration is in AI-driven attacks. Daily AI-enabled bot attacks rose from 2 million to 25 million over the course of 2025, a 12.5x increase. Tim Chang, Thales’ general manager for application security, told The Independent that the operational challenge has shifted from identifying automated traffic to understanding what each bot is doing, whether the activity aligns with business intent, and how it interacts with critical systems.
For governance teams, the report raises a question that most organisations have not yet addressed: if automated AI traffic is now the majority of what hits your infrastructure, do your risk frameworks, access controls, and monitoring systems account for that?
What the numbers show
The 2026 report analyses full-year 2025 bot activity using data from Thales Threat Research and Security Analyst Services teams, drawing on 17.2 trillion blocked requests across industries worldwide.
Bots are the majority. 53% automated traffic vs 47% human is a structural shift, not a temporary spike. The report notes that bots are no longer tied to specific events like scraping campaigns or credential stuffing attacks. They operate as a persistent presence across digital environments, running continuously and interacting with applications and APIs at all hours.
Bad bots are 40% of total traffic. That means two-fifths of all internet requests are actively malicious, enabling automated cybercrime, fraud, and business logic abuse. The remaining 13% of bot traffic is classified as “good” bots (search engine crawlers, monitoring tools) plus a new third category the report introduces: AI agents.
AI agents are now a distinct traffic category. For the first time in the report’s 13-year history, Thales has introduced a third classification alongside “good bots” and “bad bots”: AI agents. These are automated systems that retrieve data, execute workflows, and complete transactions through the same interfaces as human users, often indistinguishable from legitimate traffic. The distinction between a customer and an AI agent completing the same task is collapsing.
APIs are the primary target. 27% of bot attacks now focus on API endpoints, skipping the user interface entirely and operating against backend systems at speeds no human operator could match. These attacks frequently use valid credentials and properly structured requests, targeting business logic flaws rather than technical vulnerabilities.
Financial services takes the hardest hit
The sector concentration is sharp. Financial services accounted for 24% of all bot attacks and 46% of account takeover incidents in 2025. Account takeover attacks have surged, with attackers using AI to automate credential stuffing and phishing at a scale and sophistication that traditional detection tools struggle to identify.
The financial services exposure connects directly to the FBI’s 2025 IC3 data, which SAW covered on 24 April: USD 893 million in AI-related fraud losses, with BEC and investment fraud as the leading categories. The Thales data adds the infrastructure layer: the same AI tools that generate convincing phishing content also automate the delivery, testing, and credential harvesting at a volume that manual fraud controls cannot match.
The Thales numbers validate APRA’s 30 April industry letter warning Australian banks about AI-accelerated cyber threats. The letter warned that frontier AI models could “increase the probability, speed and scale of cyber attacks.” The Bad Bot Report quantifies what that looks like in practice: a 12.5x increase in daily AI-driven attacks over 12 months.
The governance gap the report exposes
Most enterprise security frameworks were designed for a world where human traffic was the majority and bots were a secondary concern. The Thales data shows that assumption is now inverted. Three governance implications stand out.
Bot traffic distorts business metrics. When 53% of traffic is automated, engagement metrics, conversion rates, and user behaviour analytics include a substantial non-human component. Organisations making business decisions based on web analytics without filtering bot traffic are making decisions based on contaminated data.
Identity systems cannot distinguish agents from attackers. The Five Eyes agentic AI guidance published on 1 May 2026 identifies privilege risks and accountability risks as two of five core agentic AI threat categories. The Thales data illustrates why: AI agents and malicious bots use the same interfaces, the same authentication methods, and the same request patterns. Without behavioural analysis and intent inference, security systems treat them identically.
API security is now a governance issue. With 27% of bot attacks targeting APIs directly, API security has moved from a developer responsibility to a board-level risk. APIs power core business functionality: payments, data retrieval, authentication, and workflow execution. When attackers interact with APIs at machine speed using valid credentials, the risk is not just technical. It is operational and financial.
What CISOs and platform owners should do
Treat APIs as critical infrastructure. Implement rate limiting, behavioural analysis, and anomaly detection at the API layer. Monitor for valid-credential attacks that exploit business logic rather than technical vulnerabilities. Audit which APIs are exposed to the public internet and whether they need to be.
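The rate-limiting piece of this recommendation can be sketched in a few lines. The following is a minimal illustration, not a production design: a token-bucket limiter keyed per API credential, so a single credential interacting at machine speed is throttled even when its requests are individually valid. The class names, capacity, and refill rate are assumptions for the example.

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-credential token bucket: allows short bursts, caps sustained rate."""

    def __init__(self, capacity=10, refill_rate=1.0):
        self.capacity = capacity        # maximum burst size
        self.refill_rate = refill_rate  # tokens replenished per second
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per API key, created on first use.
buckets = defaultdict(TokenBucket)

def handle_request(api_key):
    """Return True if the request may proceed, False if rate-limited."""
    return buckets[api_key].allow()
```

A real deployment would hold this state in a shared store (and add behavioural signals on top), but the principle is the same: the limit attaches to the credential, not the source IP.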
Deploy intent-based detection. Traditional bot detection relies on identifying non-human patterns (speed, frequency, header anomalies). The Thales data shows AI bots are increasingly indistinguishable from humans. Detection needs to move from identity-based (“is this a human?”) to intent-based (“is this action consistent with legitimate business activity?”).
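One simple way to make the identity-versus-intent distinction concrete is to score a session's action sequence against known legitimate business flows rather than against human-likeness. The sketch below is illustrative only: the flow model, action names, and threshold are assumptions, and a real system would learn transitions from historical data rather than hard-code them.

```python
# Transitions considered consistent with a legitimate purchase flow
# (illustrative; a production model would be learned from real sessions).
LEGIT_TRANSITIONS = {
    ("login", "browse"), ("browse", "browse"),
    ("browse", "view_item"), ("view_item", "browse"),
    ("view_item", "add_to_cart"), ("add_to_cart", "checkout"),
}

def intent_score(actions):
    """Fraction of observed transitions that match an expected flow."""
    if len(actions) < 2:
        return 1.0  # too little evidence to judge
    pairs = list(zip(actions, actions[1:]))
    hits = sum(1 for pair in pairs if pair in LEGIT_TRANSITIONS)
    return hits / len(pairs)

def flag_session(actions, threshold=0.5):
    """Flag sessions whose behaviour diverges from business intent."""
    return intent_score(actions) < threshold
```

The point of the exercise: a scraper hammering item endpoints straight into checkout scores low even if every individual request is well-formed and authenticated, which is exactly the gap identity-based detection misses.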
Filter bot traffic from business analytics. Any dashboard, KPI, or business decision that relies on web traffic data should account for the 53% automated traffic share. Marketing, product, and commercial teams need to understand that their metrics may include substantial non-human activity.
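The distortion is easy to demonstrate with a toy calculation. Assuming sessions have already been tagged by a bot-detection layer (the `is_bot` and `converted` fields here are hypothetical), a conversion rate computed over raw traffic differs materially from one computed over human traffic only:

```python
def conversion_rate(sessions, include_bots=False):
    """Conversion rate over human sessions by default; raw rate if include_bots."""
    pool = sessions if include_bots else [s for s in sessions if not s["is_bot"]]
    if not pool:
        return 0.0
    converted = sum(1 for s in pool if s["converted"])
    return converted / len(pool)

sessions = [
    {"is_bot": False, "converted": True},
    {"is_bot": False, "converted": False},
    {"is_bot": True,  "converted": False},  # bot sessions inflate the denominator
    {"is_bot": True,  "converted": False},
]
```

On this toy data the raw rate is 25% and the filtered rate is 50%: the metric doubles once non-human sessions are excluded, which is the kind of gap that leads product and marketing teams to the wrong conclusions.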
Integrate bot and agent traffic into AI governance. The Five Eyes guidance tells organisations to fold AI agent security into existing frameworks rather than building a separate silo. The same principle applies to bot management: it is not an AppSec problem. It is an operational governance problem that affects security, compliance, fraud, and business intelligence simultaneously.
Map the relationship between legitimate AI agents and malicious bots. Organisations deploying their own AI agents (Copilot, Agentforce, internal automation) are adding to the automated traffic that their own security systems must distinguish from attacks. The governance question is whether the organisation can tell the difference between its own agents and someone else’s.
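One practical mechanism for answering that question is to have the organisation's own agents sign their requests, so the security layer can verify provenance cryptographically rather than inferring it from behaviour. The sketch below uses an HMAC over an agent identifier and request body; the header scheme, key handling, and function names are assumptions for illustration, not a description of any vendor's implementation.

```python
import hashlib
import hmac

# Illustrative shared key; in practice this lives in a secrets manager
# and is rotated, ideally with per-agent keys.
SIGNING_KEY = b"rotate-me-in-a-secrets-manager"

def sign_agent_request(agent_id, body):
    """Signature an internal agent attaches to each outbound request."""
    msg = agent_id.encode() + b"." + body
    return hmac.new(SIGNING_KEY, msg, hashlib.sha256).hexdigest()

def is_own_agent(agent_id, body, signature):
    """Gateway-side check: does this traffic come from one of our agents?"""
    expected = sign_agent_request(agent_id, body)
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, signature)
```

Traffic that fails the check is not necessarily malicious, but it is definitively *not* one of the organisation's own agents, which turns the "ours versus someone else's" question into a verifiable property instead of a guess.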
Vendor disclosure
Thales/Imperva is a cybersecurity vendor selling bot management and application security products. The Bad Bot Report supports Thales’ commercial positioning. SAW has used the report because the 13-year data series, the 17.2 trillion blocked request sample, and the independent coverage (The Independent, Business Wire, Security Boulevard, Analytics Insight) give the findings weight beyond a typical vendor report. Readers should note that the report’s recommendations naturally point toward the capabilities Thales sells.
Sources
- Imperva (Thales), “Bad Bot Report 2026: Bots in the Agentic Age,” report landing page, 28 April 2026 (53% bot traffic, 40% bad bots, 27% API attacks, 24% financial services, 46% account takeovers). imperva.com
- Thales CPL Blog, “Bad Bots in the Agentic Age: What the 2026 Thales Bad Bot Report Reveals,” 30 April 2026 (2M to 25M daily AI attacks, 17.2T blocked requests, AI agent third category, Tim Chang quote). thalesgroup.com
- ResultSense, “AI bot attacks rise 10x: UK third-most targeted,” 29 April 2026 (Tim Chang Independent interview, UK/Australia geographic exposure, intent inference challenge). resultsense.com
- Analytics Insight / Business Wire, “AI-driven Bot Attacks Surged 12.5x According to Thales Bad Bot Report,” 28 April 2026 (full press release, methodology, three structural changes). analyticsinsight.net
- DQ Channels, “Thales Bad Bot Report exposes new attack patterns,” 29 April 2026 (API attack patterns, structural shift framing, SME operational implications). dqchannels.com
- Imperva, “2026 Bad Bot Report,” report download page, 30 April 2026 (full methodology statement, global coverage). imperva.com