An organisation can ban ChatGPT, block Claude, and write a comprehensive AI acceptable-use policy, and still have AI processing its most sensitive business data every day. Across the enterprise software stack, AI now operates by default in CRM platforms, search bars, HR systems, and project management tools. The exposure does not require a single employee to open a dedicated AI application. It is ambient, structural, and largely invisible to the security teams tasked with governing it.

The Question Has Changed

The governance conversation inside most organisations is still focused on the wrong threat. Boards ask whether employees are using AI, CISOs write acceptable-use policies for ChatGPT, and IT teams block specific URLs. These are reasonable responses to the adoption wave of 2023 and 2024. They address, at best, perhaps 20 per cent of actual organisational AI exposure.

The practical question has shifted to which existing vendors are already processing organisational data through AI, under what terms, and with what subprocessors.

Lanai’s September 2025 research, published by Help Net Security, found that 89 per cent of enterprise AI activity goes unseen by IT and security teams. That figure is not primarily about employees using ChatGPT. It reflects AI features built directly into tools organisations have already approved (Salesforce, Microsoft 365, Google Workspace) operating below the threshold of what conventional security tooling can detect.

Every Google Search Now Runs Through an AI Model

When employees type work-related queries into Google Search, that data is processed by Google’s Gemini large language models through AI Overviews. Google’s AI Overviews now appear on nearly 50 per cent of all search queries globally, up from the 30 to 44 per cent range reported in mid-2024. Launched broadly in May 2024 and expanded to more than 200 countries by May 2025, they are not optional. Users cannot fully disable them on standard Google Search.

The critical gap is jurisdictional. Google Workspace with Gemini, the enterprise product, offers explicit commitments that customer data will not be used for AI model training. But when employees use consumer Google Search, the default browser search bar, their queries fall under Google’s consumer privacy policy. That policy, updated in July 2023, expanded Google’s rights from using data to improve “language models” to training all “AI models” broadly. Google’s own Gemini Apps Privacy Hub warns users not to enter confidential information that they would not want a reviewer to see or Google to use to improve its services.

The aggregation risk is particularly acute for organisations. Across multiple employees, search patterns can reveal merger and acquisition targets, competitive intelligence, market entry plans, client relationships, and proprietary research directions. Ireland’s Data Protection Commission opened a GDPR inquiry into Google’s AI models in September 2024. In September 2025, a jury ordered Google to pay US$425 million for tracking approximately 98 million users who had explicitly turned off activity tracking, undermining trust in Google’s data-handling promises.

SaaS Vendors Are Processing Data Through AI by Default

Across every major software category, vendors have integrated AI features that process customer data, frequently enabled by default, with opt-out mechanisms buried in settings or requiring email requests. This pattern is consistent and documented across collaboration, CRM, HR, and financial platforms.

Slack’s privacy principles, which came to public attention in May 2024, stated that its systems analyse customer data including messages, content, and files to build AI and machine learning models. Users were opted in by default; opting out required emailing feedback@slack.com. Microsoft confirmed that Microsoft 365 Copilot would be automatically installed on Windows systems beginning October 2025, enabled by default for all tenants outside the European Economic Area. Atlassian Intelligence is enabled by default on all new Premium and Enterprise Cloud instances; administrators must disable it manually each time they add a new product, because the opt-out does not carry over.

CRM and sales platforms process vast customer datasets through AI. HubSpot’s documentation discloses that other customer data within accounts may be used to train its Breeze models, with opt-out available only by emailing privacy@hubspot.com. Salesforce Einstein processes contact information, behavioural data, sales records, and communications through both proprietary and third-party large language models including OpenAI and Anthropic. Many Einstein features come pre-configured at higher subscription tiers.

HR and recruitment platforms present acute risks. According to the World Economic Forum, 88 per cent of companies now use AI for initial candidate screening. A 2024 survey found roughly seven in ten companies allow AI tools to reject candidates without human oversight. HireVue processed nearly 20 million video assessments in Q1 2024 alone and has faced multiple class-action complaints and regulatory scrutiny over its AI assessment methodology. A judge ruled in the Workday discrimination case that an AI recruiter is not a passive tool; it participates in decision-making.

Financial software adds a further layer of exposure. Intuit’s QuickBooks now deploys five AI agents that operate in the background on higher-tier plans, processing financial transactions, invoices, bank statements, and cash flow data. Xero’s JAX assistant answers financial questions via natural language, including through email and WhatsApp channels. Both process highly sensitive financial records through AI as standard product functionality.


| Product | Default AI behaviour | Opt-out method |
| --- | --- | --- |
| Slack ML models | Customer data used for ML by default | Email feedback@slack.com |
| Atlassian Intelligence | Enabled by default on Premium/Enterprise | Manual deactivation per product |
| Microsoft 365 Copilot | Auto-installed on Windows from October 2025 | Admin opt-out via tenant settings |
| HubSpot Breeze | Data used to train models by default | Email privacy@hubspot.com |
| Notion AI (non-Enterprise) | AI providers retain data up to 30 days | Enterprise plans offer zero-retention |
| QuickBooks Intuit Assist | AI agents run in background on higher tiers | Limited granular controls |

A Repeating Pattern of Quiet Changes and Forced Consent

Between 2023 and 2026, a consistent pattern emerged across major technology vendors: quietly introduce AI data processing rights in terms of service, face public discovery and backlash, then walk back the most egregious provisions. This cycle has repeated at least seven times across household-name vendors.

Zoom updated its terms in August 2023 to grant itself a perpetual, worldwide, non-exclusive, royalty-free licence over customer content for AI training, with no opt-out. After viral backlash, CEO Eric Yuan personally apologised and the terms were revised three times within a week. Adobe pushed a mandatory terms re-acceptance in June 2024 requiring users to agree that the company could access their content through both automated and manual methods. Users could not continue using applications, contact support, or even uninstall software without accepting.

LinkedIn quietly enabled an AI training setting by default for all users in August 2024, then updated its privacy policy the following month, after data collection had already begun. A class-action lawsuit filed in January 2025 alleges violation of the Stored Communications Act. In November 2025, LinkedIn expanded AI training to EU users, covering data going back to 2003, with no retroactive opt-out available.

Dropbox was found in December 2023 to have a third-party AI toggle that appeared pre-enabled, prompting Amazon CTO Werner Vogels to publicly warn users. Meta announced plans to train on EU user data in March 2024, was forced to pause by the Irish Data Protection Commission, then resumed in May 2025 despite legal challenges. Microsoft’s Recall feature, announced May 2024, was initially enabled by default with no opt-out, taking screenshots every five seconds and storing data in a plaintext database. After being labelled spyware by security researchers, Microsoft delayed the launch and made it opt-in only.

In each case, consent was structurally coerced rather than freely given. Zoom’s host-consent model meant individual employees had no choice. Adobe’s agree-or-lose-access approach made consent structurally involuntary. LinkedIn’s retroactive application applied AI processing to information shared under fundamentally different expectations. Vendor terms of service are a dynamic attack surface, not a static agreement.

AI-Washing Inflates Claims While Introducing Real Vulnerabilities

Regulators have moved against misleading AI claims. The SEC has brought more than six enforcement actions for AI-washing since February 2024, including the first case against a public company, Presto Automation, whose drive-through AI required human intervention for more than 70 per cent of orders. The FTC has brought more than 12 cases under Operation AI Comply, including a US$17 million settlement with Cleo AI over deceptive cash advance and subscription practices (the enforcement targeted the company’s consumer deception, not its AI technology specifically, though Cleo used AI-based risk assessment). Stanford Law School tracked 46 AI-related securities class actions filed since 2020, with 15 in 2024 alone.

The adoption-versus-value gap is stark. McKinsey’s 2025 State of AI report found that while 78 per cent of organisations use AI in at least one function, more than 80 per cent reported no meaningful impact on enterprise-wide earnings. Only 6 per cent qualify as AI high performers. Across the industry, an estimated 70 to 85 per cent of AI initiatives fail to meet expected outcomes. S&P Global research found 42 per cent of companies abandoned most AI initiatives in 2025, up from 17 per cent in 2024.

Zscaler’s 2025 AI Security Report documented 4.2 million data loss violations through AI tools in 2024. LayerX found 71.6 per cent of generative AI access occurs through non-corporate accounts, bypassing enterprise security controls. AI hallucinations, with rates of 17 to 33 per cent even in retrieval-augmented legal research tools, cost businesses an estimated US$67.4 billion in 2024 according to AllAboutAI, a research aggregator (this single-source estimate has not been independently validated). Nearly 47 per cent of enterprise AI users admitted making at least one major business decision based on hallucinated AI content.

Data Reaches AI Systems Through Supply Chains That Are Not Visible

An organisation’s data can enter AI systems through subprocessor chains, vendor integrations, and cloud infrastructure AI features without the organisation’s knowledge. The EU AI Act (Article 25), enforceable for high-risk systems from August 2026, requires providers and third-party component suppliers to agree in writing on information-sharing, technical access, and compliance assistance across the AI value chain. Penalties reach 35 million euros or 7 per cent of global annual turnover.

The European Data Protection Board ruled in October 2024 that controllers must have identity information for all processors and sub-processors readily available at all times. France’s CNIL published guidance establishing that a processor using the same dataset across multiple customers for its own purposes likely qualifies as a controller, triggering full GDPR obligations. The Australian Privacy Act amendments from December 2024 now require privacy policies to be transparent about AI use in substantially automated decision-making.

NIST’s AI Risk Management Framework explicitly addresses third-party AI risks across its GOVERN and MAP functions, warning of cascading systemic failures where a flaw in a widely used model propagates across an entire ecosystem. ISO/IEC 42001:2023, the first certifiable AI management system standard, requires structured supplier oversight including evaluation of responsible AI requirements.

Enforcement is already underway. Italy’s Garante fined OpenAI approximately 15 million euros in late 2024 for processing European users’ data for AI training without a legal basis. The Dutch DPA fined Clearview AI 30.5 million euros for scraping 30 billion images for AI training. IBM’s 2025 Cost of a Data Breach Report found that 97 per cent of organisations that experienced AI-related breaches lacked proper AI access controls.

The Governance Blind Spot: Browser Versus Desktop

All five risk categories share a structural feature that makes them difficult to detect using the security tooling most organisations have already deployed. Conventional data loss prevention software looks for keywords and patterns. AI operates on meaning. Employees can bypass a DLP filter by rewording a prompt, writing in a different language, or paraphrasing sensitive content. Only 47 per cent of organisations report their current DLP solution is effective at stopping sensitive data from leaving the organisation (Fortinet/Proofpoint, 2025).
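To make the mechanism concrete, the sketch below is illustrative only; the regex rules, codename, and prompts are hypothetical, not drawn from any real DLP product. It shows how a keyword-and-pattern check evaluates a prompt, and why a reworded prompt carrying the same sensitive meaning passes.

```python
import re

# Hypothetical keyword/pattern rules of the kind a conventional DLP filter applies.
DLP_PATTERNS = [
    re.compile(r"\bproject\s+atlas\b", re.IGNORECASE),  # hypothetical deal codename
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),               # SSN-style identifier
    re.compile(r"\bconfidential\b", re.IGNORECASE),
]

def dlp_blocks(prompt: str) -> bool:
    """Return True if any static pattern matches -- the only signal a keyword DLP has."""
    return any(p.search(prompt) for p in DLP_PATTERNS)

# The literal prompt trips the filter.
direct = "Summarise the confidential Project Atlas acquisition memo."
# The same request, reworded: identical meaning, no matching pattern, so it passes.
paraphrased = "Summarise the memo about the deal we are planning for that logistics firm in Q3."

print(dlp_blocks(direct))        # True  -- blocked
print(dlp_blocks(paraphrased))   # False -- the sensitive content leaves unnoticed
```

Both prompts carry the same sensitive meaning; only the surface form differs, which is precisely the dimension a pattern-based control cannot evaluate.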

The visibility problem is more acute for desktop AI applications. Browser-based AI tools (ChatGPT via the web, Claude at claude.ai, Gemini in a browser tab) are at least theoretically subject to network-layer controls, proxy inspection, and CASB enforcement. Cloud Access Security Brokers sit between the user and the SaaS provider and can monitor interactions, apply policies, and log prompt activity. That architecture works because the traffic passes through a visible layer.
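As a rough illustration of why that architecture works, the sketch below shows a network-layer decision point of the kind a proxy or CASB provides. The domain list and policy actions are hypothetical, not any specific vendor's configuration.

```python
from urllib.parse import urlparse

# Hypothetical destination policy a proxy/CASB layer might apply to browser traffic.
AI_ENDPOINT_POLICY = {
    "chat.openai.com": "log_and_allow",   # sanctioned, but prompt activity is logged
    "claude.ai": "log_and_allow",
    "gemini.google.com": "log_and_allow",
    "unvetted-ai-tool.example": "block",  # unapproved service
}

def apply_policy(url: str) -> str:
    """Decide what the network layer does with a browser request to an AI endpoint."""
    host = urlparse(url).hostname or ""
    action = AI_ENDPOINT_POLICY.get(host, "allow")  # non-AI traffic passes untouched
    if action == "log_and_allow":
        print(f"[audit] prompt traffic to {host} recorded for review")
    return action

print(apply_policy("https://claude.ai/chat"))                # log_and_allow
print(apply_policy("https://unvetted-ai-tool.example/api"))  # block
```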

Desktop applications route around it entirely. ChatGPT Desktop, Claude Desktop, and Windows Copilot, built into Windows 11, operate as native processes on the endpoint, bypassing web proxies, invisible to CASB, and outside the scope of standard DLP policies. Prompt Security, a SentinelOne company, described the challenge plainly when extending its governance platform to Claude for Desktop: while browser-based generative AI tools had dominated the landscape, desktop AI tools represent a distinct category requiring a separate lightweight endpoint agent. The company noted it was the first security vendor to support ChatGPT for Desktop as a distinct governance capability, because no existing tooling covered it.
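In sketch form, governance of desktop AI has to happen on the endpoint itself rather than on the network. The process names below are illustrative approximations, and a real endpoint agent would enumerate processes through OS APIs rather than take a list as input; this is a conceptual sketch, not how any vendor's agent works.

```python
# Desktop AI clients appear as local processes, not as web traffic a proxy can inspect.
KNOWN_DESKTOP_AI_PROCESSES = {"ChatGPT", "Claude", "Copilot"}

def flag_desktop_ai(running_processes: list[str]) -> set[str]:
    """Return desktop AI clients found among an endpoint's running processes."""
    return {name for name in running_processes if name in KNOWN_DESKTOP_AI_PROCESSES}

# Illustrative snapshot of an endpoint's process list.
print(flag_desktop_ai(["Outlook", "ChatGPT", "Teams", "Claude"]))  # {'ChatGPT', 'Claude'}
```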

The adoption scale makes this relevant at an enterprise level. ChatGPT has reached an estimated 800 million weekly active users globally (based on OpenAI’s reported 400 million in February 2025 and subsequent growth), with 20 per cent of US adults using it for work tasks and 92 per cent of Fortune 500 companies using it in some capacity. Mobile app downloads have crossed approximately 150 million across iOS and Android. As these tools shift from browser to desktop and from chat to agentic workflows, the gap between where employees use AI and where security teams can see it will widen.

Encouraging employees to use AI through the browser rather than desktop applications is a governance decision, not a technical preference. The practical difference is between usage that stays within reach of existing controls and usage that operates entirely outside them. Browser-based AI is the one category where organisations can maintain visibility and intervene at the point of entry. That window narrows as desktop and embedded AI use grows.

What Boards Should Do Now

The common thread across all five risk categories is that AI data processing has become ambient, embedded in tools organisations already use, governed by terms they may not have reviewed, and flowing through supply chains they have not mapped.

Audit the existing software stack for default AI features. Atlassian, Slack, Microsoft 365, and HubSpot are high-priority candidates. The question is not whether AI is present but whether it is processing data under terms the organisation has reviewed and accepted.
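One way to operationalise that audit is to keep a structured inventory and flag products whose AI features are on by default under terms nobody has reviewed. This is a minimal sketch; the field names and review flags are illustrative, with the vendor defaults taken from the table above rather than from any procurement system.

```python
from dataclasses import dataclass

@dataclass
class SaaSProduct:
    name: str
    ai_enabled_by_default: bool
    opt_out_method: str   # e.g. "admin setting", "email request", "none"
    terms_reviewed: bool  # has legal/security reviewed the AI clauses?

# Illustrative inventory based on the defaults discussed above.
stack = [
    SaaSProduct("Slack", True, "email request", False),
    SaaSProduct("Atlassian Intelligence", True, "admin setting (per product)", False),
    SaaSProduct("Microsoft 365 Copilot", True, "admin setting", True),
    SaaSProduct("HubSpot Breeze", True, "email request", False),
]

def audit(stack: list[SaaSProduct]) -> list[SaaSProduct]:
    """Flag products processing data through AI by default under unreviewed terms."""
    return [p for p in stack if p.ai_enabled_by_default and not p.terms_reviewed]

for product in audit(stack):
    print(f"REVIEW: {product.name} (opt-out: {product.opt_out_method})")
```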

Distinguish consumer from enterprise tiers across all tools. Consumer and free-tier versions routinely offer weaker data protections than enterprise equivalents. Google Search and Google Workspace Gemini operate under different policies. The tier matters, and most organisations have a mix.

Update all Data Processing Agreements with explicit AI-specific clauses. This means no-training guarantees, subprocessor AI restrictions, purpose limitations, and audit rights. Only 33 per cent of AI vendors currently offer IP indemnification, compared to 58 per cent in broader SaaS procurement.

Map the full AI subprocessor chain for critical vendors. EDPB guidance now requires controllers to know all processors and sub-processors at all times. Vendor terms that reference only “third-party AI providers” are not sufficient disclosure under current European enforcement standards.
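A minimal sketch of what mapping the chain means in practice: walk each vendor's declared subprocessors recursively and surface any AI processor not covered by an AI-specific contractual clause. The vendor names, chain depth, and coverage flags here are hypothetical.

```python
# Hypothetical declarations: vendor -> list of (subprocessor, is_ai, dpa_has_ai_clause)
SUBPROCESSORS = {
    "crm-vendor": [("llm-provider-a", True, False), ("email-relay", False, True)],
    "llm-provider-a": [("gpu-cloud-host", False, True)],
    "hr-platform": [("video-screening-ai", True, True)],
}

def unresolved_ai_processors(vendor: str, seen=None) -> list[str]:
    """Recursively collect AI subprocessors lacking AI-specific contractual cover."""
    seen = seen or set()
    findings = []
    for name, is_ai, covered in SUBPROCESSORS.get(vendor, []):
        if name in seen:
            continue
        seen.add(name)
        if is_ai and not covered:
            findings.append(f"{vendor} -> {name}")
        findings.extend(unresolved_ai_processors(name, seen))
    return findings

for vendor in ("crm-vendor", "hr-platform"):
    for gap in unresolved_ai_processors(vendor):
        print("GAP:", gap)   # e.g. GAP: crm-vendor -> llm-provider-a
```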

Establish a policy on browser versus desktop AI use. Mandating that AI tools are accessed through managed browsers rather than desktop applications preserves visibility, enables logging, and keeps AI usage within the reach of existing network-layer controls. Desktop AI applications, once installed, operate outside that reach entirely.

Create an AI risk monitoring function. The 2023 to 2026 pattern of quiet terms-of-service changes demonstrates that vendor AI practices shift frequently and without prominent notice. Tracking policy updates, subprocessor list changes, and regulatory developments is a baseline requirement for any organisation that takes its data governance obligations seriously.


Sources