Shadow AI is the use of artificial intelligence tools at work without the employer’s knowledge or approval. It includes any AI platform, from ChatGPT and Claude to image generators and code assistants, used outside official IT oversight. It is now one of the most common and least visible risks facing businesses of every size.
The short version
Roughly 4 in 5 employees use AI tools their company hasn’t approved. Most do it to work faster, not to cause problems. But every unapproved prompt is a potential data leak, compliance breach, or security gap that nobody in management can see. Shadow AI is shadow IT’s successor, and it’s growing faster than any previous wave of unsanctioned technology.
How common is shadow AI in the workplace?
More common than most business owners realise.
A 2025 WalkMe/IDC survey of 1,000 US workers found that 78% of employees admit to using AI tools not approved by their employer. That number was around 50% just twelve months earlier.
And it’s not just junior staff. An UpGuard report from late 2025 found that more than 80% of workers use unapproved AI tools, with security professionals among the most frequent offenders. Executives reported the highest rate of regular shadow AI use of any seniority level.
The scale is hard to overstate. Reco’s 2025 research found that organisations average 269 separate shadow AI tools per 1,000 employees.
BlackFog’s January 2026 survey of 2,000 UK and US employees found 86% now use AI tools at least weekly for work. More than a third use free versions of company-approved tools, which often lack enterprise security and data governance protections.
The numbers come from employee surveys, not vendor estimates. People are telling researchers exactly what they’re doing.
Why do employees use shadow AI?
Three reasons, and none of them are malicious.
Speed. AI tools make people faster. Drafting emails, summarising documents, writing code, analysing data. Tasks that took an hour take minutes. When someone discovers that advantage, they don’t wait for an IT procurement process.
Gaps in approved tools. Many organisations either haven’t approved any AI tools or the ones they’ve approved don’t cover what employees actually need. BlackFog found that 63% of employees believe it’s acceptable to use AI tools without IT oversight if no company-approved option is available.
Low friction. ChatGPT, Claude, Gemini, and Perplexity are all free to start and require nothing more than a browser. No account approval, no IT ticket, no installation. An employee can go from zero to using AI in under a minute.
The result is predictable. People use what works. If the approved option is slower, harder, or doesn’t exist, they’ll find their own.
What are the actual risks?
Shadow AI creates four categories of risk, each compounding the longer it goes undetected.
Data exposure
This is the big one. Every time an employee pastes text into an AI tool, that data leaves the organisation. Client names, contract terms, financial figures, source code, medical records, HR details. Most employees don’t think of a ChatGPT prompt as a data transfer. But it is.
Samsung learned this in 2023 when engineers pasted proprietary semiconductor source code into ChatGPT on three separate occasions within 20 days. The data was retained by OpenAI. Samsung banned the tool entirely, but the information couldn’t be retrieved.
IBM’s 2024 Cost of a Data Breach report found that organisations with high levels of shadow AI added $670,000 to their average breach cost. That’s a 16% increase compared to organisations with low or no shadow AI activity. For a full breakdown of the financial exposure, see What Does Shadow AI Cost a Business in 2026?
Compliance breaches
If your organisation operates under GDPR, the Australian Privacy Act, HIPAA, or any data protection regulation, shadow AI is a compliance problem. Personal data processed through AI tools without proper safeguards can trigger regulatory action.
The EU AI Act, which takes full effect in August 2026, introduces specific obligations around AI system transparency and risk management. Organisations that can’t demonstrate oversight of how AI tools are used internally will struggle to meet those requirements. Australian businesses with EU exposure face additional complexity, covered in Does the EU AI Act Apply to Australian Businesses?
For professional services firms, the stakes are higher. Law firms, accounting practices, and financial advisers have fiduciary and confidentiality obligations to clients. An employee pasting client data into an unapproved AI tool isn’t just a technology risk. It’s a professional liability.
Security vulnerabilities
More than half of employees (51%) admit to connecting or integrating AI tools with other work systems without IT approval, according to BlackFog’s 2026 research. That means AI tools aren’t just receiving data from prompts. They’re being wired into workflows, automations, and internal systems without any security review.
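To make the mechanism concrete, here is a hypothetical sketch of that kind of unreviewed integration: a few lines of Python that pipe full, unredacted support tickets to an external model. The scenario, function name, and model choice are illustrative assumptions rather than anything drawn from the surveys above; the point is how little code, and how little friction, the wiring takes.

```python
# Hypothetical example of an unreviewed AI integration. An employee wires a
# summariser into a support workflow; every ticket body, including customer
# names and account details, leaves the organisation with each call.
from openai import OpenAI

client = OpenAI()  # authenticated with a personal API key from the environment

def summarise_ticket(ticket_text: str) -> str:
    """Send the full, unredacted ticket text to an external model and return a summary."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": "Summarise this support ticket in two sentences."},
            {"role": "user", "content": ticket_text},  # sensitive data exits here, with no review
        ],
    )
    return response.choices[0].message.content
```

Nothing in that script is malicious, and nothing in it is visible to IT.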
Free AI tools rarely offer the same data handling protections as enterprise versions. Some store inputs indefinitely. Others use them for model training. Most employees don’t read terms of service before signing up.
Reputational damage
A data leak involving client information is bad enough. A data leak through a tool nobody knew was being used makes it worse. Regulators, clients, and partners all ask the same question: how did you not know?
The answer, for most organisations, is that they had no visibility into AI usage at all.
How is shadow AI different from shadow IT?
Shadow IT, the use of unapproved software and services, has been around for decades. Employees signed up for Dropbox when the corporate file server was too slow, ran Slack before IT approved it, and charged project management tools to a personal credit card.
Shadow AI follows the same pattern but with two critical differences.
Data flows in a different direction. Shadow IT mostly involved storing or accessing data. Shadow AI involves sending data to an external service, often in the form of detailed prompts containing proprietary information. The risk isn’t just that employees are using an unapproved tool. It’s that they’re actively feeding sensitive data into it.
The volume is unprecedented. Shadow IT adoption happened over years. Shadow AI adoption happened over months. ChatGPT reached 100 million users in two months after launch. By comparison, Dropbox took four years to reach the same milestone. Organisations have had far less time to respond.
Which industries are most affected?
Shadow AI affects every industry where knowledge workers use computers. But the risk profile varies.
Professional services firms handle sensitive client data as a matter of course. Staff at law firms, accounting practices, and consultancies use AI to draft correspondence, summarise contracts, and prepare reports. Almost all of this data is confidential, and many professional bodies have specific obligations around how it’s managed.
Financial services faces a different challenge. Client portfolios, transaction records, and regulatory filings are ending up in AI prompts. UpGuard’s 2025 report noted that employees in finance trust AI more than they trust their colleagues, which tracks with higher unsanctioned usage in the sector.
In healthcare, the regulatory framework is stricter but the behaviour is the same. Patient records, treatment notes, and clinical data carry protections under HIPAA, GDPR, and equivalent frameworks globally. AI tools used without oversight create direct compliance exposure.
Technology companies might seem better equipped to manage AI usage, but they’re often the heaviest users. Source code, architecture designs, API keys, and infrastructure details are commonly pasted into AI tools for debugging and code generation. The Samsung incident showed how quickly this can become a serious problem even at a company with significant security resources.
Government and education tend to have the slowest procurement processes, which makes the gap between what employees want and what IT provides even wider. Policy documents, student records, and internal strategy materials are all at risk.
What doesn’t work
Blocking AI tools outright is the most common knee-jerk response, and it doesn’t work.
Samsung, Apple, JPMorgan, and several other large organisations banned ChatGPT or restricted AI tool access in 2023 and 2024. The result? Employees found workarounds. Personal phones. Home computers. VPNs. The usage didn’t stop. It just became invisible.
Blanket bans also create resentment. Workers who feel productive with AI tools will push back against policies that remove them. And in a competitive hiring market, restricting AI access can actually drive talent away.
Writing an AI usage policy and leaving it at that doesn’t work either. Half of employees surveyed by WalkMe said their organisation’s AI guidelines are unclear. A policy nobody reads or understands provides no protection.
Training alone isn’t sufficient. BlackFog found that 60% of employees would take security risks to meet deadlines, even when they understand the risks involved. Awareness without enforcement changes very little.
What actually works
Effective shadow AI management combines four elements, and none of them deliver much without the others. For a structured approach to building these into a formal programme, see How to Build an AI Governance Framework.
Visibility first
You can’t govern what you can’t see. The first step is understanding which AI tools your organisation uses, who uses them, and what data is being shared.
This doesn’t mean surveillance. It means having a clear picture of AI activity across the business. Browser-based monitoring tools can detect AI platform usage without requiring network changes or endpoint agents. Some tools can identify when sensitive data patterns appear in prompts before they’re submitted.
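As a rough illustration of what that pre-submission check involves, the sketch below flags a prompt that matches a few common sensitive-data patterns. It is a deliberately simplistic, assumption-laden example, not a description of how any particular monitoring product works.

```python
# Illustrative pre-submission check: flag common sensitive-data patterns in a
# prompt before it is sent to an AI tool. The categories and regexes are
# simplified assumptions, not a production detection ruleset.
import re

SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private key block": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def flag_sensitive_data(prompt: str) -> list[str]:
    """Return the names of any sensitive-data categories detected in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

# Example: warn before a prompt containing a client email address is submitted.
hits = flag_sensitive_data("Summarise this complaint from jane.doe@example.com about her account")
if hits:
    print(f"Warning: this prompt appears to contain {', '.join(hits)}")
```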
For organisations with no visibility today, even a simple audit (asking teams which AI tools they use) is a useful starting point. The answers are usually surprising.
Clear, practical policies
An AI usage policy should tell employees three things: what they can use, what they can’t share, and what to do when they’re unsure.
The best policies are short, specific, and written in plain language. They cover approved tools by name, define categories of data that must not be shared with AI platforms (client data, credentials, financial records, personal information), and provide a clear escalation path for grey areas.
Policies should also evolve. The AI landscape changes fast. A policy written in January may be outdated by June.
Employee enablement
If people are using shadow AI to solve real problems, give them approved alternatives. Provide access to enterprise versions of AI tools with proper security and data handling. Train staff on which tools are safe, which data types are off-limits, and how to get the most from approved platforms.
The organisations with the lowest shadow AI risk aren’t the ones that restrict the most. They’re the ones that provide the best approved options.
Ongoing monitoring
A single audit isn’t enough. AI adoption is continuous, and new tools launch weekly. Organisations need ongoing visibility into AI usage patterns to spot emerging risks before they become incidents.
This is where dedicated AI governance tools add the most value. Automated monitoring catches what periodic audits miss and provides the evidence base needed for compliance reporting.
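To make “evidence base” concrete, here is a hedged sketch of what a single AI-usage record might contain. The field names and structure are assumptions for illustration, not any vendor’s schema; the point is that useful compliance evidence needs only the platform, the time, and any flagged data categories, not the prompt text itself.

```python
# Illustrative AI-usage evidence record. Field names are assumptions, not a real
# product's schema. Note what is deliberately absent: no prompt contents, no
# keystrokes, no general browsing history.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIUsageEvent:
    timestamp: datetime    # when the AI platform was accessed
    platform: str          # e.g. "chat.openai.com" or "claude.ai"
    department: str        # team-level attribution, not individual surveillance
    flagged_categories: list[str] = field(default_factory=list)  # e.g. ["client data"]
    warned: bool = False   # whether the user was warned before submitting

# Example record, as it might appear in a compliance report.
event = AIUsageEvent(
    timestamp=datetime.now(timezone.utc),
    platform="chat.openai.com",
    department="Finance",
    flagged_categories=["card number"],
    warned=True,
)
```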
How to get started this week
If you’re reading this and realising you have no visibility into your organisation’s AI usage, start here.
Talk to your team. Ask directly which AI tools people use and what for. Make it clear this isn’t about punishment. It’s about understanding. Most employees will be honest if the conversation feels safe.
Identify your sensitive data. List the categories of information that should never enter an AI prompt. Client data, credentials, financial records, personal information, proprietary code. Make that list visible and specific.
Check your existing tools. Many organisations already pay for platforms that include AI features with enterprise-grade data protections. Microsoft 365 Copilot, Google Workspace Gemini, and others offer AI within controlled environments. Your team may not know these exist.
Write a short AI usage policy. It doesn’t need to be 50 pages. One page that covers approved tools, restricted data types, and who to ask when unsure will do more than a comprehensive policy document that nobody reads.
Consider a monitoring tool. If your organisation handles sensitive data, client information, or operates under regulatory requirements, visibility into AI usage isn’t optional. It’s a compliance obligation. Browser-based AI monitoring tools can be deployed in minutes and provide immediate insight into usage patterns.
Frequently asked questions
Is shadow AI illegal?
Shadow AI itself isn’t illegal. Using AI tools at work doesn’t break any law. But the data shared through those tools can trigger legal consequences. If an employee pastes personal data into an AI platform without proper safeguards, that can breach GDPR, the Australian Privacy Act, HIPAA, and other data protection regulations. The legal risk sits with the organisation, not the individual employee.
How do I know if my company has a shadow AI problem?
If your organisation employs knowledge workers and you haven’t actively monitored AI tool usage, you almost certainly have shadow AI. The statistics are consistent across every major survey: between 70% and 85% of employees use AI tools without IT approval. The question is not whether it is happening but how much.
Can’t I just block AI websites on our network?
You can, but it won’t solve the problem. Employees use personal devices, mobile networks, and home computers. Blocking a handful of AI domains on the corporate network misses the majority of usage. It also pushes AI use further underground, making it harder to detect and manage.
What’s the difference between shadow AI and BYOAI?
BYOAI (bring your own AI) is sometimes used interchangeably with shadow AI, but there’s a subtle distinction. BYOAI describes employees choosing their own AI tools, which may or may not be sanctioned. Shadow AI specifically refers to AI usage that’s unknown to or unsanctioned by the organisation. All shadow AI is BYOAI, but not all BYOAI is shadow AI. If an organisation knows about and permits the use of personal AI tools, that’s BYOAI without the shadow.
Which AI tools are most commonly used without approval?
ChatGPT is the most widely used unsanctioned AI tool in most workplaces, followed by Google’s Gemini, Microsoft Copilot (outside of enterprise deployments), Claude, and Perplexity. Image generators like Midjourney and DALL-E also appear frequently. But the landscape shifts fast. New tools launch weekly, and employees adopt them before IT has time to evaluate them.
Does the EU AI Act apply to companies using AI tools, or just companies building them?
Both. The EU AI Act applies to providers (companies that build AI systems) and deployers (companies that use them). Organisations that deploy AI tools in a workplace context, even off-the-shelf tools like ChatGPT, have obligations around transparency, risk assessment, and human oversight. If your team uses AI tools and you operate in or serve the EU market, the Act applies to you. The full requirements take effect in August 2026.
How much does shadow AI cost a company?
Direct costs are hard to quantify because most shadow AI activity goes undetected until something goes wrong. IBM’s data breach research provides the clearest figure: organisations with high levels of shadow AI pay an average of $670,000 more per data breach than those with low shadow AI activity. Beyond breach costs, there are indirect costs: duplicated tool spending, inconsistent outputs, compliance preparation, and the time spent on incident response when things go wrong.
Is monitoring AI usage the same as employee surveillance?
No. AI usage monitoring tracks which AI platforms are accessed and whether sensitive data patterns appear in prompts. It doesn’t record keystrokes, take screenshots, or monitor browsing activity outside of AI tools. The goal is data protection and compliance, not watching what employees do all day. The distinction matters because surveillance erodes trust, while visibility built on clear policies and transparency tends to improve it.
Where this is heading
Shadow AI isn’t going away. The tools are getting better, more accessible, and more embedded in daily work. Desktop AI applications, browser-integrated assistants, and agentic AI workflows are all expanding the surface area of unsanctioned AI use.
The EU AI Act deadline in August 2026 will force many organisations to formalise their AI governance for the first time. Companies in the UK and Australia face similar pressure from existing privacy regulations being applied to AI use cases.
The organisations that act now, even with basic visibility and a simple policy, will be in a far stronger position than those that wait for a data breach or a regulatory inquiry to force the issue.
AI governance doesn’t need to be complicated. It starts with knowing what’s happening.
Last updated: February 2026
Sources:
- WalkMe/IDC AI in the Workplace Survey (Aug 2025)
- UpGuard Shadow AI Report (Nov 2025)
- BlackFog Shadow AI Research (Jan 2026)
- Reco Shadow AI Discovery Report (2025)
- IBM Cost of a Data Breach Report (2024)
- Samsung ChatGPT Leak (Apr 2023)