Shadow AI: What It Is, Why It Matters, and What to Do About It

Shadow AI is employees' use of AI tools without IT approval or oversight. 68% of employees do it. Here's what it means for your business and how to manage it.

7 min read · By Jamie Oarton

Shadow AI is the use of artificial intelligence tools — particularly generative AI like ChatGPT, Gemini, Claude, and Copilot — by employees without the knowledge, approval, or oversight of their organisation's IT or leadership team.

It's not malicious. In most cases, employees are using these tools because they're genuinely useful — they help write emails faster, summarise documents, generate code, analyse data. The problem isn't that people are using AI. The problem is that they're using it with company data, on personal accounts, with no governance in place.

How Big Is the Problem?

The scale of shadow AI usage is far larger than most leadership teams realise. Across multiple studies, usage increased by 156% between 2023 and 2025, and the growth in enterprise AI application traffic has been even more dramatic: a 595% increase between April 2023 and January 2024 alone (Zscaler ThreatLabz, 2024).

What Data Is Being Exposed?

When employees paste company information into public AI tools, they're typically sharing:

  • Email drafts and correspondence
  • Meeting summaries and notes
  • Financial reports and projections
  • Customer data and communications
  • Source code and technical documentation
  • Legal documents and contracts
  • HR information and personnel data

This isn't theoretical. Samsung engineers pasted proprietary semiconductor code and meeting transcripts into ChatGPT on three separate occasions in 2023. Amazon, JPMorgan, Apple, Verizon, and Deutsche Bank all restricted or banned employee ChatGPT use after similar incidents.

Perhaps the most striking example: in mid-2025, the acting director of CISA — the US government's own cybersecurity agency — uploaded at least four documents marked "For Official Use Only" into public ChatGPT. The person literally in charge of US cybersecurity made the same mistake any employee could make.

The Financial Impact

Shadow AI isn't just a security risk. It has measurable financial consequences:

  • Breaches at organisations with high levels of shadow AI cost an additional $670K on average compared to those with low or no shadow AI usage (IBM Cost of a Data Breach Report, 2025)
  • 98% of UK respondents reported financial losses from unmanaged AI risks, with an average loss of US$3.9M per organisation (Compare the Cloud, 2025)
  • AI-related security incidents take 26.2% longer to identify and 20.2% longer to contain than other breaches (IBM, 2025)
  • Organisations with no AI governance have 5x more redundant AI subscriptions, meaning they're paying for overlapping tools nobody is coordinating (Zylo SaaS Management Index, 2025)

The irony is that enterprise-grade AI tools cost relatively little ($60/user/month for enterprise ChatGPT) compared to the average breach cost of $4.45M (Breached.Company, 2026).
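
To make that comparison concrete, here's a back-of-envelope calculation as a minimal Python sketch. The 500-seat headcount is a hypothetical mid-market example; only the $60/user/month and $4.45M figures come from the sources above.

```python
# Back-of-envelope: annual enterprise AI licensing vs. the average breach cost.
# Assumption: a hypothetical 500-seat mid-market firm.
SEAT_PRICE_PER_MONTH = 60        # enterprise ChatGPT, USD (figure cited above)
AVERAGE_BREACH_COST = 4_450_000  # USD (figure cited above)

users = 500
annual_licence_cost = users * SEAT_PRICE_PER_MONTH * 12
print(f"Annual licensing for {users} users: ${annual_licence_cost:,}")
# Annual licensing for 500 users: $360,000

multiple = AVERAGE_BREACH_COST / annual_licence_cost
print(f"One average breach costs {multiple:.1f}x a year of licensing")
# One average breach costs 12.4x a year of licensing
```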

Why Policies Alone Don't Work

Most organisations' response to shadow AI is to write a policy. The data shows this doesn't solve the problem:

  • 43% of UK organisations have a written AI policy, but only 14% actually enforce it (Compare the Cloud, 2025)
  • 40% of employees recall receiving AI training, yet 40% still use unauthorised tools daily (UpGuard, 2025)
  • 45% of workers find workarounds to access blocked AI apps (UpGuard, 2025)
  • Only 15% of organisations have updated their acceptable use policies to include AI guidelines (ISACA, 2025)

The pattern is clear: banning AI or writing policies without enforcement doesn't reduce usage. It just pushes it underground, making it harder to monitor and manage.

What Actually Works

Organisations that successfully manage shadow AI do four things:

1. Provide approved alternatives

If you don't give employees good AI tools, they'll find their own. The most effective approach is to provide enterprise-grade AI tools with proper data handling, then make them easy to use. Embedding AI into existing tools drives higher adoption than deploying standalone AI products (PwC AI Predictions, 2026).

2. Build governance that enables, not blocks

Effective AI governance isn't about saying no; it's about creating a clear framework for saying yes safely. This means (see the sketch after this list):

  • Classifying data by sensitivity level and defining what can and can't be used with AI
  • Creating approved tool lists with clear criteria for evaluation
  • Establishing review processes for new AI tools that are fast enough to keep up with demand
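
As a minimal sketch of the first two points, the Python below gates tool use on a data-classification tier. The tiers, tool names, and approval ceilings are illustrative assumptions, not a recommended policy.

```python
# Sketch of a data-classification gate for AI tool use.
# Tiers, tools, and ceilings are hypothetical examples.
from enum import IntEnum

class Sensitivity(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    RESTRICTED = 3  # e.g. personal data, client contracts, source code

# Highest sensitivity tier each approved tool may handle.
APPROVED_TOOLS = {
    "chatgpt-enterprise": Sensitivity.CONFIDENTIAL,  # enterprise data-handling terms
    "chatgpt-free": Sensitivity.PUBLIC,              # public tool, no guarantees
}

def may_use(tool: str, data_class: Sensitivity) -> bool:
    """True if the tool is approved for data at this sensitivity level."""
    ceiling = APPROVED_TOOLS.get(tool)
    return ceiling is not None and data_class <= ceiling

print(may_use("chatgpt-free", Sensitivity.CONFIDENTIAL))        # False
print(may_use("chatgpt-enterprise", Sensitivity.CONFIDENTIAL))  # True
print(may_use("unknown-ai-app", Sensitivity.PUBLIC))            # False
```

Note how an unlisted tool defaults to "not approved": the review process decides what gets onto the list, and everything else fails safe.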

Organisations with formal AI oversight see 35% more revenue growth and 40% better cost control than those without (Compare the Cloud, 2025).

3. Monitor and measure

You can't manage what you can't see. Implement monitoring to understand what AI tools are being used across the organisation, what data is flowing through them, and where the risks are concentrated. This isn't about surveillance — it's about visibility.
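
What that visibility might look like in practice: the sketch below tallies AI-tool traffic from a web proxy log. The one-entry-per-line log format and the domain list are assumptions; a real deployment would lean on a CASB or SaaS-management platform rather than a script.

```python
# Sketch: flag AI-tool traffic in a web proxy log.
# Assumed log format: "timestamp user domain", one entry per line.
from collections import Counter

AI_DOMAINS = {
    "chatgpt.com", "chat.openai.com", "gemini.google.com",
    "claude.ai", "copilot.microsoft.com",
}

def ai_usage_summary(log_lines):
    """Count hits per AI domain and the distinct users behind them."""
    hits, users = Counter(), {}
    for line in log_lines:
        parts = line.split()
        if len(parts) != 3:
            continue  # skip malformed lines
        _timestamp, user, domain = parts
        if domain in AI_DOMAINS:
            hits[domain] += 1
            users.setdefault(domain, set()).add(user)
    return hits, users

sample = [
    "2025-06-02T09:14 alice chatgpt.com",
    "2025-06-02T09:15 bob claude.ai",
    "2025-06-02T09:16 alice chatgpt.com",
]
hits, users = ai_usage_summary(sample)
print(hits)                                   # Counter({'chatgpt.com': 2, 'claude.ai': 1})
print({d: len(u) for d, u in users.items()})  # {'chatgpt.com': 1, 'claude.ai': 1}
```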

4. Own the strategy at leadership level

Shadow AI thrives in the absence of leadership. When nobody at the board level owns AI strategy, individual teams and employees fill the gap themselves. Only 7% of UK businesses have fully embedded AI governance frameworks (Compare the Cloud, 2025). The organisations that manage AI well have a named leader responsible for AI direction.

The UK Regulatory Context

UK organisations face specific regulatory exposure from unmanaged AI use:

  • UK GDPR fines can reach up to £17.5M or 4% of annual global turnover, whichever is greater (worked example after this list)
  • The £14M Capita fine in October 2025 was one of the largest ICO enforcement actions to date, affecting 6.6M people (Measured Collective, 2025)
  • The ICO has stated explicitly that "ignorance of employees' AI use doesn't absolve the organisation"
  • In June 2025, the ICO launched an AI and biometrics strategy with enforcement action planned for 2025/2026, including updated guidance on automated decision-making
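
For a sense of scale, the fine cap quoted above is simply the higher of two figures. The turnover values in this sketch are invented examples.

```python
# UK GDPR maximum fine: the greater of £17.5M and 4% of annual global turnover.
def max_gdpr_fine(annual_global_turnover_gbp: float) -> float:
    return max(17_500_000, 0.04 * annual_global_turnover_gbp)

for turnover in (100_000_000, 1_000_000_000):  # hypothetical turnovers
    print(f"Turnover £{turnover:,}: cap £{max_gdpr_fine(turnover):,.0f}")
# Turnover £100,000,000: cap £17,500,000   (4% is £4M, so the fixed floor applies)
# Turnover £1,000,000,000: cap £40,000,000 (4% exceeds £17.5M)
```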

For mid-market companies, the question isn't whether regulators will look at AI use — it's when.

Frequently Asked Questions

Is shadow AI illegal?

Shadow AI itself isn't illegal, but the data handling it involves often violates existing regulations — particularly GDPR, industry-specific data protection requirements, and contractual obligations around client data confidentiality.

How do I find out what AI tools my employees are using?

Start with a confidential, non-punitive survey. Complement this with SaaS management tools that can detect AI application usage across your network. The goal is visibility, not punishment — most employees will be honest if they feel safe to be.

Should I ban AI tools entirely?

No. Banning AI tools has been shown to be ineffective — employees simply find workarounds. A better approach is to provide approved alternatives with proper security controls and create clear guidelines for acceptable use.

How quickly can shadow AI be addressed?

A basic AI governance framework can be established within 90 days: tool audit in weeks 1-2, policy development in weeks 3-6, approved tool deployment in weeks 4-8, and monitoring implementation in weeks 8-12. However, changing culture and behaviour is an ongoing process that takes 6-12 months.

What's the first step?

Understand what's actually happening. Before writing policies or buying tools, conduct an honest assessment of AI usage across your organisation — what tools, what data, what scale. Most leadership teams are surprised by what they find.

Jamie Oarton is an AI strategy advisor and fractional Chief AI Officer through Bramforth AI, helping UK mid-market businesses build AI strategies that work.