AI Change Management: Why 70% of AI Challenges Are About People, Not Technology

Most AI failures aren't technology problems - they're people problems. 70% of challenges rolling out AI are related to people and processes. Here's how to manage the human side of AI adoption.

7 min read · By Jamie Oarton · Last updated March 2026

AI change management is the process of preparing, supporting, and guiding your workforce through AI adoption - ensuring that new AI tools, workflows, and ways of working are actually used, not just deployed.

According to BCG's 2024 research, 70% of challenges rolling out AI are related to people and processes, not technology (BCG, 2024). This means the majority of your AI investment's success depends on how well you manage the human side - and most companies manage it badly.

The People Problem

The gap between what leadership thinks is happening with AI and what's actually happening on the ground is enormous:

  • Approximately 50% of CEOs say most employees are resistant or openly hostile to AI-driven changes (BCG, 2024)
  • 41% of Millennial and Gen Z employees admit to actively sabotaging their company's AI strategy by refusing to use AI tools (BCG, 2024)
  • Only 45% of employees think their company's AI rollout was successful, compared to 75% of C-suite executives - a 30-point perception gap (Axios, 2025)
  • 63% of organisations cite human factors as the primary challenge in AI implementation
  • Only 36% of employees are satisfied with their AI training (BCG AI at Work, 2025)

The pattern is consistent: leadership buys the tools, announces the strategy, and assumes adoption will follow. It doesn't. And when adoption stalls, companies find themselves stuck in AI pilot purgatory - running experiments that never reach production scale.

Why AI Change Management Fails

1. Training without context

Most AI training teaches people how to use a tool. It doesn't explain why this tool matters for their specific role, what's expected of them, or how their job will change. The result: people complete the training and go back to working the way they always have.

According to a Microsoft 365 Copilot adoption study, 7 in 10 employees ignore formal AI onboarding and training videos, preferring to learn through hands-on experience. Formal training isn't enough - people need to learn by doing, with support.

2. Top-down mandate without bottom-up engagement

AI adoption imposed from above generates resistance. According to McKinsey's 2025 research, companies involving at least 7% of employees in the transformation process double their chances of positive excess total shareholder returns. Top performers involve 21-30% of employees (McKinsey, 2025).

AI change management works when people feel involved in shaping how AI is used in their work - not when they're told to use a tool they didn't ask for.

3. Ignoring legitimate fears

Employees who resist AI aren't being difficult. They have legitimate concerns about job security, skill relevance, and the quality of AI outputs. Dismissing these concerns as "resistance to change" misses the point and deepens the divide.

4. No visible leadership commitment

If the leadership team isn't visibly using AI, why should anyone else? Change management research consistently shows that executive modelling of new behaviours is the strongest predictor of adoption.

5. Measuring deployment, not adoption

Companies track how many licences they've deployed, not how many people are actually using the tools effectively. Deployment is not adoption. A tool that's installed but unused is a cost, not an investment.

What Actually Works

1. Start with the workflow, not the tool

Don't ask "how can we use AI?" Ask "what workflow is painful, slow, or error-prone - and could AI make it better?" Starting from the problem keeps the focus on value rather than technology.

When people see AI solving a real problem they care about, adoption follows naturally.

2. Involve employees early

Identify 10-15 AI champions across different departments - people who are curious about AI and respected by peers. Involve them in pilot selection, tool evaluation, and workflow design. They become advocates who drive adoption from within.

According to McKinsey, this involvement-driven approach is the single strongest predictor of transformation success.

3. Experiential learning over formal training

Instead of classroom-style AI training, give people:

  • Guided exercises using AI on their actual work
  • A "first win" within the first week (a task they do regularly, made faster or better with AI)
  • A support channel (Slack, Teams, or a named person) for questions
  • Regular show-and-tell sessions where teams share what's working

People learn AI by using AI, not by watching videos about it.

4. Address concerns directly

Hold honest conversations about how AI will change roles. Be specific:

  • "AI will handle the first draft of client reports - your job shifts to review, insight, and client relationships"
  • "AI will triage customer enquiries - your job shifts to complex cases and relationship management"

Specificity reduces fear. Vagueness ("AI will augment your work") increases it.

5. Measure adoption, not deployment

Track what matters:

| Metric | What it measures | Why it matters |
| --- | --- | --- |
| Active users / week | How many people are actually using AI tools | Deployment ≠ adoption |
| Tasks completed with AI | Volume of real work done with AI assistance | Shows productive use, not just logins |
| Time saved per user | Measurable efficiency gain | Proves value to sceptics |
| Employee satisfaction | How people feel about the change | Predicts sustained adoption |
| Quality impact | Error rates and output quality with AI vs without | Addresses the "is it actually better?" question |
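The first three metrics in the table can be computed directly from a tool-usage log. The sketch below is a minimal illustration, assuming a hypothetical log format of (user, ISO week, tasks completed with AI, minutes saved); the key design choice is that a user only counts as "active" if they completed real work with AI, not if they merely logged in.

```python
from collections import defaultdict

# Hypothetical usage log: (user, ISO week, tasks completed with AI, minutes saved).
# In practice this would come from your AI tools' audit or telemetry exports.
usage_log = [
    ("alice", "2026-W10", 12, 45),
    ("bob",   "2026-W10",  3, 10),
    ("alice", "2026-W11", 15, 60),
    ("carol", "2026-W11",  0,  0),  # licensed but inactive: deployment, not adoption
]

def weekly_adoption(log, licensed_seats):
    """Summarise active users, AI-assisted tasks, and time saved per week."""
    weeks = defaultdict(lambda: {"users": set(), "tasks": 0, "minutes_saved": 0})
    for user, week, tasks, minutes in log:
        if tasks > 0:  # "active" means real work done with AI, not just a login
            weeks[week]["users"].add(user)
        weeks[week]["tasks"] += tasks
        weeks[week]["minutes_saved"] += minutes
    return {
        week: {
            "active_users": len(d["users"]),
            "adoption_rate": len(d["users"]) / licensed_seats,
            "tasks": d["tasks"],
            "minutes_saved": d["minutes_saved"],
        }
        for week, d in weeks.items()
    }

report = weekly_adoption(usage_log, licensed_seats=3)
```

On this sample data, week 2026-W11 shows 15 tasks completed but an adoption rate of only one in three seats: exactly the deployment-versus-adoption gap the table describes.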

6. Lead from the front

The leadership team should be visibly using AI in their own work - in board meetings, in communications, in decision-making. If the CEO uses AI to prepare board papers, that sends a stronger signal than any training programme.

The Change Management Timeline

For a mid-market company introducing AI across the organisation:

| Phase | Timeline | Focus |
| --- | --- | --- |
| Preparation | Weeks 1-4 | Identify workflows, select champions, set up governance, choose tools |
| Pilot | Weeks 5-12 | Small group (10-20 people) using AI on real work, gathering feedback |
| Expand | Months 3-6 | Roll out to broader teams based on pilot learnings, with support |
| Embed | Months 6-12 | AI becomes normal: part of workflows, performance expectations, and culture |

The biggest mistake is trying to skip from preparation to embed. The pilot phase is where you learn what works, what doesn't, and what support people need.

Frequently Asked Questions

How long does AI change management take?

Plan for 6-12 months from first pilot to embedded use across the organisation. You'll see early wins within weeks, but sustained behavioural change takes time. Companies that rush this see initial adoption followed by gradual abandonment.

What about employees who refuse to use AI?

Understand why before responding. If it's fear about job security, address it directly. If it's scepticism about AI quality, show them evidence from the pilot. If it's genuine preference for existing workflows, consider whether AI actually adds value for their specific role. Not every role needs AI - forcing it where it doesn't fit creates resentment.

Do we need a dedicated change management resource?

For most mid-market companies, a dedicated resource isn't necessary - but a named owner is essential. This could be the fractional CAIO, a senior HR leader, or an operational director who understands both the business and the technology. The key is that someone owns the human side of AI adoption, not just the technical side.

What's the relationship between AI change management and AI governance?

They're deeply connected. Governance provides the framework (approved tools, data policies, usage guidelines). Change management ensures people understand and follow that framework. Without governance, change management has no structure. Without change management, governance is just a policy nobody reads. This is especially relevant when dealing with shadow AI - employees using unapproved tools because the organisation hasn't provided managed alternatives.

Should we change incentives and performance metrics?

Eventually, yes. Once AI tools are embedded, performance expectations should reflect the new capability. If a team can produce reports 3x faster with AI, the expectation should adjust. But don't change metrics before people have had time to learn and adopt - that creates anxiety rather than motivation.

Jamie Oarton

AI strategy advisor and fractional Chief AI Officer through Bramforth AI. Helping UK mid-market businesses build AI strategies that connect to how they make money.