Changing minds in the age of AI: Lessons from Psychology for Organisational Change
- Oliver Nowak
- Aug 4
Updated: Aug 7
When business leaders discuss AI transformation, the conversation often gravitates towards technology: what platform to buy, how to manage data, or the return on investment. But successful AI adoption is less about algorithms and more about the people using it. David McRaney, author of How Minds Change, discovered that persuasion hinges on human psychology rather than the strength of your arguments. People change when they feel safe to examine their own thinking, not when they are inundated with facts. In this article, I will explore research from social psychology, behavioural science, and change management to understand how to bring people along an AI journey.

Why smart people resist facts
Imagine you're tasked with telling a sceptical team of colleagues how AI will transform their jobs. In most businesses you would bring polished slides showing productivity-gain statistics, yet the sceptics dig in. This isn't because they are ignorant, as we are all guilty of assuming. It's because their brains are wired to resist. Studies show that when people are presented with facts that contradict their beliefs, they instinctively entrench further. We've all witnessed this in our own lives. The instinct evolved to help us preserve our social identity and reduce cognitive dissonance (the discomfort of holding conflicting beliefs). In an organisation, a person's job role is part of their identity, so changing how they work can feel like an attack on who they are. This is especially true when leaders present AI as a way to cut jobs or lower costs: staff view this as a threat, triggering defensive reasoning.
Another bias at play during AI discussions is optimism bias: our tendency to believe that negative outcomes happen to others, not to ourselves. Irrational Labs surveyed 767 knowledge workers and found that only 8% believed AI would replace their own job, yet 29% thought it would replace workers in other industries. This optimism makes people underestimate the need to upskill. A related phenomenon is the better-than-average effect, which leads people to overestimate their competence relative to their peers (for example, almost everyone thinks they're a better-than-average driver). Combine these biases and you get a person who sees AI risk as someone else's problem. That's not ideal.
To better understand how people process persuasive messages, psychologists Richard Petty and John Cacioppo developed the elaboration likelihood model (ELM). In essence, individuals process information via one of two routes. When they have high motivation and ability, they take the central route, scrutinising the arguments and requiring strong evidence to be persuaded. In an AI context these will be your high-involvement audiences: data engineers, data scientists, subject matter experts and so on. When people have low motivation or limited ability, they rely on peripheral-route processing, using cues such as the messenger's credibility, emotional appeal, or social proof. Front‑line workers who are time‑poor may not pore over every slide, but they will notice whether their manager uses AI or whether colleagues rave about it. This matters because a one-size-fits-all presentation may resonate with some people but leave the rest completely unmoved.
The role of identity
An interesting UConn article explains that when information threatens a person’s worldview or social identity, they may perceive it as a threat and cling harder to their beliefs. When it comes to AI, many people fear being seen as obsolete or incompetent. They're scared of being replaced. In fact, a Wharton paper on AI adoption notes that 48% of workers feel uncomfortable admitting to their manager that they used AI because they worry it looks like cheating or laziness. These anxieties are not technical; they are emotional. Leaders must therefore address identity and psychological safety rather than just training. So what techniques can help with this?
Persuasion as self-persuasion:
David McRaney argues that persuasion is really self-persuasion, not coercion. You cannot force someone to adopt AI, just as you can't force someone to agree with you in any other walk of life. You can only help them explore what a different perspective might look like. Several useful techniques borrowed from motivational interviewing, a counselling method, can help with this:
Engaging – build rapport and trust through reflective listening and unbiased open questions.
Focusing – agree on a target behaviour that feels reasonable based on what the person has shared, e.g. experimenting with a new AI tool.
Evoking – draw out the person's own motivations and concerns, which will likely stem from past personal experiences and memories, e.g. they might have a family member who lost their job after their company invested in AI.
Planning – collaborate on small steps towards change, allowing the person to lead so they feel a sense of autonomy.
Deep canvassing:
One of the most effective persuasion techniques to emerge in recent years is deep canvassing. Developed by LGBTQ+ activists campaigning for marriage equality, deep canvassing involves long, empathetic conversations where the canvasser listens more than they talk. A Yale study found that these conversations can shift attitudes for at least nine months, a significant improvement over typical campaign messaging. In a renewable‑energy campaign, 1,181 one‑to‑one conversations achieved a 40% movement rate and convinced a city council to adopt renewable goals.
Deep canvassing follows a pattern similar to motivational interviewing:
Start with stories – the canvasser introduces themselves briefly and invites the other person to share a personal experience related to the issue.
Ask open‑ended questions – queries like “On a scale of one to ten, where do you stand on this issue?” and “Why does that feel right to you?” surface underlying motivations.
Reflect and validate – repeating back what you hear and acknowledging feelings builds trust.
Explore the origin – probing when someone first formed their opinion helps shift focus from ideological conclusions to concrete memories.
Close gently – rather than telling the person what to think, ask whether anything could have been different. The canvasser may then share a personal story and plant a seed for change.

This technique works because it removes the adversarial frame. When people feel heard and respected, they are more willing to consider new perspectives. In organisational contexts, deep canvassing can be adapted to AI change discussions. Instead of lecturing a customer service rep on the NLU accuracy of a new chatbot, understand how they first came to learn about AI, what their experiences with it have been so far, and any early negative experiences they might have had. Then plant a seed for change: how might they use this as an opportunity to elevate themselves by upskilling? And, crucially, this only really works on a one-to-one basis, because it must be personalised.
Storytelling:
We humans love a story. A story gets to the core of an issue, and to the core of a person's emotions, far more effectively than facts and figures on a page. The best technology companies are also the ones with the best story to tell; that's not a coincidence. The thread connecting self-persuasion and deep canvassing is storytelling. If people can see themselves as a character in a larger story, they are much more likely to be convinced. But it goes both ways: don't forget to listen to the stories your focus group has to tell about their experiences. For example, a particularly reluctant colleague may share that they once saw a chatbot misinterpret a customer query, causing them huge embarrassment. This memory explains their resistance better than any general concern about “automation.” Armed with these stories, you can craft training that acknowledges specific fears and highlights relatable successes, and I guarantee it will be orders of magnitude more effective.
The unique challenges of AI change
Despite the hype, AI adoption often stalls. Consultport reports that only 11% of companies realise significant financial benefits from AI initiatives. Worse, only 10% manage to scale AI solutions beyond pilots. Most organisations implement AI in pockets but fail to integrate across functions, leaving value untapped. The reasons are rarely technical. Instead, cultural resistance, organisational silos and misaligned strategies dominate the list.
McKinsey’s 2025 report found that while nearly all companies invest in AI, only about 1% feel mature in deployment because leaders, not employees, hesitate. In other words, employees are often ready for AI but leadership inertia prevents progress. This misalignment creates confusion: employees may want to experiment but fear negative consequences. Meanwhile, leadership champions AI in presentations yet fails to allocate time or resources.
Psychological barriers:
Okoone’s analysis highlights emotional barriers: 63% of U.S. workers rarely or never use AI, not because it’s unavailable but because they feel uncertain or fearful. Among employees in AI‑enabled environments, 56% experience impostor syndrome: the persistent feeling of inadequacy despite evidence of competence. This is magnified in AI contexts, where the tools are new and the pace is fast. When people fear being exposed as incompetent, they may avoid using AI altogether.
The Yerkes‑Dodson law suggests performance improves with anxiety up to a point, then declines. Irrational Labs notes that employees who use AI regularly actually show a healthy level of concern about job displacement. The right amount of anxiety motivates learning; too little leads to complacency, too much to paralysis. Leaders must strike a balance by normalising concerns while offering support.
Structural barriers:
Changing attitudes is only part of the solution; we must also remove structural barriers. Interestingly, Irrational Labs found that managerial support is the strongest predictor of AI usage: when managers endorse AI, usage reaches 79%; without support it falls to 34%. Employees often lack the time, approval or access to AI tools, creating an intention‑action gap. Without proper incentives, workers who find efficiency gains may fear being punished with more work, or even losing their jobs.
Similarly, this Wharton article emphasises the AI knowing‑doing gap: 76% of workers feel the urgency to become AI experts, yet only 33% adopt AI in their daily tasks. That's a significant mismatch. Nearly half of desk workers hesitate to admit using AI to their manager for fear of being seen as cheating or lazy. Additionally, only 40% of executives provide formal AI training. Without structured learning, employees simply don’t know how to start, how to work with the tools, or what they can safely admit to, all for fear of being exposed.
Digital temperament:
The Wharton study also introduces the concept of digital temperament: an individual’s psychological relationship with technology. People with high digital temperament approach new tools with curiosity; those with low digital temperament experience anxiety. Training alone cannot change these traits, which explains why some employees remain hesitant even after training. Effective change management must therefore offer multiple forms of support, including peer mentoring, user experience design, and permission to experiment.
Designing Interventions
Behavioural contagion:
People do not adopt new behaviours in a vacuum; they look to others. Irrational Labs found that employees are three times more likely to use AI if they know someone who does. This is behavioural contagion, a phenomenon where behaviour spreads through networks. Making AI usage visible fosters adoption. Encouraging early adopters to share their workflows, or establishing dedicated channels for AI tips, creates social proof. Best of all, shine a light on yourself: leaders should demonstrate how they use AI in their own work.
The Fogg behaviour model:

Behavioural scientist BJ Fogg suggests that for a behaviour to occur, three elements must coincide: motivation, ability, and a prompt. The Wharton article adapts the Fogg behaviour model for AI adoption. Employees need motivation (a reason to use AI), ability (skills and tools) and prompts (triggers that nudge them). Training builds ability, but organisations often neglect prompts and motivation. Leaders can provide prompts by weaving AI usage into daily workflows, creating formal objectives linked to AI adoption and allocating time for experimentation.
To increase ability, the article suggests exposing employees to training, mentoring and real-world experience. To enhance motivation, leaders must first grant permission to experiment, then provide clear rewards and tie AI use to job satisfaction. Without progress on both axes, employees struggle to think creatively about AI tasks or to design effective prompts for language models. This model is so interesting because it explains why so many people remain stuck: they lack either motivation or ability, and no one has provided a prompt at the right moment.
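Fogg's insight can be summarised as "behaviour = motivation × ability × prompt": a behaviour fires only when a prompt arrives while motivation and ability together sit above an action line. The sketch below illustrates that interaction; the 0–10 scales and the threshold value are illustrative assumptions, not part of the model's formal definition.

```python
# A minimal sketch of the Fogg behaviour model applied to AI adoption.
# Scales (0-10) and the action threshold are illustrative assumptions.

def behaviour_occurs(motivation: float, ability: float, prompted: bool,
                     action_threshold: float = 25.0) -> bool:
    """A behaviour fires only when a prompt arrives while the combination
    of motivation and ability sits above the action line."""
    if not prompted:
        return False  # without a trigger, even willing and able people do nothing
    return motivation * ability >= action_threshold

# A keen employee who lacks skills, nudged by a workflow reminder:
print(behaviour_occurs(motivation=9, ability=2, prompted=True))
# The same employee after role-specific training:
print(behaviour_occurs(motivation=9, ability=6, prompted=True))
# Skilled and motivated, but no prompt ever arrives:
print(behaviour_occurs(motivation=9, ability=6, prompted=False))
```

The third call is the case many organisations miss: training raised ability, but no one wove a trigger into the daily workflow, so nothing happens.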
Time as capital and incentives:
One of the Wharton authors’ recommendations is to view time as capital. Instead of using AI to simply eliminate roles, organisations can let employees “invest” saved hours into innovation or upskilling. For novice workers, this might mean basic AI training; for experts, it could involve fine‑tuning models or exploring high‑risk, high‑payoff use cases. This reframing reduces fear because efficiency gains are no longer a threat but an opportunity for growth.
Show what good looks like:
Leaders often assume that employees will naturally see how AI applies to their role, but many don't. The Wharton article suggests simply showing people what good looks like. Provide role‑specific examples and pair formal training with peer mentoring. Identify AI champions within teams who can share practical tips, reducing stigma. This echoes the behavioural contagion principle: peer influence is more powerful than top‑down mandates.
Co-create AI-first roles:
Encourage employees to rewrite their roles for an AI‑first future. When people help define how their work might evolve, they feel more agency and are more willing to adopt the tools that enable that future. This reduces resistance rooted in fears of replacement and shifts the narrative from “AI will take my job” to “AI will help me reinvent my job.”
Create peer support networks:
For many employees, AI tools feel foreign. To increase adoption, embed AI capabilities within existing workflows, offer proactive suggestions, provide templates and use familiar working patterns. The goal is to reduce cognitive load so that interacting with AI feels natural. On top of that, create peer support networks where employees who successfully integrate AI share experiences. For those with low digital temperament, seeing a colleague succeed demystifies the technology.
Your AI Change Management Action Plan
Armed with these insights from psychology and change management, how can leaders design AI programmes that actually change behaviour? Here's an action plan.
Start by listening
Before announcing your grand AI vision, invest time in listening. Use deep canvassing techniques. Sit down with employees one‑to‑one, not to sell, but to understand. Ask:
“On a scale from 1–10, how comfortable are you with AI in your role?”
“Why does that number feel right to you?”
“When did you first hear about AI at work or at home?”
The goal is to uncover the personal experiences driving their attitudes. Reflect back what you hear to build trust. As McRaney suggests, once people start explaining their own reasoning, they naturally spot inconsistencies. This self‑persuasion is far more durable than any external argument you can make.
Segment your audience and tailor your message
Recognise that not everyone processes information the same way. For central processors, provide thorough documentation, risk assessments, performance metrics and opportunities to experiment. Encourage engineers or analysts to challenge assumptions; their scrutiny will strengthen the initiative. For peripheral processors, keep communication concise and engaging. Use testimonials from peers, videos, or interactive demos. A short clip of a customer service agent saving time with generative AI can be more persuasive than a long white paper. Always consider personal relevance by highlighting how AI solves their specific pain points.
Coach rather than sell
In training sessions, shift the tone from preaching to coaching. Begin by asking participants what they hope to gain from AI and what concerns they have. Use affirmations to recognise their expertise and efforts. Play back their comments to show you are listening. Then collaboratively set small, achievable goals: “This week, let’s try using the AI tool to draft one email. How does that sound?” This respects autonomy and fosters ownership. People are more likely to embrace change when they feel in control of the journey.
Make AI usage visible and celebrate success
Behavioural contagion thrives on visibility. Publicly recognise early adopters. Invite them to demonstrate how they use AI at staff meetings. Create messaging channels or forums dedicated to AI tips. Encourage leaders to model vulnerability: sharing their own missteps and learning curves normalises experimentation. As the research shows, employees are more likely to adopt AI when they know others are using it. Visibility turns AI from a mysterious technology into a social norm.
Remove structural barriers
Secure the resources and approvals that allow employees to experiment. Offer clear guidelines on which AI tools are authorised, who can approve experimentation expenses and how data privacy is managed. Provide scheduled time for experimentation. Tie performance objectives to AI adoption where appropriate. Use prompts, such as reminders in project management tools, to nudge employees to use AI at key moments. Without these structural supports, motivation and ability will not translate into action.
Foster a culture of psychological safety
Anxiety and impostor syndrome are common during AI transformation. Leaders should normalise feelings of uncertainty and emphasise that learning AI is a journey. Encourage employees to share mistakes and lessons without fear of judgement. Recognise that moderate anxiety, when paired with support, can enhance performance. Too much pressure, however, will cause disengagement.
Reframe efficiency and job security
The fear that AI will eliminate jobs cannot be ignored. Reframe efficiency gains as opportunities to reallocate time. Adopt Wharton’s time as capital approach: allow employees to invest freed-up hours in creative projects, learning or innovation. Work with HR to design new career paths that integrate AI. Encourage employees to help define their AI‑first roles. When people see how AI can enhance rather than threaten their identity, resistance decreases.
Continuously measure and adapt
Finally, track what matters. Counting the number of training sessions or pilot projects is not enough. Measure voluntary usage, time saved, quality of output and employee satisfaction. Use these metrics to iterate on training, user experience and incentives. Behaviours and attitudes change over time; so should your change strategy.
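As a concrete illustration of tracking the signals above rather than session counts, here is a hypothetical sketch. The record fields (voluntary sessions, self-reported minutes saved, a pulse-survey score) are assumptions for the example, not a prescribed schema.

```python
# Hypothetical adoption metrics: measure voluntary usage, time saved and
# satisfaction rather than the number of training sessions delivered.
from dataclasses import dataclass

@dataclass
class UsageRecord:
    employee: str
    voluntary_sessions: int   # AI uses not mandated by a rollout task
    minutes_saved: float      # self-reported time saved this week
    satisfaction: int         # 1-5 pulse-survey score

def adoption_summary(records: list) -> dict:
    """Aggregate the behavioural signals the action plan recommends."""
    active = [r for r in records if r.voluntary_sessions > 0]
    n = len(records)
    return {
        "voluntary_usage_rate": len(active) / n,
        "avg_minutes_saved": sum(r.minutes_saved for r in records) / n,
        "avg_satisfaction": sum(r.satisfaction for r in records) / n,
    }

team = [
    UsageRecord("ana", 5, 90.0, 4),
    UsageRecord("ben", 0, 0.0, 2),
    UsageRecord("chi", 2, 30.0, 4),
]
print(adoption_summary(team))
```

Re-running a summary like this each iteration makes the "measure and adapt" loop tangible: if voluntary usage stays flat while training hours climb, the problem is motivation or prompts, not ability.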
Some closing reflections
Changing minds is not about winning debates but about forging partnerships. The social science we’ve explored (biases, ELM, motivational interviewing, deep canvassing, the Fogg behaviour model) suggests that people change when they feel respected, heard and empowered. In AI transformation, this means moving away from top‑down mandates and towards co‑creation. Your employees are not obstacles; they are collaborators who bring valuable domain knowledge and lived experiences.
The journey starts with listening. Leaders need to step out of the boardroom and into one‑to‑one conversations. They must tailor messages to different cognitive styles, coach rather than command, make adoption visible, remove structural barriers, support psychological well‑being, and invite employees to co‑craft their future roles. The rewards are worth the effort: organisations that manage the human side of change will not only deploy AI successfully but will also cultivate a culture of learning and innovation.
AI will undeniably transform work. Whether that transformation leads to empowerment or alienation depends on how we manage the human transition. By applying the science of persuasion and behaviour change, leaders can help their teams write a new narrative; one in which AI is not something done to people but something built with them.
