AI Is a Change Leadership Test: Here’s How to Pass It
Most AI conversations are happening at the surface.
Which tools?
Which prompts?
Which policies?
Which risks?
While important, these questions miss an essential consideration:
Is your organization capable of absorbing the biggest workplace shift most of us will see in our careers?
A challenge emerges
At first, my own exploration of AI focused on what it meant for my role, our agency, and the communication profession. That’s a natural place to start—we all wonder how our work will change and what it means for us personally.
But it didn’t take long to realize that the bigger challenge isn’t technical at all. It’s organizational. In 25 years of leading technology and transformation efforts, I’ve seen how often promising tools fall short because organizations underestimate the human side of change. We’ve never been particularly good at helping people navigate change at this scale.
Now we have a revolution on our hands. The stakes are higher than ever, the pace is faster, and AI will test our change capability in ways few transformations have before. And you can bet AI’s potential won’t be realized unless communication, culture, and leadership rise to meet the moment.
The gap between investment and readiness
Organizations are investing heavily in AI. Some are even restructuring and reducing roles in anticipation of the efficiencies it promises to deliver.
Across industries, professionals are swapping prompts and productivity hacks, eager to stay relevant and work smarter. There are inspiring stories of innovation emerging, too.
But beneath the excitement, something critical is being overlooked.
Many organizations are introducing AI tools without clear guardrails, shared expectations, or alignment on how these tools should be used. Leaders are mandating adoption without addressing the deeper issues that determine success: trust, clarity, consistency, and the mindsets and behaviours required to work differently. Technology is moving fast. Organizational readiness is not.
This is why so many AI investments fail to deliver the return leaders expect. Not because the tools aren’t powerful—but because organizations are treating AI as a capability rollout instead of the large-scale change effort it truly is.
Technology change is always about one thing
When I see a shift this significant, I look back on what decades of leading technology change have taught me.
It’s about people.
Again and again, I’ve seen change efforts stumble not because the strategy was wrong or the technology wasn’t powerful, but because the communication strategy and infrastructure (including leaders themselves) failed to give people what they needed to navigate change successfully. AI is no different.
If organizations want AI to strengthen performance rather than strain it, they need to start where all successful change efforts start: the human foundation.
Where the AI advantage is really built
Tackle the trust issue first
There is real enthusiasm about AI—and real fear.
People feel vulnerable. Not just because they’re unsure about the technology, but because they’re unsure how it will affect them. And in many organizations, the trust gap about AI is exacerbated by a lack of trust in leaders.
The latest Edelman Trust Barometer underscores the challenge. Trust in AI remains low, with many people expressing concern about being left behind. Only 44% of people globally say they feel comfortable with businesses using AI, and that number is even lower in the U.S.
This creates a clear reality: organizations that fail to address trust will face resistance. Those that prioritize transparency, fairness, and clear use cases are far more likely to build long-term trust, drive meaningful adoption, and ultimately realize competitive advantage.
Trust, however, doesn’t come from enthusiastic town halls or top-down messaging about how “exciting” AI is.
Trust is earned.
It grows through inclusion, openness, and visible proof that AI is being used responsibly and in ways that genuinely help people do their work better. Storytelling matters—but so does honesty about risks, limitations, and unknowns.
And perhaps most importantly, people don’t build trust by being told what’s great about the technology. They build it through their own experience and through what they see working among their peers.
Trust starts with transparency, openness, and clarity. But it’s sustained through experience.
Turn on learning through psychological safety
Even in organizations where trust is growing, another barrier often appears: fear of getting it wrong.
No one wants to fail, especially not in a high-visibility, high-stakes, high-coolness-factor area like AI. But AI adoption depends on experimentation and learning, which inevitably include missteps, unexpected outputs, and failures.
That’s where psychological safety becomes essential.
Teams need to know they can question AI outputs, flag issues like “workslop,” and raise concerns without fear of looking unskilled or resistant. Unlike traditional work, where people learn primarily from their own experience, AI introduces a new layer of learning: learning from system behaviour. That means teams must be able to openly share what didn’t work, where AI fell short, and where human oversight made the difference.
More than ever, organizations need to be learning organizations, not knowing ones.
And there’s another important dimension to this. The technology itself is evolving at a lightning pace. AI should be framed not as a rollout with a finish line, but as an ongoing process of learning and experimenting. The challenges teams face today will not be the same as those they face tomorrow, which makes adaptability, curiosity, humility, and shared learning essential.
Leaders play a critical role here. They should communicate clearly that discovering AI’s limitations is not a failure—it’s valuable intelligence. Early AI “mistakes” should be treated as learning opportunities that help teams calibrate expectations, refine practices, and build stronger collaboration protocols between humans and technology.
Build skills in the flow of work
AI adoption doesn’t happen because people attend a single training session. It happens because they learn, practice, and refine new ways of working over time.
This is where communication plays a critical—and often overlooked—role.
Learning doesn’t live only in formal training platforms. It happens in the flow of work, through timely guidance, shared experiences, and practical tools people can use immediately. There’s a middle ground between one-way announcements and structured learning programs—and communication sits squarely in that space.
Communication teams can help make learning visible and accessible by providing content that supports real work, such as:
- Checklists for responsible AI use
- Examples of new workflows that integrate AI effectively
- Stories from teams about what’s working (and what isn’t)
- Short “how we use AI here” guides tailored to different roles
An AI Centre of Excellence on the intranet is one powerful way to bring this together—a single, evolving hub where people can find guidance, guardrails, examples, and shared learning.
When communication supports skill-building in this way, it reinforces that AI adoption isn’t a one-time event. It’s a continuous capability that grows through practice.
Stop scaling the mess
It’s ironic, but many of the challenges organizations are encountering with AI are the same ones that have slowed technology change for decades.
In my experience leading large-scale change efforts, the issues that caused the most delays weren’t the systems themselves. They were messy data, unclear workflows, and a lack of standardized processes. Long before a project got off the ground, we’d be untangling how work was really done, cleaning up information, and mapping processes, just to create the foundation for change. (I learned the important data rule—garbage in, garbage out—very quickly!)
AI exposes these gaps even faster.
Efficiency at scale doesn’t come from simply adding AI tools. It comes from consistency and finally confronting the mess that already exists. When teams use AI differently, rely on fragmented data, or operate with disjointed workflows, the result isn’t productivity—it’s confusion, rework, and systems people don’t trust.
This is why so many AI pilots show promise but struggle to scale into day-to-day operations. Without alignment around processes, data, and expectations, AI can’t move beyond isolated wins.
Communication plays a key role in addressing this. Not by broadcasting updates, but by helping create shared understanding and visible structure. That includes:
- Clearly communicating governance and expectations
- Making standardized workflows and processes easy to find
- Using the intranet as a single source of truth that breaks down silos
- Sharing process flows, templates, and agreed ways of working
Consistency doesn’t limit innovation—it enables it. It’s what allows learning to compound and efficiency to scale across the organization.
Don’t outsource thinking
AI can make work faster. But speed isn’t the same as progress.
Using AI effectively requires a different kind of skill—not just prompting, but imagination and judgment. When routine tasks are supported by AI, people have more space for the work only humans can do: connecting ideas across disciplines, challenging assumptions, spotting opportunities, telling better stories, and designing smarter solutions.
Yes, we still need to question outputs, spot “workslop,” and apply context. But that analytical layer is only part of it. The real opportunity is that AI can create room for deeper thinking, creative problem-solving, and more meaningful human contribution if we are intentional about it.
Much of the public conversation about AI risk focuses on ethics or errors. But there’s another risk we don’t talk about enough: over-reliance. When people stop engaging deeply with their work and default to AI outputs without reflection, we risk weakening the very capabilities that make humans innovative and resilient.
Unlocking human potential in an AI-enabled workplace doesn’t happen by accident. It requires visible norms and consistent leadership support. This includes establishing new performance norms, equipping managers to lead differently (for example, providing them with discussion guides so they can normalize thoughtful use), making human contributions visible, and reinforcing that reflection isn’t a luxury but a necessity.
Give people a North Star
AI adoption can’t be sustained on tools, training, or policies alone. People need to understand where the organization is heading—and why.
A clear, compelling narrative helps people make sense of the change. How does AI connect to our purpose? What are we trying to become? How will work be different—not just faster, but better? Without that broader context, AI feels like a series of disconnected initiatives rather than part of a meaningful direction.
This is where communication leadership is essential.
People need a vision they can align to and a roadmap they can follow. They need to see how today’s experimentation connects to tomorrow’s way of working. They need a credible plan, and a sense of progress, not just activity.
A strong narrative does three things:
- Provides direction — what success looks like beyond tool adoption
- Creates coherence — how AI fits with strategy, culture, and purpose
- Builds motivation — why this matters for the organization and for them
Without a North Star, AI efforts feel fragmented and reactive. With one, they become part of a shared journey.
AI is a change leadership test
According to McKinsey & Company, 92 percent of companies plan to increase their AI investments over the next three years.
That level of commitment reflects confidence in the potential of artificial intelligence. But it also raises an important question: what happens if organizations invest in technology without investing in the support people need to realize the benefits?
And that brings us back to where we started. Effective deployment of AI doesn’t happen on its own. It succeeds when organizations pay attention to communication, culture, and change leadership—the human systems that determine whether people can absorb and use change well.
My hope is that this article offers a practical starting point.