
I've spent years watching teams wrestle with technology that promises to help but rarely explains itself. Trust and technology is a story as old as time: every new technology arrives with a trust problem.


The pattern of distrust

Fifteen years ago, technology was the engine of optimism. Today, according to Edelman's 2025 Trust Barometer, AI stands at the centre of a new trust challenge. With great potential comes great scrutiny, and trust has to be earned.

The New York Times columnist and Nobel Prize-winning economist Paul Krugman once predicted that the internet's impact on the economy would be no greater than that of the fax machine. Now it's AI that's making us nervous.

And we've been here before. From the telephone to the light bulb to the personal computer, history is crammed with confident predictions that turned out to be wrong. As James Clive-Matthews notes in PwC's "A Brief History of Tech Scepticism," the world's sharpest minds once dismissed railways as dangerous, television as fleeting, and computers as useless at home. Each new technology follows the same cycle of doubt, resistance, and eventual trust.

We worry it will take our jobs, mishandle our data, or quietly make decisions we don't understand, and those are all fair concerns. But beneath the headlines, something quieter is happening. What we really don't trust isn't the technology. It's each other.

The trust gap

In Edelman's 2025 global data, those fears are visible and rising. Fifty-nine per cent of employees now fear job displacement due to automation, and 63% of people worry about misinformation and information warfare. In the U.S., only 32% of people say they trust AI, compared to 72% in China. This gap reflects how differently societies perceive risk and control, not just the technology itself.

Trust in AI (Edelman 2025): China 72%, global average 44%, U.S. 32%.

The cultural code behind AI trust

When people say they don't trust AI, they usually mean they don't trust how it will be used, who will control it, who will check it, or who will take the blame when it goes wrong. That's not a technical problem. It's a cultural one.

That cultural layer is what makes trust uneven. Edelman's research shows that older adults, women, and lower-income groups are less likely to trust AI, not because they reject innovation, but because they've seen too many broken promises. Addressing that gap will be key for AI leaders who want to build durable, inclusive trust.

Trust follows a pattern: it starts with family and friends, then extends to the people we work with. The idea that a system could make decisions about customers, compliance or hiring feels uncomfortable because we've barely agreed on how humans should make them.

It doesn't help that AI is confident in a way humans rarely are. It never shrugs, hesitates, or says "I'm not sure". It delivers answers that sound certain, even when they're nonsense. People sense that, and the instinct to double-check kicks in.

The Work of Trust

That's why trust in AI doesn't come from accuracy alone. It comes from transparency: showing users how the AI makes decisions, who reviewed it, what data informed it, and what happens next. It's about making the invisible visible. This is what I call the Work of Trust.
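What that visibility could look like in practice: below is a minimal sketch of a decision record that travels with every AI recommendation, so anyone downstream can see how it was produced. The field names and the example are illustrative assumptions, not a standard schema or a description of any particular product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One AI-assisted decision, captured so a person can inspect it later.

    All field names here are illustrative, not a standard.
    """
    outcome: str                # what the system recommended
    rationale: str              # plain-language explanation of why
    model_version: str          # which model produced it
    data_sources: list[str]     # what data informed the decision
    reviewed_by: str | None     # the human accountable for sign-off
    next_step: str              # what happens after the recommendation
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example: a pricing suggestion a reviewer can trace end to end.
record = DecisionRecord(
    outcome="Offer standard rate",
    rationale="Income and repayment history match the standard-risk band.",
    model_version="pricing-model v1.4",
    data_sources=["application form", "12-month repayment history"],
    reviewed_by="credit-ops analyst",
    next_step="Customer receives the offer and can request a human review.",
)
print(record)
```

The exact fields matter less than the principle: every answer the system gives carries enough context for a human to check it.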

Transparency is good business. Only 44% of people globally feel comfortable with companies using AI. That number drops even lower in the U.S.

Organisations that clarify how their AI operates, who benefits from it, and how it is governed will gain both customers and a competitive edge.

The trust lifecycle

Figure: the OECD AI System Lifecycle.

The OECD's AI System Lifecycle offers a useful parallel. It frames trust not as a one-off achievement but as a continuous process from design to deployment and eventual retirement. Each stage demands visibility, fairness, and human oversight. In practice, the Work of Trust mirrors that lifecycle: explaining, proving, refining, and earning confidence with every iteration.

Eight stages for earning confidence in AI systems.

  1. Plan and design with purpose. Define why the system exists, who it serves, and how to build in fairness and human oversight.
  2. Collect and process data responsibly. Be open about what data is needed, why it's needed, and how privacy is protected.
  3. Build and adapt models transparently. Design models that can be explained and challenged. Trust grows when people can see how the system thinks.
  4. Test, evaluate, verify, and validate. Prove reliability under real conditions through independent review and fairness checks.
  5. Make available for use with clarity. When systems are shared or licensed, communicate their purpose, capabilities, and limits.
  6. Deploy with accountability. Assign ownership for outcomes and decisions. Governance should be visible, not implied.
  7. Operate and monitor continuously. Monitor performance, act on feedback, and openly correct errors. Trust compounds through consistency.
  8. Retire or decommission responsibly. When systems reach their end of life, close the loop transparently. Explain what happens to the data and models. Ending well is part of earning trust.
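
To make those stages visible rather than implied, a team might keep a simple audit trail of who owned each stage and what evidence backs it up. The sketch below follows the eight stages listed above, but the structure, owners, and evidence fields are assumptions made for the sake of the example, not a prescribed format.

```python
from dataclasses import dataclass
from enum import Enum

class LifecycleStage(Enum):
    # The eight stages from the list above.
    PLAN_AND_DESIGN = 1
    COLLECT_AND_PROCESS_DATA = 2
    BUILD_AND_ADAPT_MODELS = 3
    TEST_EVALUATE_VERIFY_VALIDATE = 4
    MAKE_AVAILABLE_FOR_USE = 5
    DEPLOY = 6
    OPERATE_AND_MONITOR = 7
    RETIRE_OR_DECOMMISSION = 8

@dataclass
class StageCheckpoint:
    """Evidence that a lifecycle stage was completed visibly, not implicitly."""
    stage: LifecycleStage
    owner: str       # who is accountable for this stage
    evidence: str    # e.g. a design doc, fairness report, or monitoring dashboard
    sign_off: bool   # has a named person approved moving on?

# Example: the trail a team might keep as the system moves through its life.
trail = [
    StageCheckpoint(LifecycleStage.PLAN_AND_DESIGN, "product lead",
                    "purpose statement and oversight plan", sign_off=True),
    StageCheckpoint(LifecycleStage.TEST_EVALUATE_VERIFY_VALIDATE, "independent reviewer",
                    "fairness and reliability report", sign_off=True),
    StageCheckpoint(LifecycleStage.OPERATE_AND_MONITOR, "operations team",
                    "live performance dashboard and error log", sign_off=False),
]

incomplete = [c.stage.name for c in trail if not c.sign_off]
print("Stages awaiting sign-off:", incomplete)
```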


The businesses that get this right are the ones building small, visible wins that people can understand and verify. Trust accumulates, and every time an AI suggestion makes sense and no one gets fired, the collective confidence grows.

Earning that trust will take more than accuracy or regulation; it requires consistency. Every AI interaction that makes sense, saves time, or helps without harm adds another layer of credibility.

The human test

So no, we don't trust the machines yet, and we shouldn't. A bit of healthy doubt keeps us human. Blind trust is dangerous, in software and in management.

The goal isn't to believe the machine. It's to build a system where you don't have to.

Because, as history shows, scepticism isn't the enemy of progress; it's the precondition for it. The railways, the internet, and electricity all earned trust by proving their value. AI will too, when it stops asking for blind faith and starts delivering transparent, human-centred results.

The real question for every business leader now isn't "Can we build AI that works?" It's "Can we build AI people believe in?"


Adam Weston

Adam Weston, Co-Founder and CMO of Growcreate and Invessed, brings energy and creativity to AI consulting. With cross-sector experience, he helps organisations amplify brand visibility, spark client engagement, and accelerate digital transformation.

Connect on LinkedIn