Human-Led Trust in a Machine-Led World

Two moments this week made me rethink where trust in tech really comes from. Not trust in theory, but trust in systems we don’t control, can’t see inside, and increasingly rely on.

First, Las Vegas - inside The Sphere (a 17-storey globe of light and immersion), I met Aura, a humanoid robot, and asked: “Can you greet me in different languages?” She turned, looked directly at me, and responded in Spanish, French, and Italian (I think - I’m not fluent!), then seamlessly continued the conversation in English. Apparently, she speaks 36 languages.

It was a cool demo, but more importantly, it was a first-hand experience of these AI robotic systems already operating fluently in the real world. And it came in the same week that China’s Unitree shook up the robotics industry by unveiling its most game-changing and affordable humanoid yet, with a tiny $5,900 price tag!

Then, after multiple Tesla Uber rides where the drivers sat and watched the car drive itself, I stepped into a Waymo. No driver at all. Just code taking us home from our evening out. I watched it navigate the streets of San Francisco like a pro.
No sudden stops or missed turns. No risk to the four humans inside. And Waymo now accounts for over 25% of San Francisco’s ride-share market.

What struck me most wasn’t the tech, it was the feeling of trust. Not just because it was really cool, but because it felt controlled.

→ The AI didn’t just work. It worked with governance baked in.
→ Safety wasn’t an afterthought, it was the architecture - embedded through design, oversight, and intent.

We are entering a new era where robots make small talk, cars think, and tech starts to feel human.

The real question for directors is no longer “can this be done?”, it’s “are the controls strong enough to trust it?”

And if the answer isn’t yes, it’s not ready.
* AI needs oversight.
* Trust needs structure.
* And leaders need frameworks, not blind faith.

For directors, this is the takeaway - ethical AI isn’t about slowing things down. It’s about making the right acceleration possible. And yes, I know that ethical AI and trust are much bigger topics - this is just the aspect that resonated with me today!

Governance isn’t a blocker, it’s the reason a humanoid robot can talk to your customers and stay on track, and the reason a driverless car can get you safely from A to B.

We’re entering a world where machines will increasingly “feel human.” But the trust must stay human-led.

What governance questions are you asking about the tech you’re adopting?

#aigovernance #AIforBoards #aileadership
