Let me ask you something blunt.
If a customer, regulator, or journalist challenged one of your organisation’s AI-driven decisions - right now - could you explain it?
Not dodge it. Not PR-spin it. Actually explain how that outcome was reached, what logic the model used, and who signed off on it?
Because here’s the truth: AI isn’t just some tech experiment happening in the back office. It’s already deciding who gets hired, who gets credit, and which customers get prioritised.
And if your board isn’t across that… you’ve got a bigger problem than you think.
This isn’t fear-mongering. It’s a heads-up. Because AI and digital services governance isn’t optional anymore, and explainability is right at the centre of it.
The Black Box Problem
We’re entering a world where AI decisions are baked into everyday operations yet often remain unchallenged. Why? Because the models are complex. Because it’s easier to trust the output than interrogate it.
But if the board can’t interrogate or explain those models’ logic, we may be accepting liability without understanding the risk.
That’s something we’ve been tackling head-on at Centrix. As a director, I’ve been part of thoughtful boardroom conversations about AI governance and the importance of being able to stand behind the decisions our models influence.
The team have worked closely with the University of Auckland to ensure our algorithms are transparent, explainable, and free of bias in the outcomes they produce.
As our GM of Analytics, Stuart Baxter, put it: “As AI/ML models play an increasing role in decision-making, ensuring they are both responsible and explainable is essential. Explainability provides insight into how models arrive at their conclusions, helping businesses comply with regulations, align decisions with ethical standards, and build trust with customers and stakeholders.”
This isn’t about blocking innovation. It’s about governing it well.
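For readers who want a concrete feel for what “explainability” means in practice, here is a minimal sketch using the open-source shap library on a toy credit-style model. The data, features, and model below are entirely hypothetical and are not Centrix’s system; the point is simply that per-decision feature attributions are what let a business answer “why did the model reach this conclusion?”

```python
# Minimal, illustrative sketch of model explainability using the open-source
# shap library on a toy credit-style model. Everything here (features, data,
# model) is hypothetical and invented for illustration only.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic applicant data
rng = np.random.default_rng(42)
X = pd.DataFrame({
    "income": rng.normal(60_000, 15_000, 500),
    "credit_history_years": rng.integers(0, 30, 500),
    "missed_payments": rng.poisson(1.0, 500),
})
y = (X["missed_payments"] < 2).astype(int)  # synthetic "approve" label

model = GradientBoostingClassifier().fit(X, y)

# SHAP attributes each individual prediction to the input features,
# turning "the model said no" into "missed payments pushed this decision down".
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])
for feature, contribution in zip(X.columns, shap_values[0]):
    print(f"{feature}: {contribution:+.3f}")
```

The output is a per-feature contribution for a single decision: the kind of plain-language evidence a board, a regulator, or a customer can actually interrogate.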
Regulators Are Warming Up
The European Union has introduced two pivotal regulations to safeguard consumers in the digital age:
• The AI Act requires that high-risk AI systems be transparent, subject to human oversight, and free of discriminatory bias, placing explainability and auditability at the centre of AI compliance.
• The Digital Services Act (DSA) mandates that online platforms enhance content moderation, prevent the spread of disinformation, and provide transparency regarding their algorithms.
These acts work in tandem to promote a safer and more accountable digital environment.
And regulators aren’t just sitting back. In April 2025, The New York Times reported that the EU is preparing potential fines of up to €1 billion against X (formerly Twitter) for alleged violations of the DSA, specifically related to inadequate content moderation and transparency failures.
One now-infamous case involved Air Canada’s chatbot, which gave a customer incorrect advice about bereavement fares. The company argued it couldn’t be held responsible for what its chatbot said. The tribunal disagreed and ruled Air Canada liable.
Even in jurisdictions like New Zealand or Australia, where AI-specific regulation is still forming, directors are not exempt from accountability. Privacy, discrimination, and consumer protection laws still apply, and AI increasingly falls within their scope.
The message is clear: If your brand uses AI to make decisions, you’re accountable for the outcomes.
Formal fines haven’t landed in NZ (yet), but the intent is clear: enforcement is coming. The message to boards? Be ready.
So, What Should Boards Be Asking?
You don’t need to be an AI expert to ask smart questions. Start here:
- Where are we using AI - and what is it influencing?
- Can we explain how it works, in plain language?
- Who’s responsible for oversight - and are they reporting to the board?
- What happens if something goes wrong?
- Are we comfortable standing behind our AI’s decisions?
If you don’t like the answers, you’ve found your governance gap. And whilst explainable AI is one of the blind spots in AI governance, it’s far from the only one.
Boards need to be engaging with a broader spectrum of oversight: data ethics, bias risk, regulatory exposure, shadow AI… and more.
If those topics haven’t made it onto your board agenda yet, that’s an even bigger governance gap.
Explainability Builds Trust
It’s not just about avoiding fines or bad press. Explainability builds internal confidence, customer trust, and cultural alignment.
If your people understand how AI works, they’ll use it well. If your customers trust your decisions, they’ll stay loyal. If your board knows what’s going on, it can lead, rather than react.
Final Word: Curiosity Is a Governance Superpower
AI is here. It’s evolving fast. And it’s touching every part of the business.
You don’t need to understand every line of code. But as a director, you do need to ask the questions that make sure your organisation can.
Because “we didn’t know what the AI was doing” won’t fly with a regulator or a stakeholder.
So ask. Challenge. Stay curious.
It’s not just smart, it’s good governance.
And if you’re not sure where to start, start here.
At Directorly, we support boards and executive teams to build the clarity, confidence, and context they need to govern AI effectively. Our workshops don’t overload you with jargon or technical deep dives. Instead, we focus on what really matters at the board table: the questions to ask, the risks to understand, and the governance frameworks that enable smart, safe, and strategic decision-making.
It’s not about learning to build the model. It’s about understanding the implications, so you can lead with confidence, not caution.
Because the boards that are asking better questions today will be the ones setting the standard tomorrow.
This blog is a product of thoughtful human input and strategic use of AI tools, helping us deliver impactful, insightful and high-quality content and images efficiently and effectively. Because leading innovation means leading by example.