In a Deloitte Global survey of board directors and executives published in October 2024, almost 50% of respondents said AI is not yet on the board agenda. But the world of AI is moving fast. So fast, it's easy for boards to be caught off guard. One minute you're hearing about AI's potential to revolutionise your industry, the next you're grappling with a data breach caused by an employee using a free AI tool without understanding the risks.
This is the danger of shadow AI: the use of AI tools and systems within an organisation, often without proper oversight or control. Even if your company hasn’t formally deployed AI, don’t fool yourselves that it’s not already being used by employees.
In my experience delivering AI training to boards and speaking at webinars and conferences, it’s rare to find anyone who has even heard of the term ‘shadow AI,’ let alone understands its risks.
The Hidden Threat of Shadow AI
Think shadow AI isn't a big deal? Think again.
A recent Section survey found that even in companies where AI is banned, 43% of employees are using it anyway (Section AI Proficiency Benchmark, September 2024). Their advice to CEOs? "Expect AI use…even if it's not sanctioned."
The rapid adoption of AI by employees, often through free or easily accessible tools, can lead to significant risks:
Data Breaches: Unauthorised AI applications may not comply with the organisation's data security protocols, leading to potential leaks of sensitive information.
Intellectual Property Theft: Without proper oversight, proprietary data and trade secrets could be exposed or misused.
Bias and Discrimination: Unvetted AI tools may introduce biases, leading to unfair or unethical outcomes.
Reputational Damage: Misuse of AI can result in public backlash, eroding trust among customers and stakeholders.
What Does This Mean for Boards?
Boards must be proactive. Don't wait for a crisis to happen. Start by understanding how AI is being used (or could be used) across your organisation. This includes those "harmless" free AI tools that employees are probably already experimenting with.
Risk management needs to be front and centre. Shadow AI introduces a whole new set of risks, including data breaches, intellectual property theft, bias, and reputational damage. Boards need to ensure that robust risk management frameworks are in place.
Board education is crucial. Directors need at least a basic understanding of AI concepts and risks to provide effective oversight. Ask yourselves: how are we actively fostering AI literacy within the board and across the organisation?
Practical Steps to Mitigate Shadow AI Risks:
Establish clear AI policies. Outline acceptable and unacceptable uses of AI within the organisation, including the use of free or third-party AI tools.
Provide AI training for employees. Ensure that employees understand the potential risks associated with shadow AI and how to use AI responsibly.
Implement strong data governance practices. This includes data security, privacy, and quality controls.
Engage legal expertise. Stay informed about the evolving regulatory landscape surrounding AI.
The Bottom Line
AI is an integral part of the modern business landscape, and its influence will only continue to grow. Boards cannot afford to be complacent. By proactively addressing the challenges of shadow AI, they can mitigate risks, capitalise on opportunities, and guide their organisations toward a successful, AI-integrated future.
As Frith Tweedie from Simply Privacy aptly stated, “The responsible use of AI gives you a license to continue innovating. The flip side is a loss of trust and social license – and an unwelcome presence in the headlines.”
At Directorly, we specialise in equipping boards and executives with the knowledge and tools to navigate the AI landscape confidently. Our AI Training Programs are designed specifically for Boards and Executive teams - practical, insightful, and tailored to your needs. For more details about the course or to register, please contact us.
The future is here. Ready?
This blog is a product of thoughtful human input and strategic use of AI tools, helping us deliver impactful, insightful and high-quality content and images efficiently and effectively. Because leading innovation means leading by example.