For most boards, cyber security is already a priority. The IBM Data Breach Report reveals a new blind spot: shadow AI - and with ChatGPT-5 launching this week, that risk just got a lot bigger.
The IBM Cost of a Data Breach Report 2025, released on 30 July, found that breaches involving unsanctioned AI tools cost organisations an average of US$670,000 more than other breaches. In 97% of these incidents the organisation had no AI access controls in place, and 63% of affected organisations had no AI governance policy at all.
Shadow AI is when employees use AI tools like ChatGPT or Claude without approval, often pasting in sensitive company or client data.
I spoke to a senior leader recently who said, “We haven’t rolled out AI in our organisation, so we don’t need an AI policy.” Yeah right! You may not have rolled out AI formally, but it’s definitely in use in your organisation. With 700 million people using ChatGPT every week, some of them are your employees - and the massive capability enhancements in ChatGPT-5 (launched in the last couple of days) mean they’ll be using it even more. And that means you have a shadow AI risk, whether you can see it or not.
And the research is clear: the KPMG-University of Melbourne study found that in Australia and New Zealand, 45% of employees use AI tools in ways that breach company policy.
The Gusto 2025 survey found that 57% of workers globally use AI in secret, often paying for tools themselves and never telling their manager.
This means sensitive data, intellectual property and decision-making are moving through ungoverned tools every day, expanding your cyber risk in ways you cannot yet see.
Questions every board should be asking now
* Has shadow AI been identified on our cyber risk register?
* Do we have a clear, practical AI policy that people know about and use?
* When did we last see a report on where AI is being used in the organisation?
* Is AI risk integrated into our enterprise risk management framework?
* Who has accountability for AI governance and reports to the board?
Cyber security is already a top concern for directors. The combination of shadow AI and the scale of ChatGPT-5 makes it an urgent one.
The sooner it is brought into the light, the safer your organisation will be.
#aigovernance #AIforBoards #aileadership