Just when you thought it was safe to get back in the water… a new set of AI risks is surfacing. Ones that go beyond data and privacy into how agentic AI systems act on our behalf. And critically, this isn’t just an AI issue. It’s the next frontier of cybersecurity and risk management.
The first wave of AI risk was about information: data leakage, privacy breaches, intellectual property. Add to that bias, explainability challenges, hallucinations, and the spread of “shadow AI.” These are not niche problems- 76% of companies are already using AI, and nearly 70% have generative AI in play. Regulators are stepping up scrutiny, and boards are expected to show accountability.
The second wave is about behaviour. Yesterday, OpenAI CEO Sam Altman announced the launch of AgentKit, a toolkit for building and deploying AI agents… which means agents are about to become even more mainstream - agents that don’t just analyse but act - booking meetings, moving data, even executing transactions. That creates two urgent risks:
• Inside the organisation, “agent sprawl” as unsupervised tools proliferate without oversight.
• Externally, these agents are becoming points of attack. Targeted promptware attacks such as a poisoned email, a manipulated invite, or a compromised integration can exploit the agent directly. When an AI agent is linked to your email or diary, a simple calendar invite can become the Trojan horse for a cyber attack. Hackers no longer need to bypass people, they go after the software that acts as a proxy.
For boards, this is a governance inflection point. AI risk now belongs firmly alongside cyber on the agenda. Because when agents act, they expand the attack surface, and the consequences move from informational to operational.
So what?
Boards should:
• Make AI risk a standing board agenda item - treat it with the same seriousness as cyber or financial risk.
• Require a full register of AI and agent use across the organisation - you can’t govern what you can’t see.
• Set expectations for governance and accountability - ensure clear ownership, policies, and oversight structures are in place.
• Seek assurance that problematic AI systems can be paused or contained quickly - and that escalation protocols are tested.
Strong AI governance isn’t a brake on innovation. It’s how organisations scale safely and with confidence.
Organisations that get AI governance right will move faster, win trust, and unlock value. Those that don’t will be caught reacting to the next wave instead of riding it.