Sam Altman just told the Federal Reserve that AI can mimic any customer’s voice on command - is your board still betting its balance sheet on biometric IDs?

Having just read Altman’s comments to the Fed this week, I’m rethinking how deliberately boards (well beyond the banking sector) should be treating identity and verification in their risk profiles. Altman called it “crazy” that money can still move on the strength of a spoken pass-phrase. He didn’t stop there: “AI has fully defeated most forms of biometric or behavioural authentication, except, ironically, a strong password. But all of these fancy take a selfie and wave, or hear your voice, or whatever, I’m very nervous that we have a significant impending fraud crisis because of this,” he said. And a final warning: “The sophistication of these attacks is growing faster than many firms’ ability to defend against them.”
The implication is clear: if a cloned voice can move funds, it can just as easily unlock a patient’s health record, reroute a utility payment or redirect a tax refund. Voiceprints and other biometrics now underpin customer journeys in finance, healthcare, energy, telco and government portals. That’s a ticking time-bomb of fraud that’s wired into every customer interaction.
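For directors who want to see the point concretely, here’s a minimal sketch (in Python, with hypothetical names - this is no vendor’s actual API) of the shift from voiceprint-as-gate to voiceprint-as-one-signal, where any sensitive action still requires a factor a cloned voice can’t reproduce:

```python
from dataclasses import dataclass

# Illustrative sketch only: the voiceprint match is treated as one weak
# signal, never as the sole gate for a sensitive action.

@dataclass
class VerificationResult:
    voice_match: float    # similarity score from a voice-biometric engine (0.0-1.0)
    passed_step_up: bool  # e.g. one-time passcode or strong password confirmed

def may_move_funds(result: VerificationResult, amount: float) -> bool:
    """Old model: a voiceprint alone unlocks everything - exactly what a
    cloned voice now defeats. New model: the voiceprint only lowers
    friction; moving funds always requires the step-up factor too."""
    if amount <= 0:
        return False
    voice_ok = result.voice_match >= 0.90
    return voice_ok and result.passed_step_up  # never voice alone

# A perfect clone can score 0.99 on the voice match yet still fail step-up.
print(may_move_funds(VerificationResult(voice_match=0.99, passed_step_up=False), 5000.0))  # False
```

The design choice is the point: biometrics become a convenience layer, not the control.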
And directors should also remember that voice-clone fraud is only one piece in the broader AI-risk puzzle; we also face rising board accountability and regulatory scrutiny, AI-driven cyber attacks and “shadow AI” that slips sensitive data into public tools.
So, as directors, how do we bring this into our governance lens?
* Name it. Elevate “AI-enabled impersonation” into the enterprise risk framework as a named AI risk, with a clear appetite, owners and impact.
* Follow the data. Ask management for a map of where biometrics are used, and for what.
* Extend AI-governance oversight to third-party vendors running call-centre or IVR platforms.
Beyond impersonation, there’s also a need to overlay wider AI governance:
* Conduct a comprehensive audit of all AI tools. Document and prioritise AI risks, with regular reviews (a simple register like the sketch after this list can anchor this).
* Build fluency: launch a board capability-building programme and consider appointing at least one AI-knowledgeable director.
* Embed ethics and explainability. Regulatory scrutiny is sharpening; ensure oversight structures and transparency standards are in place.
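To make the audit-and-prioritise bullet concrete, here’s a minimal sketch of what one entry in an AI risk register might capture - the field names are illustrative, not a standard, so adapt them to your own framework:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch of one line in an AI risk register; fields are
# illustrative, chosen to force ownership, appetite and review cadence.

@dataclass
class AIRiskEntry:
    risk: str          # named risk, e.g. "AI-enabled impersonation"
    owner: str         # accountable executive
    where_used: str    # systems and customer journeys affected (the "map")
    appetite: str      # board-agreed tolerance, stated plainly
    severity: int      # 1 (low) .. 5 (critical), used to prioritise
    next_review: date  # regular-review cadence made explicit

register = [
    AIRiskEntry(
        risk="AI-enabled impersonation (voice cloning)",
        owner="Chief Risk Officer",
        where_used="Call-centre IVR, phone-banking password-reset flow",
        appetite="No single-factor biometric approval of payments",
        severity=5,
        next_review=date(2026, 1, 31),
    ),
]

# Prioritise for the next board pack: highest severity first.
for entry in sorted(register, key=lambda e: -e.severity):
    print(entry.risk, "- owner:", entry.owner, "- next review:", entry.next_review)
```

Even a register this simple forces the questions that matter: who owns the risk, where it lives, and when the board next sees it.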
If you want a quick overview of AI risk - watch this recent BoardPro webinar I co-hosted: Governing AI Risk: What Every Director Needs to Know
https://lnkd.in/gHdNiF_n
Altman’s warning isn’t alarmist; it’s the canary in the coal mine... an early warning of what’s to come! Boards that weave biometric and voice-clone threats into a broader AI-governance programme will preserve the trust that underpins every digital interaction.
As directors, are we confident that our AI-risk posture still deserves that trust?
#aigovernance #aileadership