“AI is fundamentally transforming financial fraud—making sophisticated scams more accessible, scalable, and convincing than at any point in history.” — Perry Carpenter, Chief Human Risk Management Strategist, KnowBe4, in prepared remarks to the SEC Investor Advisory Committee
Scams aren’t new, but the tools for both perpetrating them and guarding against them have advanced dramatically. Deepfake-enabled hacking is here, and the enterprise must simultaneously get back to basics and deploy leading-edge technologies to protect assets.
At one time, training employees to avoid scams meant teaching them to scrutinize email addresses, avoid plugging unknown USB drives into their computers, and look closely at the grammar and structure of messages. Today, the voices and likenesses of real people can be duplicated so faithfully that even friends and close acquaintances can be fooled.
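The "back to basics" email check described above can be partially automated. The sketch below is a minimal, hypothetical example: it flags a sender address whose domain is not on an organization's allowlist or that contains non-ASCII lookalike characters. The `TRUSTED_DOMAINS` set and the `flag_sender` function are illustrative assumptions, not a production filter.

```python
# Minimal sketch: flag sender addresses whose domain is not on a
# hypothetical allowlist, or that uses non-ASCII lookalike characters
# (a common homoglyph-spoofing trick).
TRUSTED_DOMAINS = {"example.com", "example.org"}  # assumption: org-specific list


def flag_sender(address: str) -> list[str]:
    """Return reasons the address looks suspicious (empty list = clean)."""
    reasons = []
    try:
        _local, domain = address.rsplit("@", 1)
    except ValueError:
        return ["malformed address: no '@'"]
    domain = domain.lower().strip()
    if not domain.isascii():
        reasons.append("non-ASCII characters in domain (possible homoglyph spoof)")
    if domain not in TRUSTED_DOMAINS:
        reasons.append(f"domain '{domain}' not on trusted list")
    return reasons


print(flag_sender("ceo@example.com"))   # clean
print(flag_sender("ceo@examp1e.com"))   # flagged: lookalike domain
```

Checks like this catch only the crudest spoofs; as the rest of the article notes, voice and likeness deepfakes require defenses well beyond string matching.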
The conclusions of a pair of recent reports are startling:
- Risk management firm Riskonnect found that more than 80% of companies did not have protocols in place to defend against AI-based attacks in 2024.
- At the same time, voice-authentication firm Pindrop found that deepfake fraud attempts rose by more than 1,300% in 2024.
Read more on TechChannel.


