AI Video Deepfake Used to Defraud Company of $25 Million in CFO Impersonation Scam
Severity
High
Fraudsters used AI deepfake technology to impersonate a company's CFO and other executives on a video call, convincing a finance worker to authorize a $25 million wire transfer to criminal accounts.
Category
Deepfake / Fraud
Industry
Finance
Status
Under Investigation
Date Occurred
Jan 15, 2024
Date Reported
Feb 8, 2024
Jurisdiction
International
AI Provider
Other/Unknown
Application Type
other
Harm Type
financial
Estimated Cost
$25,000,000
People Affected
1
Human Review in Place
No
Litigation Filed
No
Tags
deepfake · voice_cloning · video_fraud · wire_transfer · business_email_compromise · social_engineering · financial_fraud · ceo_fraud
Full Description
In January 2024, a multinational company fell victim to a sophisticated AI-powered fraud scheme that resulted in the loss of $25 million. The attack targeted a finance worker at the company's Hong Kong office through an elaborate deepfake video conference call. Criminals used artificial intelligence to create convincing real-time impersonations of the company's chief financial officer and several other colleagues, complete with their voices, facial features, and mannerisms.
The fraud began when the finance employee received what appeared to be a legitimate video call from the company's CFO requesting an urgent wire transfer. During the multi-party video conference, the AI-generated deepfakes of various executives provided supporting authorization for the transaction. The technology was so sophisticated that the finance worker did not suspect the call was fraudulent and proceeded to execute the $25 million transfer to accounts controlled by the criminals.
The scheme represents a significant escalation in the sophistication of business email compromise (BEC) fraud, which has traditionally relied on impersonation over email rather than real-time video and voice synthesis. The case demonstrates how cybercriminals are leveraging advanced AI tools to mount social engineering attacks convincing enough to bypass security awareness training that focuses on email-based threats.
Hong Kong police launched an investigation into the incident, working with international law enforcement agencies to track the fraudulent transfers and identify the perpetrators. The case has prompted renewed warnings from cybersecurity experts about the growing threat of deepfake technology in corporate fraud schemes. Financial institutions and multinational corporations have begun implementing additional verification protocols for high-value transactions, including multi-channel confirmation requirements and enhanced employee training on recognizing AI-generated content.
The incident highlights the urgent need for organizations to update their financial controls and verification procedures to account for the reality of sophisticated AI impersonation attacks. Traditional security measures designed for email-based fraud are proving inadequate against real-time deepfake technology that can convincingly replicate trusted individuals in live video calls.
Root Cause
Sophisticated AI deepfake technology created convincing real-time video and voice impersonation of multiple executives during a video conference call. The finance employee was tricked into believing they were receiving legitimate authorization from senior management to execute a large wire transfer.
Mitigation Analysis
Multi-factor authentication for high-value transactions, pre-arranged code words or verification protocols, and mandatory in-person or secure channel confirmation for large transfers could have prevented this fraud. Voice biometric verification systems and employee training on deepfake detection would add additional layers of protection.
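The multi-channel verification controls described above can be sketched as a simple authorization policy. This is a minimal, hypothetical illustration, not any real system's implementation: the threshold, channel names, and data structures are assumptions chosen for clarity. The key idea is that video and voice channels, which a deepfake can spoof in the same session, never count toward authorizing a high-value transfer on their own.

```python
from dataclasses import dataclass, field

# Illustrative threshold (USD) above which extra verification is required.
# This value is an assumption, not from the incident report.
HIGH_VALUE_THRESHOLD = 100_000

# Channels a live deepfake session could plausibly spoof on its own.
SPOOFABLE_CHANNELS = {"video_call", "voice_call"}


@dataclass
class TransferRequest:
    """A pending wire transfer with the verification channels collected so far."""
    amount: int
    requested_by: str
    verifications: set = field(default_factory=set)

    def record_verification(self, channel: str) -> None:
        # e.g. "video_call", "callback_to_known_number", "code_word", "in_person"
        self.verifications.add(channel)


def authorized(req: TransferRequest) -> bool:
    """Authorize only when enough independent channels confirm the request.

    For high-value transfers, at least two channels that a single
    deepfaked call cannot satisfy must confirm the transaction.
    """
    if req.amount < HIGH_VALUE_THRESHOLD:
        return len(req.verifications) >= 1
    trusted = req.verifications - SPOOFABLE_CHANNELS
    return len(trusted) >= 2


# A deepfaked video call alone does not authorize a $25M transfer:
req = TransferRequest(amount=25_000_000, requested_by="cfo@example.com")
req.record_verification("video_call")
print(authorized(req))  # False

# A callback to a known number plus a pre-arranged code word does:
req.record_verification("callback_to_known_number")
req.record_verification("code_word")
print(authorized(req))  # True
```

The design choice worth noting is that the policy reasons about *channel independence*, not channel count: adding a second spoofable channel (voice plus video) still fails, which directly addresses the attack pattern in this incident.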
Lessons Learned
This incident demonstrates that deepfake technology has advanced to the point where real-time video impersonation can be used effectively in social engineering attacks. Organizations must urgently update their financial authorization protocols to include multiple verification channels and cannot rely solely on visual or audio confirmation of identity.
Sources
Deepfake scammers walked off with $25 million in first-of-its-kind AI heist
CNN · Feb 8, 2024 · news
Hong Kong police investigate $25 million deepfake scam targeting finance worker
Reuters · Feb 9, 2024 · news