
Deepfake Video Call Defrauds Arup Employee of $25 Million in Hong Kong

Critical

A finance worker at Arup was tricked into transferring $25 million after attending a video call where all other participants were deepfake recreations of real colleagues, including the CFO.

Category
Deepfake / Fraud
Industry
Other
Status
Under Investigation
Date Occurred
Jan 1, 2024
Date Reported
Feb 4, 2024
Jurisdiction
Hong Kong
AI Provider
Other/Unknown
Application Type
deepfake
Harm Type
financial
Estimated Cost
$25,000,000
People Affected
1
Human Review in Place
No
Litigation Filed
No
Tags
deepfake, video_fraud, financial_fraud, social_engineering, hong_kong, arup, cfo_impersonation, multi_person_video_call, corporate_security

Full Description

In January 2024, a finance worker at the British multinational engineering firm Arup fell victim to an unprecedented deepfake fraud scheme in Hong Kong, transferring $25 million (HK$200 million) to accounts controlled by fraudsters. The incident is one of the first documented cases of real-time deepfake technology being used in a multi-person video conference to execute large-scale financial fraud.

The fraud began when the employee received what appeared to be a message from the company's Chief Financial Officer requesting a confidential transaction. The employee was initially suspicious of the message, but those doubts were allayed by an invitation to a video call that appeared to include multiple familiar colleagues, including senior executives. The call featured highly sophisticated deepfake recreations of real Arup personnel, created from publicly available footage using advanced AI technology.

During the call, the deepfake CFO and the other fake participants convinced the employee that the large transfer was necessary for a confidential acquisition. The realism of the video deepfakes, combined with a multi-person format that mimicked normal corporate decision-making, overcame the employee's initial skepticism. The fraudsters had studied the company's internal processes and hierarchy well enough to stage a convincing corporate meeting.

Hong Kong police confirmed the incident in February 2024, with Senior Superintendent Baron Chan describing it as the first case in Hong Kong involving deepfake technology in a multi-person video conference fraud. The investigation revealed that the scammers had trained their deepfake models on publicly available video footage of Arup executives, highlighting the exposure of organizations whose leadership maintains public profiles through conferences, interviews, and corporate communications.

Arup confirmed the incident, stating that it had reported the matter to Hong Kong authorities and was cooperating fully with the police investigation. The company emphasized that robust financial controls were in place but acknowledged that the sophistication of the deepfake technology had circumvented its existing fraud detection measures. The firm has since implemented additional security protocols for financial transactions and enhanced employee training on AI-enabled fraud techniques.

The incident has prompted discussion among cybersecurity experts about the evolving threat landscape as AI technology becomes more accessible and capable. The case demonstrates how deepfakes can be weaponized to exploit trust in video communications, traditionally considered more secure than voice-only or text-based interactions. Financial institutions and corporations worldwide have begun reassessing their authentication protocols and fraud detection systems in response to this emerging threat vector.

Root Cause

Sophisticated deepfake technology produced convincing real-time video recreations of senior executives during a video conference, defeating fraud checks designed around voice-only impersonation and exploiting employees' trust in video as a means of verification.

Mitigation Analysis

Multi-factor authentication for large transfers, mandatory in-person or separate channel verification for high-value transactions, deepfake detection software for video calls, and comprehensive training on AI-enabled fraud techniques could have prevented this incident. Real-time behavioral authentication and transaction approval workflows requiring multiple senior approvals would add critical layers of protection.
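The approval-workflow controls described above can be illustrated in code. The following is a minimal sketch, not Arup's actual system: the class names, thresholds, and channel labels are all hypothetical, chosen only to show how distinct-approver requirements and out-of-band verification can be combined into a release check for high-value transfers.

```python
from dataclasses import dataclass, field

# Hypothetical policy values for illustration only; real thresholds and
# approver lists would come from an organization's own control framework.
HIGH_VALUE_THRESHOLD = 100_000                  # transfers above this need extra checks
REQUIRED_APPROVALS = 2                          # distinct senior approvers
REQUIRED_CHANNELS = {"callback", "in_person"}   # accepted out-of-band verifications

@dataclass
class TransferRequest:
    amount: float
    requester: str
    approvals: set = field(default_factory=set)          # approver IDs collected so far
    verified_channels: set = field(default_factory=set)  # out-of-band checks completed

    def approve(self, approver: str) -> None:
        # Separation of duties: the requester can never approve their own transfer.
        if approver == self.requester:
            raise ValueError("requester cannot approve their own transfer")
        self.approvals.add(approver)

    def verify_channel(self, channel: str) -> None:
        # Record confirmation over a channel separate from the one the request
        # arrived on (e.g. a callback to a phone number on file, not one
        # supplied in the request itself).
        self.verified_channels.add(channel)

    def is_releasable(self) -> bool:
        # Low-value transfers pass with a single approval.
        if self.amount <= HIGH_VALUE_THRESHOLD:
            return len(self.approvals) >= 1
        # High-value transfers need multiple distinct approvals plus at least
        # one recognized out-of-band verification.
        return (len(self.approvals) >= REQUIRED_APPROVALS
                and bool(self.verified_channels & REQUIRED_CHANNELS))
```

Under this sketch, a $25 million request approved on a video call alone would not release: it would still lack both a second distinct approval and a verification on a channel independent of the call where the request was made.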

Lessons Learned

This incident demonstrates the urgent need for organizations to adapt their fraud prevention measures to address AI-enabled threats, particularly sophisticated deepfake technology that can convincingly impersonate multiple executives simultaneously. Traditional reliance on video verification is no longer sufficient protection against advanced social engineering attacks.