
Deepfake Video Call Defrauds UK Engineering Firm of $25.6 Million

Critical

A finance worker at UK engineering firm Arup (known for designing the Sydney Opera House) was tricked into transferring $25.6 million to fraudsters who used deepfake technology to impersonate the company's CFO and other colleagues during a video conference call. Every other participant on the call was a deepfake.

Category
Deepfake / Fraud
Industry
Engineering
Status
Under Investigation
Date Occurred
Jan 1, 2024
Date Reported
Feb 4, 2024
Jurisdiction
International
AI Provider
Other/Unknown
Model
Unknown Deepfake Model
Application Type
Other
Harm Type
Financial
Estimated Cost
$25,600,000
People Affected
1
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
Ongoing
deepfake, social_engineering, wire_fraud

Full Description

In early January 2024, a finance department employee in the Hong Kong office of Arup, the British multinational engineering consultancy known for designing iconic structures including the Sydney Opera House, received what appeared to be a message from the company's UK-based Chief Financial Officer requesting authorization for confidential financial transfers. The employee was initially suspicious, recognizing potential hallmarks of a phishing attempt.

The fraudsters then escalated their approach, organizing what appeared to be a legitimate video conference with multiple company executives to address the employee's concerns. Using sophisticated deepfake technology, they generated convincing real-time video and audio of Arup's CFO and several other colleagues whom the victim personally recognized, likely drawing on publicly available footage from company presentations, media interviews, and social media. Every participant on the call except the victim was an AI-generated impersonation: a piece of digital theater designed to establish credibility and override the employee's initial skepticism. That the technology could sustain convincing interactions during a live video conference represented a significant advancement in the criminal application of deepfakes.

Convinced by what appeared to be direct instructions from trusted colleagues during the call, the finance worker executed 15 separate wire transfers to five different Hong Kong bank accounts between January 1 and 15, 2024. The fraudulent transfers totaled approximately HK$200 million, equivalent to US$25.6 million.
The deception was discovered only when the employee contacted Arup's head office through official channels to confirm the transactions, at which point the company realized no such transfers had been authorized. Hong Kong police launched a comprehensive investigation that resulted in the arrest of six individuals connected to the scam operation, and the case was referred to Hong Kong's Commercial Crime Bureau, which has been examining the technical methods used and tracing the movement of the fraudulent transfers. Arup publicly confirmed the incident but provided limited details about its internal security protocols or the measures being implemented to prevent similar attacks, and it has not disclosed whether any of the transferred funds were recovered or whether insurance coverage mitigated the loss.

The incident is one of the largest documented cases of deepfake-enabled financial fraud to date, demonstrating that AI-generated content has reached sufficient quality to deceive experienced professionals in real-time interactive scenarios. Cybersecurity experts and law enforcement agencies have widely cited the case as evidence that deepfake technology poses an escalating threat to corporate financial security, and industry analysts have noted that traditional verification methods based on visual and audio recognition are becoming insufficient against advanced AI impersonation techniques.

The case has also prompted widespread discussion among financial institutions, technology companies, and regulatory bodies about enhanced verification protocols for high-value transactions that do not rely solely on visual or auditory identity confirmation. Several major corporations have since announced plans to implement multi-factor authentication and out-of-band verification procedures for significant financial authorizations, while cybersecurity firms have accelerated development of deepfake detection technologies designed for corporate environments.

Root Cause

Fraudsters used publicly available video and audio of Arup executives to create convincing deepfake representations. They arranged a video conference call where every participant except the victim was a deepfake. The victim was instructed to make 15 transfers totaling HK$200 million ($25.6 million USD). The deepfakes were convincing enough to overcome the worker's initial suspicion — a phishing email had first raised concerns, but the "live" video call with apparent colleagues dispelled them.

Mitigation Analysis

Multi-factor verification for large financial transfers — independent of the communication channel — would have prevented this fraud. Provenance verification of video call participants (cryptographic identity attestation) is an emerging need. This case demonstrates that visual and auditory confirmation is no longer sufficient for identity verification in high-value transactions. Organizations need out-of-band confirmation protocols that cannot be spoofed by synthetic media.
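As a concrete illustration of the out-of-band confirmation described above, the sketch below pairs each transfer request with a one-time challenge that would be delivered over a pre-registered channel (a hardware token or a phone number on file) independent of the email or video call where the request arrived. This is a hypothetical minimal design, not Arup's process or any real banking API; the names `TransferRequest`, `issue_challenge`, and `verify_response` are illustrative.

```python
import hmac
import hashlib
import secrets

class TransferRequest:
    """Illustrative transfer request; fields are what the approver must attest to."""
    def __init__(self, request_id: str, amount: int, beneficiary: str):
        self.request_id = request_id
        self.amount = amount
        self.beneficiary = beneficiary

def _message(request: TransferRequest, nonce: str) -> bytes:
    # Bind the challenge to the exact request details, so a spoofed call
    # cannot reuse a confirmation for a different amount or beneficiary.
    return f"{request.request_id}|{request.amount}|{request.beneficiary}|{nonce}".encode()

def issue_challenge(shared_key: bytes, request: TransferRequest) -> tuple[str, str]:
    """Generate a one-time nonce and the HMAC the approver must return.

    The nonce travels over the pre-registered out-of-band channel; the
    shared key lives in the approver's token, never in email or chat.
    """
    nonce = secrets.token_hex(16)
    expected = hmac.new(shared_key, _message(request, nonce), hashlib.sha256).hexdigest()
    return nonce, expected

def verify_response(shared_key: bytes, request: TransferRequest,
                    nonce: str, response: str) -> bool:
    """Constant-time check of the approver's response; release funds only on True."""
    expected = hmac.new(shared_key, _message(request, nonce), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)
```

Because the HMAC covers the request ID, amount, and beneficiary, an attacker who controls the video call but not the approver's out-of-band device cannot produce a valid response, and a confirmation captured for one transfer cannot be replayed against another.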

Lessons Learned

Visual and auditory identity verification is no longer sufficient for high-value transactions. Organizations need multi-channel, out-of-band confirmation protocols. Deepfake technology has reached the point where real-time video impersonation can deceive trained professionals.