How Deepfake Technology Was Used to Defraud a Finance Worker of $25 Million in a Sophisticated Scam

  • Jacob Crowley
  • Dec 28, 2025
  • 3 min read

In early 2024, a finance employee at Arup’s Hong Kong office fell victim to a highly sophisticated scam involving deepfake technology. The fraudsters impersonated the company’s CFO and other executives through realistic video and audio during a conference call. This deception convinced the employee to transfer nearly $25 million (HK$200 million) to accounts controlled by the criminals. This incident highlights the growing risks that AI-powered tools pose to cybersecurity and financial safety.


Deepfake video call used in a financial scam

How the Scam Unfolded


The scam began with the fraudsters gathering information about the company, including the names and roles of key executives. They then used AI-driven deepfake technology, reportedly trained on publicly available video and audio footage, to create convincing clips that mimicked the voices and faces of Arup’s CFO and other colleagues. During what appeared to be a routine video conference call, the finance worker received urgent instructions to transfer large sums of money.


Despite initially suspecting phishing, the employee was won over by the realistic nature of the deepfake call. The scammers exploited the trust and authority associated with senior executives, making the request seem legitimate. Over a series of transactions, reportedly 15 transfers in total, the employee wired approximately $25 million to accounts controlled by the fraudsters; the fraud came to light only when the employee later checked with the company’s head office.


What Makes Deepfake Scams So Dangerous


Deepfake technology uses artificial intelligence to manipulate or generate visual and audio content that appears authentic. This capability allows criminals to:


  • Impersonate trusted individuals with high accuracy

  • Bypass traditional verification methods relying on voice or video confirmation

  • Create urgency and pressure through realistic interactions

  • Exploit human trust and social engineering vulnerabilities


In this case, the combination of deepfake audio and video made it difficult for the finance worker to doubt the legitimacy of the request. Unlike typical phishing emails or phone scams, deepfake calls provide a convincing sensory experience that can override caution.


Lessons for Cybersecurity and AI Awareness


This incident serves as a warning for companies and individuals to strengthen their defenses against AI-driven social engineering attacks. Here are some practical steps to consider:


Implement Multi-Factor Verification


Relying on a single form of confirmation, such as a video call or voice message, is no longer enough. Companies should require multiple verification steps for large financial transactions, such as the checks below (a brief code sketch of this kind of rule follows the list):


  • Written confirmation via official email channels

  • Direct phone calls to known numbers

  • Approval from multiple authorized personnel
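

The idea can be made concrete with a small sketch. The Python below is purely illustrative, not any real payment system’s API: the channel names, the two-approval threshold, and the TransferRequest class are all assumptions chosen for the example. The key property is that no single channel, including a video call, can release a transfer on its own.

```python
from dataclasses import dataclass, field

# Hypothetical policy values -- real thresholds would come from
# company policy, not code constants.
REQUIRED_APPROVALS = 2
ALLOWED_CHANNELS = {"email", "phone_callback", "second_officer"}

@dataclass
class TransferRequest:
    beneficiary: str
    amount_usd: float
    # Maps verification channel -> ID of the person who approved there.
    approvals: dict = field(default_factory=dict)

    def approve(self, channel: str, approver: str) -> None:
        if channel not in ALLOWED_CHANNELS:
            raise ValueError(f"unknown verification channel: {channel}")
        # One slot per channel: repeating the same video call can
        # never satisfy a second, independent check.
        self.approvals[channel] = approver

    def is_releasable(self) -> bool:
        # Require sign-offs from distinct channels AND distinct people.
        return (len(self.approvals) >= REQUIRED_APPROVALS
                and len(set(self.approvals.values())) >= REQUIRED_APPROVALS)

request = TransferRequest(beneficiary="Example Ltd", amount_usd=1_000_000)
request.approve("phone_callback", "alice")   # callback to a known number
request.approve("second_officer", "bob")     # independent human approval
print(request.is_releasable())  # True only after two independent sign-offs
```

Under a rule like this, even a flawless deepfake video call would satisfy at most one of the required checks, and the callback to a known number would likely have exposed the fraud.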


Train Employees on AI Threats


Awareness training should include information about emerging AI threats like deepfakes. Employees need to recognize signs of manipulation and understand the importance of verifying unusual requests, even if they appear to come from senior executives.


Use AI Detection Tools


Some cybersecurity solutions now offer AI-based detection tools that analyze video and audio for signs of deepfake manipulation. Integrating these tools into communication platforms can provide an additional layer of protection.
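

Vendor interfaces differ, so the sketch below is deliberately generic rather than any real product’s API: score_frame is a hypothetical placeholder for an actual detector model or service, and the 0.7 threshold is an assumed operating point. What it illustrates is the integration pattern: sample frames from the call, score them, and escalate suspicious calls for out-of-band verification rather than blocking them automatically, since detectors produce false positives.

```python
import random

DEEPFAKE_THRESHOLD = 0.7  # assumed operating point, tuned per deployment

def score_frame(frame: bytes) -> float:
    """Placeholder: probability that a frame is synthetic.
    A real deployment would call an in-house model or vendor API here."""
    return random.random()  # stand-in only; not a real detector

def call_looks_synthetic(frames: list[bytes],
                         threshold: float = DEEPFAKE_THRESHOLD) -> bool:
    """Flag the call if any sampled frame scores above the threshold."""
    return any(score_frame(f) >= threshold for f in frames)

# Dummy frames stand in for stills sampled from the live video feed.
sampled_frames = [b"frame-%d" % i for i in range(10)]
if call_looks_synthetic(sampled_frames):
    # Escalate, don't auto-block: the right response is to re-verify
    # the request through a known, independent channel.
    print("Possible deepfake detected: verify via a known phone number.")
```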


Limit Information Exposure


Reducing the amount of sensitive information available publicly or internally can make it harder for fraudsters to create convincing deepfakes. This includes controlling access to executive schedules, contact details, and organizational charts.


The Broader Impact on Financial Security


The Arup case is not isolated. As AI technology advances, deepfake scams are likely to increase in frequency and sophistication. Financial institutions, corporations, and individuals face growing risks that require proactive measures.


This event also raises questions about legal and regulatory frameworks. Authorities may need to update laws to address crimes involving AI-generated content and provide clearer guidelines for liability and prevention.


Final Thoughts


The $25 million deepfake scam at Arup’s Hong Kong office illustrates how AI can be weaponized to bypass traditional cybersecurity defenses. It shows that technology alone cannot guarantee security—human vigilance and updated protocols are essential.

