A Zoom call at a cost of $25 million


Posted February 23, 2024 by LingYU

The recent $25 million deepfake heist targeting a finance professional at a multinational corporation in Hong Kong serves as a chilling reminder of the escalating threat that artificial intelligence-generated deception poses to organizations worldwide.

 
In cybersecurity, where the battle between defenders and attackers is relentless, the emergence of deepfake technology has ushered in a new era of deception and sophistication. The recent $25 million deepfake heist targeting a finance professional at a multinational corporation in Hong Kong serves as a chilling reminder of the escalating threat that artificial intelligence-generated deception poses to organizations worldwide.

Unveiled by Hong Kong police, this jaw-dropping incident highlighted the intricacies of a scheme that used deepfake technology to orchestrate a multi-person video conference in which every participant, except for the unsuspecting victim, was a meticulously crafted imitation of a real colleague.

The question arises: Did the finance professional in Hong Kong have a chance to realize they were falling victim to fraud? I’d say that if they were not trained, the chances were minimal, especially given the type of deepfake involved: faking multiple people simultaneously is a novelty.

Still, the role of individual vigilance is paramount. Individuals must exercise caution when sharing sensitive information, especially online, and verify the authenticity of any requests for information or payments. A healthy dose of skepticism toward suspicious or unsolicited messages is crucial, prompting further investigation before taking any action.

Staying informed about the latest deepfake trends and technologies is not just a recommendation but a necessity. Being vigilant about monitoring personal information for signs of fraud is essential in a world where seemingly innocent actions can lead to undesirable consequences.

As access to AI technology for creating deepfakes increases, the risk of both individuals and businesses—regardless of size—falling victim to these deceptive practices grows.

The results of a survey conducted by Regula in 2023 paint a concerning picture: one-third of global businesses had already fallen victim to deepfake fraud. Deepfake voice fraud, a type involving the use of AI to create convincing fake voice recordings, affected 37 percent of all businesses globally, with nearly half of U.S. and UAE businesses experiencing this form of fraud. Video deepfakes impacted 40 percent of U.S. businesses, exceeding the global average.

The banking sector emerges as the most vulnerable. Eight out of ten business leaders in this sector perceive deepfake voice fraud (83 percent) and video deepfakes (81 percent) as real threats to their organizations, and they expect these threats to grow in the near future. At the same time, the prevalence of deepfake video fraud is not exclusive to large enterprises; almost one-third of all small and medium-sized businesses have already experienced its detrimental impact.

In the past, traditional methods of identity theft involved using tools like Photoshop to alter IDs for manipulating data during verification processes. However, the advent of deepfake technology introduces far more advanced techniques, leveraging the capabilities of AI to surpass authentication checks.

Fraudsters now use AI to create deepfakes by leveraging sophisticated software that manipulates audio and video to mimic real people’s appearances and voices. This process involves feeding the AI system a large amount of data on the target — such as photos, videos, and voice recordings — to accurately replicate their mannerisms and speech patterns. As a result, these deepfakes can be incredibly convincing, serving as convincing “selfies” to bypass face matching checks. This poses a significant challenge to organizations relying on remote identity verification and fraud prevention methods that traditionally rely on document scans and biometric data.

As a response to this evolving landscape, companies need to adopt new technologies and techniques for verifying identity and detecting fraud.

The ability to protect against diverse types of presentation attacks, such as the use of electronic devices, printed photos, video replays, video injections, or realistic masks in place of a real person, can be a game changer. I would highlight three key measures that companies should take to protect against deepfake fraud.

The traditional approach of merely requesting a photo of an ID is no longer effective. Companies need to shift to a liveness-centric approach, verifying physical objects such as faces and documents, along with their dynamic characteristics. This move is crucial as AI-generated images are often flawless and can evade detection by humans and technology alike. Verifying the liveness and authenticity of both the document and the individual submitting it provides a more robust verification solution.
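As an illustration, a liveness-centric flow typically pairs a randomized challenge with checks on both the face and the document. The sketch below is a minimal, hypothetical example; the challenge list, session fields, and verdict logic are assumptions for illustration, not any particular vendor's API:

```python
import random
from dataclasses import dataclass

# Hypothetical challenge-response liveness session: the server picks an
# unpredictable action so that pre-recorded or pre-generated footage
# cannot anticipate it.
CHALLENGES = ["turn_head_left", "turn_head_right", "blink_twice", "tilt_document"]

@dataclass
class LivenessSession:
    challenge: str
    face_live: bool = False       # face responded to the issued challenge
    document_live: bool = False   # physical document moved/rotated on request

def start_session(rng: random.Random) -> LivenessSession:
    """Issue a random challenge; unpredictability defeats replayed footage."""
    return LivenessSession(challenge=rng.choice(CHALLENGES))

def verdict(session: LivenessSession) -> bool:
    """Both the person and the document must prove they are live objects."""
    return session.face_live and session.document_live

rng = random.Random(42)
s = start_session(rng)
s.face_live = True        # e.g. a blink was detected in the video stream
s.document_live = True    # e.g. a hologram shifted when the ID was tilted
assert verdict(s)
```

The key design point is that the decision requires two independent live signals; a convincing face deepfake alone, with no live document, still fails the session.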



Manipulating documents during an authentication process is significantly harder than swapping a face during a liveness session. The difficulty arises because criminals train their AI systems predominantly on flat, static scans of IDs. Requiring live documents to be moved or rotated during authentication increases the likelihood of detecting anomalies. Furthermore, modern IDs often incorporate dynamic security features visible only through movement, presenting an additional layer of protection. The industry's continuous innovation in this field makes creating realistic fake documents during a capture session with liveness validation nearly impossible.

Instead of relying on human experience, neural networks step in to verify and authenticate identities. The broader the data set, the more effective an IDV solution is at fraud prevention. Full-scale automatic authenticity checks should be trained to detect even the most subtle manipulations in video or images, starting with automatic document type detection and document liveness detection, which leaves no room for tampering from the very beginning. Changes in movement or within the image itself that point to a fraud attempt may be indiscernible to the human eye, yet detectable by smart identity verification solutions. As for biometric checks, neural networks can evaluate facial expressions: a facial biometric examination should confirm that the face is neutral, without the eyes unnaturally wide open. And because most AI tools used by fraudsters are trained on static face images, they struggle to produce realistic results in liveness video sessions where a person must perform specifically requested actions, such as turning their head.
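Because generators trained on static portraits struggle with requested motion, one illustrative check is to compare estimated head-yaw angles across frames against the requested action. In a real system the per-frame angles would come from a face pose estimator; here they are passed in directly, and the function name, sign convention, and threshold are all assumptions for the sketch:

```python
def action_performed(yaw_angles_deg, requested="turn_head_left", threshold=20.0):
    """Check whether the head actually turned in the requested direction.

    yaw_angles_deg: per-frame head yaw estimates in degrees
    (negative = turned left), as a pose estimator would supply.
    """
    swing = max(yaw_angles_deg) - min(yaw_angles_deg)
    if swing < threshold:
        # The face barely moved: consistent with a static or poorly
        # animated deepfake that cannot render the requested motion.
        return False
    if requested == "turn_head_left":
        return min(yaw_angles_deg) < -threshold / 2
    if requested == "turn_head_right":
        return max(yaw_angles_deg) > threshold / 2
    return False

# A genuine session: yaw sweeps from near-frontal to clearly left.
assert action_performed([1.0, -8.0, -17.0, -25.0])
# A static fake: almost no yaw variation across frames.
assert not action_performed([0.5, 1.2, 0.8, 1.0])
```

Production systems combine many such motion and texture cues rather than a single angle check; this sketch only shows why requested actions are hard for models trained on still images.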

Relying on a single method of identity verification is insufficient. Only a combination of authenticity checks, electronic document verification, cross-validation of personal data, and the ability to re-verify data on the server side can protect against fraud and address zero-trust-to-mobile issues.

Such a multi-layered approach is essential, combining thorough document verification with comprehensive biometric checks.
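Put together, such a layered pipeline can be arranged so that any single failed layer blocks verification. Everything below, including the layer names, their ordering, and the session fields, is a hypothetical sketch of the fail-closed idea rather than any specific product's workflow:

```python
from typing import Callable

# Each layer is a named predicate over the captured session data; a real
# system would back these with document forensics, liveness models,
# biometric matching, and a server-side recheck.
Layer = Callable[[dict], bool]

def layered_verify(session: dict, layers: list[tuple[str, Layer]]) -> tuple[bool, str]:
    """Run layers in order; fail closed on the first layer that rejects."""
    for name, check in layers:
        if not check(session):
            return False, f"rejected at layer: {name}"
    return True, "verified"

layers = [
    ("document_authenticity", lambda s: s["doc_security_features_ok"]),
    ("document_liveness",     lambda s: s["doc_moved_on_request"]),
    ("face_liveness",         lambda s: s["face_challenge_passed"]),
    ("face_to_doc_match",     lambda s: s["portrait_match_score"] > 0.9),
    ("server_reverification", lambda s: s["server_side_recheck_ok"]),
]

session = {
    "doc_security_features_ok": True,
    "doc_moved_on_request": True,
    "face_challenge_passed": True,
    "portrait_match_score": 0.95,
    "server_side_recheck_ok": True,
}
ok, reason = layered_verify(session, layers)
assert ok and reason == "verified"
```

Returning the name of the failing layer also gives fraud teams an audit trail showing where an attack was stopped, which is useful for tuning individual checks.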

The $25 million heist serves as a stark reminder that the battle against deepfake fraud is ongoing. As organizations, individuals, and technology providers navigate this complex reality, the evolution of anti-fraud measures will play a pivotal role in staying ahead of the deepfake menace. And the deepfake dilemma is not merely a technological challenge; it is a multifaceted threat that requires a comprehensive and collaborative response.