
Deepfake corporate fraud has evolved once again. While traditional attacks relied on phishing emails and fake invoices, attackers are now targeting live Zoom calls.
In several recent cases, attackers used AI-generated video and voice to impersonate executives during live meetings. Finance teams saw familiar faces and heard familiar voices, so they assumed everything was legitimate. It wasn’t until later that security teams discovered the deception.
How Deepfake Zoom Attacks Work
First, attackers gather material from the internet: public speaking videos, interview audio, and social media clips all feed into the deepfake. They then use generative AI tools to clone the target’s facial expressions and speech patterns.
One recent attack of this kind has been linked to a North Korean group. The hackers targeted a crypto company, first making contact via Telegram and then sending a Zoom link for a 30-minute call. The victim was unsuspecting, since they could see real executives on the screen. During the call, the attackers used the pretense of helping the victim fix their audio to deploy several unique pieces of malware.
In many cases, the attacks follow a simple pattern. They usually start with a compromised account or outright impersonation of an official. The attackers simulate a messenger call that keeps dropping and blame the network, then send a pre-generated, low-quality “emergency” video message.
As a result, the victim is convinced the request is genuine and asks no further questions. At that point the attackers can demand almost anything: credentials, confidential data, or the installation of software.
Why Deepfake Fraud Succeeds
Deepfake fraud succeeds because it exploits human trust. People respond to a familiar face and voice before they stop to question whether something is off.
For example, employees feel pressured to react quickly when an executive makes a request, especially during a live call, and that urgency suppresses doubt. At the same time, many companies still rely on email confirmations to approve transactions; if attackers also control those channels, they can reinforce the illusion further.
Moreover, some attacks now feature multiple AI-generated participants in a single meeting. In 2025, attackers in a Singapore case used deepfakes to impersonate a company’s executives and secured a large transfer. However, thanks to cross-border cooperation between Singapore and Hong Kong, the money was eventually recovered.
Consequently, the risks these deepfakes pose go beyond individual employees; they affect the entire approval chain.
The Future of Corporate Security
Video alone no longer suffices as a form of identity verification, and corporations need to redesign their verification processes accordingly.
Security teams increasingly recommend multi-step verification processes. For instance, finance teams should require confirmation through a secure internal system or a verified phone number that attackers cannot access.
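As a rough illustration of that out-of-band rule, here is a minimal Python sketch. All names here (VERIFIED_CONTACTS, requires_callback, approve) are hypothetical and for illustration only; the key idea is that the contact directory is maintained by security, not by the requester, so an attacker who controls the email or video channel cannot change it.

```python
# Hypothetical sketch of out-of-band ("callback") verification for
# high-risk requests. Names and structure are illustrative, not a real API.

# Contact directory maintained by the security team, outside the channels
# an attacker could compromise (email, chat, video).
VERIFIED_CONTACTS = {
    "cfo@example.com": "+1-555-0100",  # verified phone number on file
}

def requires_callback(request: dict) -> bool:
    """Flag any high-risk action for out-of-band confirmation."""
    risky_actions = {"wire_transfer", "credential_reset", "software_install"}
    return request["action"] in risky_actions

def approve(request: dict, callback_confirmed: bool) -> str:
    """Approve a request only after a human has confirmed it by calling
    the number on file -- never based on the meeting or email alone."""
    if requires_callback(request) and not callback_confirmed:
        return "BLOCKED: confirm via the verified phone number on file"
    return "APPROVED"
```

In this sketch, even a perfectly convincing video call cannot push a wire transfer through on its own: the approval path forces a second, attacker-inaccessible channel.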
At the same time, organizations must update their employee training. Staff need to treat urgent video meetings with the same scepticism as suspicious emails. In addition, layered authentication and behaviour monitoring can reduce exposure.
Deepfake attacks will continue to evolve, but companies can strengthen their security systems and identity verification to make these attacks far harder to execute.
