January 24, 2025 | By Jennifer Gregory | 4 min read

Many times a day worldwide, a boss asks a team member to perform a task during a video call. But is the person assigning the task actually who they say they are? Or is it a deepfake? Instead of blindly following orders, employees must now ask themselves whether they are becoming victims of fraud.

Earlier this year, a finance worker found themselves in a video meeting with someone who looked and sounded just like their CFO. After the meeting was over, they dutifully followed their boss’s instructions and sent 200 million Hong Kong dollars, roughly $25 million U.S.

But it wasn’t actually their boss — just an AI video representation called a deepfake. Later that day, the employee realized their terrible mistake after checking with the corporate offices of their multinational firm. They had been a victim of a deepfake scheme that defrauded the organization out of $25 million.

Businesses are often deepfake targets

The term deepfake refers to AI-created content — video, image, audio or text — that contains false or altered information, such as Taylor Swift promoting cookware and the infamous fake Tom Cruise. Even the recent hurricanes hitting the U.S. led to multiple deepfake images, including fake flooded Disney World photos and heartbreaking AI-generated pictures of people with their pets in floodwaters.

While deepfakes (also referred to as synthetic media) aimed at individuals typically serve to manipulate people, cyber criminals targeting businesses are after monetary gain. According to the CISA Contextualizing Deepfake Threats to Organizations information sheet, threats targeting businesses tend to fall into one of three categories: executive impersonation for brand manipulation, impersonation for financial gain or impersonation to gain access.

But the recent incident in Hong Kong wasn’t just one employee making a mistake. Deepfake schemes are becoming increasingly common for businesses. A recent Medius survey found that the majority (53%) of finance professionals have been targeted by attempted deepfake schemes. Even more concerning, more than 43% admitted to ultimately falling victim to such an attack.


Are deepfake attacks underreported?

The key word from the Medius research is “admitted,” and it raises a big question: do people fail to report being the victim of a deepfake attack because they are embarrassed? Probably. After the fact, the fake seems obvious to other people, and it’s tough to admit that you fell for an AI-generated image. But underreporting only adds to the shame and makes it easier for cyber criminals to get away with it.

Most people assume they could spot a deepfake, but that’s not the case. The Center for Humans and Machines and CREED found a wide gap between people’s confidence in identifying a deepfake and their actual performance. Because many people overestimate their ability to identify a deepfake, falling victim carries extra shame, which likely leads to underreporting.

Why people fall for deepfake schemes

The employee who was tricked by the deepfake of the CFO to the tune of $25 million later admitted that when they first got the email supposedly from their CFO, the mention of a secret transaction made them wonder if it was actually a phishing email. But once they joined the video call, they recognized other members of their department and decided it was authentic. The employee only later learned that the video images of their colleagues were also deepfakes.

Many victims overlook their own concerns, questions and doubts. But what makes people, even those educated on deepfakes, push their concerns aside and choose to believe an image is real? That’s the million-dollar (or, in this case, $25 million) question we need to answer to prevent costly and damaging deepfake schemes in the future.

Researchers publishing in Sage Journals asked who is more likely to fall for deepfakes and found no clear pattern around age or gender, though they noted that older individuals may be more vulnerable to such schemes and have a harder time detecting them. The researchers also found that while awareness is a good starting point, it appears to have limited effectiveness in preventing people from falling for deepfakes.

However, computational neuroscientist Tijl Grootswagers of Western Sydney University likely hit the nail on the head about why spotting a deepfake is so hard: it’s a brand-new skill for each of us. We’ve learned to be skeptical of news stories and bias, but questioning the authenticity of an image we can see goes against our thought processes. Grootswagers told Science Magazine, “In our lives, we never have to think about who is a real or a fake person. It’s not a task we’ve been trained on.”

Interestingly, Grootswagers discovered that our brains are better at detection without our conscious intervention. When people looked at a deepfake, the image produced a different electrical signal in the brain’s visual cortex than a legitimate image or video did. When asked why, he wasn’t quite sure: maybe the signal never reaches our consciousness because of interference from other brain regions, or maybe humans don’t recognize the signals that an image is fake because it’s such a new task.

This means that each of us must begin to train our brains to consider that any image or video we view could be a deepfake. By asking that question every time we act on content, we may eventually learn to notice the fakes our brains are already flagging. Most importantly, if we do fall victim to a deepfake, especially at work, it’s key that we report every instance. Only then can experts and authorities begin to curb the creation and proliferation of deepfakes.

