Deepfake fraud: How to spot and prevent AI-powered scams
Kasper Viio · Mon, Sep 01, 2025 · 5 min read

Deepfake technology uses artificial intelligence to generate fake images, video, and audio. As the technology improves, the results are becoming more realistic, making it increasingly difficult to distinguish deepfakes from the real thing.

Deepfakes create exciting possibilities for entertainment, education, and advertising. However, the technology's dark side is increasingly apparent as fraudsters and cybercriminals exploit its potential for sophisticated trickery and deception.

Since the first reported case in 2019, deepfake fraud has become a serious threat to individuals and organisations, and it's growing. Accenture reports that the trading of deepfake tools on the dark web increased 223% between 2023 and 2024.

What is deepfake fraud?

Deepfake fraud is a sinister form of social engineering in which cybercriminals use generative AI to impersonate someone. Their goals can vary. For example:

Deepfake financial and bank fraud

Deepfake attackers often target businesses, seeking to trick employees into handing over money or sensitive information.

In a 2024 survey by Deloitte, more than 25% of senior executives said their organisation had experienced a deepfake incident targeting financial and accounting data in the previous 12 months, and more than half of them expected the volume of attacks to increase.

Banks and finance teams are at high risk because they're accustomed to processing high-value transactions. Attackers impersonate executives or clients to request and approve payments and money transfers, or trick staff into handing over access or account credentials.

For example, in 2024, an employee at the engineering firm Arup in Hong Kong joined a video call with what appeared to be a quorum of the company's senior executives, including its UK-based CFO. They had invited him to discuss a “confidential transaction.”

Following their instructions, the employee transferred HK$200 million (US$25 million) into various bank accounts in a series of 15 payments. When he followed up with the head office, it emerged that the company had been scammed. Every other participant in the video call had been a deepfake.

How deepfake fraud works

Generative Adversarial Networks (GANs) learn from data such as publicly available images, voice recordings, or social media content, and generate synthetic replicas of it. The "adversarial" part is the key: a generator network produces fakes while a discriminator network tries to tell them apart from real samples, and each round of this contest forces the generator's output to become more convincing.
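
To make the adversarial idea concrete, here is a minimal, illustrative sketch in Python (using PyTorch) that trains a toy generator to imitate a simple one-dimensional "real" data distribution. It is a simplification for clarity, with made-up sizes and learning rates, not a real deepfake pipeline.

```python
# Toy GAN: a generator learns to imitate "real" data while a
# discriminator learns to tell real from fake. Illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n=64):
    # Stand-in for real training data (e.g. images or audio features):
    # here, just samples from a Gaussian centred at 4.0.
    return torch.randn(n, 1) * 1.5 + 4.0

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid()
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # 1) Train the discriminator: label real samples 1, generated ones 0.
    real, fake = real_batch(), generator(torch.randn(64, 8)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake), torch.zeros(64, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator: try to make the discriminator output 1.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, generated samples cluster near the real mean (~4.0).
print(generator(torch.randn(1000, 8)).mean().item())
```

The same tug-of-war, scaled up to millions of images or hours of speech, is what makes modern deepfakes so convincing.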

As the technology advances, less data is needed to produce convincing results. For example, McAfee found in 2024 that 3 seconds of audio were sufficient to clone a voice with 80% accuracy.

Attackers can also use social media and compromised personal information to research their targets, gather background details, and invent believable scenarios to trick their victims.

They then target and manipulate their victims using tactics similar to conventional phishing or whaling. For example:

  • Abuse of trust or authority: An employee receives a message or call that seems to come from a trustworthy source, such as a C-suite executive or a key client.
  • Pressure: A critical situation requires urgent action. For example, a big-money deal will collapse unless funds are transferred today.
  • Real or implied threat: Management is counting on the employee not to let them down. Mistakes or delays could have consequences.
  • Extraordinary demands: The employee is coerced into bypassing normal procedures or rules. After all, in a crisis, action is more important than protocol.

How to protect your organisation against deepfake fraud

Deepfake fraud is a growing danger to businesses of all sizes, across every industry and sector. Here are some key tactics to defend your organisation against it.

Learn to spot potential deepfakes

Although deepfake technology is improving rapidly, it's still far from perfect. Generative AI tools find it challenging to depict things such as faces, hair, three-dimensional spaces, or light and shadow with truly realistic precision.

So, for now, you can spot AI-generated images, video, or audio if you know what to look for. Tell-tale signs include the following (some of them, as the sketch after this list shows, can even be checked programmatically):

  • Strange or unnatural body postures or movements.
  • Blurred or distorted faces or hair.
  • Lack of facial emotion or blinking.
  • Lips that don't move in sync with the words spoken.
  • Odd speech rhythms or unusual changes in tone of voice.
  • Inconsistent or unrealistic light or shadows.
  • Distorted or inconsistent background details.
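
As an illustration of the blinking cue, the sketch below estimates a blink rate from per-frame eye landmarks using the well-known eye aspect ratio (EAR). It assumes you already have six (x, y) landmark points per eye from some face-landmark detector; the threshold and frame rate are illustrative, not calibrated values.

```python
# Blink-rate check via the eye aspect ratio (EAR): the ratio drops
# sharply when the eye closes. Assumes landmarks are supplied per frame.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of points p1..p6 around one eye."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical distance p2-p6
    v2 = np.linalg.norm(eye[2] - eye[4])  # vertical distance p3-p5
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal distance p1-p4
    return (v1 + v2) / (2.0 * h)

def blinks_per_minute(ear_per_frame, fps=30, threshold=0.21):
    """Count blinks as dips of the EAR below the threshold."""
    below = np.asarray(ear_per_frame) < threshold
    blinks = int(np.sum(below[1:] & ~below[:-1]))  # transitions into "closed"
    minutes = len(ear_per_frame) / fps / 60
    return blinks / minutes

# People typically blink roughly 15-20 times per minute; a rate near
# zero over a long call is one warning sign among many, never proof.
```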

Always verify requests for money or information

Learning to spot deepfake media is vital, but it's not enough. Deepfake scams are designed to make people act hastily without thinking clearly.

That's why you also need strict verification protocols and a safety-first culture.

Always verify requests for sensitive information and insist on four-eyes approvals for large transactions. Confirm requests by phone or in person and never trust voice or video calls alone. Cases such as Arup underline how important this can be.
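
As a concrete illustration of such a protocol, here is a minimal Python sketch of a four-eyes control in a payment workflow. The names and threshold are hypothetical; the point is that no single person, and no single communication channel, should be able to release a large payment.

```python
# Hypothetical four-eyes control: large payments need two distinct
# approvers plus out-of-band verification (e.g. a call-back to a
# known number), never just the channel the request arrived on.
from dataclasses import dataclass, field

FOUR_EYES_THRESHOLD = 10_000  # illustrative limit, in your currency

@dataclass
class PaymentRequest:
    requester: str
    amount: float
    approvals: set = field(default_factory=set)
    callback_verified: bool = False  # confirmed via a trusted channel

    def approve(self, approver: str) -> None:
        if approver == self.requester:
            raise PermissionError("Requesters cannot approve their own payments")
        self.approvals.add(approver)

    def can_execute(self) -> bool:
        needed = 2 if self.amount >= FOUR_EYES_THRESHOLD else 1
        return len(self.approvals) >= needed and self.callback_verified

# Usage: an Arup-style request would stay blocked until two other
# people approve it AND someone confirms it outside the video call.
```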

In a world where it's getting harder to tell what's real from what's fake, scepticism and double-checking are crucial, even (or especially) when a request seems to come from the very top.

Provide security awareness training for everyone in your organisation

Educate everyone in your organisation about the dangers of deepfake fraud.

Ensure that they understand the risks and know how to recognise and report suspicious messages and calls.

Use a cyber threat intelligence (CTI) and credential monitoring solution

CTI solutions such as Cybercheck help you to stay safe by continuously monitoring for exposed credentials and personal data, providing early warning to stop attacks before they breach your defences.

If cybercriminals are trading information about your organisation, employees, or senior executives, we alert you immediately.

Knowing that your personal data is in criminal hands means you can take proactive steps to prevent an attack, such as changing passwords, blocking cards, or locking down access. That way, you can shut out the attackers before they make you their next victim.
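
To illustrate one building block of credential monitoring, the sketch below checks a password against Have I Been Pwned's public k-anonymity range API (a real endpoint; only the first five characters of the hash ever leave your machine). A CTI service such as Cybercheck monitors far broader sources than this, so treat it purely as a demonstration of the principle.

```python
# Check whether a password appears in known breach data via the
# Have I Been Pwned "range" API (k-anonymity: only a 5-character
# SHA-1 prefix is sent; matching is done locally).
import hashlib
import requests

def password_breach_count(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)  # times this password appears in breaches
    return 0

if __name__ == "__main__":
    # A weak password like this shows up in millions of breach records.
    print(password_breach_count("password123"))
```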

This won't stop cybercriminals from using generative AI, but it will wipe out their information advantage.
