April 28, 2025

Deepfakes are a growing cybersecurity threat that blurs the line between reality and fiction. According to the US Department of Homeland Security, deepfakes carry serious implications for public and private sector institutions. Detecting these threats is increasingly difficult, something that CSIRO, Australia’s national science agency, has identified in recent research with Sungkyunkwan University. There are, said the researchers, significant flaws in existing deepfake detection tools and a growing demand for solutions that are more adaptable and resilient. These AI-generated synthetic media have evolved from technological curiosities into sophisticated weapons of digital deception, costing companies upwards of $603k.

The 2024 Identity Fraud Report revealed another disturbing trend: a deepfake attack was perpetrated every five minutes in 2024, while digital document forgeries increased by 244% year-on-year. And it is, as the Regula survey highlighted, financial services firms that sit firmly in the crosshairs.

“Deepfakes use AI to create realistic but entirely fabricated videos, images and audio recordings,” explains Dr Bright Gameli Mawudor, Cyber Security Specialist, Kenya. “While the technology has legitimate uses, it’s being weaponised for fraud, disinformation and cybercrime.”

What makes deepfakes particularly dangerous is their increasing accessibility. “Previously it was confined to AI researchers, but now freely available tools allow anyone to create highly convincing fakes,” says Caesar Tonkin, Director at Armata Cyber Security. “A recent iProov report found that 47% of companies have encountered deepfake attacks while 62% aren’t adequately prepared to counter them.”

The financial stakes are alarmingly high, and the threat knows no borders. In the United States, these sophisticated fakes have been used to spread election misinformation and commit financial fraud. They have also been used to create videos of well-known celebrities, Taylor Swift being a case in point, to promote fraudulent cryptocurrency schemes. In Australia, voice-based deepfake attacks target corporations, while Kenya has experienced deepfake-driven misinformation campaigns aimed at influencing public opinion during elections.

The scale of the problem is staggering. “According to Bitget, a cryptocurrency exchange and Web3 company, there has been a sharp increase in the use of deepfakes for criminal purposes that has led to total losses of more than $79.1 billion since 2022,” says Craig du Plooy, Director at Cysec. 

As deepfake technology grows more agile and intelligent, detecting it has become increasingly complex, and traditional security measures are proving inadequate. “Digital forensics has become a critical part of deepfake detection,” says Tonkin. “We need AI-driven forensic analysis to identify manipulated content. These techniques include reverse image searches, frame-by-frame analysis, and examining the metadata.”
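To make the metadata check concrete, the short Python sketch below (a minimal illustration, not any tool named in this article) uses the Pillow imaging library to dump an image’s EXIF tags. The file path and the heuristic are assumptions: a missing camera trail never proves manipulation, but analysts often treat it as one signal that warrants the deeper frame-by-frame and pixel-level checks described here.

```python
# Minimal sketch: dump EXIF metadata as one weak signal in a forensic triage.
# Assumes Pillow is installed (pip install Pillow); "suspect.jpg" is a placeholder.
from PIL import Image
from PIL.ExifTags import TAGS

def dump_exif(path: str) -> dict:
    """Return the image's EXIF tags as a {tag_name: value} dictionary."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

metadata = dump_exif("suspect.jpg")
if not metadata:
    # Generated images often carry no camera EXIF at all. That is a prompt
    # for deeper pixel-level analysis, not a verdict on its own.
    print("No EXIF metadata found: flag for further forensic review.")
else:
    for name, value in metadata.items():
        print(f"{name}: {value}")
```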

There have been promising developments in deepfake detection. Forensic AI has been designed to analyse pixel-level inconsistencies, and audio forensics is catching AI-generated deepfake voices. “These voices often struggle with breath control and emotional nuance,” says Dr Mawudor. “Forensic specialists can use spectrogram analysis to detect these unnatural sound patterns.”
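As a rough illustration of the spectrogram analysis Dr Mawudor describes, the sketch below renders a voice clip’s spectrogram with SciPy and Matplotlib. The WAV file path is a placeholder, and reading the output still takes a trained eye; production detectors typically run machine-learning classifiers over representations like this rather than relying on visual inspection alone.

```python
# Minimal sketch: render a spectrogram so an analyst can look for the
# unnatural breath control and flat emotional contours of synthetic voices.
# "clip.wav" is a placeholder path; assumes SciPy and Matplotlib are installed.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, samples = wavfile.read("clip.wav")
if samples.ndim > 1:          # stereo recording: analyse the first channel
    samples = samples[:, 0]

freqs, times, power = spectrogram(samples, fs=rate, nperseg=1024)

# Plot on a dB scale; the small epsilon avoids log(0) on silent frames.
plt.pcolormesh(times, freqs, 10 * np.log10(power + 1e-12), shading="gouraud")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Spectrogram: inspect pauses, breaths and harmonic structure")
plt.colorbar(label="Power (dB)")
plt.show()
```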

While corporations and governments face significant risks, individuals aren’t immune either. AI-generated scams are using the voices of family members to ask for money, and these threats are increasing. Even a live voice call can be faked, and it’s easy to be fooled into believing a family member is in trouble.

There is no clear-cut answer to the deepfake problem. These threats require a multi-faceted approach that combines real-time detection tools, strengthened authentication processes and ongoing employee training.

“Employees should be trained to verify unusual requests through secondary channels,” says Tonkin. “While the deepfake threat detection and prevention industry is rapidly evolving and maturing to rein in these threats, every other avenue needs to be prioritised to ensure companies and individuals are protected.”
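What secondary-channel verification can look like in practice is sketched below. Everything in it, from the threshold to the function names, is hypothetical; the point is simply that a convincing voice or video on the original channel should never be sufficient authority for a high-risk action.

```python
# Hypothetical sketch of a secondary-channel control: high-value requests
# are never approved on the strength of a single call or video alone.
# All names and the threshold are illustrative, not a real API or policy.
from dataclasses import dataclass

APPROVAL_THRESHOLD = 10_000  # above this, demand out-of-band confirmation

@dataclass
class PaymentRequest:
    requester: str
    amount: float
    channel: str  # e.g. "video_call", "voice_call", "email"

def confirmed_via_secondary_channel(request: PaymentRequest) -> bool:
    """Placeholder for the real control: call back on a number from the
    company directory, or require a signed confirmation. Here we simply
    ask the operator to attest that the out-of-band check happened."""
    answer = input(f"Out-of-band confirmation for {request.requester}'s "
                   f"${request.amount:,.2f} request? [y/N] ")
    return answer.strip().lower() == "y"

def approve(request: PaymentRequest) -> bool:
    # Deepfakes specifically defeat "I recognised their face/voice" checks,
    # so recognition on the original channel carries no weight here.
    if request.amount >= APPROVAL_THRESHOLD:
        return confirmed_via_secondary_channel(request)
    return True
```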

On a regulatory level, governments need to enforce AI content labelling and require social media platforms to flag AI-generated videos. This, says Dr Mawudor, is critical, alongside strengthening the legal consequences for deepfake abuse, especially for fraud and digital harassment.

As the line between authentic and artificial content continues to blur, the watchwords for companies going forward are vigilance, education and technology. The countermeasures companies put in place are crucial to protecting their systems and to maintaining trust and integrity in a digital-first world.
