As deepfake technology evolves, fraudsters are using increasingly sophisticated AI-powered clones to deceive businesses and consumers. While there are still some telltale signs to spot these fakes, the rapid advancement of the technology suggests that these indicators may soon disappear.
Deepfakes, created using artificial intelligence to produce realistic imitations of real people, have gained mainstream attention. Although cybersecurity experts have long warned about these counterfeit videos, it is only recently that they have become nearly indistinguishable from authentic footage.
Fraudsters previously employed spear phishing to target business leaders’ email inboxes. Now, they are digitally cloning those leaders. Mark Read, CEO of advertising giant WPP, highlighted this issue in an internal memo following an unsuccessful deepfake attack using his likeness.
Deepfake applications are widely available on the dark web, primarily because they automate much of the process for criminals. These apps are being used to devastating effect. In January, for example, an employee at a multinational firm in Hong Kong was deceived into transferring £20 million to fraudsters after a phony video call featuring deepfake versions of the company's CFO and other colleagues.
Advances in Deepfake Technology
Deepfake technology is continually advancing, with significant improvements noted this year, according to Dr. Andrew Newell, Chief Scientific Officer at iProov, an authentication firm. “In the early days, deepfakes weren’t good at all,” he states. “Over the past four months, they have become very, very good. We think spotting these things is almost impossible.”
The most advanced deepfake technology now handles light and shadow effectively, making one of the traditional giveaways—misplaced or absent shadows—less common.
Criminals typically use injection attacks, covertly inserting a deepfake into a video stream to make it appear as though it is coming from a real camera. By mapping the target’s face onto their own and controlling facial movements and lighting, attackers can create a convincing clone, which is then run through an emulator or virtual webcam.
Several deepfake kits now offer comprehensive packages, including face-swapping software, a virtual camera emulator, and insertion tools. “In the past, you’d have needed a relatively high level of expertise to make the deepfake and inject it,” Newell explains. “Now, you can download these kits and, with the same tech, make a face swap and inject it in one go.”
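One way defenders can respond to injection attacks is by checking whether a video feed originates from a virtual camera rather than physical hardware. The sketch below is a minimal, illustrative heuristic, not a method described by iProov or Newell: it simply flags device names that match fragments commonly used by virtual-camera software. The device names and marker list are assumptions for demonstration; a production check would enumerate devices through OS APIs such as DirectShow, AVFoundation, or V4L2.

```python
# Illustrative heuristic: flag video devices whose names suggest
# virtual-camera software. The marker list is an assumed sample,
# not an exhaustive or authoritative catalogue.

VIRTUAL_CAMERA_MARKERS = [
    "obs virtual", "virtual camera", "virtualcam",
    "manycam", "snap camera", "splitcam",
]

def flag_virtual_cameras(device_names):
    """Return the subset of device names that look like virtual cameras."""
    flagged = []
    for name in device_names:
        lowered = name.lower()
        if any(marker in lowered for marker in VIRTUAL_CAMERA_MARKERS):
            flagged.append(name)
    return flagged

if __name__ == "__main__":
    devices = ["Integrated Webcam", "OBS Virtual Camera", "ManyCam Video Source"]
    print(flag_virtual_cameras(devices))  # ['OBS Virtual Camera', 'ManyCam Video Source']
```

A name-based check like this is easily evaded by renaming the device, which is why it can only complement, never replace, cryptographic or liveness-based verification.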
Detecting Deepfakes
Despite the rapid advancement of deepfake technology, there are still some signs to look out for, though these may not remain effective for long. Simon Newman, CEO of the Cyber Resilience Centre for London, advises looking for unnatural details on the face, such as unusual lip colours, facial expressions, or strange shadows. Blurring inside the mouth can also be a giveaway, as criminals often neglect this area.
Newman also suggests comparing the head with the neck and other body parts for strange movements or inconsistencies, checking that lip movements synchronise naturally with speech, and judging whether facial expressions appear genuine.
Dr. Martin Kraemer, a security awareness advocate at KnowBe4, notes that face swaps are currently easier to spot than fully synthetically generated video sequences, which have improved considerably this year. Kraemer recommends scrutinising the edges of the speaker’s face, the age appearance of the face compared to the rest of the head, and unnatural shadows around the eyebrows.
Fully synthesised videos are more challenging to detect, but Kraemer advises looking for appropriate body language. In genuine footage, eye movements generally support the statements being made, whereas deepfakes often display repetitive eye movements. Unnaturally precise enunciation can also be a clue, as natural speech is rarely so polished.
Future Challenges and Solutions
Newell emphasises that no one should be overly confident in their ability to spot deepfakes. His firm is developing an ID system that works along similar lines to public-key encryption. During calls, iProov's verification technology projects a pattern of colours onto the speaker's face via the device's screen, undetectable to the human eye. A match with the authenticator's expected pattern indicates "liveness", requiring no additional action from participants.
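The challenge–response idea behind such a liveness check can be sketched in a few lines. This is an illustrative reconstruction, not iProov's actual protocol: the verifier issues a random colour sequence, the sequence is flashed onto the speaker's face, and the colours observed reflecting off the face are compared against the challenge. The colour set, sequence length, and match threshold are all assumptions for demonstration.

```python
import random

# Illustrative challenge-response liveness sketch (not iProov's protocol):
# the verifier issues an unpredictable colour sequence; a genuine live feed
# should reflect that sequence off the speaker's face in the same order.

COLOURS = ["red", "green", "blue", "yellow"]

def issue_challenge(length=8, rng=random):
    """Verifier side: generate an unpredictable colour sequence."""
    return [rng.choice(COLOURS) for _ in range(length)]

def verify_liveness(challenge, observed, min_match=0.9):
    """Compare the colours observed on the face with the challenge.

    A pre-recorded or injected deepfake cannot anticipate the random
    sequence, so its reflected colours will not match closely enough."""
    if len(observed) != len(challenge):
        return False
    matches = sum(c == o for c, o in zip(challenge, observed))
    return matches / len(challenge) >= min_match

if __name__ == "__main__":
    challenge = issue_challenge()
    print(verify_liveness(challenge, challenge))                      # True: live feed
    print(verify_liveness(challenge, challenge[:-1] + ["no_match"]))  # False: below threshold
```

The security of the scheme rests on the challenge being random and fresh per call, so a recorded or synthesised feed cannot replay a previously observed pattern.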
As technical flaws in deepfakes become harder to detect, maintaining high contextual awareness will be crucial for countering risks. Kraemer stresses the importance of developing critical thinking and emotional awareness to combat social engineering attempts.
Lucy Finlay, Client Delivery Director at ThinkCyber, advises focusing on the context of the call and the participants’ interactions. Attackers often create a sense of urgency to exploit intuitive, automatic responses, known as “system-one thinking,” rather than considered, logical responses, or “system-two thinking.”
Ultimately, avoiding deepfake scams may soon rely less on visual detection and more on intuitive judgment. Trusting your gut might become essential in identifying and mitigating the risks posed by these increasingly convincing digital deceptions.