Navigating the ‘Deepfake’ era – Understanding and addressing the risks

In an age where seeing is no longer believing, the emergence of deepfakes heralds a new chapter in the battle between fact and fabricated reality. Guest writer Matthew Newton discusses the insidious rise of this technology that can put words into the mouths of the unsuspecting, casting shadows of doubt across the pillars of truth

Elizabeth Jenkins-Smalley

Editor In Chief at The Executive Magazine

In January 2024, an audio clip of US President Joe Biden circulated via a ‘robocall’ (see note) to primary voters in New Hampshire, discouraging Democrats from taking part. The significance lay not in the message but in the fact that the President never recorded it: the call was a deepfake, and it is unlikely to be the last in the US election cycle. The incident echoed earlier cases, including a deepfake of Ukrainian President Volodymyr Zelensky urging his soldiers to lay down their arms, which reportedly surfaced in March 2022. And in February 2024, London Mayor Sadiq Khan expressed concern over the impact of a deepfake audio clip that appeared to show him making contentious remarks ahead of Armistice Day. Together, these incidents underscore the escalating role of deepfake technology in shaping contemporary discourse.

As we move further into 2024, deepfakes are rapidly gaining traction, permeating many aspects of our lives and reshaping the digital content landscape. From entertainment to politics, their proliferation challenges established norms and perceptions of reality, demanding a critical reassessment of existing legislative frameworks and of societal trust. The implications extend well beyond entertainment: deepfakes are infiltrating education and leisure, and they pose significant threats to security and to democratic processes, most notably in this year’s elections around the world, and in the United States in particular.

At its core, deepfake technology uses artificial intelligence (AI) algorithms to fabricate or manipulate video and audio, seamlessly generating depictions of events that never occurred. Of particular concern is the accessibility of the technology: numerous user-friendly applications enable the creation of deceptive content with minimal expertise. Initially associated with pornographic material featuring celebrities, deepfakes now extend to politically sensitive scenarios and defamatory acts targeting public figures. The barrier to entry is remarkably low; a mere three-second audio snippet, the kind that is ubiquitous in the digital age, can serve as the foundation for a wealth of fabricated content, perpetuating misinformation and exploiting vulnerabilities in both auditory and visual media.
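
To make the ‘three seconds is enough’ point concrete, the sketch below builds a crude spectral ‘voiceprint’ from a short clip using standard Python scientific libraries. It is a deliberately simplified stand-in for the far more capable speaker-embedding models that real voice-cloning systems rely on, and the audio file names are hypothetical:

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import spectrogram

    def crude_voiceprint(path, seconds=3.0):
        # Summarise a short clip as an average spectral profile: a toy
        # stand-in for the speaker embeddings real cloning systems extract.
        rate, samples = wavfile.read(path)
        samples = samples[: int(rate * seconds)].astype(np.float64)
        if samples.ndim > 1:                   # mix stereo down to mono
            samples = samples.mean(axis=1)
        _, _, spec = spectrogram(samples, fs=rate)
        profile = spec.mean(axis=1)            # average energy per frequency band
        return profile / (np.linalg.norm(profile) + 1e-12)

    # Cosine similarity between two clips' profiles (hypothetical file names).
    a = crude_voiceprint("speech_sample_1.wav")
    b = crude_voiceprint("speech_sample_2.wav")
    print(f"voiceprint similarity: {float(a @ b):.3f}")  # near 1.0 = very similar

Even this toy summary tends to score clips of the same speaker as similar; production cloning models extract far richer representations from the same few seconds of audio, which is precisely why the barrier to entry is so low.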

Efforts by various companies to develop software for detecting deepfakes have so far been of uncertain efficacy as the technology advances. One study found that while computer vision software can identify fake videos about as effectively as humans can, the two make distinct types of errors. As deepfake technology grows more sophisticated, manipulated content becomes correspondingly harder to detect: traditional indicators of manipulation, such as distorted body parts or unrealistic shadows, have largely been engineered away by newer algorithms. Indeed, modern deepfakes often appear too flawless, generating “average” faces and audio devoid of the imperfections found in real-life recordings, and that very absence of imperfection can itself be a clue. Governments, and security services in particular, are beginning to acknowledge the threat that generative AI poses to elections and are contemplating updates to election fraud regulations to combat deceptive AI.
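
One way to see the ‘too flawless’ point is to look at background noise. Genuine recordings carry a noise floor (room tone, microphone hiss), while some synthetic audio is unnaturally clean. The sketch below, a crude heuristic rather than anything approaching a production detector, estimates that floor from the quietest analysis frames of a clip; the file name is hypothetical:

    import numpy as np
    from scipy.io import wavfile

    def noise_floor_db(path, frame_ms=20.0):
        # Estimate the background noise floor from the quietest 5% of frames.
        rate, samples = wavfile.read(path)
        samples = samples.astype(np.float64)
        if samples.ndim > 1:
            samples = samples.mean(axis=1)        # mix down to mono
        samples /= np.abs(samples).max() + 1e-12  # normalise to full scale
        frame = int(rate * frame_ms / 1000)
        n = len(samples) // frame
        rms = np.sqrt((samples[: n * frame].reshape(n, frame) ** 2).mean(axis=1))
        return 20 * np.log10(np.percentile(rms, 5) + 1e-12)

    # An implausibly quiet floor (say, below -80 dBFS) is one weak signal
    # among many; real detectors combine hundreds of such cues.
    print(f"estimated noise floor: {noise_floor_db('suspect_clip.wav'):.1f} dBFS")

No single cue of this kind is decisive, and each one can be imitated by the next generation of generators, which is why the arms-race framing is apt.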

Given these developments, fostering a comprehensive understanding of emerging technologies and their potential for exploitation is imperative for proactive safeguarding at individual and organisational levels. Consider the following potential ramifications of deepfakes:

  • Election Manipulation: Deepfake videos or audio recordings could sway public opinion or tarnish the reputations of candidates, potentially influencing electoral outcomes before the authenticity of the content can be verified.
  • Reputational Damage: High-profile individuals risk having their voices cloned and manipulated to falsely incriminate them or spread damaging falsehoods, opening avenues for extortion and blackmail.
  • Fraudulent Activities: AI-driven voice synthesis could facilitate sophisticated phone scams, such as impersonating trusted individuals or institutions to extract sensitive information or commit financial fraud.
  • Extortion and Virtual Threats: The proliferation of deepfakes exacerbates threats of extortion, sextortion, and virtual kidnapping, exploiting emotional vulnerabilities and amplifying the impact of coercive tactics through fabricated audiovisual evidence.

Deepfakes, as a critical component of the broader fake news landscape, intensify scepticism and blur the lines of truth, posing significant challenges to information integrity. As we progress further into 2024, their unabated spread signals a future rife with uncertainty and vulnerability, emphasising the critical need for heightened awareness and proactive measures to navigate an evolving landscape of digital deception.

Detecting deepfakes after the fact is inherently difficult: metadata is routinely stripped, and the original content is rarely available for comparison, which makes proactive defence strategies essential. Swift response mechanisms and robust training initiatives are vital if individuals and institutions are to recognise and mitigate the risks effectively. Educating the public about deepfakes is crucial, yet maintaining vigilance is difficult in practice, especially in scenarios such as robocalls. Halting the advancement of AI technology as a whole is not realistic, so continuous adaptation and vigilance remain paramount in the ongoing arms race between the creation and detection of deepfakes.
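
As a simple illustration of why stripped metadata matters, the check below, assuming the widely used Pillow imaging library, inspects whether a received image still carries EXIF metadata such as device and timestamp. An empty result proves nothing on its own, but it removes one easy line of defence; the file name is hypothetical:

    from PIL import Image
    from PIL.ExifTags import TAGS

    def summarise_exif(path):
        # Genuine camera files usually carry EXIF tags; content laundered
        # through messaging apps or fabrication pipelines often arrives bare.
        exif = Image.open(path).getexif()
        if not exif:
            print(f"{path}: no EXIF metadata (stripped, or never recorded)")
            return
        for tag_id, value in exif.items():
            print(f"  {TAGS.get(tag_id, tag_id)}: {value}")

    summarise_exif("received_image.jpg")       # hypothetical file name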

NOTE: ‘Robocalls’ in this context are automated telephone calls used to distribute manipulated audio messages created with deepfake technology, allowing malicious actors to deceive people at scale by spreading convincing fake messages for a variety of nefarious purposes.

Author/Valkyrie: Matthew Newton is the Director of Crisis Response at Valkyrie GB Limited, a boutique security company based in London that offers tailored security solutions for individuals, businesses, and government organisations. Specialising in cyber security, technical surveillance countermeasures (TSCM), investigations, and physical/personal security measures, Valkyrie’s services also include crisis management, physical penetration testing, covert surveillance, and security training. You can reach them at security@valkyrie.co.uk
