Few developments in our contemporary digitized world have stirred as much intrigue and alarm as deepfakes. These AI-generated simulations can produce hyper-realistic but entirely fabricated audio-visual content, as evidenced by the fake video of Ukrainian President Volodymyr Zelensky conceding defeat to Russia shortly after the invasion in February 2022. Ironically, on December 14, 2023, Russian President Vladimir Putin was taken aback when confronted by an AI clone of himself, a deepfake, while taking calls from the Russian public.
Days later, an AI-generated voice of Pakistan's former Prime Minister delivered an impassioned speech during a virtual rally. Both incidents made global headlines, fascinating some audiences and stoking fears among others. It is deeply concerning that more than 60 countries, home to over half the world's population, will go to the polls this year in what is expected to be the biggest election year in history and, coincidentally, the first of the deepfake era.
The digital virus of disinformation has been evolving since the Cambridge Analytica controversy that marred the 2016 US presidential election, and deepfakes can be considered its latest mutation. Nina Schick was among the first to raise this concern, at the peak of Covid-19, in her book Deepfakes: The Coming Infocalypse. The advancements in Generative AI over the past year, however, have opened a digital Pandora's box with far-reaching, multifaceted implications: deepfake technology is not just a marvel of modern computing and a testament to human ingenuity but a profound threat to the integrity of political discourse and societal cohesion.
Regarding the former, the infiltration of deepfakes into the political sphere signals a disturbing trend across continents. Viral deepfakes in the US and Argentina, the UK and Slovakia, India and Bangladesh underscore how they are harbingers of an emerging norm in political warfare worldwide, one in which the authenticity of political discourse is perennially in question. Even nuanced fabrications potent enough merely to incite scepticism are concerning: if anything can be deepfaked, then, as a corollary, every claim can be dismissed as AI-generated propaganda. This phenomenon has been termed the 'liar's dividend', and it recalls a saying attributed to the infamous propagandist Joseph Goebbels that the constant repetition of a lie blurs the line between truth and falsehood.
The spectre of AI in elections hence raises fundamental concerns about the integrity of democratic processes, as sophisticated deepfakes exploit our dependence on our senses to deceive us, challenging the age-old notion that seeing is believing. Deepfakes are not just a tool in the hands of political actors but a catalyst for societal polarization, in which disbelief and doubt overshadow reasoned discourse and reinforce cognitive biases.
The danger extends beyond political propaganda, as in vulnerable societies it cultivates a breeding ground for conflict. As authentic and fabricated content become indistinguishable, the very basis of public discourse is undermined, and the erosion of societal trust is palpable. Moreover, as these AI-manufactured illusions grow more sophisticated, they will have a corrosive impact in developing countries by exploiting low levels of digital literacy and fuelling a climate of scepticism and cynicism. Deepfakes can thus exacerbate communal divides and foster a violent milieu in fragile states: in Myanmar, for instance, disinformation on social media is considered to have been a catalyst for the genocide against the Rohingya Muslims.
The meteoric rise of short-form content platforms such as TikTok and Instagram Reels, declining attention spans among the youth, and ever-easier access to the internet have combined to create a digital environment increasingly prone to exploitation by malevolent actors. Viral controversies surrounding the spread of explicit deepfakes of celebrities and ordinary women alike signal a disturbing trend and a threat to the rights and dignity of women worldwide. Deepfakes have also supercharged cybercrime, as underscored by a noteworthy incident in which fraudsters deceived a finance worker into transferring a staggering $25 million during a multi-person video conference attended by deepfaked replicas of company officials.
The development of countermeasures, such as AI-driven detection tools and digital watermarking, offers some hope. However, the future trajectory of deepfakes will likely see further advances in realism and accessibility, lowering the barriers to entry, especially for non-state actors, as Europol warned in a June 2023 report highlighting how deepfakes can be leveraged to augment terrorist propaganda. These developments have sparked a technological tug-of-war between the creators and detectors of synthetic content. Tech companies like Meta and Microsoft have recently introduced nascent policies to combat deepfakes, yet publicly accessible tools like OpenAI's controversial Sora show how herculean the challenge of curbing AI misuse remains, underscoring the imperative for innovative and adaptive legal frameworks that can keep pace with technological advancement.
Current legislative efforts have failed to match the magnitude of the crisis. The piecemeal approach observed in different states is symptomatic of a broader dilemma: how to effectively govern a rapidly evolving technology that transcends national boundaries and challenges traditional legal paradigms? Disparities in technological capabilities and in normative and regulatory frameworks have produced a patchwork of fragmented, reactive responses across developed and developing states alike.
The need for international cooperation and harmonization of policies is critical to prevent the exploitation of regulatory loopholes by malicious actors. Cooperation among governments, technology companies, civil society, and international organizations is thus integral to effectively addressing the complexities of AI regulation.
This collaborative approach should focus on combating deepfakes and building an informed society capable of discerning and resisting manipulative content. The path forward, therefore, demands vigilance, innovation, and ethical stewardship from all stakeholders. As we stand at the crossroads of a technological revolution, it is incumbent upon us to balance innovation with ethical responsibility, ensuring that the digital future we create is anchored in truth and transparency, not lost in a digital mirage of our creation.
Mustafa Bilal
Mustafa Bilal is a researcher at the Centre for Aerospace and Security Studies (CASS), Lahore, Pakistan. He can be reached at info@casslhr.com