In 1938, American filmmaker Orson Welles’ radio dramatization of H.G. Wells’ alien invasion novel “The War of the Worlds” caused panic among listeners in the US who believed the fictional news bulletins to be real.
The next day, newspaper headlines read “Radio Listeners in Panic, Taking War Drama as Fact.” Historical research, however, suggests that the panic was overstated by the media, as the broadcast had relatively few listeners.
Fast-forward to 2021, with the long reach of social media and the internet: what would happen if a video appeared showing US President Joe Biden sitting in the Oval Office announcing an imminent strike on Iran? Or one showing French President Emmanuel Macron crassly insulting Muslims?
Artificial intelligence (AI) technology called deep learning can generate convincing footage of fake events, known as deepfakes. It allows the creation of a moving image that looks and sounds exactly like Biden or Macron saying whatever its creator wants, with most observers unable to tell that it is fake.
“Even before deepfakes, social media platforms and the different services have led to some threats to users in our region, especially women and other vulnerable communities,” Mohamed Najem, executive director of SMEX, a digital-rights organization focusing on Arabic-speaking countries, told Arab News.
“Deepfakes bring more serious threats to the aforementioned groups, especially if (criminals) want to destroy someone’s reputation. Women, especially, are at risk, having gained more freedom within different conservative communities, and could suffer real damage,” he added.
Recently, a series of very convincing TikTok videos showing actor Tom Cruise engaged in various activities left millions confused as to whether it really was the famous actor. Other well-known deepfakes show former US President Barack Obama calling his successor Donald Trump a “dipsh*t” and Facebook co-founder and CEO Mark Zuckerberg speaking about stealing users’ private data.
According to a report published last year by University College London (UCL), deepfakes rank as the most serious AI crime threat.
“As the capabilities of AI-based technologies expand, so too has their potential for criminal exploitation. To adequately prepare for possible AI threats, we need to identify what these threats might be, and how they may impact our lives,” said report author Lewis Griffin.