Legendary director James Cameron has warned of the dangers that deepfakes pose to society.
Deepfakes leverage machine learning and AI techniques to convincingly manipulate or generate visual and audio content. Their high potential to deceive makes them a powerful tool for spreading disinformation, committing fraud, trolling, and more.
“Every time we improve these tools, we’re actually in a sense building a toolset to create fake media — and we’re seeing it happening now,” said Cameron in a BBC video interview.
“Right now the tools are — the people just playing around on apps aren’t that great. But over time, those limitations will go away. Things that you see and fully believe you’re seeing could be faked.”
Have you ever said “I’ll believe it when I see it with my own eyes,” or something similar? I certainly have. As humans, we’re subconsciously trained to believe what we can see (unless it’s quite obviously faked).
The problem is amplified by today’s fast news cycle. It’s well known that many articles get shared on the strength of their headline alone before readers move on to the next story. Few people are going to stop to analyse images and videos for small imperfections.
Often stories are shared with reactions to the headline alone, without reading the article for full context. This can lead to a butterfly effect: people see their contacts’ reactions to the headline, feel they don’t need additional context, and simply share in whatever emotional response the headline was designed to provoke (generally outrage).
“News cycles happen so fast, and people respond so quickly, you could have a major incident take place between the interval between when the deepfake drops and when it’s exposed as a fake,” says Cameron.
“We’ve seen situations — you know, Arab Spring being a classic example — where with social media, the uprising was practically overnight.”
It’s a difficult problem to tackle as it is. We’ve all seen the amount of disinformation around things such as the COVID-19 vaccines. However, an article posted with convincing deepfake media will be almost impossible to stop from being posted and/or shared widely.
AI tools for spotting the increasingly small differences between real and manipulated media will be key to preventing deepfakes from ever being posted. However, researchers have found that current tools can easily be deceived.
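A toy illustration of why detectors are easy to deceive (the “detector”, its threshold, and the pixel values here are invented for this sketch and bear no relation to any real system): a classifier that draws a hard decision boundary can be flipped by a perturbation far too small for a human to notice.

```python
# Toy sketch only: a naive "detector" that flags an image as fake when its
# average brightness crosses a fixed threshold, and a tiny adversarial tweak
# that flips the verdict without visibly changing the image.

def naive_detector(pixels, threshold=128.0):
    """Flag 'fake' if the mean pixel value exceeds the threshold."""
    return "fake" if sum(pixels) / len(pixels) > threshold else "real"

# A 4-pixel "image" the detector flags as fake (mean = 130).
image = [130, 130, 130, 130]
print(naive_detector(image))      # fake

# Nudging each pixel down by 3 - imperceptible to a viewer - fools it.
perturbed = [p - 3 for p in image]
print(naive_detector(perturbed))  # real
```

Real deepfake detectors are deep networks rather than brightness thresholds, but the published attacks work on the same principle: small, targeted perturbations push an input across the model’s decision boundary.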
Images and videos that can be verified as original and authentic using technologies like distributed ledgers could also help give audiences confidence that the media they’re consuming isn’t a manipulated version, and that they really can trust their own eyes.
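A minimal sketch of the verification idea, assuming nothing about any particular ledger: the publisher records a cryptographic fingerprint of the original file at capture time (that fingerprint is what would be anchored on a ledger), and a viewer later re-hashes their copy and compares. Any edit to the file, however small, changes the fingerprint.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest of the media bytes."""
    return hashlib.sha256(data).hexdigest()

# Stand-in for a real video file's raw bytes.
original = b"raw video bytes..."
recorded = fingerprint(original)  # published/anchored at capture time

# Later, a viewer re-hashes the copy they received and compares.
print(fingerprint(original) == recorded)              # True: authentic copy
print(fingerprint(original + b"tamper") == recorded)  # False: any edit breaks it
```

The hash only proves the file is byte-identical to what was recorded; it says nothing about whether the original capture was itself staged, which is why such schemes focus on provenance rather than truth.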
In the meantime, Cameron suggests applying Occam’s razor, a problem-solving principle that can be summarised as: the simplest explanation is usually the likeliest.
“Conspiracy theories are all too complicated. People aren’t that good, human systems aren’t that good, people can’t keep a secret to save their lives, and most people in positions of power are bumbling stooges.
“The fact that we think that they could realistically pull off these — these complex plots? I don’t buy any of that crap! Bill Gates is not really trying to microchip you with the flu vaccine!”
However, Cameron admits to his scepticism of new technology.
“Every single advancement in technology that’s ever been created has been weaponised. I say this to AI scientists all the time, and they go, ‘No, no, no, we’ve got this under control.’ You know, ‘We just give the AIs the right goals…’
“So who’s deciding what those goals are? The people that put up the money for the research, right? Which are all either big business or defense. So you’re going to teach these new sentient entities to be either greedy or murderous.”
Of course, Skynet gets an honorary mention.
“If Skynet wanted to take over and wipe us out, it would actually look a lot like what’s going on right now. It’s not going to have to — like, wipe out the entire, you know, biosphere and environment with nuclear weapons to do it. It’s going to be so much easier and less energy required to just turn our minds against ourselves.
“All Skynet would have to do is just deepfake a bunch of people, pit them against each other, stir up a lot of foment, and just run this giant deepfake on humanity.”
Russia’s infamous state-sponsored “troll farms” are one of the largest sources of disinformation and are used to conduct online influence campaigns.
A January 2017 report issued by the United States Intelligence Community – Assessing Russian Activities and Intentions in Recent US Elections (PDF) – described the ‘Internet Research Agency’ as one such troll farm.
“The likely financier of the so-called Internet Research Agency of professional trolls located in Saint Petersburg is a close ally of [Vladimir] Putin with ties to Russian intelligence,” the report stated, adding that “they previously were devoted to supporting Russian actions in Ukraine.”
Western officials have warned that Russia may use disinformation campaigns – including claims of an attack by Ukrainian troops – to rally support and justify an invasion of Ukraine. It’s not beyond the realms of possibility that manipulated content will play a role, and by the time it’s exposed it could be too late to counter the first large-scale disaster supported by deepfakes.
Related: University College London: Deepfakes are the ‘most serious’ AI crime threat
(Image Credit: Gage Skidmore. Image cropped. CC BY-SA 3.0 license)
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.