Deepfakes show Johnson and Corbyn endorsing each other to be Britain’s next PM

A think tank has released two deepfake videos which appear to show election rivals Boris Johnson and Jeremy Corbyn endorsing each other for Britain’s top job.

The clips were produced by Future Advocacy and are intended to show that people can no longer necessarily trust what they see in videos, just as they have learned to question what they read and hear.

Here’s the Johnson video:

And here’s the Corbyn video:

In the era of fake news, people have become increasingly wary of believing everything they read. Teaching the general population not to always believe what they see with their own eyes is a far greater challenge.

At the same time, it’s also important in a democracy that media plurality is maintained and that too much influence is not centralised in a handful of “trusted” outlets. Similarly, people cannot be allowed to simply brand something fake news to avoid scrutiny.

Future Advocacy highlights four key challenges:

  1. Detecting deepfakes – whether society can create the means for detecting a deepfake directly at the point of upload, or once it has become widely disseminated (a rough sketch of point-of-upload screening follows this list).
  2. Liar’s dividend – a phenomenon in which genuine footage of controversial content can be dismissed by the subject as a deepfake, despite it being true.
  3. Regulation – what should the limitations be with regards to the creation of deepfakes and can these be practically enforced?
  4. Damage limitation – managing the impacts of deepfakes when regulation fails and the question of where responsibility should lie for damage limitation.
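On the first challenge, it may help to make the idea concrete. Below is a minimal sketch of what point-of-upload screening could look like: sample frames from an uploaded video and score each one with a detector. Everything here is illustrative: `score_frame` is a hypothetical stub standing in for a trained detection model, and the sampling rate and threshold are assumptions, not part of Future Advocacy’s work.

```python
# Illustrative sketch of point-of-upload deepfake screening.
# Requires OpenCV (pip install opencv-python). The detector itself is a
# hypothetical stub; a real system would run a model trained on known
# deepfakes (e.g. a face-forgery classifier) in its place.
import cv2

def score_frame(frame) -> float:
    """Hypothetical detector: return a 0-1 probability that the frame is
    synthetic. Replace with a real trained model."""
    return 0.0  # placeholder

def screen_upload(path: str, sample_every: int = 30, threshold: float = 0.7) -> bool:
    """Sample roughly one frame per second (at 30 fps) and flag the video
    for human review if any sampled frame scores above the threshold."""
    capture = cv2.VideoCapture(path)
    scores, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % sample_every == 0:
            scores.append(score_frame(frame))
        index += 1
    capture.release()
    return bool(scores) and max(scores) >= threshold

if __name__ == "__main__":
    print("Flag for review:", screen_upload("upload.mp4"))
```

Even with a strong detector, the hard part is operational: screening has to run at upload scale, and the “liar’s dividend” means a flag, or the absence of one, carries political weight of its own.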

Areeq Chowdhury, Head of Think Tank at Future Advocacy, said:

“Deepfakes represent a genuine threat to democracy and society more widely. They can be used to fuel misinformation and totally undermine trust in audiovisual content.

Despite warnings over the past few years, politicians have so far collectively failed to address the issue of disinformation online. Instead, the response has been to defer to tech companies to do more. The responsibility for protecting our democracy lies in the corridors of Westminster not the boardrooms of Silicon Valley.

By releasing these deepfakes, we aim to use shock and humour to inform the public and put pressure on our lawmakers. This issue should be put above party politics. We urge all politicians to work together to update our laws and protect society from the threat of deepfakes, fake news, and micro-targeted political adverts online.”

Journalists are going to have to become experts in spotting fake content to maintain trust and integrity. Social media companies will also have to take some responsibility for the content they allow to spread on their platforms.

Social media moderation

Manual moderation of every piece of content posted to a network like Facebook or Twitter is simply infeasible, so automation is going to be necessary to at least flag potentially offending content; one common flagging technique is sketched below.
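One long-established building block for this kind of automated flagging is perceptual hashing: compute a compact fingerprint of each upload and compare it against fingerprints of media already identified as manipulated, so that re-uploads of known fakes are caught cheaply. The average-hash below is a deliberately simple sketch of the idea, and the known-bad database is a made-up stand-in; production systems use far more robust fingerprints at enormous scale.

```python
# Illustrative sketch: catching re-uploads of known manipulated images
# with a perceptual hash. Requires Pillow (pip install Pillow).
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Downscale to a size x size greyscale thumbnail, then set one bit
    per pixel brighter than the mean. Visually similar images produce
    hashes that differ in only a few bits."""
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (pixel > mean)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical fingerprints of media already labelled as manipulated.
KNOWN_MANIPULATED = {0x81C3E7FF7E3C1800}

def should_flag(path: str, max_distance: int = 5) -> bool:
    """Flag an upload if its hash is within a few bits of a known fake."""
    upload_hash = average_hash(path)
    return any(hamming_distance(upload_hash, known) <= max_distance
               for known in KNOWN_MANIPULATED)
```

Hash matching only catches content that has already been identified; novel fakes still need either detection models like the earlier sketch or human review, which is why the platforms’ policies lean on labelling rather than outright removal.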

But what constitutes offending content? That is the question social media giants are wrestling with as they try to strike the right balance between protecting free speech and expression and protecting their users from manipulation.

Just last night, Twitter released its draft policy on deepfakes and is currently accepting feedback on it.

The social network proposes the following steps for tweets it identifies as featuring potentially manipulated content:

  • Place a notice next to tweets that share synthetic or manipulated media.
  • Warn people before they share or like tweets with synthetic or manipulated media.
  • Add a link – for example, to a news article or Twitter Moment – so that people can read more about why various sources believe the media is synthetic or manipulated.

Twitter defines deepfakes as “any photo, audio, or video that has been significantly altered or fabricated in a way that intends to mislead people or changes its original meaning.”

Twitter’s current definition sounds like it could end up flagging the internet’s favourite medium, memes, as deepfakes. However, there’s a compelling argument that memes often should at least be flagged as having been modified from their original intent.

Take the infamous “This is fine” meme, which was originally part of a larger comic by KC Green before it was repurposed across the web.

In this Vulture piece, Green gives his personal stance that he’s mostly fine with people using his work as a meme so long as they’re not monetising it for themselves or using it for political purposes.

On July 25th 2016, the official Republican Party Twitter account used Green’s work and added “Well ¯\_(ツ)_/¯ #DemsInPhilly #EnoughClinton”. Green later tweeted: “Everyone is in their right to use this is fine on social media posts, but man o man I personally would like @GOP to delete their stupid post.”

Raising awareness of deepfakes

Bill Posters is a UK artist known for creating subversive deepfakes of famous celebrities, including Donald Trump and Kim Kardashian. Posters was behind the viral deepfake of Mark Zuckerberg for the Spectre project which AI News reported on earlier this year.

Posters commented on his activism using deepfakes:

“We’ve used the biometric data of famous UK politicians to raise awareness to the fact that without greater controls and protections concerning personal data and powerful new technologies, misinformation poses a direct risk to everyone’s human rights including the rights of those in positions of power.

It’s staggering that after 3 years, the recommendations from the DCMS Select Committee enquiry into fake news or the Information Commissioner’s Office enquiry into the Cambridge Analytica scandals have not been applied to change UK laws to protect our liberty and democracy.

As a result, the conditions for computational forms of propaganda and misinformation campaigns to be amplified by social media platforms are still in effect today. We urge all political parties to come together and pass measures which safeguard future elections.”

As the UK heads towards its next major election, there is sure to be much debate around potential voter manipulation. Many have pointed towards Russian interference in Western democracies, but solid evidence in this case has yet to emerge.

Opposition parties, however, have criticised the UK’s incumbent government for refusing to release a report into Russian interference. Former US presidential candidate Hillary Clinton branded it “inexplicable and shameful” that the UK government has not yet published the report.

Allegations of interference and foul play will likely increase in the run-up to the election, but Future Advocacy is doing a valuable job in reminding the public that not everything they see can be believed.
