deepfake Archives - AI News
https://www.artificialintelligence-news.com/tag/deepfake/

AI-generated Biden robocall urges Democrats not to vote
Tue, 23 Jan 2024

An AI-generated robocall impersonating President Joe Biden has urged Democratic Party members not to vote in the upcoming primary on Tuesday.

Kathy Sullivan – a prominent New Hampshire Democrat and former state party chair – is calling for the prosecution of those responsible, describing the incident as “an attack on democracy.”

The call began with a dismissive “What a bunch of malarkey,” a phrase that’s become associated with the 81-year-old president. It then went on to discourage voting in the primary, suggesting that Democrats should save their votes for the November election.

Sullivan, an attorney, believes the call may violate several laws and is determined to uncover the individuals behind it. New Hampshire attorney general, John Formella, has urged voters to disregard the call’s contents.

The robocall controversy has sparked an investigation, with NBC News releasing a recording of the call. Sullivan’s phone number was included in the message, raising concerns about privacy and potential harassment.

This incident comes amid a wider debate about the use of AI in political campaigns. OpenAI recently suspended the developer of a ChatGPT-powered bot called Dean.Bot that mimicked Democratic candidate Dean Phillips.

As concerns about AI manipulation in elections grow, advocacy groups like Public Citizen are pushing for federal regulation. A petition from Public Citizen calls on the Federal Election Commission (FEC) to regulate AI use in campaign ads. The FEC chair, Sean Cooksey, acknowledged the issue but stated that resolving it might take until early summer.

The deepfake call and the politician-impersonating chatbot have intensified calls for swift action to address the potential chaos AI could cause in elections. With state lawmakers also considering bills to tackle the practice, the incident raises questions about the vulnerability of democratic processes to AI manipulation in a crucial election year.

(Photo by Manny Becerra on Unsplash)

See also: OpenAI launches GPT Store for custom AI assistants

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

McAfee unveils AI-powered deepfake audio detection
Mon, 08 Jan 2024

McAfee has revealed a pioneering AI-powered deepfake audio detection technology, Project Mockingbird, during CES 2024. This proprietary technology aims to defend consumers against the rising menace of cybercriminals employing fabricated, AI-generated audio for scams, cyberbullying, and manipulation of public figures’ images.

Generative AI tools have enabled cybercriminals to craft convincing scams, including voice cloning to impersonate family members seeking money or manipulating authentic videos with “cheapfakes.” These tactics manipulate content to deceive individuals, creating a heightened challenge for consumers to discern between real and manipulated information.

In response to this challenge, McAfee Labs developed an industry-leading AI model, part of the Project Mockingbird technology, to detect AI-generated audio. This technology employs contextual, behavioural, and categorical detection models, achieving an impressive 90 percent accuracy rate.
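McAfee hasn't published Project Mockingbird's internals, but the idea of fusing several detectors into one confidence score can be sketched with a simple weighted ensemble. Everything below (the detector names, equal weighting, and the 0.5 threshold) is an illustrative assumption, not McAfee's actual design:

```python
def ensemble_fake_probability(scores, weights=None):
    """Fuse per-detector probabilities that an audio clip is AI-generated.

    `scores` maps a detector name (e.g. "contextual", "behavioural",
    "categorical") to a probability in [0, 1]. A weighted average is a
    deliberately simple stand-in for whatever fusion the real product uses.
    """
    if weights is None:
        weights = {name: 1.0 for name in scores}  # assume equal weighting
    total = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total


def classify(scores, threshold=0.5):
    """Flag a clip as likely deepfake when the fused score passes a threshold."""
    p = ensemble_fake_probability(scores)
    verdict = "likely AI-generated" if p >= threshold else "likely authentic"
    return verdict, p


# Hypothetical per-model outputs for one suspicious clip:
verdict, p = classify({"contextual": 0.92, "behavioural": 0.85, "categorical": 0.78})
print(verdict, round(p, 2))
```

Reporting the fused probability rather than a bare yes/no mirrors Grobman's weather-forecast framing: the user sees a confidence level and decides for themselves.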

Steve Grobman, CTO at McAfee, said: “Much like a weather forecast indicating a 70 percent chance of rain helps you plan your day, our technology equips you with insights to make educated decisions about whether content is what it appears to be.”

Project Mockingbird offers diverse applications, from countering AI-generated scams to tackling disinformation. By empowering consumers to distinguish between authentic and manipulated content, McAfee aims to protect users from falling victim to fraudulent schemes and ensure a secure digital experience.

Deep concerns about deepfakes

As deepfake technology becomes more sophisticated, consumer concerns are on the rise. McAfee’s December 2023 Deepfakes Survey highlights:

  • 84% of Americans are concerned about deepfake usage in 2024
  • 68% are more concerned than a year ago
  • 33% have experienced or witnessed a deepfake scam, a figure that rises to 40% among 18–34 year-olds
  • Top concerns include election influence (52%), undermining public trust in media (48%), impersonation of public figures (49%), proliferation of scams (57%), cyberbullying (44%), and sexually explicit content creation (37%)

McAfee’s unveiling of Project Mockingbird marks a significant leap in the ongoing battle against AI-generated threats. As countries like the US and UK enter a pivotal election year, it’s crucial that consumers have the best possible chance of recognising and resisting the pervasive influence of deepfake technology.

(Photo by Markus Spiske on Unsplash)

See also: MyShell releases OpenVoice voice cloning AI


Cyber Security & Cloud Expo: The alarming potential of AI-powered cybercrime
Wed, 27 Sep 2023

In a packed session at Cyber Security & Cloud Expo Europe, Raviv Raz, Cloud Security Manager at ING, turned the spotlight away from traditional security threats and delved into the world of AI-powered cybercrime.

Raz shared insights from his extensive career, including his tenure as technical director for a web application firewall company. This role exposed him to the rise of the “Cyber Dragon” and Chinese cyberattacks, inspiring him to explore the offensive side of cybersecurity. During this time, he not only developed defence tools, but also created attack tools that would later be adopted by the Anonymous hacker collective.

“The perfect cyber weapon”

One of the most intriguing aspects of Raz’s presentation was his exploration of “the perfect cyber weapon.” He proposed that this weapon would need to operate in complete silence, without any command and control infrastructure, and would have to adapt and improvise in real-time. The ultimate objective would be to disrupt critical systems, potentially even at the nation-state level, while remaining undetected.

Raz’s vision for this weapon, though controversial, underscored the power of AI in the wrong hands. He highlighted the potential consequences of such technology falling into the hands of malicious actors and urged the audience to consider the implications seriously.

Real-world proof of concept

To illustrate the feasibility of his ideas, Raz shared the story of a consortium of banks in the Netherlands that embraced his concept. They embarked on a project to build a proof of concept for an AI-driven cyber agent capable of executing complex attacks. This agent demonstrated the potential power of AI in the world of cybercrime.

The demonstration served as a stark reminder that AI is no longer exclusive to nation-states. Common criminals, with access to AI-driven tools and tactics, can now carry out sophisticated cyberattacks with relative ease. This shift in the landscape presents a pressing challenge for organisations and governments worldwide.

The rise of AI-enhanced malicious activities

Raz further showcased how AI can be harnessed for malicious purposes. He discussed techniques such as phishing attacks and impersonation, where AI-powered agents can craft highly convincing messages and even deepfake voices to deceive individuals and organisations.

Additionally, he touched on the development of polymorphic malware—malware that continuously evolves to evade detection. This alarming capability means that cybercriminals can stay one step ahead of traditional cybersecurity measures.

Stark wake-up call

Raz’s presentation served as a stark wake-up call for the cybersecurity community. It highlighted the evolving threats posed by AI-driven cybercrime and emphasised the need for organisations to bolster their defences continually.

As AI continues to advance, both in terms of its capabilities and its accessibility, the line between nation-state and common criminal cyber activities becomes increasingly blurred.

In this new age of AI-driven cyber threats, organisations must remain vigilant, adopt advanced threat detection and prevention technologies, and prioritise cybersecurity education and training for their employees.

Raz’s insights underscored the urgency of this matter, reminding us that the only way to combat the evolving threat landscape is to evolve our defences in tandem. The future of cybersecurity demands nothing less than our utmost attention and innovation.


VisitDenmark brings iconic tourist attractions to life in AI-produced campaign
Tue, 07 Mar 2023

A new activation campaign from tourism organisation VisitDenmark wants to put the land of “hygge” on the map as the antidote to bucket list tourism.

Using artificial intelligence, Mona Lisa, the Statue of Liberty, and other iconic tourist attractions come to life with a simple message: Don’t come see me – visit Denmark instead. Other than its cheeky approach, the campaign stands out by being completely written by artificial intelligence.

“Imagine that you are Mona Lisa. Write a speech on why people should visit Denmark instead of standing in line to see you.”

This was the prompt given to an artificial intelligence to create the script of one of a series of videos in which tourist attractions from all over the world turn against themselves and recommend visiting Denmark – rather than standing in line at the Louvre or seeing the Statue of Liberty in a sea of selfie-sticks. Executing on the brand campaign ‘Don’t be a tourist – be an Explorist’, VisitDenmark positions Denmark as the antidote to bucket list tourism.

Louis Pilmark, creative director at Danish advertisement agency Brandhouse/Subsero, said: “Having iconic attractions from popular tourist destinations turn on themselves is a good way to highlight the absurdity of doing and seeing the same things as everyone else. Who better to explain it than the paintings and statues that see millions of tourists every year.”

Iconic art meets trending tech

Other than the slightly teasing approach, the campaign is unique in that both the scripts and the visuals were created by artificial intelligence. While techniques like deepfakes and motion synthesis have been used to bring images to life over the last couple of years, the addition of scripts generated entirely by AI makes this one of the first campaigns to combine the two technologies.

Kathrine Lind Gustavussen, senior PR at VisitDenmark, said: ”The scripts are 100% generated by AI – we didn’t write a single word, we only removed parts and bits that were too long or simply not true. While it felt somewhat risky to put our entire messaging in the hands of artificial intelligence, we’re excited to be at the forefront of the tourism industry, using cutting-edge technology to bring our creative visions and messages to life.” 

Tourist attractions aren’t so attractive anymore

The overall campaign, developed by London-based creative agency Fold7, builds on the insight that bucket list tourism has lost its lustre. A study conducted in the UK, Sweden, and Germany validated the hypothesis that ‘feeling like a tourist’ would ruin a holiday: more than half of respondents agreed that overcrowded tourist sites and landmarks were a cause of holiday disappointment.

Yelena Gaufman, strategy partner at Fold7, said: “Denmark may not be a bucket list destination and the wonders there aren’t big and dramatic, but they are small and plentiful. We saw this as a huge opportunity to attract a different kind of traveller, the anti-tourist, the Explorist.”


China’s deepfake laws come into effect today
Tue, 10 Jan 2023

China will begin enforcing its strict new rules around the creation of deepfakes from today.

Deepfakes are increasingly being used for manipulation and humiliation. We’ve seen deepfakes of figures like disgraced FTX founder Sam Bankman-Fried used to commit fraud, of Ukrainian President Volodymyr Zelenskyy used to spread disinformation, and of US House Speaker Nancy Pelosi edited to make her appear drunk.

Last month, the Cyberspace Administration of China (CAC) announced rules to clamp down on deepfakes.

“In recent years, in-depth synthetic technology has developed rapidly. While serving user needs and improving user experiences, it has also been used by some criminals to produce, copy, publish, and disseminate illegal and bad information, defame, detract from the reputation and honour of others, and counterfeit others,” explains the CAC.

Providers of services for creating synthetic content will be obligated to ensure their AIs aren’t misused for illegal and/or harmful purposes. Furthermore, any content that was created using an AI must be clearly labelled with a watermark.
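The CAC rules don't prescribe a particular watermarking scheme. As a hedged illustration of the labelling requirement, a provider could attach a visible label plus a keyed integrity tag, so that stripping or altering the label is detectable. The key, label text, and tag format below are invented for the example:

```python
import hashlib
import hmac

SECRET_KEY = b"provider-signing-key"   # hypothetical key held by the provider
LABEL = "[AI-generated content]"


def label_content(text: str) -> str:
    """Prepend a visible AI-content label and append a keyed integrity tag."""
    body = f"{LABEL} {text}"
    tag = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}\n--tag:{tag}"


def verify_label(labelled: str) -> bool:
    """Check the label is present and the tag still matches the labelled body."""
    try:
        body, tag = labelled.rsplit("\n--tag:", 1)
    except ValueError:
        return False  # no tag line at all
    expected = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body.startswith(LABEL) and hmac.compare_digest(tag, expected)


msg = label_content("A synthetic voice clip transcript.")
print(verify_label(msg))                     # intact label verifies
print(verify_label(msg.replace(LABEL, "")))  # stripping the label fails
```

Real image or audio watermarks would be embedded in the media itself rather than appended as text, but the principle is the same: the disclosure travels with the content and tampering is detectable.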

China’s new rules come into force today (10 January 2023) and will also require synthetic service providers to:

  • Not illegally process personal information
  • Periodically review, evaluate, and verify algorithms
  • Establish management systems and technical safeguards
  • Authenticate users with real identity information
  • Establish mechanisms for complaints and reporting

The CAC notes that effective governance of synthetic technologies is a multi-entity effort that will require the participation of government, enterprises, and citizens. Such participation, the CAC says, will promote the legal and responsible use of deep synthetic technologies while minimising the associated risks.

(Photo by Henry Chen on Unsplash)

Related: AI & Big Data Expo: Exploring ethics in AI and the guardrails required


Deepfakes are now being used to help solve crimes
Wed, 25 May 2022

A deepfake video created by Dutch police could help to change the often negative perception of the technology.

Deepfakes use generative neural network architectures – such as autoencoders or generative adversarial networks (GANs) – to manipulate or generate visual and audio content.

The technology is already being used for malicious purposes including generating sexual content of individuals without their consent, fraud, and the creation of deceptive content aimed at changing views and influencing democratic processes.

However, authorities in Rotterdam have proven the technology can be put to use for good.

Dutch police have created a deepfake video of 13-year-old Sedar Soares – a young footballer who was shot dead in 2003 while throwing snowballs with his friends in the car park of a Rotterdam metro station – in an appeal for information to finally solve his murder.

The video depicts Soares picking up a football in front of the camera and walking through a guard of honour on the field that comprises his relatives, friends, and former teachers.

“Somebody must know who murdered my darling brother. That’s why he has been brought back to life for this film,” says a voice in the video, before Soares drops his ball.

“Do you know more? Then speak,” his relatives and friends say, before his image disappears from the field. The video then gives the police contact details.

It’s hoped the stirring video and a reminder of what Soares would have looked like at the time will help to jog memories and lead to the case finally being solved.

Daan Annegarn, a detective with the National Investigation Communications Team, said:

“We know better and better how cold cases can be solved. Science shows that it works to hit witnesses and the perpetrator in the heart—with a personal call to share information. What better way to do that than to let Sedar and his family do the talking? 

We had to cross a threshold. It is not nothing to ask relatives: ‘Can I bring your loved one to life in a deepfake video?’ We are convinced that it contributes to the detection, but have not done it before.

The family has to fully support it.”

So far, it seems to have had an impact. The police claim to have already received dozens of tips, though they still need to assess whether they’re credible. In the meantime, anyone who may have information is encouraged to come forward.

“The deployment of deepfake is not just a lucky shot. We are convinced that it can touch hearts in the criminal environment—that witnesses and perhaps the perpetrator can come forward,” Annegarn concludes.


President Zelenskyy deepfake asks Ukrainians to ‘lay down arms’
Thu, 17 Mar 2022

A deepfake of President Zelenskyy calling on citizens to “lay down arms” was posted to a hacked Ukrainian news website and shared across social networks.

The deepfake purports to show Zelenskyy declaring that Ukraine has “decided to return Donbas” to Russia and that his nation’s efforts had failed.

Following an alleged hack, the deepfake was first posted to the website of Ukrainian news channel TV24. It was then shared across social networks, including Facebook and Twitter.

Nathaniel Gleicher, Head of Security Policy for Facebook owner Meta, wrote in a tweet:

“Earlier today, our teams identified and removed a deepfake video claiming to show President Zelensky issuing a statement he never did.

It appeared on a reportedly compromised website and then started showing across the internet.”

The deepfake itself is poor by today’s standards, with fake Zelenskyy having a comically large and noticeably pixelated head compared to the rest of his body.

It shouldn’t have fooled anyone, but Zelenskyy posted a video to his Instagram to call out the fake anyway.

“I only advise that the troops of the Russian Federation lay down their arms and return home,” Zelenskyy said in his official video. “We are at home and defending Ukraine.”

Earlier this month, the Ukrainian government posted a statement warning soldiers and civilians not to believe any videos of Zelenskyy claiming to surrender:

“Imagine seeing Vladimir Zelensky on TV making a surrender statement. You see it, you hear it – so it’s true. But this is not the truth. This is deepfake technology.

This will not be a real video, but created through machine learning algorithms.

Videos made through such technologies are almost impossible to distinguish from the real ones.

Be aware – this is a fake! The goal is to disorient, sow panic, disbelieve citizens, and incite our troops to retreat.”

Fortunately, this particular deepfake was easy to spot – even though deepfakes are often nearly impossible for humans to distinguish from genuine footage – and could actually help to raise awareness of how such content is used to influence and manipulate.

Earlier this month, AI News reported on how Facebook and Twitter removed two anti-Ukraine disinformation campaigns linked to Russia and Belarus. One of the campaigns even used AI-generated faces for a fake “editor-in-chief” and “columnist” for a linked propaganda website.

Both cases in the past month show the danger of deepfakes and the importance of raising public awareness and developing tools for countering such content before it’s able to spread.

(Image Credit: President.gov.ua used without changes under CC BY 4.0 license)


James Cameron warns of the dangers of deepfakes
Mon, 24 Jan 2022

Legendary director James Cameron has warned of the dangers that deepfakes pose to society.

Deepfakes leverage machine learning and AI techniques to convincingly manipulate or generate visual and audio content. Their high potential to deceive makes them a powerful tool for spreading disinformation, committing fraud, trolling, and more.

“Every time we improve these tools, we’re actually in a sense building a toolset to create fake media — and we’re seeing it happening now,” said Cameron in a BBC video interview.

“Right now the tools are — the people just playing around on apps aren’t that great. But over time, those limitations will go away. Things that you see and fully believe you’re seeing could be faked.”

Have you ever said “I’ll believe it when I see it with my own eyes,” or similar? I certainly have. As humans, we’re subconsciously trained to believe what we can see (unless it’s quite obviously faked).

The problem is amplified with today’s fast news cycle. It’s a well-known problem that many articles get shared based on their headline before moving on to the next story. Few people are going to stop to analyse images and videos for small imperfections.

Often stories are shared with reactions to the headline without the sharer reading the story for full context. This can lead to a butterfly effect: people see their contacts’ reactions to the headline, feel they don’t need additional context, and simply join in whatever emotional response the headline was designed to invoke (generally outrage).

“News cycles happen so fast, and people respond so quickly, you could have a major incident take place between the interval between when the deepfake drops and when it’s exposed as a fake,” says Cameron.

“We’ve seen situations — you know, Arab Spring being a classic example — where with social media, the uprising was practically overnight.”

It’s a difficult problem to tackle as it is. We’ve all seen the amount of disinformation around things such as the COVID-19 vaccines. However, an article posted with convincing deepfake media will be almost impossible to stop from being posted and/or shared widely.

AI tools for spotting the increasingly small differences between real and manipulated media will be key to preventing deepfakes from ever being posted. However, researchers have found that current tools can easily be deceived.

Images and videos that can be verified as original and authentic using technologies like distributed ledgers could also be used to help give audiences confidence the media they’re consuming isn’t a manipulated version and they really can trust their own eyes.
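As a rough sketch of that idea (a toy, not a production distributed ledger, which would replicate, timestamp, and cryptographically sign entries across many nodes), a publisher could register a fingerprint of each original file in an append-only hash chain, letting anyone later check whether a copy matches a registered original:

```python
import hashlib


class ProvenanceLedger:
    """Append-only chain of media fingerprints.

    Each entry's hash incorporates the previous entry's hash, so the
    history cannot be silently rewritten without breaking the chain.
    """

    def __init__(self):
        self.entries = []      # list of (media_hash, entry_hash) tuples
        self._head = "genesis"

    def register(self, media_bytes: bytes) -> str:
        """Record a fingerprint of the original media, chained to the head."""
        media_hash = hashlib.sha256(media_bytes).hexdigest()
        entry_hash = hashlib.sha256((self._head + media_hash).encode()).hexdigest()
        self.entries.append((media_hash, entry_hash))
        self._head = entry_hash
        return media_hash

    def is_registered(self, media_bytes: bytes) -> bool:
        """True only for byte-identical media: any manipulation changes the hash."""
        h = hashlib.sha256(media_bytes).hexdigest()
        return any(h == media_hash for media_hash, _ in self.entries)


ledger = ProvenanceLedger()
ledger.register(b"original-video-bytes")
print(ledger.is_registered(b"original-video-bytes"))     # True
print(ledger.is_registered(b"manipulated-video-bytes"))  # False
```

The limitation is the flip side of the strength: because exact hashes are compared, even a legitimate re-encode fails verification, which is why real provenance efforts pair hashing with signed metadata embedded at capture time.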

In the meantime, Cameron suggests applying Occam’s razor – a problem-solving principle which can be summarised as: the simplest explanation is usually the likeliest.

“Conspiracy theories are all too complicated. People aren’t that good, human systems aren’t that good, people can’t keep a secret to save their lives, and most people in positions of power are bumbling stooges.

“The fact that we think that they could realistically pull off these — these complex plots? I don’t buy any of that crap! Bill Gates is not really trying to microchip you with the flu vaccine!”

However, Cameron admits his scepticism of new technology.

“Every single advancement in technology that’s ever been created has been weaponised. I say this to AI scientists all the time, and they go, ‘No, no, no, we’ve got this under control.’ You know, ‘We just give the AIs the right goals…’

“So who’s deciding what those goals are? The people that put up the money for the research, right? Which are all either big business or defense. So you’re going to teach these new sentient entities to be either greedy or murderous.”

Of course, Skynet gets an honorary mention.

“If Skynet wanted to take over and wipe us out, it would actually look a lot like what’s going on right now. It’s not going to have to — like, wipe out the entire, you know, biosphere and environment with nuclear weapons to do it. It’s going to be so much easier and less energy required to just turn our minds against ourselves.

“All Skynet would have to do is just deepfake a bunch of people, pit them against each other, stir up a lot of foment, and just run this giant deepfake on humanity.”

Russia’s infamous state-sponsored “troll farms” are one of the largest sources of disinformation and are used to conduct online influence campaigns.

A January 2017 report issued by the United States Intelligence Community – Assessing Russian Activities and Intentions in Recent US Elections (PDF) – described the ‘Internet Research Agency’ as one such troll farm.

“The likely financier of the so-called Internet Research Agency of professional trolls located in Saint Petersburg is a close ally of [Vladimir] Putin with ties to Russian intelligence,” the report noted, adding that “they previously were devoted to supporting Russian actions in Ukraine.”

Western officials have warned that Russia may use disinformation campaigns – including claims of an attack by Ukrainian troops – to rally support and justify an invasion of Ukraine. It’s not outside the realms of possibility that manipulated content will play a role, and by the time any fakes are exposed it could be too late to counter the first large-scale disaster supported by deepfakes.

Related: University College London: Deepfakes are the ‘most serious’ AI crime threat

(Image Credit: Gage Skidmore. Image cropped. CC BY-SA 3.0 license)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post James Cameron warns of the dangers of deepfakes appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2022/01/24/james-cameron-warns-of-the-dangers-of-deepfakes/feed/ 0
Researchers find systems to counter deepfakes can be deceived https://www.artificialintelligence-news.com/2021/02/10/researchers-find-systems-counter-deepfakes-can-be-deceived/ https://www.artificialintelligence-news.com/2021/02/10/researchers-find-systems-counter-deepfakes-can-be-deceived/#comments Wed, 10 Feb 2021 17:26:35 +0000 http://artificialintelligence-news.com/?p=10256 Researchers have found that systems designed to counter the increasing prevalence of deepfakes can be deceived. The researchers, from the University of California – San Diego, first presented their findings at the WACV 2021 conference. Shehzeen Hussain, a UC San Diego computer engineering PhD student and co-author on the paper, said: “Our work shows that... Read more »

The post Researchers find systems to counter deepfakes can be deceived appeared first on AI News.

]]>
Researchers have found that systems designed to counter the increasing prevalence of deepfakes can be deceived.

The researchers, from the University of California – San Diego, first presented their findings at the WACV 2021 conference.

Shehzeen Hussain, a UC San Diego computer engineering PhD student and co-author on the paper, said:

“Our work shows that attacks on deepfake detectors could be a real-world threat.

“More alarmingly, we demonstrate that it’s possible to craft robust adversarial deepfakes even when an adversary may not be aware of the inner-workings of the machine learning model used by the detector.”

Two scenarios were tested as part of the research:

  1. The attackers have complete access to the detector model, including the face extraction pipeline and the architecture and parameters of the classification model.
  2. The attackers can only query the machine learning model to figure out the probabilities of a frame being classified as real or fake.

In the first scenario, the attack’s success rate was above 99 percent for uncompressed videos and 84.96 percent for compressed videos. In the second scenario, the success rate was 86.43 percent for uncompressed and 78.33 percent for compressed videos.
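The second, query-only scenario can be sketched with a toy example. The detector below is a made-up stand-in (a real detector would be a trained neural network over face crops); the point is only the attack loop, which never sees the model’s internals and works purely from the probability it returns:

```python
import numpy as np

rng = np.random.default_rng(0)

def detector(frame):
    """Toy stand-in for a deepfake detector: returns P(fake).
    This linear score on pixel values exists purely so the attack
    has something to query -- it is not a real detection model."""
    w = np.linspace(-1.0, 1.0, frame.size)
    return 1.0 / (1.0 + np.exp(-10.0 * (w @ frame.ravel()) / frame.size))

def black_box_attack(frame, queries=300, eps=0.02):
    """Query-only attack: propose small random perturbations, keep any
    that lower the queried P(fake), and stay within an eps-ball of the
    original frame so the change remains imperceptible."""
    best, best_p = frame.copy(), detector(frame)
    for _ in range(queries):
        candidate = best + rng.normal(0.0, 0.01, size=frame.shape)
        candidate = np.clip(candidate, frame - eps, frame + eps)
        candidate = np.clip(candidate, 0.0, 1.0)  # keep valid pixel values
        p = detector(candidate)
        if p < best_p:
            best, best_p = candidate, p
    return best, best_p

fake_frame = rng.random((8, 8))            # stand-in "fake" frame
adv_frame, adv_p = black_box_attack(fake_frame)
print(f"P(fake): {detector(fake_frame):.3f} -> {adv_p:.3f}")
```

Even this crude random search drives the score down without any knowledge of the model; the researchers’ actual attacks are far more sophisticated, which is why their black-box success rates are so high.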

“We show that the current state of the art methods for deepfake detection can be easily bypassed if the adversary has complete or even partial knowledge of the detector,” the researchers wrote.

Deepfakes use a Generative Adversarial Network (GAN) to create fake imagery and even videos with increasingly convincing results. So-called ‘DeepPorn’ has been used to embarrass and even blackmail victims.
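The GAN idea can be sketched in one dimension. Everything below is a toy under stated assumptions: the “real” data is a Gaussian, the generator and discriminator are one-parameter models rather than deep networks, and the numbers are illustrative only. What it shares with a real deepfake system is the adversarial training game itself:

```python
import numpy as np

rng = np.random.default_rng(1)

REAL_MEAN, REAL_STD = 4.0, 0.5   # "real" data: samples from N(4, 0.5)

def discriminate(x, w, b):
    """Logistic discriminator: estimated probability that x is real."""
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

w, b = 0.1, 0.0   # discriminator parameters
mu = 0.0          # generator parameter: fake = mu + REAL_STD * noise
lr = 0.05

for _ in range(2000):
    z = rng.normal(size=64)
    real = rng.normal(REAL_MEAN, REAL_STD, size=64)
    fake = mu + REAL_STD * z

    # Discriminator ascent on E[log D(real)] + E[log(1 - D(fake))]
    p_real = discriminate(real, w, b)
    p_fake = discriminate(fake, w, b)
    w += lr * (np.mean((1 - p_real) * real) - np.mean(p_fake * fake))
    b += lr * (np.mean(1 - p_real) - np.mean(p_fake))

    # Generator ascent on E[log D(fake)] (non-saturating GAN loss):
    # move mu so the discriminator is more likely fooled
    p_fake = discriminate(mu + REAL_STD * z, w, b)
    mu += lr * np.mean((1 - p_fake) * w)

print(f"generator mean after training: {mu:.2f} (real mean: {REAL_MEAN})")
```

The generator never sees the real data directly; it only learns from the discriminator’s feedback, which is exactly the dynamic that lets deepfake generators produce increasingly convincing fakes.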

There’s the old saying, “I won’t believe it until I see it with my own eyes,” which is why convincing fake content is such a concern. As humans, we’re rather hard-wired to believe what we (think we) see with our eyes.

In an age of disinformation, people are gradually learning not to believe everything they read – especially when it comes from unverified sources. Teaching people not to necessarily believe the images and videos they see will pose a serious challenge.

Some hope has been placed on systems to detect and counter deepfakes before they cause harm. Unfortunately, the UC San Diego researchers’ findings somewhat dash those hopes.

“If the attackers have some knowledge of the detection system, they can design inputs to target the blind spots of the detector and bypass it,” said Paarth Neekhara, another co-author on the paper.

In separate research from University College London (UCL) last year, experts ranked what they believe to be the most serious AI threats. Deepfakes ranked top of the list.

“People now conduct large parts of their lives online and their online activity can make and break reputations,” said Dr Matthew Caldwell of UCL Computer Science.

One of the most high-profile deepfake cases so far involved US House Speaker Nancy Pelosi. In 2018, a deepfake video circulated on social media which made Pelosi appear drunk and slurring her words.

The video of Pelosi was likely created with the intention of being amusing rather than particularly malicious – but it shows how deepfakes could be used to cause disrepute and even influence democratic processes.

As part of a bid to persuade Facebook to change its policies on deepfakes, last year Israeli startup Canny AI created a deepfake of Facebook CEO Mark Zuckerberg which made it appear like he said: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

Now imagine the precise targeting of content provided by platforms like Facebook combined with deepfakes which can’t be detected… actually, perhaps don’t, it’s a rather squeaky bum thought.

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.


]]>
https://www.artificialintelligence-news.com/2021/02/10/researchers-find-systems-counter-deepfakes-can-be-deceived/feed/ 1
Deepfake shows Nixon announcing the moon landing failed https://www.artificialintelligence-news.com/2020/02/06/deepfake-nixon-moon-landing-failed/ https://www.artificialintelligence-news.com/2020/02/06/deepfake-nixon-moon-landing-failed/#respond Thu, 06 Feb 2020 16:42:59 +0000 https://d3c9z94rlb3c1a.cloudfront.net/?p=6403 In the latest creepy deepfake, former US President Nixon is shown to announce that the first moon landing failed. Nixon was known to be a divisive figure but certainly recognisable. The video shows Nixon in the Oval Office, surrounded by flags, giving a presidential address to an eagerly awaiting world. However, unlike the actual first... Read more »

The post Deepfake shows Nixon announcing the moon landing failed appeared first on AI News.

]]>

In the latest creepy deepfake, former US President Nixon is shown announcing that the first moon landing failed.

Nixon was known to be a divisive figure, but certainly a recognisable one. The video shows Nixon in the Oval Office, surrounded by flags, giving a presidential address to an eagerly awaiting world.

However, unlike the actual first moon landing – unless you’re a subscriber to conspiracy theories – this one failed.

“These brave men, Neil Armstrong and Edwin Aldrin, know that there is no hope for their recovery,” Nixon says in his trademark growl. “But they also know that there is hope for mankind in their sacrifice.”

What makes the video more haunting is that the speech itself is real. Although never broadcast, it was written for Nixon by speechwriter William Safire in the event that the moon landing did fail.

The deepfake was created by a team from MIT’s Center for Advanced Virtuality and put on display at the IDFA documentary festival in Amsterdam.

In order to recreate Nixon’s famous voice, the MIT team partnered with technicians from Ukraine and Israel and used advanced machine learning techniques.

We’ve covered many deepfakes here on AI News. While many are amusing, there are serious concerns that deepfakes could be used for malicious purposes such as blackmail or manipulation.

Ahead of the US presidential elections, some campaigners have worked to increase awareness of deepfakes and to get social media platforms to help tackle any dangerous videos.

Back in 2018, House Speaker Nancy Pelosi was the victim of a deepfake that went viral across social media, making her appear drunk and slurring her words. Pelosi criticised Facebook’s response, or lack thereof, and later told California’s KQED: “I think they have proven — by not taking down something they know is false — that they were willing enablers of the Russian interference in our election.”

As part of a bid to persuade the social media giant to change its policies on deepfakes, Israeli startup Canny AI created a deepfake of Facebook CEO Mark Zuckerberg – making it appear like he said: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

Last month, Facebook pledged to crack down on deepfakes ahead of the US presidential elections. However, the new rules don’t cover videos altered for parody or those edited “solely to omit or change the order of words,” which will not sound encouraging to those wanting a firm stance against potential voter manipulation.


]]>
https://www.artificialintelligence-news.com/2020/02/06/deepfake-nixon-moon-landing-failed/feed/ 0