deepfakes Archives - AI News
https://www.artificialintelligence-news.com/tag/deepfakes/
Mon, 03 Jun 2024 12:44:46 +0000

X now permits AI-generated adult content
https://www.artificialintelligence-news.com/2024/06/03/x-permits-ai-generated-adult-content/
Mon, 03 Jun 2024 12:44:45 +0000

Social media network X has updated its rules to formally permit users to share consensually-produced AI-generated NSFW content, provided it is clearly labelled. This change aligns with previous experiments under Elon Musk’s leadership, which involved hosting adult content within specific communities.

“We believe that users should be able to create, distribute, and consume material related to sexual themes as long as it is consensually produced and distributed. Sexual expression, visual or written, can be a legitimate form of artistic expression,” X’s updated ‘adult content’ policy states.

The policy further elaborates: “We believe in the autonomy of adults to engage with and create content that reflects their own beliefs, desires, and experiences, including those related to sexuality. We balance this freedom by restricting exposure to adult content for children or adult users who choose not to see it.”

Users can mark their posts as containing sensitive media, ensuring that such content is restricted from users under 18 or those who haven’t provided their birth dates.
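The gating described above amounts to a simple check at render time. As a rough illustration (the function name, fields, and 18+ threshold are assumptions based on the policy wording, not X's actual code):

```python
from datetime import date

ADULT_AGE = 18  # assumed threshold, per the "under 18" policy wording

def can_view_sensitive_media(birth_date, today=None):
    """Illustrative age gate: viewers with no birth date on file,
    or under 18, are not shown media marked as sensitive."""
    if birth_date is None:  # no birth date provided -> restricted
        return False
    today = today or date.today()
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    return age >= ADULT_AGE

print(can_view_sensitive_media(None))                                # False
print(can_view_sensitive_media(date(2010, 5, 1), date(2024, 6, 3)))  # False
print(can_view_sensitive_media(date(1990, 5, 1), date(2024, 6, 3)))  # True
```

Defaulting to "restricted" when no birth date is on file mirrors the fail-closed behaviour the policy describes.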

While X’s violent content rules have similar guidelines, the platform maintains a strict stance against excessively gory content and depictions of sexual violence. Explicit threats or content inciting or glorifying violence remain prohibited.

X’s decision to allow graphic content is aimed at enabling users to participate in discussions about current events, including sharing relevant images and videos. 

Although X has never outright banned porn, these new clauses could pave the way for developing services centred around adult content, potentially creating a competitor to services like OnlyFans and enhancing its revenue streams. This would further Musk’s vision of X becoming an “everything app,” similar to China’s WeChat.

A 2022 Reuters report, citing internal company documents, indicated that approximately 13% of posts on the platform contained adult content. This percentage has likely increased, especially with the proliferation of porn bots on X.

See also: Elon Musk’s xAI secures $6B to challenge OpenAI in AI race

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Microsoft: China plans to disrupt elections with AI-generated disinformation
https://www.artificialintelligence-news.com/2024/04/05/microsoft-china-plans-disrupt-elections-ai-generated-disinformation/
Fri, 05 Apr 2024 10:08:46 +0000

Beijing is expected to ramp up sophisticated AI-generated disinformation campaigns to influence several high-profile elections in 2024, according to Microsoft’s threat intelligence team.

Microsoft warned that state-backed Chinese cyber groups – with assistance from North Korean actors – “are likely to target” the presidential and legislative elections in countries such as the US, South Korea, and India this year. Their primary tactic is projected to be creating and disseminating AI-generated content on social media, skewed to “benefit their positions” in these races.

“While the impact of such content in swaying audiences remains low, China’s increasing experimentation in augmenting memes, videos, and audio will continue – and may prove effective down the line,” Microsoft cautioned in the report released Friday.

The company cited China’s recent “dry run” utilising AI-synthesised disinformation during Taiwan’s January presidential election as a harbinger of this emerging threat. Microsoft assessed that a pro-Beijing group known as Storm-1376, or Spamouflage Dragon, made the first documented attempt by a state actor to influence a foreign vote using AI-manufactured content.

Tactics deployed by the Chinese-backed operatives included posting fake audio clips likely “generated by AI” that depicted a former presidential candidate endorsing a rival, as well as AI-generated memes levelling unfounded corruption allegations against the ultimately victorious pro-sovereignty candidate William Lai. The group also created AI-rendered “news anchors” to broadcast disinformation about Lai’s personal life.

“As populations in India, South Korea, and the United States head to the polls, we are likely to see Chinese cyber and influence actors, and to some extent North Korean cyber actors, work toward targeting these elections,” the Microsoft report stated.

The company added that Chinese groups are already attempting to map divisive issues and voting blocs in the US through orchestrated social media campaigns, potentially “to gather intelligence and precision on key voting demographics ahead of the US Presidential election.”

While flagging the risk, Microsoft acknowledged that AI-enabled disinformation has so far achieved limited success in shaping public opinion globally. But it warned that Beijing’s growing investment and increasing sophistication with the technology poses an escalating threat to the integrity of democratic elections worldwide.

(Photo by Element5 Digital)

See also: How to safeguard your business from AI-generated deepfakes

How to safeguard your business from AI-generated deepfakes
https://www.artificialintelligence-news.com/2024/04/03/how-to-safeguard-your-business-ai-generated-deepfakes/
Wed, 03 Apr 2024 15:43:03 +0000

Recently, cybercriminals used ‘deepfake’ videos of a multinational company’s executives to convince the company’s Hong Kong-based employees to wire out US$25.6 million. Based on a video conference call featuring multiple deepfakes, the employees believed that their UK-based chief financial officer had requested the transfer. Police have reportedly arrested six people in connection with the scam. This use of AI technology is dangerous and manipulative, and without proper guidelines and frameworks in place, more organizations risk falling victim to AI scams like deepfakes.

Deepfakes 101 and their rising threat 

Deepfakes are forms of digitally altered media — including photos, videos and audio clips — that seem to depict a real person. They are created by training an AI system on real clips featuring a person, and then using that system to generate realistic (yet inauthentic) new media. Deepfake use is becoming more common: the Hong Kong case was the latest in a series of high-profile incidents in recent weeks. Fake, explicit images of Taylor Swift circulated on social media; the political party of an imprisoned election candidate in Pakistan used a deepfake video of him to deliver a speech; and a deepfake ‘voice clone’ of President Biden called primary voters to tell them not to vote.

Less high-profile cases of deepfake use by cybercriminals have also been rising in both scale and sophistication. In the banking sector, cybercriminals are now attempting to overcome voice authentication by using voice clones of people to impersonate users and gain access to their funds. Banks have responded by improving their abilities to identify deepfake use and increasing authentication requirements. 

Cybercriminals have also targeted individuals with ‘spear phishing’ attacks that use deepfakes. A common approach is to deceive a person’s family members and friends by using a voice clone to impersonate someone in a phone call and ask for funds to be transferred to a third-party account. Last year, a survey by McAfee found that 70% of respondents were not confident they could distinguish between real voices and voice clones, and that nearly half would respond to requests for funds if the family member or friend calling claimed to have been robbed or involved in a car accident.

Cybercriminals have also called people pretending to be tax authorities, banks, healthcare providers and insurers in efforts to gain financial and personal details. 

In February, the Federal Communications Commission ruled that phone calls using AI-generated human voices are illegal unless made with prior express consent of the called party. The Federal Trade Commission also finalized a rule prohibiting AI impersonation of government organizations and businesses and proposed a similar rule prohibiting AI impersonation of individuals. This adds to a growing list of legal and regulatory measures being put in place around the world to combat deepfakes. 

Stay protected against deepfakes 

To protect employees and brand reputation against deepfakes, leaders should take the following steps:

  1. Educate employees on an ongoing basis, both about AI-enabled scams and, more generally, about new AI capabilities and their risks. 
  2. Upgrade phishing guidance to include deepfake threats. Many companies have already educated employees about phishing emails and urged caution when receiving suspicious requests via unsolicited emails. Such guidance should incorporate AI deepfake scams and note that they may arrive not just via text and email, but also as video, images and audio. 
  3. Appropriately increase or calibrate authentication of employees, business partners and customers. For example, require more than one mode of authentication depending on the sensitivity and risk of a decision or transaction. 
  4. Consider the impacts of deepfakes on company assets, like logos, advertising characters and advertising campaigns. Such company assets can easily be replicated using deepfakes and spread quickly via social media and other internet channels. Consider how your company will mitigate these risks and educate stakeholders. 
  5. Expect more and better deepfakes, given the pace of improvement in generative AI, the number of major election processes underway in 2024, and the ease with which deepfakes can propagate between people and across borders. 
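Step 3 above, calibrating authentication to risk, can be sketched as a simple policy table. The thresholds and factor names below are invented for illustration; a real policy would be tuned to the organisation:

```python
def required_auth_factors(amount_usd, new_payee=False):
    """Illustrative risk-based policy: higher-value or unusual
    transfers demand more independent authentication modes."""
    factors = ["password"]
    if amount_usd >= 1_000 or new_payee:
        factors.append("one-time code")            # second factor
    if amount_usd >= 100_000:
        factors.append("verified video callback")  # out-of-band human check
    return factors

print(required_auth_factors(250))                    # ['password']
print(required_auth_factors(5_000, new_payee=True))  # adds a one-time code
print(required_auth_factors(25_600_000))             # all three factors
```

Under a policy like this, a US$25.6 million transfer would have required an independent out-of-band check rather than relying on a (fakeable) video call alone.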

Though deepfakes are a cybersecurity concern, companies should also think of them as complex and emerging phenomena with broader repercussions. A proactive and thoughtful approach to addressing deepfakes can help educate stakeholders and ensure that measures to combat them are responsible, proportionate and appropriate.

(Photo by Markus Spiske)

See also: UK and US sign pact to develop AI safety tests

UK Home Secretary sounds alarm over deepfakes ahead of elections
https://www.artificialintelligence-news.com/2024/02/26/uk-home-secretary-alarm-deepfakes-ahead-elections/
Mon, 26 Feb 2024 16:46:48 +0000

Criminals and hostile state actors could hijack Britain’s democratic process by deploying AI-generated “deepfakes” to mislead voters, UK Home Secretary James Cleverly cautioned in remarks ahead of meetings with major tech companies. 

Speaking to The Times, Cleverly emphasised the rapid advancement of AI technology and its potential to undermine elections not just in the UK but globally. He warned that malign actors working on behalf of nations like Russia and Iran could generate thousands of highly realistic deepfake images and videos to disrupt the democratic process.

“Increasingly today the battle of ideas and policies takes place in the ever-changing and expanding digital sphere,” Cleverly told the newspaper. “The era of deepfake and AI-generated content to mislead and disrupt is already in play.”

The Home Secretary plans to urge collective action from Silicon Valley giants like Google, Meta, Apple, and YouTube when he meets with them this week. His aim is to implement “rules, transparency, and safeguards” to protect democracy from deepfake disinformation.

Cleverly’s warnings come after a series of deepfake audios imitating Labour leader Keir Starmer and London Mayor Sadiq Khan circulated online last year. Fake BBC News videos purporting to examine PM Rishi Sunak’s finances have also surfaced.

The tech meetings follow a recent pact signed by major AI companies like Adobe, Amazon, Google, and Microsoft during the Munich Security Conference to take “reasonable precautions” against disruptions caused by deepfake content during elections worldwide.

As concerns over the proliferation of deepfakes continue to grow, the world must confront the challenges they pose in shaping public discourse and potentially influencing electoral outcomes.

(Image Credit: Lauren Hurley / No 10 Downing Street under OGL 3 license)

See also: Stability AI previews Stable Diffusion 3 text-to-image model

McAfee unveils AI-powered deepfake audio detection
https://www.artificialintelligence-news.com/2024/01/08/mcafee-unveils-ai-powered-deepfake-audio-detection/
Mon, 08 Jan 2024 10:49:16 +0000

McAfee has revealed a pioneering AI-powered deepfake audio detection technology, Project Mockingbird, during CES 2024. This proprietary technology aims to defend consumers against the rising menace of cybercriminals employing fabricated, AI-generated audio for scams, cyberbullying, and manipulation of public figures’ images.

Generative AI tools have enabled cybercriminals to craft convincing scams, including voice cloning to impersonate family members seeking money or manipulating authentic videos with “cheapfakes.” These tactics manipulate content to deceive individuals, creating a heightened challenge for consumers to discern between real and manipulated information.

In response to this challenge, McAfee Labs developed an industry-leading AI model, part of the Project Mockingbird technology, to detect AI-generated audio. This technology employs contextual, behavioural, and categorical detection models, achieving an impressive 90 percent accuracy rate.
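McAfee has not published Project Mockingbird's internals, but combining several specialised detectors into one estimate is a common pattern. A generic sketch, in which the model names, weights, and threshold are assumptions:

```python
def combined_score(scores, weights=None):
    """Weighted average of per-model probabilities that a clip is AI-generated."""
    weights = weights or {name: 1.0 for name in scores}
    total = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total

# Hypothetical per-model outputs for one audio clip
clip = {"contextual": 0.92, "behavioural": 0.88, "categorical": 0.95}
score = combined_score(clip)
print(f"{score:.2f}")  # 0.92
print("likely AI-generated" if score >= 0.5 else "likely authentic")
```

Reporting a probability rather than a hard yes/no verdict matches the weather-forecast framing in Grobman's quote.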

Steve Grobman, CTO at McAfee, said: “Much like a weather forecast indicating a 70 percent chance of rain helps you plan your day, our technology equips you with insights to make educated decisions about whether content is what it appears to be.”

Project Mockingbird offers diverse applications, from countering AI-generated scams to tackling disinformation. By empowering consumers to distinguish between authentic and manipulated content, McAfee aims to protect users from falling victim to fraudulent schemes and ensure a secure digital experience.

Deep concerns about deepfakes

As deepfake technology becomes more sophisticated, consumer concerns are on the rise. McAfee’s December 2023 Deepfakes Survey highlights:

  • 84% of Americans are concerned about deepfake usage in 2024
  • 68% are more concerned than a year ago
  • 33% have experienced or witnessed a deepfake scam, rising to 40% among 18–34 year-olds
  • Top concerns include election influence (52%), undermining public trust in media (48%), impersonation of public figures (49%), proliferation of scams (57%), cyberbullying (44%), and sexually explicit content creation (37%)

McAfee’s unveiling of Project Mockingbird marks a significant leap in the ongoing battle against AI-generated threats. As countries like the US and UK enter a pivotal election year, it’s crucial that consumers are given the best chance possible at grappling with the pervasive influence of deepfake technology.

(Photo by Markus Spiske on Unsplash)

See also: MyShell releases OpenVoice voice cloning AI

China’s deepfake laws come into effect today
https://www.artificialintelligence-news.com/2023/01/10/chinas-deepfake-laws-come-into-effect-today/
Tue, 10 Jan 2023 16:46:21 +0000

China will begin enforcing its strict new rules around the creation of deepfakes from today.

Deepfakes are increasingly being used for manipulation and humiliation. We’ve seen deepfakes of disgraced FTX founder Sam Bankman-Fried used to commit fraud, of Ukrainian President Volodymyr Zelenskyy used to spread disinformation, and of US House Speaker Nancy Pelosi altered to make her appear drunk.

Last month, the Cyberspace Administration of China (CAC) announced rules to clamp down on deepfakes.

“In recent years, in-depth synthetic technology has developed rapidly. While serving user needs and improving user experiences, it has also been used by some criminals to produce, copy, publish, and disseminate illegal and bad information, defame, detract from the reputation and honour of others, and counterfeit others,” explains the CAC.

Providers of services for creating synthetic content will be obligated to ensure their AIs aren’t misused for illegal and/or harmful purposes. Furthermore, any content that was created using an AI must be clearly labelled with a watermark.

China’s new rules come into force today (10 January 2023) and will also require synthetic service providers to:

  • Not illegally process personal information
  • Periodically review, evaluate, and verify algorithms
  • Establish management systems and technical safeguards
  • Authenticate users with real identity information
  • Establish mechanisms for complaints and reporting
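The labelling obligation above can be pictured as attaching a visible disclosure plus a provenance record to every generated item. The schema below is an assumption for illustration; the rules mandate labelling but do not prescribe this format:

```python
import hashlib
import json

def label_synthetic_content(media_bytes, provider, user_id):
    """Attach a visible 'AI-generated' disclosure and a provenance
    record to a piece of synthetic media (illustrative schema)."""
    return {
        "label": "AI-generated content",  # visible disclosure requirement
        "provider": provider,
        "user_id": user_id,               # ties into real-identity authentication
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
    }

record = label_synthetic_content(b"<synthetic video bytes>", "ExampleSynthCo", "u-1001")
print(json.dumps(record, indent=2))
```

Hashing the media lets a complaints mechanism later verify which provider produced a given item, which is the kind of traceability the CAC's reporting requirements imply.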

The CAC notes that effective governance of synthetic technologies is a multi-entity effort that will require the participation of government, enterprises, and citizens. Such participation, the CAC says, will promote the legal and responsible use of deep synthetic technologies while minimising the associated risks.

(Photo by Henry Chen on Unsplash)

Related: AI & Big Data Expo: Exploring ethics in AI and the guardrails required

Google no longer accepts deepfake projects on Colab
https://www.artificialintelligence-news.com/2022/05/31/google-no-longer-accepts-deepfake-projects-on-colab/
Tue, 31 May 2022 14:01:05 +0000

Google has added “creating deepfakes” to its list of projects that are banned from its Colab service.

Colab is a product from Google Research that enables AI researchers, data scientists, or students to write and execute Python in their browsers.

With little fanfare, Google added deepfakes to its list of banned projects.

Deepfakes use generative neural network architectures – such as autoencoders or generative adversarial networks (GANs) – to manipulate or generate visual and audio content.
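To see the principle at toy scale, here is a minimal linear autoencoder in NumPy: it learns to compress data into a small latent code and reconstruct it, the same compress-then-decode idea that face-swap pipelines apply at vastly larger scale with deep convolutional networks trained on face images. The dimensions and hyperparameters are arbitrary choices for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                # 200 samples, 8 features
W_enc = rng.normal(scale=0.1, size=(8, 3))   # encoder: 8 -> 3 latent dims
W_dec = rng.normal(scale=0.1, size=(3, 8))   # decoder: 3 -> 8

lr = 0.01
for _ in range(500):
    Z = X @ W_enc        # latent codes
    err = Z @ W_dec - X  # reconstruction error
    # gradient descent on mean squared reconstruction error
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final_loss = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
print(round(final_loss, 3))  # falls below the raw-data variance as training proceeds
```

Classic deepfake face-swap tools extend this idea by training one shared encoder with two decoders, one per identity, then decoding person A's latent codes with person B's decoder.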

The technology is often used for malicious purposes such as generating sexual content of individuals without their consent, fraud, and the creation of deceptive content aimed at changing views and influencing democratic processes.

Such concerns around the use of deepfakes are likely the reason behind Google’s decision to ban relevant projects.

It’s a controversial decision. Banning such projects isn’t going to stop anyone from developing them and may also hinder efforts to build tools for countering deepfakes at a time when they’re most needed.

In March, a deepfake purportedly showing Ukrainian President Volodymyr Zelenskyy asking troops to lay down their arms in their fight to defend their homeland from Russia’s invasion was posted to a hacked news website.

“I only advise that the troops of the Russian Federation lay down their arms and return home,” Zelenskyy said in an official video to refute the fake. “We are at home and defending Ukraine.”

Fortunately, the deepfake was of low quality by today’s standards. The fake Zelenskyy had a comically large and noticeably pixelated head compared to the rest of his body. The video probably didn’t fool anyone, but it could have had serious consequences if people did believe it.

One Russia-linked influence campaign – removed by Facebook and Twitter in March – used AI-generated faces for a fake “editor-in-chief” and “columnist” for a linked propaganda website. That one was more believable and likely fooled some people.

However, not all deepfakes are malicious. They’re also used for music, activism, satire, and even helping police solve crimes.

Historical data from archive.org suggests Google silently added deepfakes to its list of projects banned from Colab sometime between 14–24 May 2022.

(Photo by Markus Spiske on Unsplash)

Kendrick Lamar uses deepfakes in latest music video
https://www.artificialintelligence-news.com/2022/05/09/kendrick-lamar-uses-deepfakes-in-latest-music-video/
Mon, 09 May 2022 12:10:02 +0000

American rapper Kendrick Lamar has made use of deepfakes for his latest music video.

Deepfakes use generative neural network architectures – such as autoencoders or generative adversarial networks (GANs) – to manipulate or generate visual and audio content.

Lamar is widely considered one of the greatest rappers of all time. However, he’s regularly proved his creative mind isn’t limited to his rapping talent.

For his track ‘The Heart Part 5’, Lamar has made use of deepfake technology to seamlessly morph his face into those of various celebrities, including Kanye West, Nipsey Hussle, Will Smith, and even O.J. Simpson.

For due credit, the deepfake element was created by a studio called Deep Voodoo.

Deepfakes are often used for entertainment purposes, including for films and satire. However, they’re also being used for nefarious purposes like the creation of ‘deep porn’ videos without the consent of those portrayed.

The ability to deceive has experts concerned about the social implications. Deepfakes could be used for fraud, misinformation, influencing public opinion, and interfering in democratic processes.

In March, a deepfake purportedly showing Ukrainian President Volodymyr Zelenskyy asking troops to lay down their arms in their fight to defend their homeland from Russia’s invasion was posted to a hacked news website.

“I only advise that the troops of the Russian Federation lay down their arms and return home,” Zelenskyy said in an official video to refute the fake. “We are at home and defending Ukraine.”

Fortunately, the deepfake was of very low quality by today’s standards. The fake Zelenskyy had a comically large and noticeably pixelated head compared to the rest of his body. The video probably didn’t fool anyone, but it could have had major consequences if people did believe it.

One Russia-linked influence campaign – removed by Facebook and Twitter in March – used AI-generated faces for a fake “editor-in-chief” and “columnist” for a linked propaganda website.

Each deepfake that is exposed raises public awareness. Artists like Kendrick Lamar using the technology for entertainment purposes will also help spread awareness that you can no longer necessarily believe what you see with your own eyes.

Related: Humans struggle to distinguish between real and AI-generated faces

Deepfakes are being used to push anti-Ukraine disinformation https://www.artificialintelligence-news.com/2022/03/01/deepfakes-are-being-used-push-anti-ukraine-disinformation/ https://www.artificialintelligence-news.com/2022/03/01/deepfakes-are-being-used-push-anti-ukraine-disinformation/#respond Tue, 01 Mar 2022 18:01:38 +0000 https://artificialintelligence-news.com/?p=11719 Influence operations with ties to Russia and Belarus have been found using deepfakes to push anti-Ukraine disinformation. Last week, AI News reported on the release of a study that found humans can generally no longer distinguish between real and AI-generated “deepfake” faces. As humans, we’re somewhat trained to believe what we see with our eyes.... Read more »

The post Deepfakes are being used to push anti-Ukraine disinformation appeared first on AI News.

Influence operations with ties to Russia and Belarus have been found using deepfakes to push anti-Ukraine disinformation.

Last week, AI News reported on the release of a study that found humans can generally no longer distinguish between real and AI-generated “deepfake” faces.

As humans, we’re somewhat trained to believe what we see with our eyes. Many believed that it was only a matter of time before Russia took advantage of deepfakes and our human psychology to take its vast disinformation campaigns to the next level.

Facebook and Twitter removed two anti-Ukraine “covert influence operations” over the weekend. One had ties to Russia, while the other was connected to Belarus.

As we’ve often seen around things like Covid-19 disinformation, the Russian propaganda operation included websites aimed at pushing readers towards anti-Ukraine views. The campaign had links with the News Front and South Front websites which the US government has linked to Russian intelligence disinformation efforts.

However, Facebook said this particular campaign used AI-generated faces to give the impression that its content was written by credible columnists. Here’s one “columnist” and the “editor-in-chief” of one propaganda website:

Ears are often still a giveaway with AI-generated faces like those created on ‘This Person Does Not Exist’. The fictional woman’s mismatched earrings are one indicator, while the man’s right ear is clearly not quite right.

Part of the campaign was to promote the idea that Russia’s military operation is going well and Ukraine’s efforts are going poorly. We know that Russia’s state broadcasters have only acknowledged ludicrously small losses—including just one Russian soldier fatality.

On Saturday, state-owned news agency RIA-Novosti even accidentally published and then deleted an article headlined “The arrival of Russia in a new world” in what appeared to be a pre-prepared piece expecting a swift victory. The piece piled praise on Putin’s regime and claimed that Russia was returning to lead a new world order to rectify the “terrible catastrophe” that was the collapse of the Soviet Union.

So far, Russia is estimated to have lost around 5,300 troops, 816 armoured combat vehicles, 101 tanks, 74 guns, 29 warplanes, 29 helicopters, and two ships/motorboats as a result of its decision to invade Ukraine.

The slow progress and mounting losses appear to have angered Russia, with its military now committing what appear to be clear war crimes: targeting civilian areas, bombing hospitals and kindergartens, and using thermobaric and cluster munitions indiscriminately. Putin has even hinted at using nuclear weapons offensively rather than defensively in an unprecedented escalation.

Many ordinary Russian citizens are becoming outraged at what their government is doing to Ukraine, where many have family and friends, and with which they share deep cultural ties. Russia appears to be ramping up its propaganda in response as the country finds itself increasingly isolated.

Western governments and web giants have clamped down on Russia’s state propagandists in recent days.

British telecoms regulator Ofcom has launched 15 investigations into state broadcaster RT after observing “a significant increase in the number of programmes on the RT service that warrant investigation under our Broadcasting Code.”

Facebook has decided to block access to RT and Sputnik across the EU following “a number” of government requests from within the EU. Twitter, for its part, has announced that it would label tweets from Russian state media accounts.

Hacker collective Anonymous claims to have carried out over 1,500 cyberattacks against Russian government sites, transport infrastructure, banks, and state media to counter their falsehoods and broadcast the truth about the invasion to Russian citizens.

Russia’s media regulator Roskomnadzor, for its part, has restricted Russian users’ access to Facebook and Twitter.

(Photo by Max Kukurudziak on Unsplash)

Related: Ukraine is using Starlink to maintain global connectivity

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Humans struggle to distinguish between real and AI-generated faces https://www.artificialintelligence-news.com/2022/02/21/humans-struggle-distinguish-real-and-ai-generated-faces/ https://www.artificialintelligence-news.com/2022/02/21/humans-struggle-distinguish-real-and-ai-generated-faces/#respond Mon, 21 Feb 2022 18:19:36 +0000 https://artificialintelligence-news.com/?p=11696 According to a new paper, AI-generated faces have become so advanced that humans now cannot distinguish between real and fake more often than not. “Our evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable—and more trustworthy—than real faces,”... Read more »

The post Humans struggle to distinguish between real and AI-generated faces appeared first on AI News.

According to a new paper, AI-generated faces have become so advanced that humans now fail to distinguish real from fake more often than not.

“Our evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable—and more trustworthy—than real faces,” the researchers explained.

The researchers – Sophie J. Nightingale of the Department of Psychology, Lancaster University, and Hany Farid of the Department of Electrical Engineering and Computer Sciences, University of California, Berkeley – highlight the worrying trend of “deepfakes” being weaponised.

Video, audio, text, and imagery generated by generative adversarial networks (GANs) are increasingly being used for nonconsensual intimate imagery, financial fraud, and disinformation campaigns.

GANs work by pitting two neural networks – a generator and a discriminator – against each other. The generator starts from random noise and keeps refining its output to avoid penalisation from the discriminator, which is simultaneously trained to tell real images from synthesised ones. This process continues until the discriminator can no longer distinguish a synthesised face from a real one.
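That adversarial tug-of-war can be sketched in a few lines of NumPy. The example below is a deliberately minimal, hypothetical toy (a linear generator learning a 1-D Gaussian, nothing like the deep convolutional networks behind StyleGAN2), but the generator/discriminator loop is the same idea the researchers describe:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# "Real" data the generator must learn to imitate: samples from N(4, 0.5)
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

# Generator (w_g, b_g) maps noise z to a sample; discriminator (w_d, b_d)
# is logistic regression scoring how "real" a sample looks
w_g, b_g = 0.1, 0.0
w_d, b_d = 0.0, 0.0
lr, batch = 0.02, 64

for step in range(3000):
    z = rng.normal(0.0, 1.0, batch)      # random noise input
    fake = w_g * z + b_g                 # generator output
    real = real_batch(batch)

    # Discriminator update: push D(real) -> 1 and D(fake) -> 0.
    # For a sigmoid output, the cross-entropy gradient w.r.t. the logit
    # is simply (prediction - label).
    p_real = sigmoid(w_d * real + b_d)
    p_fake = sigmoid(w_d * fake + b_d)
    g_logit = np.concatenate([p_real - 1.0, p_fake - 0.0])
    x_all = np.concatenate([real, fake])
    w_d -= lr * np.mean(g_logit * x_all)
    b_d -= lr * np.mean(g_logit)

    # Generator update: push D(fake) -> 1, i.e. fool the discriminator
    p_fake = sigmoid(w_d * fake + b_d)
    dx = (p_fake - 1.0) * w_d            # d(loss)/d(fake sample)
    w_g -= lr * np.mean(dx * z)
    b_g -= lr * np.mean(dx)

fakes = w_g * rng.normal(0.0, 1.0, 1000) + b_g
print(f"generated mean ≈ {fakes.mean():.2f} (real data mean is 4.0)")
```

At equilibrium the discriminator’s output settles around chance, which is exactly the regime the study found human judges in when classifying synthesised faces.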

Just as the discriminator could no longer distinguish a synthesised face from a real one, neither could human participants. In the study, the human participants identified fake images just 48.2 percent of the time. 

Accuracy was higher for real East Asian and White male faces than for female faces. Among synthetic faces of both sexes, however, White faces were the least accurately identified, with White male faces identified less accurately than White female faces.

The researchers hypothesised that “White faces are more difficult to classify because they are overrepresented in the StyleGAN2 training dataset and are therefore more realistic.”

Here are the most (top and upper-middle lines) and least (bottom and lower-middle) accurately classified real (R) and synthetic (S) faces:

There’s a glimmer of hope for humans: after being given training on how to spot fakes, participants were able to distinguish real faces 59 percent of the time. That’s not a particularly comfortable percentage, but it at least tips the scales towards humans spotting fakes more often than not.

What sets the alarm bells ringing again is that synthetic faces were rated more “trustworthy” than real ones. On a scale of 1 (very untrustworthy) to 7 (very trustworthy), real faces received an average rating of 4.48, below the 4.82 average for synthetic faces.

“A smiling face is more likely to be rated as trustworthy, but 65.5 per cent of our real faces and 58.8 per cent of synthetic faces are smiling, so facial expression alone cannot explain why synthetic faces are rated as more trustworthy,” wrote the researchers.

The results of the paper show the importance of developing tools that can spot the increasingly small differences between real and synthetic faces, because humans will struggle even if everyone were specifically trained.

With Western intelligence agencies calling out fabricated content that Russian authorities allegedly planned to use to justify an invasion of Ukraine, the increasing ease with which such media can be generated en masse poses a serious threat that is no longer the stuff of fiction.

(Photo by NeONBRAND on Unsplash)

Related: James Cameron warns of the dangers of deepfakes

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.
