misinformation Archives – AI News
https://www.artificialintelligence-news.com/tag/misinformation/

Microsoft: China plans to disrupt elections with AI-generated disinformation
Fri, 05 Apr 2024
https://www.artificialintelligence-news.com/2024/04/05/microsoft-china-plans-disrupt-elections-ai-generated-disinformation/

Beijing is expected to ramp up sophisticated AI-generated disinformation campaigns to influence several high-profile elections in 2024, according to Microsoft’s threat intelligence team.

Microsoft warned that state-backed Chinese cyber groups – with assistance from North Korean actors – “are likely to target” the presidential and legislative elections in countries such as the US, South Korea, and India this year. Their primary tactic is projected to be the creation and dissemination on social media of AI-generated content skewed to “benefit their positions” in these races.

“While the impact of such content in swaying audiences remains low, China’s increasing experimentation in augmenting memes, videos, and audio will continue – and may prove effective down the line,” Microsoft cautioned in the report released Friday.

The company cited China’s recent “dry run” utilising AI-synthesised disinformation during Taiwan’s January presidential election as a harbinger of this emerging threat. Microsoft assessed that a pro-Beijing group known as Storm-1376, also called Spamouflage Dragon, made the first documented attempt by a state actor to influence a foreign vote using AI-manufactured content.

Tactics deployed by the Chinese-backed operatives included posting fake audio clips likely “generated by AI” that depicted a former presidential candidate endorsing a rival, as well as AI-generated memes levelling unfounded corruption allegations against the ultimately victorious pro-sovereignty candidate William Lai. The group also created AI-rendered “news anchors” to broadcast disinformation about Lai’s personal life.

“As populations in India, South Korea, and the United States head to the polls, we are likely to see Chinese cyber and influence actors, and to some extent North Korean cyber actors, work toward targeting these elections,” the Microsoft report stated.

The company added that Chinese groups are already attempting to map divisive issues and voting blocs in the US through orchestrated social media campaigns, potentially “to gather intelligence and precision on key voting demographics ahead of the US Presidential election.”

While flagging the risk, Microsoft acknowledged that AI-enabled disinformation has so far achieved limited success in shaping public opinion globally. But it warned that Beijing’s growing investment and increasing sophistication with the technology poses an escalating threat to the integrity of democratic elections worldwide.

(Photo by Element5 Digital)

See also: How to safeguard your business from AI-generated deepfakes

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

UK Home Secretary sounds alarm over deepfakes ahead of elections
Mon, 26 Feb 2024
https://www.artificialintelligence-news.com/2024/02/26/uk-home-secretary-alarm-deepfakes-ahead-elections/

Criminals and hostile state actors could hijack Britain’s democratic process by deploying AI-generated “deepfakes” to mislead voters, UK Home Secretary James Cleverly cautioned in remarks ahead of meetings with major tech companies. 

Speaking to The Times, Cleverly emphasised the rapid advancement of AI technology and its potential to undermine elections not just in the UK but globally. He warned that malign actors working on behalf of nations like Russia and Iran could generate thousands of highly realistic deepfake images and videos to disrupt the democratic process.

“Increasingly today the battle of ideas and policies takes place in the ever-changing and expanding digital sphere,” Cleverly told the newspaper. “The era of deepfake and AI-generated content to mislead and disrupt is already in play.”

The Home Secretary plans to urge collective action from Silicon Valley giants like Google, Meta, Apple, and YouTube when he meets with them this week. His aim is to implement “rules, transparency, and safeguards” to protect democracy from deepfake disinformation.

Cleverly’s warnings come after a series of deepfake audios imitating Labour leader Keir Starmer and London Mayor Sadiq Khan circulated online last year. Fake BBC News videos purporting to examine PM Rishi Sunak’s finances have also surfaced.

The tech meetings follow a recent pact signed by major AI companies like Adobe, Amazon, Google, and Microsoft during the Munich Security Conference to take “reasonable precautions” against disruptions caused by deepfake content during elections worldwide.

As concerns over the proliferation of deepfakes continue to grow, the world must confront the challenges they pose in shaping public discourse and potentially influencing electoral outcomes.

(Image Credit: Lauren Hurley / No 10 Downing Street under OGL 3 license)

See also: Stability AI previews Stable Diffusion 3 text-to-image model


President Zelenskyy deepfake asks Ukrainians to ‘lay down arms’
Thu, 17 Mar 2022
https://www.artificialintelligence-news.com/2022/03/17/president-zelenskyy-deepfake-asks-ukrainians-lay-down-arms/

A deepfake of President Zelenskyy calling on citizens to “lay down arms” was posted to a hacked Ukrainian news website and shared across social networks.

The deepfake purports to show Zelenskyy declaring that Ukraine has “decided to return Donbas” to Russia and that his nation’s efforts had failed.

Following an alleged hack, the deepfake was first posted to the website of Ukrainian news channel TV24. It was then shared across social networks, including Facebook and Twitter.

Nathaniel Gleicher, Head of Security Policy for Facebook owner Meta, wrote in a tweet:

“Earlier today, our teams identified and removed a deepfake video claiming to show President Zelensky issuing a statement he never did.

It appeared on a reportedly compromised website and then started showing across the internet.”

The deepfake itself is poor by today’s standards, with fake Zelenskyy having a comically large and noticeably pixelated head compared to the rest of his body.

It shouldn’t have fooled anyone, but Zelenskyy posted a video to his Instagram to call out the fake anyway.

“I only advise that the troops of the Russian Federation lay down their arms and return home,” Zelenskyy said in his official video. “We are at home and defending Ukraine.”

Earlier this month, the Ukrainian government posted a statement warning soldiers and civilians not to believe any videos of Zelenskyy claiming to surrender:

“Imagine seeing Vladimir Zelensky on TV making a surrender statement. You see it, you hear it – so it’s true. But this is not the truth. This is deepfake technology.

This will not be a real video, but created through machine learning algorithms.

Videos made through such technologies are almost impossible to distinguish from the real ones.

Be aware – this is a fake! The goal is to disorient, sow panic, disbelieve citizens, and incite our troops to retreat.”

Fortunately, this deepfake was quite easy to spot – even though humans now often find well-made deepfakes impossible to distinguish from reality – and could actually help to raise awareness of how such content is used to influence and manipulate.

Earlier this month, AI News reported on how Facebook and Twitter removed two anti-Ukraine disinformation campaigns linked to Russia and Belarus. One of the campaigns even used AI-generated faces for a fake “editor-in-chief” and “columnist” for a linked propaganda website.

Both cases in the past month show the danger of deepfakes and the importance of raising public awareness and developing tools for countering such content before it’s able to spread.

(Image Credit: President.gov.ua used without changes under CC BY 4.0 license)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Deepfakes are being used to push anti-Ukraine disinformation
Tue, 01 Mar 2022
https://www.artificialintelligence-news.com/2022/03/01/deepfakes-are-being-used-push-anti-ukraine-disinformation/

Influence operations with ties to Russia and Belarus have been found using deepfakes to push anti-Ukraine disinformation.

Last week, AI News reported on the release of a study that found humans can generally no longer distinguish between real and AI-generated “deepfake” faces.

As humans, we’re somewhat trained to believe what we see with our eyes. Many believed that it was only a matter of time before Russia took advantage of deepfakes and our human psychology to take its vast disinformation campaigns to the next level.

Facebook and Twitter removed two anti-Ukraine “covert influence operations” over the weekend. One had ties to Russia, while the other was connected to Belarus.

As we’ve often seen with Covid-19 disinformation, the Russian propaganda operation included websites aimed at pushing readers towards anti-Ukraine views. The campaign was connected to the News Front and South Front websites, which the US government has linked to Russian intelligence disinformation efforts.

However, Facebook said this particular campaign used AI-generated faces to give the impression its articles were written by credible columnists, including a fake “columnist” and the “editor-in-chief” of one propaganda website.

Ears are often still a giveaway with AI-generated faces like those created on ‘This Person Does Not Exist’. The fictional woman’s mismatched earrings are one indicator, while the man’s right ear is clearly not quite right.

Part of the campaign was to promote the idea that Russia’s military operation is going well and Ukraine’s efforts are going poorly. We know that Russia’s state broadcasters have only acknowledged ludicrously small losses—including just one Russian soldier fatality.

On Saturday, state-owned news agency RIA-Novosti even accidentally published and then deleted an article headlined “The arrival of Russia in a new world” in what appeared to be a pre-prepared piece expecting a swift victory. The piece piled praise on Putin’s regime and claimed that Russia was returning to lead a new world order to rectify the “terrible catastrophe” that was the collapse of the Soviet Union.

So far, Russia is estimated to have lost around 5,300 troops, 816 armoured combat vehicles, 101 tanks, 74 guns, 29 warplanes, 29 helicopters, and two ships/motorboats since its invasion of Ukraine began.

The slow progress and mounting losses appear to have angered Russia, whose military is now committing what appear to be very clear war crimes – targeting civilian areas, bombing hospitals and kindergartens, and using thermobaric and cluster munitions indiscriminately. Putin has even hinted at using nuclear weapons offensively rather than defensively in an unprecedented escalation.

Many ordinary Russian citizens are becoming outraged at what their government is doing to Ukraine, where many have family and friends and share deep cultural ties. Russia appears to be ramping up its propaganda in response as the country finds itself increasingly isolated.

Western governments and web giants have clamped down on Russia’s state propagandists in recent days.

British telecoms regulator Ofcom has launched 15 investigations into state broadcaster RT after observing “a significant increase in the number of programmes on the RT service that warrant investigation under our Broadcasting Code.”

Facebook has decided to block access to RT and Sputnik across the EU following “a number” of government requests from within the EU. Twitter, for its part, has announced that it would label tweets from Russian state media accounts.

Hacker collective Anonymous claims to have carried out over 1,500 cyberattacks against Russian government sites, transport infrastructure, banks, and state media to counter their falsehoods and broadcast the truth about the invasion to Russian citizens.

Russia’s media regulator Roskomnadzor, for its part, has restricted Russian users’ access to Facebook and Twitter.

(Photo by Max Kukurudziak on Unsplash)

Related: Ukraine is using Starlink to maintain global connectivity


James Cameron warns of the dangers of deepfakes
Mon, 24 Jan 2022
https://www.artificialintelligence-news.com/2022/01/24/james-cameron-warns-of-the-dangers-of-deepfakes/

Legendary director James Cameron has warned of the dangers that deepfakes pose to society.

Deepfakes leverage machine learning and AI techniques to convincingly manipulate or generate visual and audio content. Their high potential to deceive makes them a powerful tool for spreading disinformation, committing fraud, trolling, and more.

“Every time we improve these tools, we’re actually in a sense building a toolset to create fake media — and we’re seeing it happening now,” said Cameron in a BBC video interview.

“Right now the tools are — the people just playing around on apps aren’t that great. But over time, those limitations will go away. Things that you see and fully believe you’re seeing could be faked.”

Have you ever said “I’ll believe it when I see it with my own eyes,” or similar? I certainly have. As humans, we’re subconsciously trained to believe what we can see (unless it’s quite obviously faked).

The problem is amplified with today’s fast news cycle. It’s a well-known problem that many articles get shared based on their headline before moving on to the next story. Few people are going to stop to analyse images and videos for small imperfections.

Often, stories are shared with reactions to the headline alone, without reading for full context. This can lead to a butterfly effect: people see their contacts’ reactions to the headline, feel they don’t need additional context, and simply share in whatever emotional response the headline was designed to invoke (generally outrage).

“News cycles happen so fast, and people respond so quickly, you could have a major incident take place between the interval between when the deepfake drops and when it’s exposed as a fake,” says Cameron.

“We’ve seen situations — you know, Arab Spring being a classic example — where with social media, the uprising was practically overnight.”

It’s a difficult problem to tackle as it is. We’ve all seen the amount of disinformation around things such as the COVID-19 vaccines. However, an article posted with convincing deepfake media will be almost impossible to stop from being posted and/or shared widely.

AI tools for spotting the increasingly small differences between real and manipulated media will be key to preventing deepfakes from ever being posted. However, researchers have found that current tools can easily be deceived.

Images and videos that can be verified as original and authentic – using technologies like distributed ledgers – could also help give audiences confidence that the media they’re consuming hasn’t been manipulated, and that they really can trust their own eyes.
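As an illustration of the verification idea, here is a minimal, hypothetical sketch (not any deployed system): a publisher records a cryptographic fingerprint of the original file in a tamper-evident registry, and anyone can later check whether a copy matches the registered original. The identifiers and registry here are invented for the example.

```python
import hashlib


def fingerprint(data: bytes) -> str:
    """SHA-256 digest of the media file's bytes."""
    return hashlib.sha256(data).hexdigest()


# Publisher registers the fingerprint of the authentic clip at release.
authentic_video = b"...original video bytes..."
registry = {"clip-2022-001": fingerprint(authentic_video)}


def verify(claimed_id: str, data: bytes) -> bool:
    """True only if the bytes match the registered original exactly."""
    return registry.get(claimed_id) == fingerprint(data)


print(verify("clip-2022-001", authentic_video))  # True
tampered = authentic_video + b"\x00"  # any single-byte change breaks the match
print(verify("clip-2022-001", tampered))  # False
```

Note the limitation: exact hashing catches byte-level tampering but not legitimate re-encodes, which is why production provenance efforts sign metadata at capture time rather than relying on a single file hash.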

In the meantime, Cameron suggests using Occam’s razor—a problem-solving principle that can be summarised as: the simplest explanation is usually the likeliest.

“Conspiracy theories are all too complicated. People aren’t that good, human systems aren’t that good, people can’t keep a secret to save their lives, and most people in positions of power are bumbling stooges.

“The fact that we think that they could realistically pull off these — these complex plots? I don’t buy any of that crap! Bill Gates is not really trying to microchip you with the flu vaccine!”

However, Cameron admits his scepticism of new technology.

“Every single advancement in technology that’s ever been created has been weaponised. I say this to AI scientists all the time, and they go, ‘No, no, no, we’ve got this under control.’ You know, ‘We just give the AIs the right goals…’

“So who’s deciding what those goals are? The people that put up the money for the research, right? Which are all either big business or defense. So you’re going to teach these new sentient entities to be either greedy or murderous.”

Of course, Skynet gets an honorary mention.

“If Skynet wanted to take over and wipe us out, it would actually look a lot like what’s going on right now. It’s not going to have to — like, wipe out the entire, you know, biosphere and environment with nuclear weapons to do it. It’s going to be so much easier and less energy required to just turn our minds against ourselves.

“All Skynet would have to do is just deepfake a bunch of people, pit them against each other, stir up a lot of foment, and just run this giant deepfake on humanity.”

Russia’s infamous state-sponsored “troll farms” are one of the largest sources of disinformation and are used to conduct online influence campaigns.

A January 2017 report issued by the United States Intelligence Community – Assessing Russian Activities and Intentions in Recent US Elections (PDF) – described the ‘Internet Research Agency’ as one such troll farm.

“The likely financier of the so-called Internet Research Agency of professional trolls located in Saint Petersburg is a close ally of [Vladimir] Putin with ties to Russian intelligence,” the report stated, adding that “they previously were devoted to supporting Russian actions in Ukraine.”

Western officials have warned that Russia may use disinformation campaigns – including false claims of an attack by Ukrainian troops – to rally support and justify an invasion of Ukraine. It’s not beyond the realms of possibility that manipulated content will play a role, and it may already be too late to counter the first large-scale crisis supported by deepfakes.

Related: University College London: Deepfakes are the ‘most serious’ AI crime threat

(Image Credit: Gage Skidmore. Image cropped. CC BY-SA 3.0 license)


Social media algorithms are still failing to counter misleading content
Tue, 17 Aug 2021
https://www.artificialintelligence-news.com/2021/08/17/social-media-algorithms-are-still-failing-to-counter-misleading-content/

As the Afghanistan crisis continues to unfold, it’s clear that social media algorithms are unable to counter enough misleading and/or fake content.

While it’s unreasonable to expect that no disingenuous content will slip through the net, the sheer amount that continues to plague social networks shows that platform-holders still have little grip on the issue.

When content is removed, it should either be prevented from being reuploaded or at least flagged as potentially misleading when displayed to other users. Too often, another account – whether real or fake – simply reposts the removed content so that it can continue spreading without limitation.

The damage is only stopped when content that slips past AI-powered moderation efforts – like object detection and scene recognition – is flagged by users and eventually reviewed by an actual person, often long after it’s been widely viewed. It’s not unheard of for moderators to require therapy after being exposed to so much of the worst of humankind, which defeats the purpose of automation: reducing the tasks that are dangerous and/or labour-intensive for humans to do alone.

Deepfakes currently pose the biggest challenge for social media platforms. Over time, algorithms can be trained to detect the markers that indicate content has been altered. Microsoft is developing one such system, called Video Authenticator, which was created using a public dataset from FaceForensics++ and tested on the DeepFake Detection Challenge Dataset.

However, it’s also true that increasingly advanced deepfakes are making the markers ever more subtle. Back in February, researchers from the University of California – San Diego found that current systems designed to counter the increasing prevalence of deepfakes can be deceived.

Another challenge with deepfakes is how easily they evade re-upload blocking. Increasing processing power means it doesn’t take long for small changes to be made so that the “new” content slips past algorithmic blocking.
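One common line of defence against such re-uploads is perceptual hashing, which produces similar hashes for visually similar images, so near-duplicates can still be caught after small edits. The following is a minimal sketch of a difference hash (dHash) over raw greyscale pixel grids; real systems first downscale the image and compare against a database of hashes of removed content, and all names here are illustrative rather than any platform’s actual system.

```python
def dhash(pixels):
    """Compute a difference hash from a 2D grid of greyscale values.

    For each row, each bit records whether a pixel is brighter than its
    right-hand neighbour; small edits barely change the bit pattern.
    """
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits


def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))


# A tiny 4x4 "image" and a re-encoded copy with a uniform brightness shift.
original = [
    [10, 20, 30, 40],
    [40, 30, 20, 10],
    [15, 25, 35, 45],
    [45, 35, 25, 15],
]
reupload = [[v + 2 for v in row] for row in original]

h1, h2 = dhash(original), dhash(reupload)
distance = hamming(h1, h2)
print(distance)  # 0: a uniform brightness change preserves the bit pattern
```

A moderation pipeline built on this idea would flag any upload whose hash falls within a small Hamming distance of a known removed item, rather than requiring an exact byte match.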

In a report from the NYU Stern Center for Business and Human Rights, the researchers highlighted the various ways disinformation could be used to influence democratic processes. One method is for deepfake videos to be used during elections to “portray candidates saying and doing things they never said or did”.

The report also predicts that Iran and China will join Russia as major sources of disinformation in Western democracies and that for-profit firms based in the US and abroad will be hired to generate disinformation. It transpired in May that French and German YouTubers, bloggers, and influencers were offered cash by a supposedly UK-based PR agency with Russian connections to falsely tell their followers the Pfizer/BioNTech vaccine has a high death rate. Influencers were asked to tell their subscribers that “the mainstream media ignores this theme”, which I’m sure you’ve since heard from other people.

While recognising the challenges, the likes of Facebook, YouTube, and Twitter should have the resources at their disposal to be doing a much better job at countering misleading content than they are. Some leniency can be given for deepfakes as a relatively emerging threat but some things are unforgivable at this point.

Take, for example, one video that has been making the rounds.

Sickeningly, it is a real and unmanipulated video. However, it’s also from ~2001. Despite many removals, the social networks continue to allow it to be reposted with claims of it being new footage without any warning that it’s old and has been flagged as being misleading.

While it’s difficult to put much faith in the Taliban’s claims that they’ll treat women and children much better than their barbaric history suggests, it’s always important for facts and genuine material to be separated from known fiction and misrepresented content no matter the issue or personal views. The networks are clearly aware of the problematic content and continue to allow it to be spread—often entirely unhindered.

An image of CNN correspondent Omar Jimenez standing in front of a helicopter taking off in Afghanistan alongside the news caption “Violent but mostly peaceful transfer of power” was posted to various social networks over the weekend. Reuters and Politifact both fact-checked the image and concluded that it had been digitally altered.

The image of Jimenez was taken from his 2020 coverage of protests in Kenosha, Wisconsin following a police shooting alongside the caption “Fiery but mostly peaceful protests after police shooting” that was criticised by some conservatives. The doctored image is clearly intended to be satire but the comments suggest many people believed it to be true.

On Facebook, to its credit, the image has now been labelled as an “Altered photo” and clearly states that “Independent fact-checkers say this information could mislead people”. On Twitter, as of writing, the image is still circulating without any label. The caption is also being used as the title of a YouTube video with different footage, but that platform also hasn’t labelled it and claims it doesn’t violate their rules.

Social media platforms can’t become thought police, but where algorithms have detected manipulated content – and/or there is clear evidence of even real material being used for misleading purposes – it should be indisputable that action needs to be taken to support fair discussion and debate around genuine information.

Not enough is currently being done, and we appear doomed to the same socially-damaging failings during every pivotal event for the foreseeable future unless that changes.

(Photo by Adem AY on Unsplash)

Find out more about Digital Transformation Week North America, taking place on 9-10 November 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.
