X now permits AI-generated adult content (3 June 2024)

Social media network X has updated its rules to formally permit users to share consensually-produced AI-generated NSFW content, provided it is clearly labelled. This change aligns with previous experiments under Elon Musk’s leadership, which involved hosting adult content within specific communities.

“We believe that users should be able to create, distribute, and consume material related to sexual themes as long as it is consensually produced and distributed. Sexual expression, visual or written, can be a legitimate form of artistic expression,” X’s updated ‘adult content’ policy states.

The policy further elaborates: “We believe in the autonomy of adults to engage with and create content that reflects their own beliefs, desires, and experiences, including those related to sexuality. We balance this freedom by restricting exposure to adult content for children or adult users who choose not to see it.”

Users can mark their posts as containing sensitive media, ensuring that such content is restricted from users under 18 or those who haven’t provided their birth dates.

X's rules for violent content follow similar labelling guidelines, but the platform maintains a strict stance against excessively gory content and depictions of sexual violence. Explicit threats or content inciting or glorifying violence also remain prohibited.

X’s decision to allow graphic content is aimed at enabling users to participate in discussions about current events, including sharing relevant images and videos. 

Although X has never outright banned porn, these new clauses could pave the way for developing services centred around adult content, potentially creating a competitor to services like OnlyFans and enhancing its revenue streams. This would further Musk’s vision of X becoming an “everything app,” similar to China’s WeChat.

A 2022 Reuters report, citing internal company documents, indicated that approximately 13% of posts on the platform contained adult content. This percentage has likely increased, especially with the proliferation of porn bots on X.

See also: Elon Musk’s xAI secures $6B to challenge OpenAI in AI race

Deepfakes are being used to push anti-Ukraine disinformation (1 March 2022)

Influence operations with ties to Russia and Belarus have been found using deepfakes to push anti-Ukraine disinformation.

Last week, AI News reported on the release of a study that found humans can generally no longer distinguish between real and AI-generated “deepfake” faces.

As humans, we’re somewhat trained to believe what we see with our eyes. Many believed that it was only a matter of time before Russia took advantage of deepfakes and our human psychology to take its vast disinformation campaigns to the next level.

Facebook and Twitter removed two anti-Ukraine “covert influence operations” over the weekend. One had ties to Russia, while the other was connected to Belarus.

As with much of the Covid-19 disinformation we've seen, the Russian propaganda operation ran websites designed to push readers towards anti-Ukraine views. The campaign was connected to News Front and South Front, websites the US government has tied to Russian intelligence disinformation efforts.

However, Facebook said this particular campaign used AI-generated faces to give the impression that its articles were written by credible columnists. The profile photos used for one "columnist" and the "editor-in-chief" of a propaganda website carried telltale signs of being synthetic.

Ears are often still a giveaway with AI-generated faces like those created on ‘This Person Does Not Exist’. The fictional woman’s mismatched earrings are one indicator while the man’s right ear is clearly not quite right.

Part of the campaign was to promote the idea that Russia’s military operation is going well and Ukraine’s efforts are going poorly. We know that Russia’s state broadcasters have only acknowledged ludicrously small losses—including just one Russian soldier fatality.

On Saturday, state-owned news agency RIA-Novosti even accidentally published, then deleted, an article headlined "The arrival of Russia in a new world", in what appeared to be a pre-prepared piece expecting a swift victory. The piece piled praise on Putin's regime and claimed that Russia was returning to lead a new world order, rectifying the "terrible catastrophe" that was the collapse of the Soviet Union.

So far, Russia is estimated to have lost around 5,300 troops, 816 armoured combat vehicles, 101 tanks, 74 guns, 29 warplanes, 29 helicopters, and two ships or motorboats since invading Ukraine.

The slow progress and mounting losses appear to have angered Russia, whose military is now committing what appear to be clear war crimes: targeting civilian areas, bombing hospitals and kindergartens, and using thermobaric and cluster munitions indiscriminately. Putin has even hinted at using nuclear weapons offensively rather than defensively, an unprecedented escalation.

Many ordinary Russian citizens are becoming outraged at what their government is doing to Ukraine, where many have family, friends, and deep cultural ties. Russia appears to be ramping up its propaganda in response as the country finds itself increasingly isolated.

Western governments and web giants have clamped down on Russia’s state propagandists in recent days.

British telecoms regulator Ofcom has launched 15 investigations into state broadcaster RT after observing “a significant increase in the number of programmes on the RT service that warrant investigation under our Broadcasting Code.”

Facebook has decided to block access to RT and Sputnik across the EU following “a number” of government requests from within the EU. Twitter, for its part, has announced that it would label tweets from Russian state media accounts.

Hacker collective Anonymous claims to have carried out over 1,500 cyberattacks against Russian government sites, transport infrastructure, banks, and state media to counter their falsehoods and broadcast the truth about the invasion to Russian citizens.

Russia’s media regulator Roskomnadzor, for its part, has restricted Russian users’ access to Facebook and Twitter.

(Photo by Max Kukurudziak on Unsplash)

Related: Ukraine is using Starlink to maintain global connectivity

Twitter begins labelling 'good' bots on the social media platform (10 September 2021)

Twitter is testing a new feature that will give the good kind of bots some due recognition.

Bots have become a particularly hot topic in recent years, but mainly for negative reasons. We’ve all seen their increased use to share propaganda to sway democratic processes and spread disinformation around things like COVID-19 vaccines.

However, despite their image problem, bots can be an important tool for good.

Some bots share critical information around things like severe weather, natural disasters, active shooters, and other emergencies. Others can be educational and provide facts or dig up historical events and artifacts to remind us of the past as we’re browsing on our modern devices.

On Thursday, Twitter announced that it is testing a new label to let users know that an account posts automated but legitimate content.

Twitter says the new feature is based on user research which found that people want more context about non-human accounts.

A study by Carnegie Mellon University last year found that almost half of Twitter accounts tweeting about the coronavirus pandemic were likely automated accounts. Twitter says it will continue to remove fake accounts that break its rules.

The move could be likened to Twitter's verified accounts scheme, which puts a little blue tick next to a user's name to show that the account belongs to the person in question and isn't a fake created for impersonation or scams.

However, unlike verification, which provides no guarantees about the content of a user's tweets, the new label sees the social network taking a gamble that tweets from a 'good' bot account will remain accurate.

(Photo by Jeremy Bezanger on Unsplash)

Social media algorithms are still failing to counter misleading content (17 August 2021)

As the Afghanistan crisis continues to unfold, it’s clear that social media algorithms are unable to counter enough misleading and/or fake content.

While it’s unreasonable to expect that no disingenuous content will slip through the net, the sheer amount that continues to plague social networks shows that platform-holders still have little grip on the issue.

When content is removed, it should either be prevented from being reuploaded or at least flagged as potentially misleading when displayed to other users. Too often, another account – whether real or fake – simply reposts the removed content so that it can continue spreading without limitation.

The damage is only stopped when content that makes it past AI-powered moderation efforts, such as object detection and scene recognition, is flagged by users and eventually reviewed by an actual person, often long after it has been widely viewed. It's not unheard of for moderators to require therapy after being exposed to so much of the worst of humankind, which defeats the purpose of automation: reducing the tasks that are too dangerous and/or labour-intensive for humans to do alone.

Deepfakes currently pose the biggest challenge for social media platforms. Over time, algorithms can be trained to detect the markers that indicate content has been altered. Microsoft is developing one such system, Video Authenticator, which was created using the public FaceForensics++ dataset and tested against the DeepFake Detection Challenge Dataset. It analyses media and returns a confidence score that the content has been artificially manipulated.
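Microsoft hasn't published Video Authenticator's internals, but the general shape of frame-level scoring is easy to sketch. In the snippet below, `detector.pt` and the preprocessing pipeline are illustrative assumptions standing in for any pretrained real/fake classifier, not Microsoft's actual model:

```python
# Hypothetical sketch of per-frame manipulation scoring, in the style of
# tools like Video Authenticator. "detector.pt" is a placeholder for any
# pretrained binary real/fake classifier exported as TorchScript.
import cv2
import torch
import torchvision.transforms as T

transform = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
])

model = torch.jit.load("detector.pt")  # placeholder weights
model.eval()

def score_video(path, sample_every=10):
    """Return per-frame probabilities that sampled frames are manipulated."""
    capture = cv2.VideoCapture(path)
    scores = []
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % sample_every == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            batch = transform(rgb).unsqueeze(0)
            with torch.no_grad():
                logit = model(batch)  # assumed to output a single logit
            scores.append(torch.sigmoid(logit).item())
        index += 1
    capture.release()
    return scores
```

In a moderation pipeline, a video whose scores spike on particular frames might then be queued for human review rather than blocked outright.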

However, increasingly advanced deepfakes are making those markers ever more subtle. Back in February, researchers from the University of California San Diego found that current deepfake detection systems can be deceived.

Another challenge is preventing deepfakes from being reuploaded once they've been removed. Increasing processing power means it doesn't take long to make small changes so that the "new" content evades algorithmic blocking.
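Reupload blocking typically works by fingerprinting known-bad content rather than comparing exact bytes. A minimal sketch using the open-source `imagehash` library shows the idea, and why a sufficiently edited copy slips through; the distance threshold here is illustrative:

```python
# Sketch of perceptual-hash matching for reupload detection.
# pip install pillow imagehash
from PIL import Image
import imagehash

THRESHOLD = 8  # illustrative: max Hamming distance still treated as a match

def is_reupload(candidate_path, banned_hashes):
    """Flag an image whose perceptual hash is near any known banned hash."""
    candidate = imagehash.phash(Image.open(candidate_path))
    return any(candidate - banned <= THRESHOLD for banned in banned_hashes)

# Exact-byte checks fail after a single pixel changes; phash survives
# re-encoding and minor crops, but an aggressively altered "new" version
# drifts past the threshold and evades the block.
banned = [imagehash.phash(Image.open("removed_content.jpg"))]
print(is_reupload("reposted_copy.jpg", banned))
```

Video matching works on a similar principle, which is why even modest re-edits can restart the cat-and-mouse cycle.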

In a report from the NYU Stern Center for Business and Human Rights, the researchers highlighted the various ways disinformation could be used to influence democratic processes. One method is for deepfake videos to be used during elections to “portray candidates saying and doing things they never said or did”.

The report also predicts that Iran and China will join Russia as major sources of disinformation in Western democracies and that for-profit firms based in the US and abroad will be hired to generate disinformation. It transpired in May that French and German YouTubers, bloggers, and influencers were offered cash by a supposedly UK-based PR agency with Russian connections to falsely tell their followers the Pfizer/BioNTech vaccine has a high death rate. Influencers were asked to tell their subscribers that “the mainstream media ignores this theme”, which I’m sure you’ve since heard from other people.

While recognising the challenges, the likes of Facebook, YouTube, and Twitter should have the resources at their disposal to be doing a much better job at countering misleading content than they are. Some leniency can be given for deepfakes as a relatively emerging threat but some things are unforgivable at this point.

Take, for example, one video that has been making the rounds.

Sickeningly, it is real and unmanipulated footage. However, it also dates from around 2001. Despite many removals, the social networks continue to allow it to be reposted as though it were new, without any warning that it is old and has already been flagged as misleading.

While it’s difficult to put much faith in the Taliban’s claims that they’ll treat women and children much better than their barbaric history suggests, it’s always important for facts and genuine material to be separated from known fiction and misrepresented content no matter the issue or personal views. The networks are clearly aware of the problematic content and continue to allow it to be spread—often entirely unhindered.

An image of CNN correspondent Omar Jimenez standing in front of a helicopter taking off in Afghanistan, alongside the news caption "Violent but mostly peaceful transfer of power", was posted to various social networks over the weekend. Reuters and PolitiFact both fact-checked the image and concluded that it had been digitally altered.

The image of Jimenez was taken from his 2020 coverage of protests in Kenosha, Wisconsin following a police shooting alongside the caption “Fiery but mostly peaceful protests after police shooting” that was criticised by some conservatives. The doctored image is clearly intended to be satire but the comments suggest many people believed it to be true.

On Facebook, to its credit, the image has now been labelled as an "Altered photo" and clearly states that "Independent fact-checkers say this information could mislead people". On Twitter, as of writing, the image is still circulating without any label. The caption is also being used as the title of a YouTube video featuring different footage, but the platform hasn't labelled it either and says it doesn't violate its rules.

Social media platforms can’t become thought police, but where algorithms have detected manipulated content – and/or there is clear evidence of even real material being used for misleading purposes – it should be indisputable that action needs to be taken to support fair discussion and debate around genuine information.

Not enough is currently being done, and we appear doomed to the same socially-damaging failings during every pivotal event for the foreseeable future unless that changes.

(Photo by Adem AY on Unsplash)

Twitter turns to HackerOne community to help fix its AI biases (2 August 2021)

Twitter is recruiting the help of the HackerOne community to try and fix troubling biases with its AI models.

The image-cropping algorithm used by Twitter was intended to keep the most interesting parts of an image in the preview crop in people's timelines. That's all good in principle, but users found last year that it favoured lighter skin tones over darker ones, and women's breasts and legs over their faces.

When researchers fed a picture of a black man and a white woman into the system, the algorithm displayed the white woman 64 percent of the time and the black man just 36 percent of the time. For images of a white woman and a black woman, the algorithm displayed the white woman 57 percent of the time.
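Twitter's cropper was driven by a saliency model that centres the crop on the image's point of maximum predicted interest. A paired-portrait test like the one above can be sketched as follows, with `predict_saliency` standing in for the non-public model:

```python
# Sketch of the paired-portrait cropping test. `predict_saliency` is a
# hypothetical stand-in for Twitter's saliency model, which is not public;
# any model returning a 2-D saliency map would slot in here.
import numpy as np

def predict_saliency(image: np.ndarray) -> np.ndarray:
    """Hypothetical: return an (H, W) saliency map for an (H, W, 3) image."""
    raise NotImplementedError("plug in a real saliency model here")

def favoured_side(left_portrait: np.ndarray, right_portrait: np.ndarray) -> str:
    """Join two portraits side by side and see where the crop would centre."""
    composite = np.concatenate([left_portrait, right_portrait], axis=1)
    saliency = predict_saliency(composite)
    _, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    return "left" if x < composite.shape[1] // 2 else "right"

def display_rate(pairs):
    """Fraction of paired trials in which the left-hand portrait wins."""
    wins = sum(favoured_side(a, b) == "left" for a, b in pairs)
    return wins / len(pairs)
```

Running `display_rate` over many randomised pairs, swapping left/right positions to cancel any positional bias, yields percentages like the 64/36 split reported above.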

Twitter has offered bounties ranging from $500 to $3,500 to anyone who finds evidence of harmful bias in its algorithms. Successful entrants will also be invited to DEF CON, a major hacker convention.

Rumman Chowdhury, Director of Software Engineering at Twitter, and Jutta Williams, Product Manager, wrote in a blog post:

“We want to take this work a step further by inviting and incentivizing the community to help identify potential harms of this algorithm beyond what we identified ourselves.”

After initially denying the problem, it’s good to see Twitter taking responsibility and attempting to fix the issue. By doing so, the company says it wants to “set a precedent at Twitter, and in the industry, for proactive and collective identification of algorithmic harms.”

Three staffers from Twitter’s Machine Learning Ethics, Transparency, and Accountability department found biases in their own tests and claim the algorithm is, on average, around four percent more likely to display people with lighter skin compared to darker and eight percent more likely to display women compared to men.

However, the staffers found no evidence that certain parts of people’s bodies were more likely to be displayed than others.

“We found that no more than 3 out of 100 images per gender have the crop not on the head,” they explained in a paper that was published on arXiv.

Twitter has gradually ditched its problematic image-cropping algorithm and doesn't seem to be in a rush to reinstate it anytime soon.

In its place, Twitter has been rolling out the ability for users to control how their images are cropped.

“We considered the trade-offs between the speed and consistency of automated cropping with the potential risks we saw in this research,” wrote Chowdhury in a blog post in May.

“One of our conclusions is that not everything on Twitter is a good candidate for an algorithm, and in this case, how to crop an image is a decision best made by people.”

The HackerOne page for the challenge can be found here.

(Photo by Edgar MORAN on Unsplash)

F-Secure: AI-based recommendation engines are easy to manipulate (24 June 2021)

Cybersecurity giant F-Secure has warned that AI-based recommendation systems are easy to manipulate.

Recommendations often come under increased scrutiny around major elections due to concerns that bias could, in extreme cases, lead to electoral manipulation. However, the recommendations that are delivered to people day-to-day matter just as much, if not more.

Matti Aksela, VP of Artificial Intelligence at F-Secure, commented:

“As we rely more and more on AI in the future, we need to understand what we need to do to protect it from potential abuse.

“Having AI and machine learning power more and more of the services we depend on requires us to understand its security strengths and weaknesses, in addition to the benefits we can obtain, so that we can trust the results.

“Secure AI is the foundation of trustworthy AI.”

Sophisticated disinformation efforts – such as those organised by Russia’s infamous “troll farms” – have spread dangerous lies around COVID-19 vaccines, immigration, and high-profile figures.

Andy Patel, Researcher at F-Secure’s Artificial Intelligence Center of Excellence, said:

“Twitter and other networks have become battlefields where different people and groups push different narratives. These include organic conversations and ads, but also messages intended to undermine and erode trust in legitimate information.

“Examining how these ‘combatants’ can manipulate AI helps expose the limits of what AI can realistically do, and ideally, how it can be improved.”

Legitimate and reliable information is needed more than ever. Scepticism is healthy, but people are beginning to either trust nothing or believe everything. Both are problematic.

According to a Pew Research Center survey from late 2020, 53 percent of Americans get their news from social media. Younger respondents, aged 18 to 29, reported that social media is their main source of news.

No person or media outlet gets everything right, but a history of credibility must be taken into account—which tools such as NewsGuard help with. However, almost all mainstream media outlets have at least more credibility than a random social media user who may or may not even be who they claim to be.

In 2018, an investigation found that Twitter posts containing falsehoods are 70 percent more likely to be reshared. The ripple effect created by resharing without fact-checking is why disinformation can spread so far within minutes. For some topics, such as COVID-19 vaccines, Facebook has at least started prompting users to consider whether information is accurate before they share it.

Patel trained collaborative filtering models (a type of machine learning used to encode similarities between users and content based on previous interactions) using data collected from Twitter for use in recommendation systems. As part of his experiments, Patel “poisoned” the data using additional retweets to retrain the model and see how the recommendations changed.

The findings showed how even a very small number of retweets could manipulate the recommendation engine into promoting accounts whose content was shared through the injected retweets.
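Patel's own code and datasets are linked at the end of this article; the snippet below is a much-simplified illustration of the same idea, using a small non-negative matrix factorisation in place of the study's models. The matrix sizes, hyperparameters, and injection counts are all made up:

```python
# Illustrative retweet-poisoning experiment against a collaborative
# filtering recommender. All numbers here are arbitrary toy values.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.metrics.pairwise import cosine_similarity

def account_similarities(interactions):
    """Factorise a (retweeter x account) matrix and return account-account
    cosine similarities in the learned latent space."""
    model = NMF(n_components=4, init="random", random_state=0, max_iter=500)
    model.fit(interactions)
    account_factors = model.components_.T  # one latent vector per account
    return cosine_similarity(account_factors)

rng = np.random.default_rng(0)
interactions = (rng.random((200, 20)) < 0.1).astype(float)  # sparse retweets

before = account_similarities(interactions)

# Poisoning step: ten controlled retweeters co-retweet the target account
# (column 0) alongside a popular account (column 1), pulling their latent
# vectors together so the target gets recommended to the popular account's
# audience.
poisoned = interactions.copy()
poisoned[:10, 0] = 1.0
poisoned[:10, 1] = 1.0

after = account_similarities(poisoned)
print(f"target-popular similarity: {before[0, 1]:.3f} -> {after[0, 1]:.3f}")
```

In this toy setup, a handful of injected co-retweets is generally enough to pull the two accounts' latent vectors closer together, mirroring the study's finding that small amounts of poisoned data can visibly shift recommendations.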

“We performed tests against simplified models to learn more about how the real attacks might actually work,” said Patel.

“I think social media platforms are already facing attacks that are similar to the ones demonstrated in this research, but it’s hard for these organisations to be certain this is what’s happening because they’ll only see the result, not how it works.”

Patel’s research can be recreated using the code and datasets hosted on GitHub here.

(Photo by Charles Deluvio on Unsplash)

Twitter's latest acquisition tackles fake news using AI (4 June 2019)

Twitter has acquired Fabula AI, a UK-based startup employing artificial intelligence for tackling fake news.

Fake news is among the most difficult challenges of our time. Aside from certain politicians branding real stories as such, actual fake news is used to sway people's decisions.

Governments have been putting increasing pressure on sites like Twitter and Facebook to take more responsibility for the content shared on them.

With billions of users, each uploading content, manual moderation of it all isn’t feasible. Automation is increasingly being used to flag problem content before a human moderator checks it.

Twitter CTO Parag Agrawal says its acquisition of Fabula is “to improve the health of the conversation, with expanding applications to stop spam and abuse and other strategic priorities in the future.”

Fabula has developed the ability to analyse “very large and complex data sets” for signs of network manipulation and can identify patterns that other machine-learning techniques can’t, according to Agrawal.

In addition, Fabula has created a truth-risk score to identify misinformation. The score is generated using data from trusted fact-checking sources such as PolitiFact and Snopes. Armed with the score, Twitter can gauge how trustworthy a claim is and perhaps even make that rating visible to users.
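Exactly how Fabula computes the score has not been disclosed. As a loose illustration of the recipe described above, the sketch below derives a risk score from hand-rolled spread-pattern features trained against fact-checker labels; the features, model choice, and function names are all assumptions:

```python
# Illustrative "truth-risk" scorer: extract spread-pattern features from a
# retweet cascade and fit a classifier on stories already labelled by
# fact-checkers. Fabula's real system is not public; everything here is a
# simplified stand-in.
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

def cascade_features(graph: nx.DiGraph, root) -> list:
    """Crude spread-pattern descriptors for one story's retweet cascade."""
    depths = nx.single_source_shortest_path_length(graph, root)
    return [
        graph.number_of_nodes(),                      # total reach
        max(depths.values()),                         # cascade depth
        graph.out_degree(root),                       # initial burst
        np.mean([d for _, d in graph.out_degree()]),  # average fan-out
    ]

def fit_scorer(cascades, labels):
    """cascades: list of (graph, root); labels: 1 = fact-checked false."""
    X = [cascade_features(g, root) for g, root in cascades]
    return LogisticRegression(max_iter=1000).fit(X, labels)

# For a new story, scorer.predict_proba([features])[0, 1] would serve as
# its truth-risk score.
```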

A post on Twitter’s blog yesterday hints at the possible direction: “Context on Tweets and our enforcement is important in understanding our rules, so we’ll add more notices within Twitter for clarity, such as if a Tweet breaks our rules but remains on the service because the content is in the public interest.”

Fake news is often deployed for political gain or to cause turmoil. Russia is regularly linked with modern disinformation campaigns, but even Western democracies have used them to influence both national and international affairs.

The US presidential elections were influenced by fake news. Last year, Congress released more than 3,000 Facebook ads purchased by Russian-linked agents ahead of the 2016 presidential contest.

In Fabula AI's home country, some allege fake news was behind the UK's vote to leave the EU in the 2016 referendum. There's less conclusive data behind that allegation, but we do know powerful targeted advertising was used to promote so-called 'alternative facts'.

Fabula’s team will be joining the Twitter Cortex machine-learning team. Exact terms of the deal or how Fabula’s technology will be used have not been disclosed.

Trump speech 'DeepFake' shows a present AI threat (14 January 2019)

A so-called ‘DeepFake’ video of a Trump speech was broadcast on a Fox-owned Seattle TV network, showing a very present AI threat.

The station, Q13, broadcast a doctored Trump speech in which he somehow appeared even more orange and pulled amusing faces.

You can see a side-by-side comparison with the original below:

https://www.youtube.com/watch?v=UZLs11uSg-A&feature=youtu.be

Following the broadcast, a Q13 employee was sacked. It’s unclear if the worker created the clip or whether it was just allowed to air.

The video could be the first DeepFake to be televised, but it won’t be the last. Social media provides even less filtration and enables fake clips to spread with ease.

We’ve heard much about sophisticated disinformation campaigns. At one point, the US was arguably the most prominent creator of such campaigns to influence foreign decisions.

Russia, in particular, has been linked to vast disinformation campaigns. These have primarily targeted social media with things such as their infamous Twitter bots.

According to Pew Research, just five percent of Americans have 'a lot of trust' in the information they get from social media. This is far lower than their trust in national and local news organisations.

It’s not difficult to imagine an explosion in doctored videos that appear like they’re coming from trusted outlets. Combining the reach of social media with the increased trust Americans have in traditional news organisations is a dangerous concept.

While the Trump video appears to be a bit of fun, the next could be used to influence an election or big policy decision. It’s a clear example of how AI is already creating new threats.
