social media Archives - AI News

X now permits AI-generated adult content
Social media network X has updated its rules to formally permit users to share consensually-produced AI-generated NSFW content, provided it is clearly labelled. This change aligns with previous experiments under Elon Musk’s leadership, which involved hosting adult content within specific communities.

“We believe that users should be able to create, distribute, and consume material related to sexual themes as long as it is consensually produced and distributed. Sexual expression, visual or written, can be a legitimate form of artistic expression,” X’s updated ‘adult content’ policy states.

The policy further elaborates: “We believe in the autonomy of adults to engage with and create content that reflects their own beliefs, desires, and experiences, including those related to sexuality. We balance this freedom by restricting exposure to adult content for children or adult users who choose not to see it.”

Users can mark their posts as containing sensitive media, ensuring that such content is restricted from users under 18 or those who haven’t provided their birth dates.

While X’s violent content rules have similar guidelines, the platform maintains a strict stance against excessively gory content and depictions of sexual violence. Explicit threats or content inciting or glorifying violence remain prohibited.

X’s decision to allow graphic content is aimed at enabling users to participate in discussions about current events, including sharing relevant images and videos. 

Although X has never outright banned porn, these new clauses could pave the way for developing services centred around adult content, potentially creating a competitor to services like OnlyFans and enhancing its revenue streams. This would further Musk’s vision of X becoming an “everything app,” similar to China’s WeChat.

A 2022 Reuters report, citing internal company documents, indicated that approximately 13% of posts on the platform contained adult content. This percentage has likely increased, especially with the proliferation of porn bots on X.

See also: Elon Musk’s xAI secures $6B to challenge OpenAI in AI race


Reddit is reportedly selling data for AI training
Reddit has negotiated a content licensing deal to allow its data to be used for training AI models, according to a Bloomberg report.

Just ahead of a potential $5 billion initial public offering (IPO) debut in March, Reddit has reportedly signed a $60 million deal with an undisclosed major AI company. This move could be seen as a last-minute effort to showcase potential revenue streams in the rapidly growing AI industry to prospective investors.

Although Reddit has yet to confirm the deal, the decision could have significant implications. If true, it would mean that Reddit’s vast trove of user-generated content – including posts from popular subreddits, comments from both prominent and obscure users, and discussions on a wide range of topics – could be used to train and enhance existing large language models (LLMs) or provide the foundation for the development of new generative AI systems.

However, this decision by Reddit may not sit well with its user base, as the company has faced increasing opposition from its community regarding its recent business decisions.

Last year, when Reddit announced plans to start charging for access to its application programming interfaces (APIs), thousands of Reddit forums temporarily shut down in protest. Days later, a hacking group threatened to release previously stolen site data unless the company reversed the API plan or paid a ransom of $4.5 million.

Reddit has recently made other controversial decisions, such as removing years of private chat logs and messages from users’ accounts. The platform also implemented new automatic moderation features and removed the option for users to turn off personalised advertising, fuelling additional discontent among its users.

This latest reported deal to sell Reddit’s data for AI training could generate even more backlash from users, as the debate over the ethics of using public data, art, and other human-created content to train AI systems continues to intensify across various industries and platforms.

(Photo by Brett Jordan on Unsplash)

See also: Amazon trains 980M parameter LLM with ‘emergent abilities’


Deepfakes are being used to push anti-Ukraine disinformation
Influence operations with ties to Russia and Belarus have been found using deepfakes to push anti-Ukraine disinformation.

Last week, AI News reported on the release of a study that found humans can generally no longer distinguish between real and AI-generated “deepfake” faces.

As humans, we’re somewhat trained to believe what we see with our eyes. Many believed that it was only a matter of time before Russia took advantage of deepfakes and our human psychology to take its vast disinformation campaigns to the next level.

Facebook and Twitter removed two anti-Ukraine “covert influence operations” over the weekend. One had ties to Russia, while the other was connected to Belarus.

As we’ve often seen with Covid-19 disinformation, the Russian propaganda operation included websites aimed at pushing readers towards anti-Ukraine views. The campaign was connected to the News Front and South Front websites, which the US government has linked to Russian intelligence disinformation efforts.

However, Facebook said this particular campaign used AI-generated faces to give the impression that its articles were written by credible columnists, including a fake “columnist” and the “editor-in-chief” of one propaganda website.

Ears are often still a giveaway with AI-generated faces like those created on ‘This Person Does Not Exist’. The fictional woman’s mismatched earrings are one indicator, while the man’s right ear is clearly not quite right.

Part of the campaign was to promote the idea that Russia’s military operation is going well and Ukraine’s efforts are going poorly. We know that Russia’s state broadcasters have only acknowledged ludicrously small losses—including just one Russian soldier fatality.

On Saturday, state-owned news agency RIA-Novosti even accidentally published and then deleted an article headlined “The arrival of Russia in a new world” in what appeared to be a pre-prepared piece expecting a swift victory. The piece piled praise on Putin’s regime and claimed that Russia was returning to lead a new world order, rectifying the “terrible catastrophe” of the collapse of the Soviet Union.

So far, Russia is estimated to have lost around 5,300 troops, 816 armoured combat vehicles, 101 tanks, 74 guns, 29 warplanes, 29 helicopters, and two ships/motorboats since its decision to invade Ukraine.

The slow progress and mounting losses appear to have angered Russia, with its military now committing what appear to be clear war crimes—targeting civilian areas, bombing hospitals and kindergartens, and using thermobaric and cluster munitions indiscriminately. Putin has even hinted at using nuclear weapons offensively rather than defensively in an unprecedented escalation.

Many ordinary Russian citizens are becoming outraged at what their government is doing to Ukraine, where many have family and friends and share deep cultural ties. Russia appears to be ramping up its propaganda in response as the country finds itself increasingly isolated.

Western governments and web giants have clamped down on Russia’s state propagandists in recent days.

British telecoms regulator Ofcom has launched 15 investigations into state broadcaster RT after observing “a significant increase in the number of programmes on the RT service that warrant investigation under our Broadcasting Code.”

Facebook has decided to block access to RT and Sputnik across the EU following “a number” of government requests from within the EU. Twitter, for its part, has announced that it would label tweets from Russian state media accounts.

Hacker collective Anonymous claims to have carried out over 1,500 cyberattacks against Russian government sites, transport infrastructure, banks, and state media to counter their falsehoods and broadcast the truth about the invasion to Russian citizens.

Russia’s media regulator Roskomnadzor, for its part, has restricted Russian users’ access to Facebook and Twitter.

(Photo by Max Kukurudziak on Unsplash)

Related: Ukraine is using Starlink to maintain global connectivity


Facebook claims its AI reduced hate by 50% despite internal documents highlighting failures
Damning reports about the ineffectiveness of Facebook’s AI in countering hate speech prompted the firm to publish a post to the contrary, but the company’s own internal documents highlight serious failures.

Facebook has had a particularly rough time as of late, with a series of Wall Street Journal reports in particular claiming the company knows that “its platforms are riddled with flaws that cause harm” and “despite congressional hearings, its own pledges and numerous media exposés, the company didn’t fix them”.

Some of the allegations include:

  • An algorithm change made Facebook an “angrier” place and CEO Mark Zuckerberg resisted suggested fixes because “they would lead people to interact with Facebook less”
  • Employees flag human traffickers, drug cartels, organ sellers, and more but the response is “inadequate or nothing at all”
  • Facebook’s tools were used to sow doubt about the severity of Covid-19’s threat and the safety of vaccines
  • The company’s own engineers have doubts about Facebook’s public claim that AI will clean up the platform
  • Facebook knows Instagram is especially toxic for teen girls
  • A “secret elite” are exempt from the rules

The reports come predominantly from whistleblower Frances Haugen, who took “tens of thousands” of pages of internal documents from Facebook, plans to testify to Congress, and has filed at least eight SEC complaints claiming that Facebook lied to shareholders about its own products.

It makes you wonder whether former British Deputy PM Nick Clegg knew just how much he’d be taking on when he became Facebook’s VP for Global Affairs and Communications.

Over the weekend, Clegg released a blog post, but chose to focus instead on Facebook’s plan to hire 10,000 Europeans to help build its vision for the metaverse—a suspiciously timed announcement that many believe was aimed at countering the negative news.

However, Facebook didn’t avoid the media reports. Guy Rosen, VP of Integrity at Facebook, also released a blog post over the weekend titled Hate Speech Prevalence Has Dropped by Almost 50% on Facebook.

According to Facebook’s post, hate speech prevalence has dropped by 50 percent over the last three quarters.

When the company began reporting on hate speech metrics, just 23.6 percent of removed content was proactively detected by its systems. Facebook claims that number is now over 97 percent and there are now just five views of hate speech for every 10,000 content views on Facebook.

“Data pulled from leaked documents is being used to create a narrative that the technology we use to fight hate speech is inadequate and that we deliberately misrepresent our progress,” Rosen said. “This is not true.”

One of the reports found that Facebook’s AI couldn’t identify first-person shooting videos or racist rants, and in one specific incident couldn’t distinguish cockfighting from car crashes. Haugen claims the company only takes action on 3-5 percent of hate speech and 0.6 percent of violence and incitement content.

In the latest exposé from the WSJ published on Sunday, Facebook employees told the outlet they don’t believe the company is capable of screening for offensive content. Employees claim that Facebook switched to largely using AI enforcement of the platform’s regulations around two years ago, which served to inflate the apparent success of its moderation tech in public statistics.

Clegg has called the WSJ’s reports “deliberate mischaracterisations” that use quotes from leaked material to create “a deliberately lop-sided view of the wider facts.”

Few people underestimate the challenge that a platform like Facebook faces in catching hateful content and misinformation across billions of users – and doing so in a way that doesn’t suppress free speech – but the company doesn’t appear to be helping itself by overpromising what its AI systems can do and, reportedly, even wilfully ignoring fixes to known problems over concerns they would reduce engagement.

(Photo by Prateek Katyal on Unsplash)


Twitter begins labelling ‘good’ bots on the social media platform
Twitter is testing a new feature that will give the good kind of bots some due recognition.

Bots have become a particularly hot topic in recent years, but mainly for negative reasons. We’ve all seen their increased use to share propaganda to sway democratic processes and spread disinformation around things like COVID-19 vaccines.

However, despite their image problem, bots can be an important tool for good.

Some bots share critical information around things like severe weather, natural disasters, active shooters, and other emergencies. Others can be educational and provide facts or dig up historical events and artifacts to remind us of the past as we’re browsing on our modern devices.

On Thursday, Twitter announced that it’s testing a new label to let users know that an account posts automated but legitimate content.

Twitter says the new feature is based on user research which found that people want more context about non-human accounts.

A study by Carnegie Mellon University last year found that almost half of Twitter accounts tweeting about the coronavirus pandemic were likely automated accounts. Twitter says it will continue to remove fake accounts that break its rules.

The move could be likened to Twitter’s verified accounts scheme, which puts a little blue tick next to a user’s name to show others that the account belongs to the person in question and isn’t one of the fakes often created for scam purposes.

However, unlike Twitter’s verified accounts scheme that provides no guarantees about the content of a user’s tweets, the social network is taking a bit of a gamble that tweets from a ‘good’ bot account will remain accurate.

(Photo by Jeremy Bezanger on Unsplash)


Social media algorithms are still failing to counter misleading content
As the Afghanistan crisis continues to unfold, it’s clear that social media algorithms are unable to counter enough misleading and/or fake content.

While it’s unreasonable to expect that no disingenuous content will slip through the net, the sheer amount that continues to plague social networks shows that platform-holders still have little grip on the issue.

When content is removed, it should either be prevented from being reuploaded or at least flagged as potentially misleading when displayed to other users. Too often, another account – whether real or fake – simply reposts the removed content so that it can continue spreading without limitation.

The damage is only stopped when content that makes it past AI-powered moderation efforts – like object detection and scene recognition – is flagged by users and eventually reviewed by an actual person, often long after it’s been widely viewed. It’s not unheard of for moderators to require therapy after being exposed to so much of the worst of humankind, which defeats the purpose of automation: reducing the tasks that are dangerous and/or labour-intensive for humans to do alone.

Deepfakes currently pose the biggest challenge for social media platforms. Over time, algorithms can be trained to detect the markers that indicate content has been altered. Microsoft is developing one such system, called Video Authenticator, which was created using the public FaceForensics++ dataset and was tested on the DeepFake Detection Challenge Dataset.

However, it’s also true that increasingly advanced deepfakes are making the markers ever more subtle. Back in February, researchers from the University of California – San Diego found that current systems designed to counter the increasing prevalence of deepfakes can be deceived.

Another challenge with deepfakes is preventing them from being reuploaded. Increasing processing power means it doesn’t take long to make small changes so that the “new” content evades algorithmic blocking.
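
As a rough illustration of why exact re-upload blocking is so easy to defeat – a hedged sketch, not any platform’s actual matching system – the snippet below shows how a single-pixel edit completely changes a cryptographic hash while a simple perceptual “average hash” barely moves, which is why platforms must rely on fuzzier perceptual matching that adversaries then probe for weaknesses:

```python
# Sketch: exact-hash blocklists vs a toy 8x8 "average hash".
# Illustrative only; not any platform's real re-upload detection.
import hashlib
import numpy as np
from PIL import Image

def average_hash(img):
    """64-bit perceptual hash: shrink to 8x8 greyscale, threshold at the mean."""
    pixels = np.asarray(img.convert("L").resize((8, 8)), dtype=np.float32)
    return int("".join("1" if b else "0" for b in (pixels > pixels.mean()).flat), 2)

def hamming(a, b):
    """Number of differing bits between two 64-bit hashes."""
    return bin(a ^ b).count("1")

# A stand-in "video frame": a simple horizontal gradient image.
gradient = np.tile(np.linspace(0, 255, 256, dtype=np.uint8), (256, 1))
original = Image.fromarray(np.stack([gradient] * 3, axis=-1))

# The attacker's "new" upload: the same frame with one pixel changed.
tweaked = original.copy()
tweaked.putpixel((0, 0), (1, 2, 3))

# Cryptographic hashes differ entirely, so an exact blocklist misses it...
print(hashlib.sha256(original.tobytes()).hexdigest()[:16])
print(hashlib.sha256(tweaked.tobytes()).hexdigest()[:16])

# ...but the perceptual hashes stay within a tiny Hamming distance.
print(hamming(average_hash(original), average_hash(tweaked)))  # likely 0
```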

In a report from the NYU Stern Center for Business and Human Rights, the researchers highlighted the various ways disinformation could be used to influence democratic processes. One method is for deepfake videos to be used during elections to “portray candidates saying and doing things they never said or did”.

The report also predicts that Iran and China will join Russia as major sources of disinformation in Western democracies and that for-profit firms based in the US and abroad will be hired to generate disinformation. It transpired in May that French and German YouTubers, bloggers, and influencers were offered cash by a supposedly UK-based PR agency with Russian connections to falsely tell their followers the Pfizer/BioNTech vaccine has a high death rate. Influencers were asked to tell their subscribers that “the mainstream media ignores this theme”, which I’m sure you’ve since heard from other people.

While recognising the challenges, the likes of Facebook, YouTube, and Twitter should have the resources at their disposal to be doing a much better job at countering misleading content than they are. Some leniency can be given for deepfakes as a relatively emerging threat but some things are unforgivable at this point.

Take, for example, one video that has been making the rounds.

Sickeningly, it is a real and unmanipulated video. However, it’s also from ~2001. Despite many removals, the social networks continue to allow it to be reposted with claims that it is new footage, without any warning that it’s old and has been flagged as misleading.

While it’s difficult to put much faith in the Taliban’s claims that they’ll treat women and children much better than their barbaric history suggests, it’s always important for facts and genuine material to be separated from known fiction and misrepresented content no matter the issue or personal views. The networks are clearly aware of the problematic content and continue to allow it to be spread—often entirely unhindered.

An image of CNN correspondent Omar Jimenez standing in front of a helicopter taking off in Afghanistan alongside the news caption “Violent but mostly peaceful transfer of power” was posted to various social networks over the weekend. Reuters and PolitiFact both fact-checked the image and concluded that it had been digitally altered.

The image of Jimenez was taken from his 2020 coverage of protests in Kenosha, Wisconsin following a police shooting alongside the caption “Fiery but mostly peaceful protests after police shooting” that was criticised by some conservatives. The doctored image is clearly intended to be satire but the comments suggest many people believed it to be true.

On Facebook, to its credit, the image has now been labelled as an “Altered photo” and clearly states that “Independent fact-checkers say this information could mislead people”. On Twitter, as of writing, the image is still circulating without any label. The caption is also being used as a title in a YouTube video with some different footage, but the platform also hasn’t labelled it and claims that it doesn’t violate its rules.

Social media platforms can’t become thought police, but where algorithms have detected manipulated content – and/or there is clear evidence of even real material being used for misleading purposes – it should be indisputable that action needs to be taken to support fair discussion and debate around genuine information.

Not enough is currently being done, and we appear doomed to the same socially-damaging failings during every pivotal event for the foreseeable future unless that changes.

(Photo by Adem AY on Unsplash)


You can now buy AI technologies from TikTok
From the company’s owner, not your favourite TikTok influencer.

Behind every successful TikTok video is a bunch of clever algorithms helping to make it a viral sensation. The company’s owner, ByteDance, launched a new division last month called BytePlus which sells TikTok’s AI technologies.

Up for grabs is the recommendation algorithm behind the ForYou feed, computer vision tech, automatic speech-to-text and text-to-speech, data analysis tools, and more.

A look at the division’s website shows that it’s already generated some interest from some major players including WeBuy, GOAT, and Wego.

Wego is one of the provided case studies and claims to have improved the relevance of its search results by using BytePlus Recommend’s machine learning algorithm. The company reportedly increased its conversions per user by 40 percent.

The battle-tested recommendation engine will probably generate the most interest of all the current offerings from BytePlus.

On TikTok, by (somewhat creepily) keeping tabs on just about everything you do on the platform – including the videos you like or comment on, the hashtags you use, your device type, and location – the Recommend algorithm behind ForYou can make some scarily accurate assumptions.

There have been dozens, if not hundreds, of cases where people claim TikTok’s algorithm knew their sexuality or certain mental health conditions before they did and guided them towards relevant communities of people.

BytePlus will be competing against players with large resources including Microsoft, Amazon, IBM, Google, and others. Given that some governments have expressed concern that TikTok could be used by the Chinese state to collect data about their citizens and/or influence their decisions, many companies outside of China may be wary about using BytePlus’ solutions.

(Photo by Solen Feyissa on Unsplash)


F-Secure: AI-based recommendation engines are easy to manipulate
Cybersecurity giant F-Secure has warned that AI-based recommendation systems are easy to manipulate.

Recommendations often come under increased scrutiny around major elections due to concerns that bias could, in extreme cases, lead to electoral manipulation. However, the recommendations that are delivered to people day-to-day matter just as much, if not more.

Matti Aksela, VP of Artificial Intelligence at F-Secure, commented:

“As we rely more and more on AI in the future, we need to understand what we need to do to protect it from potential abuse. 

Having AI and machine learning power more and more of the services we depend on requires us to understand its security strengths and weaknesses, in addition to the benefits we can obtain, so that we can trust the results.

Secure AI is the foundation of trustworthy AI.”

Sophisticated disinformation efforts – such as those organised by Russia’s infamous “troll farms” – have spread dangerous lies around COVID-19 vaccines, immigration, and high-profile figures.

Andy Patel, Researcher at F-Secure’s Artificial Intelligence Center of Excellence, said:

“Twitter and other networks have become battlefields where different people and groups push different narratives. These include organic conversations and ads, but also messages intended to undermine and erode trust in legitimate information.

Examining how these ‘combatants’ can manipulate AI helps expose the limits of what AI can realistically do, and ideally, how it can be improved.” 

Legitimate and reliable information is needed more than ever. Scepticism is healthy, but people are beginning to either trust nothing or believe everything. Both are problematic.

According to a Pew Research Center survey from late 2020, 53 percent of Americans get their news from social media. Younger respondents, aged 18-29, reported that social media is their main source of news.

No person or media outlet gets everything right, but a history of credibility must be taken into account—which tools such as NewsGuard help with. However, almost all mainstream media outlets have at least more credibility than a random social media user who may or may not even be who they claim to be.

In 2018, an investigation found that Twitter posts containing falsehoods are 70 percent more likely to be reshared. The ripple effect created by this resharing without fact-checking is why disinformation can spread so far within minutes. For some topics, like COVID-19 vaccines, Facebook has at least started prompting users to consider whether the information is accurate before they share it.

Patel trained collaborative filtering models (a type of machine learning used to encode similarities between users and content based on previous interactions) using data collected from Twitter for use in recommendation systems. As part of his experiments, Patel “poisoned” the data using additional retweets to retrain the model and see how the recommendations changed.

The findings showed how even a very small number of retweets could manipulate the recommendation engine into promoting accounts whose content was shared through the injected retweets.

“We performed tests against simplified models to learn more about how the real attacks might actually work,” said Patel.

“I think social media platforms are already facing attacks that are similar to the ones demonstrated in this research, but it’s hard for these organisations to be certain this is what’s happening because they’ll only see the result, not how it works.”

Patel’s research can be recreated using the code and datasets hosted on GitHub here.
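
For a flavour of what such a poisoning experiment looks like, here is a heavily simplified sketch with synthetic data – not Patel’s actual code, which is linked above – that factorises a user-tweet “retweet matrix”, injects a handful of fake retweets of a target account, retrains, and compares how highly the target ranks in users’ recommendations:

```python
# Toy poisoning experiment against a collaborative filtering recommender.
# Synthetic data and a deliberately simple model, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 50, 20, 8

def train(R, epochs=200, lr=0.05, reg=0.01):
    """Factorise the interaction matrix R (users x items) with plain SGD."""
    U = rng.normal(scale=0.1, size=(n_users, k))
    V = rng.normal(scale=0.1, size=(n_items, k))
    rows, cols = R.nonzero()
    for _ in range(epochs):
        for u, i in zip(rows, cols):
            err = R[u, i] - U[u] @ V[i]
            U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * U[u] - reg * V[i])
    return U, V

def mean_rank(U, V, item):
    """Average position of `item` in every user's best-first recommendations."""
    order = (U @ V.T).argsort(axis=1)[:, ::-1]
    return float((order == item).argmax(axis=1).mean())

# Organic behaviour: each user retweets three random accounts.
R = np.zeros((n_users, n_items))
for u in range(n_users):
    R[u, rng.choice(n_items, size=3, replace=False)] = 1.0

target = 19  # the account the attacker wants the system to promote
U, V = train(R)
print("mean rank before poisoning:", mean_rank(U, V, target))

# Poisoning: five controlled accounts inject retweets of the target,
# linking it into existing taste clusters.
poisoned = R.copy()
poisoned[rng.choice(n_users, size=5, replace=False), target] = 1.0
U2, V2 = train(poisoned)
print("mean rank after poisoning:", mean_rank(U2, V2, target))  # lower = promoted
```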

(Photo by Charles Deluvio on Unsplash)


Opinion: How AI can protect users in the online world
With more than 74 percent of Gen Z spending their free time online – averaging around 10 hours per day – it’s safe to say their online and offline worlds are becoming entwined. With increased social media usage now the norm and all of us living our lives online a little bit more, we must look for ways to mitigate risks, protect our safety and filter out communications that are causing concern. Step forward, Artificial Intelligence (AI) – advanced machine learning technology that plays an important role in modern life and is fundamental in how today’s social networks function. 

With just one click, AI tools such as chatbots, algorithms, and auto-suggestions impact what you see on your screen and how often you see it, creating a customised feed that has completely changed the way we interact on these platforms. By analysing our behaviours, deep learning tools can determine habits, likes, and dislikes, and only display material they anticipate you will enjoy. Human intelligence combined with these deep learning systems not only makes scrolling our feeds feel more personalised but also provides a crucial and effective way to monitor for, and quickly react to, harmful and threatening behaviours we are exposed to online, which can have damaging consequences in the long term.

The importance of AI in making social platforms safer 

The lack of parental control on most social networks means they can be toxic environments, and the number of users that are unknown to you on these platforms carries a large degree of risk. The reality is that teens today have constant access to the internet, yet most lack parental involvement in their digital lives. Many children face day-to-day challenges online, having seen or experienced cyberbullying along with other serious threats such as radicalisation, child exploitation, and the rise of pro-suicide chat rooms, to name a few, and all of this activity goes on unsupervised by parents and guardians.

AI exists to improve people’s lives, yet there has always been a fear that these ‘robots’ will begin to replace humans, that classic ‘battle’ between man and machine. Instead, we must be willing to tap in and embrace its possibilities – cybersecurity is one of the greatest challenges of our time and by harnessing the power of AI we can begin to fight back against actions that have harmful consequences and reduce online risk.

Advanced safety features

AI has proven to be an effective weapon in the fight against online harassment and the spreading of harmful content, and these deep learning tools are now playing an important role in our society, improving security in both our virtual and real worlds. AI can be leveraged to moderate content that is uploaded to social platforms as well as to monitor interactions between users – something that would not be possible if done manually due to sheer volume.

At Yubo we use a form of AI called neural network learning, Yoti Age Scan, to accurately estimate a user’s age on accounts where there are suspicions or doubts – our users must be 13 to sign up and there are separate adult accounts for over-18s. Flagged accounts are reviewed within seconds and users must verify their age and identity before they can continue using the platform. It is just one vital step we are taking to protect young people online.

With over 100 million hours of video and 350 million photos uploaded on Facebook alone every day, algorithms are programmed to sift through mind-boggling amounts of content and delete both the posts and the users when content is harmful and does not comply with the platform standards. Algorithms are constantly developing and learning and are able to recognise duplicate posts, understand the context of scenes in videos, and even perform sentiment analysis – recognising tones such as anger or sarcasm. If a post cannot be identified, it will be flagged for human review. Using AI to review the majority of online activity shields human moderators from disturbing content that could otherwise lead to mental health issues.

AI also encompasses natural language processing (NLP) tools that monitor interactions between users on social networks and identify inappropriate messages sent amongst underage and vulnerable users. In practice, most harmful content is generated by a minority of users, so AI techniques can be used to identify malicious users and prioritise their content for review. Machine learning enables these systems to find patterns in behaviours and conversations that are invisible to humans and can suggest new categories for further investigation. With its advanced analytical capabilities, AI can also automate the verification of information and the validation of a post’s authenticity to eliminate the spread of misinformation and misleading content.

Unleashing the power of AI for education 

Young people need a safe and stimulating environment when they are online. AI can be used to proactively educate users about responsible online behaviour through real-time alerts and blockers. At Yubo, where our user base is made up entirely of Gen Zers, we use a combination of sophisticated AI technology and human interaction to monitor users’ behaviour.

Our safety features prevent the sharing of personal information or inappropriate messages by intervening in real time. For example, if a user is about to share sensitive information, such as a personal number, address, or even an inappropriate image, they’ll receive a pop-up from Yubo highlighting the implications that could arise from sharing this information. The user will then have to confirm they want to proceed before they are allowed to do so. Additionally, if users attempt to share revealing images or an inappropriate request, Yubo will block that content from being shared with the intended recipient before they can hit send.

We are actively educating our users not only about the risks associated with sharing personal information but also prompting them to rethink their actions before participating in activities that could have negative consequences for themselves or others. We are committed to providing a safe place for Gen Z to connect and socialise. We know our user base is of an age where, if we can educate them around online dangers and best practices now, we can mould their behaviours in a positive way for the future.
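
As a heavily simplified sketch of what such a real-time intervention might look like – the patterns below are illustrative assumptions, not Yubo’s actual detection logic – an outgoing message can be screened for personal details before it is sent:

```python
# Minimal sketch of pre-send screening for personal information.
# The regexes are illustrative only, not Yubo's actual detection logic.
import re

PII_PATTERNS = {
    "phone number": re.compile(r"\+?\d(?:[\s-]?\d){8,13}"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "street address": re.compile(
        r"\b\d+\s+\w+\s+(?:street|st|road|rd|avenue|ave)\b", re.IGNORECASE
    ),
}

def screen_message(text):
    """Return the kinds of personal information detected in a message."""
    return [kind for kind, pattern in PII_PATTERNS.items() if pattern.search(text)]

message = "sure, text me on +44 7700 900123"
found = screen_message(message)
if found:
    # In production this would trigger the warning pop-up; here we just print.
    print(f"Warning: your message appears to contain a {found[0]}. Send anyway?")
```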

Applying AI tools for social good

Social media, when used safely, is a powerful tool that enables people to collaborate and build connections, encourages innovation, and helps to raise awareness about important societal issues, along with an untold number of other positives. With so much importance placed on these digital worlds, it’s imperative that users are both educated and protected so they can navigate these platforms and reap the benefits in the most responsible way. We are already seeing the positive impact AI technology is having on social networks; it is vital in analysing and monitoring the expansive amounts of data and the users that are active on these platforms every day.

At Yubo, we know it’s our duty to protect our users and have implemented sophisticated AI technology to help mitigate risks. We will continue to utilise AI to shield our users from harmful interactions and content, as well as maintaining an ongoing dialogue about the consequences of inappropriate behaviour. AI tools present unlimited potential for making social spaces safer, and we need to harness their power to increase wellbeing for us all.

(Photo by Prateek Katyal on Unsplash)


Facebook uses AI to help people support each other
Facebook has deployed an AI system which matches people needing support with local heroes offering it.

“United we stand, divided we fall” is a clichéd saying—but tackling a pandemic is a collective effort. While we’ve all seen people taking selfish actions, they’ve been more than balanced out by those helping to support their communities.

Facebook has been its usual blessing and curse during the pandemic. On the one hand, it’s helped people to stay connected and organise community efforts. On the other, it’s allowed dangerous misinformation to spread like wildfire, fuelling the rise of anti-vaccine and anti-mask movements.

The social media giant is hoping that AI can help to swing the balance more towards Facebook having an overall benefit within our communities.

If a person has posted asking for help because they’re unable to leave the house, Facebook’s AI may automatically match that person with someone local who has recently said they’re willing to get things like groceries or prescriptions for people.

In a blog post, Facebook explains how it built its matching algorithm:

We built and deployed this matching algorithm using XLM-R, our open-source, cross-lingual understanding model that extends our work on XLM and RoBERTa, to produce a relevance score that ranks how closely a request for help matches the current offers for help in that community.

The system then integrates the posts’ ranking score into a set of models trained on PyText, our open-source framework for natural language processing.
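
As a rough approximation of the matching step described above – a toy sketch in which TF-IDF similarity stands in for Facebook’s cross-lingual XLM-R encoder, with invented example posts – offers can be ranked against a request by embedding both and scoring their similarity:

```python
# Toy request-offer matcher. TF-IDF stands in for Facebook's XLM-R
# encoder here; the posts are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

offers = [
    "Happy to pick up groceries for anyone self-isolating in Camden",
    "I can collect a prescription from the pharmacy on weekday evenings",
    "Offering free video tutoring for primary school maths",
]
request = "Can anyone collect my prescription from the pharmacy? I can't leave the house"

# Embed all posts in the same vector space, then score each offer
# against the request to produce a relevance ranking.
vectoriser = TfidfVectorizer(stop_words="english")
matrix = vectoriser.fit_transform(offers + [request])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

for score, offer in sorted(zip(scores, offers), reverse=True):
    print(f"{score:.2f}  {offer}")
```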

It’s a great idea that could go a long way towards making a real, positive impact on people in difficult times. Hopefully, we’ll see more of such efforts from Facebook to improve our communities.

(Photo by Bohdan Pyryn on Unsplash)

