FT and OpenAI ink partnership amid web scraping criticism
AI News | 29 April 2024
https://www.artificialintelligence-news.com/2024/04/29/ft-and-openai-ink-partnership-web-scraping-criticism/

The Financial Times and OpenAI have announced a strategic partnership and licensing agreement that will integrate the newspaper’s journalism into ChatGPT and collaborate on developing new AI products for FT readers. However, just because OpenAI is cosying up to publishers doesn’t mean it’s not still scraping information from the web without permission.

Through the deal, ChatGPT users will be able to see selected attributed summaries, quotes, and rich links to FT journalism in response to relevant queries. Additionally, the FT became a customer of ChatGPT Enterprise earlier this year, providing access for all employees to familiarise themselves with the technology and benefit from its potential productivity gains.

“This is an important agreement in a number of respects,” said John Ridding, FT Group CEO. “It recognises the value of our award-winning journalism and will give us early insights into how content is surfaced through AI.”

In 2023, technology companies faced numerous lawsuits and widespread criticism for allegedly using copyrighted material from artists and publishers to train their AI models without proper authorisation.

OpenAI, in particular, drew significant backlash for training its GPT models on data obtained from the internet without obtaining consent from the respective content creators. This issue escalated to the point where The New York Times filed a lawsuit against OpenAI and Microsoft last year, accusing them of copyright infringement.

While emphasising the FT’s commitment to human journalism, Ridding noted the agreement would broaden the reach of its newsroom’s work while deepening the understanding of reader interests.

“Apart from the benefits to the FT, there are broader implications for the industry. It’s right, of course, that AI platforms pay publishers for the use of their material. OpenAI understands the importance of transparency, attribution, and compensation – all essential for us,” explained Ridding.

Earlier this month, The New York Times reported that OpenAI was using transcripts of YouTube videos to train its AI models. According to the publication, this practice violates copyright law, as content creators who upload videos to YouTube retain copyright ownership of the material they produce.

However, OpenAI maintains that its use of online content falls under the fair use doctrine. The company, along with numerous other technology firms, argues that their large language models (LLMs) transform the information gathered from the internet into an entirely new and distinct creation.

In January, OpenAI asserted to a UK parliamentary committee that it would be “impossible” to develop today’s leading AI systems without using vast amounts of copyrighted data.

Brad Lightcap, COO of OpenAI, expressed his enthusiasm about the FT partnership: “Our partnership and ongoing dialogue with the FT is about finding creative and productive ways for AI to empower news organisations and journalists, and enrich the ChatGPT experience with real-time, world-class journalism for millions of people around the world.”

This agreement between OpenAI and the Financial Times is the most recent in a series of new collaborations that OpenAI has forged with major news publishers worldwide.

While the financial details of these contracts were not revealed, the recent partnerships will enable OpenAI to continue training its algorithms on web content, with the crucial difference that it now has the necessary permissions to do so.

Ridding said the FT values “the opportunity to be inside the development loop as people discover content in new ways.” He acknowledged the potential for significant advancements and challenges with transformative technologies like AI but emphasised, “what’s never possible is turning back time.”

“It’s important for us to represent quality journalism as these products take shape – with the appropriate safeguards in place to protect the FT’s content and brand,” Ridding added.

The FT has embraced new technologies throughout its history. “We’ll continue to operate with both curiosity and vigilance as we navigate this next wave of change,” Ridding concluded.

(Photo by Utsav Srestha)

See also: OpenAI faces complaint over fictional outputs

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

James Cameron warns of the dangers of deepfakes
AI News | 24 January 2022
https://www.artificialintelligence-news.com/2022/01/24/james-cameron-warns-of-the-dangers-of-deepfakes/

Legendary director James Cameron has warned of the dangers that deepfakes pose to society.

Deepfakes leverage machine learning and AI techniques to convincingly manipulate or generate visual and audio content. Their high potential to deceive makes them a powerful tool for spreading disinformation, committing fraud, trolling, and more.

“Every time we improve these tools, we’re actually in a sense building a toolset to create fake media — and we’re seeing it happening now,” said Cameron in a BBC video interview.

“Right now the tools are — the people just playing around on apps aren’t that great. But over time, those limitations will go away. Things that you see and fully believe you’re seeing could be faked.”

Have you ever said “I’ll believe it when I see it with my own eyes,” or something similar? I certainly have. As humans, we’re subconsciously trained to believe what we can see (unless it’s quite obviously faked).

The problem is amplified by today’s fast news cycle. It’s a well-known problem that many articles are shared on the strength of their headline alone before readers move on to the next story. Few people will stop to analyse images and videos for small imperfections.

Often, stories are shared with a reaction to the headline alone, without reading the article for full context. This can create a knock-on effect: people see their contacts’ reactions to the headline, feel they don’t need any additional context, and simply join in whatever emotional response the headline was designed to invoke (generally outrage).

“News cycles happen so fast, and people respond so quickly, you could have a major incident take place between the interval between when the deepfake drops and when it’s exposed as a fake,” says Cameron.

“We’ve seen situations — you know, Arab Spring being a classic example — where with social media, the uprising was practically overnight.”

It’s a difficult problem to tackle as it is. We’ve all seen the amount of disinformation around things such as the COVID-19 vaccines. However, an article posted with convincing deepfake media will be almost impossible to stop from being posted and/or shared widely.

AI tools for spotting the increasingly small differences between real and manipulated media will be key to preventing deepfakes from ever being posted. However, researchers have found that current tools can easily be deceived.
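Detection tools of this kind often look for the statistical traces that generation pipelines leave behind. As a rough illustration only (not any specific product’s method), the sketch below implements one common heuristic: measuring how much of an image’s spectral energy sits in high spatial frequencies, which the smoothing and upsampling steps in many generators tend to suppress. All names and thresholds here are hypothetical.

```python
import numpy as np

def high_freq_energy_ratio(image: np.ndarray) -> float:
    """Fraction of an image's spectral energy in the high-frequency band.

    Synthetic or heavily resampled media often shows an anomalous
    high-frequency share compared with a raw camera capture.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h // 2, xx - w // 2)
    high_band = radius > min(h, w) / 4  # outer ring = high frequencies
    return float(spectrum[high_band].sum() / spectrum.sum())

def box_blur(image: np.ndarray) -> np.ndarray:
    """3x3 box blur: a crude stand-in for a generator's smoothing."""
    padded = np.pad(image, 1, mode="edge")
    h, w = image.shape
    return sum(
        padded[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)
    ) / 9.0

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))   # noisy "camera" image: rich in high frequencies
smoothed = box_blur(sharp)     # smoothing suppresses the high-frequency band
```

In practice, the smoothed image scores a markedly lower ratio than the sharp one. Real detectors combine many such signals with learned classifiers, which is precisely why adversarial examples can deceive them.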

Technologies like distributed ledgers, which can verify that images and videos are original and authentic, could also help give audiences confidence that the media they’re consuming isn’t a manipulated version, and that they really can trust their own eyes.
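The core of such a provenance scheme can be sketched in a few lines: register a cryptographic hash of the media at publication time, then check any copy against it. This is a minimal illustration only; the in-memory dict stands in for an append-only distributed ledger, and all names are hypothetical.

```python
import hashlib

def register(ledger: dict, media_id: str, media_bytes: bytes) -> str:
    """Record a content hash at publication time."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    ledger[media_id] = digest
    return digest

def verify(ledger: dict, media_id: str, media_bytes: bytes) -> bool:
    """True only if the bytes match the originally registered hash."""
    return ledger.get(media_id) == hashlib.sha256(media_bytes).hexdigest()

ledger: dict = {}
original = b"raw video bytes from the newsroom camera"
register(ledger, "clip-001", original)

# Any manipulation, however small, changes the hash and fails verification.
tampered = original.replace(b"newsroom", b"deepfake")
```

A real deployment would anchor the hashes in signed, tamper-evident records rather than a local dict, but the verification logic an audience-facing tool performs is essentially this comparison.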

In the meantime, Cameron suggests applying Occam’s razor, the problem-solving principle that can be summarised as: the simplest explanation is usually the likeliest.

“Conspiracy theories are all too complicated. People aren’t that good, human systems aren’t that good, people can’t keep a secret to save their lives, and most people in positions of power are bumbling stooges.

“The fact that we think that they could realistically pull off these — these complex plots? I don’t buy any of that crap! Bill Gates is not really trying to microchip you with the flu vaccine!”

However, Cameron admits to a broader scepticism about new technology.

“Every single advancement in technology that’s ever been created has been weaponised. I say this to AI scientists all the time, and they go, ‘No, no, no, we’ve got this under control.’ You know, ‘We just give the AIs the right goals…’

“So who’s deciding what those goals are? The people that put up the money for the research, right? Which are all either big business or defense. So you’re going to teach these new sentient entities to be either greedy or murderous.”

Of course, Skynet gets an honorary mention.

“If Skynet wanted to take over and wipe us out, it would actually look a lot like what’s going on right now. It’s not going to have to — like, wipe out the entire, you know, biosphere and environment with nuclear weapons to do it. It’s going to be so much easier and less energy required to just turn our minds against ourselves.

“All Skynet would have to do is just deepfake a bunch of people, pit them against each other, stir up a lot of foment, and just run this giant deepfake on humanity.”

Russia’s infamous state-sponsored “troll farms” are one of the largest sources of disinformation and are used to conduct online influence campaigns.

A January 2017 report issued by the United States Intelligence Community – Assessing Russian Activities and Intentions in Recent US Elections (PDF) – described the ‘Internet Research Agency’ as one such troll farm.

“The likely financier of the so-called Internet Research Agency of professional trolls located in Saint Petersburg is a close ally of [Vladimir] Putin with ties to Russian intelligence,” the report stated, adding that its trolls “previously were devoted to supporting Russian actions in Ukraine.”

Western officials have warned that Russia may use disinformation campaigns – including claims of an attack from Ukrainian troops – to rally support and justify an invasion of Ukraine. It’s not outside the realms of possibility that manipulated content will play a role, and it may already be too late to counter the first large-scale disaster supported by deepfakes.

Related: University College London: Deepfakes are the ‘most serious’ AI crime threat

(Image Credit: Gage Skidmore. Image cropped. CC BY-SA 3.0 license)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.
