copyright Archives - AI News
https://www.artificialintelligence-news.com/tag/copyright/

Coalition of news publishers sue Microsoft and OpenAI
Wed, 01 May 2024
https://www.artificialintelligence-news.com/2024/05/01/coalition-news-publishers-sue-microsoft-openai/

The post Coalition of news publishers sue Microsoft and OpenAI appeared first on AI News.

A coalition of major news publishers has filed a lawsuit against Microsoft and OpenAI, accusing the tech giants of unlawfully using copyrighted articles to train their generative AI models without permission or payment.

First reported by The Verge, the group of eight publications owned by Alden Global Capital (AGC) – including the Chicago Tribune, New York Daily News, and Orlando Sentinel – allege the companies have purloined “millions” of their articles without permission and without payment “to fuel the commercialisation of their generative artificial intelligence products, including ChatGPT and Copilot.”

The lawsuit is the latest legal action taken against Microsoft and OpenAI over their alleged misuse of copyrighted content to build large language models (LLMs) that power AI technologies like ChatGPT. In the complaint, the AGC publications claim the companies’ chatbots can reproduce their articles verbatim shortly after publication, without providing prominent links back to the original sources.

“This lawsuit is not a battle between new technology and old technology. It is not a battle between a thriving industry and an industry in transition. It is most surely not a battle to resolve the phalanx of social, political, moral, and economic issues that GenAI raises,” the complaint reads.

“This lawsuit is about how Microsoft and OpenAI are not entitled to use copyrighted newspaper content to build their new trillion-dollar enterprises without paying for that content.”

The plaintiffs also accuse the AI models of “hallucinations” that falsely attribute inaccurate reporting to their publications. They reference OpenAI’s previous admission that it would be “impossible” to train today’s leading AI models without using copyrighted materials.

The allegations echo those made by The New York Times in a separate lawsuit filed last year. The Times claimed Microsoft and OpenAI used almost a century’s worth of copyrighted content to allow their AI to mimic its expressive style without a licensing agreement.

In seeking to dismiss key parts of the Times’ lawsuit, Microsoft accused the paper of “doomsday futurology” by suggesting generative AI could threaten independent journalism.

The AGC publications argue that OpenAI – now valued at around $90 billion following its restructuring around a capped-profit arm – and Microsoft, which has seen hundreds of billions of dollars added to its market value from ChatGPT and Copilot, are profiting from the unauthorised use of copyrighted works.

The news publishers are seeking unspecified damages and an order for Microsoft and OpenAI to destroy any GPT and LLM models utilising their copyrighted content.

Earlier this week, OpenAI signed a licensing partnership with The Financial Times to lawfully integrate the newspaper’s journalism. However, the latest lawsuit from AGC highlights the growing tensions between tech companies developing generative AI and content creators concerned about the unchecked use of their works to train profitable AI systems.

(Photo by Wesley Tingey)

See also: OpenAI faces complaint over fictional outputs

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

FT and OpenAI ink partnership amid web scraping criticism
Mon, 29 Apr 2024
https://www.artificialintelligence-news.com/2024/04/29/ft-and-openai-ink-partnership-web-scraping-criticism/

The post FT and OpenAI ink partnership amid web scraping criticism appeared first on AI News.

The Financial Times and OpenAI have announced a strategic partnership and licensing agreement that will integrate the newspaper’s journalism into ChatGPT and see the two organisations collaborate on developing new AI products for FT readers. However, just because OpenAI is cosying up to publishers doesn’t mean it’s not still scraping information from the web without permission.

Through the deal, ChatGPT users will be able to see selected attributed summaries, quotes, and rich links to FT journalism in response to relevant queries. Additionally, the FT became a customer of ChatGPT Enterprise earlier this year, providing access for all employees to familiarise themselves with the technology and benefit from its potential productivity gains.

“This is an important agreement in a number of respects,” said John Ridding, FT Group CEO. “It recognises the value of our award-winning journalism and will give us early insights into how content is surfaced through AI.”

In 2023, technology companies faced numerous lawsuits and widespread criticism for allegedly using copyrighted material from artists and publishers to train their AI models without proper authorisation.

OpenAI, in particular, drew significant backlash for training its GPT models on data obtained from the internet without obtaining consent from the respective content creators. This issue escalated to the point where The New York Times filed a lawsuit against OpenAI and Microsoft last year, accusing them of copyright infringement.

While emphasising the FT’s commitment to human journalism, Ridding noted the agreement would broaden the reach of its newsroom’s work while deepening the understanding of reader interests.

“Apart from the benefits to the FT, there are broader implications for the industry. It’s right, of course, that AI platforms pay publishers for the use of their material. OpenAI understands the importance of transparency, attribution, and compensation – all essential for us,” explained Ridding.

Earlier this month, The New York Times reported that OpenAI was utilising transcripts of YouTube videos to train its AI models. According to the publication, this practice violates copyright laws, as content creators who upload videos to YouTube retain the copyright ownership of the material they produce.

However, OpenAI maintains that its use of online content falls under the fair use doctrine. The company, along with numerous other technology firms, argues that their large language models (LLMs) transform the information gathered from the internet into an entirely new and distinct creation.

In January, OpenAI asserted to a UK parliamentary committee that it would be “impossible” to develop today’s leading AI systems without using vast amounts of copyrighted data.

Brad Lightcap, COO of OpenAI, expressed his enthusiasm about the FT partnership: “Our partnership and ongoing dialogue with the FT is about finding creative and productive ways for AI to empower news organisations and journalists, and enrich the ChatGPT experience with real-time, world-class journalism for millions of people around the world.”

This agreement between OpenAI and the Financial Times is the most recent in a series of new collaborations that OpenAI has forged with major news publishers worldwide.

While the financial details of these contracts were not revealed, OpenAI’s recent partnerships with publishers will enable the company to continue training its algorithms on web content – with the crucial difference that it now has the necessary permissions to do so.

Ridding said the FT values “the opportunity to be inside the development loop as people discover content in new ways.” He acknowledged the potential for significant advancements and challenges with transformative technologies like AI but emphasised, “what’s never possible is turning back time.”

“It’s important for us to represent quality journalism as these products take shape – with the appropriate safeguards in place to protect the FT’s content and brand,” Ridding added.

The FT has embraced new technologies throughout its history. “We’ll continue to operate with both curiosity and vigilance as we navigate this next wave of change,” Ridding concluded.

(Photo by Utsav Srestha)

See also: OpenAI faces complaint over fictional outputs

Nightshade ‘poisons’ AI models to fight copyright theft
Tue, 24 Oct 2023
https://www.artificialintelligence-news.com/2023/10/24/nightshade-poisons-ai-models-fight-copyright-theft/

The post Nightshade ‘poisons’ AI models to fight copyright theft appeared first on AI News.

University of Chicago researchers have unveiled Nightshade, a tool designed to disrupt AI models attempting to learn from artistic imagery.

The tool – still in its developmental phase – allows artists to protect their work by subtly altering pixels in images, rendering them imperceptibly different to the human eye but confusing to AI models.

Many artists and creators have expressed concern over the use of their work in training commercial AI products without their consent.

AI models rely on vast amounts of multimedia data – including written material and images, often scraped from the web – to function effectively. Nightshade offers a potential solution by sabotaging this data.

When integrated into digital artwork, Nightshade misleads AI models, causing them to misidentify objects and scenes.

For instance, Nightshade transformed images of dogs into data that appeared to AI models as cats. After exposure to a mere 100 poison samples, the AI reliably generated a cat when asked for a dog—demonstrating the tool’s effectiveness.

This technique not only confuses AI models but also challenges the fundamental way in which generative AI operates. By exploiting the clustering of similar words and ideas in AI models, Nightshade can manipulate responses to specific prompts and further undermine the accuracy of AI-generated content.
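The core idea – that per-pixel changes bounded to a tiny budget can flip what a model perceives – can be shown with a toy, FGSM-style sketch. To be clear, this is not Nightshade’s actual algorithm, which targets text-to-image training and is far more sophisticated; the linear “model”, the labels, and every number below are invented purely for illustration:

```python
import numpy as np

# Toy sketch of bounded pixel "poisoning": tiny, near-invisible per-pixel
# changes that flip a model's decision. NOT the Nightshade algorithm.

rng = np.random.default_rng(0)
n = 64 * 64                       # a flattened 64x64 "image"

w = rng.normal(size=n)            # the "model": score = w @ pixels
                                  # score > 0 -> "dog", else -> "cat"

def label(pixels):
    return "dog" if float(w @ pixels) > 0 else "cat"

# Build an image whose clean score is exactly +0.5, i.e. labelled "dog".
noise = rng.normal(size=n) * 0.1
noise -= w * (w @ noise) / (w @ w)   # remove the component along w
image = noise + w * (0.5 / (w @ w))

# Move every pixel by at most eps against the score gradient
# (for a linear model, the gradient is simply w).
eps = 0.01
poisoned = image - eps * np.sign(w)

print(label(image), "->", label(poisoned))    # decision flips
print(float(np.abs(poisoned - image).max()))  # change per pixel <= eps
```

A real attack replaces the linear scorer with gradients through a neural feature extractor, but the principle is the same: the perturbation budget `eps` keeps the image visually unchanged while the model’s output shifts dramatically.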

Developed by computer science professor Ben Zhao and his team, Nightshade is an extension of their prior product, Glaze, which cloaks digital artwork and distorts pixels to baffle AI models regarding artistic style.

While the potential for misuse of Nightshade is acknowledged, the researchers’ primary objective is to shift the balance of power from AI companies back to artists and discourage intellectual property violations.

The introduction of Nightshade presents a major challenge to AI developers. Detecting and removing images with poisoned pixels is a complex task, given the imperceptible nature of the alterations.

If integrated into existing AI training datasets, these images necessitate removal and potential retraining of AI models, posing a substantial hurdle for companies relying on stolen or unauthorised data.

As the researchers await peer review of their work, Nightshade is a beacon of hope for artists seeking to protect their creative endeavours.

(Photo by Josie Weiss on Unsplash)

See also: UMG files landmark lawsuit against AI developer Anthropic

UMG files landmark lawsuit against AI developer Anthropic
Thu, 19 Oct 2023
https://www.artificialintelligence-news.com/2023/10/19/umg-files-landmark-lawsuit-ai-developer-anthropic/

The post UMG files landmark lawsuit against AI developer Anthropic appeared first on AI News.

Universal Music Group (UMG) has filed a lawsuit against Anthropic, the developer of Claude AI.

This landmark case represents the first major legal battle where the music industry confronts an AI developer head-on. UMG – along with several other key industry players including Concord Music Group, ABKCO, Worship Together Music, and Capitol CMG – is seeking $75 million in damages.

The lawsuit centres around the alleged unauthorised use of copyrighted music by Anthropic to train its AI models. The publishers claim that Anthropic illicitly incorporated songs from artists they represent into its AI dataset without obtaining the necessary permissions.

Legal representatives for the publishers have asserted that the action was taken to address the “systematic and widespread infringement” of copyrighted song lyrics by Anthropic.

The lawsuit, spanning 60 pages and posted online by The Hollywood Reporter, emphasises the publishers’ support for innovation and ethical AI use. However, they contend that Anthropic has violated these principles and must be held accountable under established copyright laws.

Anthropic, despite positioning itself as an AI ‘safety and research’ company, stands accused of copyright infringement without regard for the law or the creative community whose works underpin its services, according to the lawsuit.

In addition to the significant monetary damages, the publishers have demanded a jury trial. They also seek reimbursement for legal fees, the destruction of all infringing material, public disclosure of how Anthropic’s AI model was trained, and financial penalties of up to $150,000 per infringed work.

This latest lawsuit follows a string of legal battles between AI developers and creators. Each new case is worth observing to see the precedent that is set for future battles.

(Photo by Jason Rosewell on Unsplash)

See also: BSI: Closing ‘AI confidence gap’ key to unlocking benefits

GitLab: Developers view AI as ‘essential’ despite concerns
Wed, 06 Sep 2023
https://www.artificialintelligence-news.com/2023/09/06/gitlab-developers-ai-essential-despite-concerns/

The post GitLab: Developers view AI as ‘essential’ despite concerns appeared first on AI News.

A survey by GitLab has shed light on the views of developers on the landscape of AI in software development.

The report, titled ‘The State of AI in Software Development,’ presents insights from over 1,000 global senior technology executives, developers, and security and operations professionals.

The report reveals a complex relationship between enthusiasm for AI adoption and concerns about data privacy, intellectual property, and security.

“Enterprises are seeking out platforms that allow them to harness the power of AI while addressing potential privacy and security risks,” said Alexander Johnston, Research Analyst in the Data, AI & Analytics Channel at 451 Research, a part of S&P Global Market Intelligence.

While 83 percent of the survey’s respondents view AI implementation as essential to stay competitive, a significant 79 percent expressed worries about AI tools accessing sensitive information and intellectual property.

Impact on developer productivity

AI is perceived as a boon for developer productivity, with 51 percent of all respondents citing it as a key benefit of AI implementation. However, security professionals are apprehensive that AI-generated code might lead to an increase in security vulnerabilities, potentially creating more work for them.

Only seven percent of developers’ time is currently spent identifying and mitigating security vulnerabilities, compared to 11 percent allocated to testing code. This raises questions about the widening gap between developers and security professionals in the AI era.

Privacy and intellectual property concerns

The survey underscores the paramount importance of data privacy and intellectual property protection when selecting AI tools. 95 percent of senior technology executives prioritise these aspects when choosing AI solutions.

Moreover, 32 percent of respondents admitted to being “very” or “extremely” concerned about introducing AI into the software development lifecycle. Within this group, 39 percent cited worries about AI-generated code introducing security vulnerabilities, and 48 percent expressed concerns that AI-generated code may not receive the same copyright protection as code produced by humans.

AI skills gap

Despite optimism about AI’s potential, the report identifies a disconnect between organisations’ provision of AI training resources and practitioners’ satisfaction with them. 

While 75 percent of respondents stated that their organisations offer training and resources for using AI, an equivalent proportion expressed the need to seek resources independently—suggesting that the available training may be insufficient.

A striking 81 percent of respondents said they require more training to effectively utilise AI in their daily work. Furthermore, 65 percent of those planning to use AI for software development indicated that their organisations plan to hire new talent to manage AI implementation.

David DeSanto, Chief Product Officer at GitLab, said:

“According to the GitLab Global DevSecOps Report, only 25 percent of developers’ time is spent on code generation, but the data shows AI can boost productivity and collaboration in nearly 60 percent of developers’ day-to-day work.

To realise AI’s full potential, it needs to be embedded across the software development lifecycle, allowing everyone involved in delivering secure software – not just developers – to benefit from the efficiency boost.” 

While AI holds immense promise for the software development industry, GitLab’s report makes it clear that addressing cybersecurity and privacy concerns, bridging the skills gap, and fostering collaboration between developers and security professionals are pivotal to successful AI adoption.

(Photo by Luca Bravo on Unsplash)

See also: UK government outlines AI Safety Summit plans

Getty is suing Stable Diffusion’s creator for copyright infringement
Wed, 18 Jan 2023
https://www.artificialintelligence-news.com/2023/01/18/getty-suing-stable-diffusion-creator-copyright-infringement/

The post Getty is suing Stable Diffusion’s creator for copyright infringement appeared first on AI News.

Stock image service Getty Images is suing Stable Diffusion creator Stability AI over alleged copyright infringement.

Stable Diffusion is one of the most popular text-to-image tools. Unlike many of its rivals, the generative AI model can run on a local computer.

Apple is a supporter of the Stable Diffusion project and recently optimised its performance on M-powered Macs. Last month, AI News reported that M2 Macs can now generate images using Stable Diffusion in under 18 seconds.

Text-to-image generators like Stable Diffusion have come under the spotlight for potential copyright infringement. Human artists have complained their creations have been used to train the models without permission or compensation.

Getty Images has now accused Stability AI of using its content and has commenced legal proceedings.

In a statement, Getty Images wrote:

“This week Getty Images commenced legal proceedings in the High Court of Justice in London against Stability AI claiming Stability AI infringed intellectual property rights including copyright in content owned or represented by Getty Images. It is Getty Images’ position that Stability AI unlawfully copied and processed millions of images protected by copyright and the associated metadata owned or represented by Getty Images absent a license to benefit Stability AI’s commercial interests and to the detriment of the content creators.

Getty Images believes artificial intelligence has the potential to stimulate creative endeavors. Accordingly, Getty Images provided licenses to leading technology innovators for purposes related to training artificial intelligence systems in a manner that respects personal and intellectual property rights. Stability AI did not seek any such license from Getty Images and instead, we believe, chose to ignore viable licensing options and long-standing legal protections in pursuit of their stand-alone commercial interests.”

While the images used to train alternatives like DALL-E 2 haven’t been disclosed, Stability AI has been transparent about how its model is trained. However, that openness may now have landed the company in hot water.

An independent analysis by Andy Baio and Simon Willison of 12 million of the 2.3 billion images used to train Stable Diffusion found that the model was trained using images from the nonprofit Common Crawl, which scrapes billions of webpages monthly.

“Unsurprisingly, a large number came from stock image sites. 123RF was the biggest with 497k, 171k images came from Adobe Stock’s CDN at ftcdn.net, 117k from PhotoShelter, 35k images from Dreamstime, 23k from iStockPhoto, 22k from Depositphotos, 22k from Unsplash, 15k from Getty Images, 10k from VectorStock, and 10k from Shutterstock, among many others,” wrote the researchers.

Platforms with high amounts of user-generated content such as Pinterest, WordPress, Blogspot, Flickr, DeviantArt, and Tumblr were also found to be large sources of images that were scraped for training purposes.
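The kind of tally that analysis describes can be sketched in a few lines over a dataset’s URL metadata. The URLs and the `domain_counts` helper below are hypothetical stand-ins invented for this sketch, not Baio and Willison’s actual code or data:

```python
from collections import Counter
from urllib.parse import urlparse

def domain_counts(image_urls):
    """Tally how many sampled training images come from each host."""
    return Counter(urlparse(url).netloc for url in image_urls)

# Hypothetical sample rows, standing in for the real dataset metadata.
sample = [
    "https://ftcdn.net/v2/img/0001.jpg",
    "https://ftcdn.net/v2/img/0002.jpg",
    "https://media.gettyimages.com/photos/a.jpg",
    "https://i.pinimg.com/originals/b.jpg",
]

for host, count in domain_counts(sample).most_common():
    print(f"{host}: {count}")
```

Run over millions of rows, a tally like this is what surfaces the per-site counts quoted above (123RF, ftcdn.net, Getty Images, and so on).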

The concerns around the use of copyrighted content for training AI models appear to be warranted. It’s likely we’ll see a growing number of related lawsuits over the coming months and years unless a balance is found between enabling AI training and respecting the work of human creators.

In October, Shutterstock announced that it was expanding its partnership with DALL-E creator OpenAI. As part of the expanded partnership, Shutterstock will offer DALL-E images to customers.

The partnership between Shutterstock and OpenAI will see the former create frameworks that will compensate artists when their intellectual property is used and when their works have contributed to the development of AI models.

(Photo by Tingey Injury Law Firm on Unsplash)

Relevant: Adobe to begin selling AI-generated stock images

OpenAI and Microsoft hit with lawsuit over GitHub Copilot
Wed, 09 Nov 2022
https://www.artificialintelligence-news.com/2022/11/09/openai-and-microsoft-lawsuit-github-copilot/

The post OpenAI and Microsoft hit with lawsuit over GitHub Copilot appeared first on AI News.

A class-action lawsuit has been launched against OpenAI and Microsoft over GitHub Copilot.

GitHub Copilot uses technology from OpenAI to help generate code and speed up software development. Microsoft says that it is trained on “billions of lines of public code … written by others.”

Last month, developer and lawyer Matthew Butterick announced that he’d partnered with the Joseph Saveri Law Firm to investigate whether Copilot infringed on the rights of developers by scraping their code and not providing due attribution.

This could unwittingly cause serious legal problems for GitHub Copilot users.

“Copilot leaves copyleft compliance as an exercise for the user. Users likely face growing liability that only increases as Copilot improves,” wrote Bradley M. Kuhn of Software Freedom Conservancy earlier this year.

“Users currently have no methods besides serendipity and educated guesses to know whether Copilot’s output is copyrighted by someone else.”

Copilot is powered by Codex, an AI system created by OpenAI and licensed to Microsoft. Codex currently offers suggestions on how to finish a line but Microsoft has touted its ability to suggest larger blocks of code, like the entire body of a function.

Butterick and litigators from the Joseph Saveri Law Firm have now filed a class-action lawsuit against Microsoft, GitHub, and OpenAI in a US federal court in San Francisco.

In addition to violating the attribution requirements of open-source licences, the claimants allege the defendants have violated a range of other laws and policies, including the Digital Millennium Copyright Act (DMCA).

The claimants acknowledge that this is the first step in what will likely be a long journey.

In a post on the claim’s website, Butterick wrote:

“As far as we know, this is the first class-action case in the US challenging the training and output of AI systems. It will not be the last. AI systems are not exempt from the law.

Those who create and operate these systems must remain accountable. If companies like Microsoft, GitHub, and OpenAI choose to disregard the law, they should not expect that we the public will sit still.

AI needs to be fair & ethical for everyone. If it’s not, then it can never achieve its vaunted aims of elevating humanity. It will just become another way for the privileged few to profit from the work of the many.”

AI News will keep you updated on the progress of the lawsuit as it emerges.

(Photo by Conny Schneider on Unsplash)

Experts debate whether GitHub’s latest AI tool violates copyright law
Tue, 06 Jul 2021
https://www.artificialintelligence-news.com/2021/07/06/experts-debate-github-latest-ai-tool-violates-copyright-law/

The post Experts debate whether GitHub’s latest AI tool violates copyright law appeared first on AI News.

GitHub’s impressive new code-assisting AI tool called Copilot is receiving both praise and criticism.

Copilot draws context from the code that a developer is working on and can suggest entire lines or functions. The system, from OpenAI, claims to be “significantly more capable than GPT-3” in generating code and can help even veteran programmers to discover new APIs or ways to solve problems.

Critics claim the system is using copyrighted code that GitHub then plans to charge for.

Julia Reda, a researcher and former MEP, published a blog post arguing that “GitHub Copilot is not infringing your copyright”.

GitHub – and therefore its owner, Microsoft – is using the huge number of repositories it hosts with ‘copyleft’ licenses for its tool. Copyleft allows open-source software or documentation to be modified and distributed back to the community.

Reda argues in her post that clamping down on tools such as GitHub’s through tighter copyright laws would harm copyleft and the benefits it offers.

One commenter isn’t entirely convinced:

“Lots of people have demonstrated that it pretty much regurgitates code verbatim from codebases with abandon. Putting GPL code inside a neural network does not remove the license if the output is the same as the input.

A large portion of what Copilot outputs is already full of copyright/license violations, even without extensions.”

Because the code is machine-generated, Reda also claims that it cannot be determined to be ‘derivative work’ that would face the wrath of intellectual property laws.

“Copyright law has only ever applied to intellectual creations – where there is no creator, there is no work,” says Reda. “This means that machine-generated code like that of GitHub Copilot is not a work under copyright law at all, so it is not a derivative work either.”

There is, of course, also a debate over whether the increasing amounts of machine-generated work should be covered under IP laws. We’ll let you decide your own position on the matter.

(Photo by Markus Winkler on Unsplash)

Find out more about Digital Transformation Week North America, taking place on November 9-10 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.
