intellectual property Archives - AI News

Google engineer stole AI tech for Chinese firms

A former Google engineer has been charged with stealing trade secrets related to the company’s AI technology and secretly working with two Chinese firms.

Linwei Ding, a 38-year-old Chinese national, was arrested on Wednesday in Newark, California, and faces four counts of federal trade secret theft, each punishable by up to 10 years in prison.

The indictment alleges that Ding, who was hired by Google in 2019 to develop software for the company’s supercomputing data centres, began transferring sensitive trade secrets and confidential information to his personal Google Cloud account in 2021.

“Ding continued periodic uploads until May 2, 2023, by which time Ding allegedly uploaded more than 500 unique files containing confidential information,” said the US Department of Justice in a statement.

Prosecutors claim that after stealing the trade secrets, Ding was offered a chief technology officer position at a startup AI company in China and participated in investor meetings for that firm. Additionally, Ding is alleged to have founded and served as CEO of a China-based startup focused on training AI models using supercomputing chips.

“Today’s charges are the latest illustration of the lengths affiliates of companies based in the People’s Republic of China are willing to go to steal American innovation,” said FBI Director Christopher Wray.

“The theft of innovative technology and trade secrets from American companies can cost jobs and have devastating economic and national security consequences.”

If convicted on all counts, Ding faces a maximum penalty of 40 years in prison and a fine of up to $1 million.

The case underscores the ongoing tensions between the US and China over intellectual property theft and the race to dominate emerging technologies like AI.

(Photo by Towfiqu Barbhuiya on Unsplash)

See also: OpenAI: Musk wanted us to merge with Tesla or take ‘full control’

Nightshade ‘poisons’ AI models to fight copyright theft

University of Chicago researchers have unveiled Nightshade, a tool designed to disrupt AI models attempting to learn from artistic imagery.

The tool – still in its developmental phase – allows artists to protect their work by subtly altering pixels in images, rendering them imperceptibly different to the human eye but confusing to AI models.

Many artists and creators have expressed concern over the use of their work in training commercial AI products without their consent.

AI models rely on vast amounts of multimedia data – including written material and images, often scraped from the web – to function effectively. Nightshade offers a potential solution by sabotaging this data.

When integrated into digital artwork, Nightshade misleads AI models, causing them to misidentify objects and scenes.

For instance, Nightshade transformed images of dogs into data that appeared to AI models as cats. After exposure to a mere 100 poison samples, the AI reliably generated a cat when asked for a dog—demonstrating the tool’s effectiveness.

This technique not only confuses AI models but also challenges the fundamental way in which generative AI operates. By exploiting the clustering of similar words and ideas in AI models, Nightshade can manipulate responses to specific prompts and further undermine the accuracy of AI-generated content.
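
To make the mechanism concrete, below is a minimal, hypothetical sketch of the general idea behind pixel-level poisoning: nudge an image towards a different concept while keeping every change too small for a human to notice. It uses a generic PGD-style targeted perturbation against an ordinary image classifier as a stand-in – Nightshade’s actual optimisation targets text-to-image training data and is considerably more sophisticated, and the `poison_image` function and its parameters here are purely illustrative.

```python
# A toy, hypothetical sketch of pixel-level poisoning: push an image
# towards a target concept while keeping the perturbation imperceptible.
# This is a generic PGD-style attack on a classifier, NOT Nightshade's
# actual algorithm, which targets text-to-image model training.
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF
from PIL import Image

def poison_image(model, image_path, target_class, epsilon=4 / 255, steps=50):
    """Return a subtly perturbed copy of the image that a classifier
    is more likely to read as `target_class` (e.g. 'cat' for a dog photo)."""
    x = TF.to_tensor(Image.open(image_path).convert("RGB")).unsqueeze(0)
    delta = torch.zeros_like(x, requires_grad=True)
    target = torch.tensor([target_class])

    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), target)
        loss.backward()
        with torch.no_grad():
            # Step towards the target class, then clamp each pixel change
            # to +/- epsilon so the edit stays invisible to the human eye.
            delta -= (epsilon / 10) * delta.grad.sign()
            delta.clamp_(-epsilon, epsilon)
            delta.grad.zero_()

    return TF.to_pil_image((x + delta).clamp(0, 1).squeeze(0))
```

Because every per-pixel change is clamped to a tiny epsilon, a poisoned image is statistically near-identical to the clean original – which is precisely what makes detection so difficult for AI developers.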

Developed by computer science professor Ben Zhao and his team, Nightshade is an extension of their prior product, Glaze, which cloaks digital artwork and distorts pixels to baffle AI models regarding artistic style.

While the potential for misuse of Nightshade is acknowledged, the researchers’ primary objective is to shift the balance of power from AI companies back to artists and discourage intellectual property violations.

The introduction of Nightshade presents a major challenge to AI developers. Detecting and removing images with poisoned pixels is a complex task, given the imperceptible nature of the alterations.

If poisoned images have already been integrated into an existing AI training dataset, they need to be identified and removed – and the affected models potentially retrained – posing a substantial hurdle for companies relying on stolen or unauthorised data.

As the researchers await peer review of their work, Nightshade is a beacon of hope for artists seeking to protect their creative endeavours.

(Photo by Josie Weiss on Unsplash)

See also: UMG files landmark lawsuit against AI developer Anthropic

GitLab: Developers view AI as ‘essential’ despite concerns

A survey by GitLab has shed light on the views of developers on the landscape of AI in software development.

The report, titled ‘The State of AI in Software Development,’ presents insights from over 1,000 global senior technology executives, developers, and security and operations professionals.

The report reveals a complex relationship between enthusiasm for AI adoption and concerns about data privacy, intellectual property, and security.

“Enterprises are seeking out platforms that allow them to harness the power of AI while addressing potential privacy and security risks,” said Alexander Johnston, Research Analyst in the Data, AI & Analytics Channel at 451 Research, a part of S&P Global Market Intelligence.

While 83 percent of the survey’s respondents view AI implementation as essential to stay competitive, a significant 79 percent expressed worries about AI tools accessing sensitive information and intellectual property.

Impact on developer productivity

AI is perceived as a boon for developer productivity, with 51 percent of all respondents citing it as a key benefit of AI implementation. However, security professionals are apprehensive that AI-generated code might lead to an increase in security vulnerabilities, potentially creating more work for them.

Only seven percent of developers’ time is currently spent identifying and mitigating security vulnerabilities, compared to 11 percent allocated to testing code. This raises questions about the widening gap between developers and security professionals in the AI era.

Privacy and intellectual property concerns

The survey underscores the importance of data privacy and intellectual property protection when selecting AI tools: 95 percent of senior technology executives said they prioritise these aspects when choosing AI solutions.

Moreover, 32 percent of respondents admitted to being “very” or “extremely” concerned about introducing AI into the software development lifecycle. Within this group, 39 percent cited worries about AI-generated code introducing security vulnerabilities, and 48 percent expressed concerns that AI-generated code may not receive the same copyright protection as code produced by humans.

AI skills gap

Despite optimism about AI’s potential, the report identifies a disconnect between organisations’ provision of AI training resources and practitioners’ satisfaction with them. 

While 75 percent of respondents stated that their organisations offer training and resources for using AI, an equivalent proportion expressed the need to seek resources independently—suggesting that the available training may be insufficient.

A striking 81 percent of respondents said they require more training to effectively utilise AI in their daily work. Furthermore, 65 percent of those planning to use AI for software development indicated that their organisations plan to hire new talent to manage its implementation.

David DeSanto, Chief Product Officer at GitLab, said:

“According to the GitLab Global DevSecOps Report, only 25 percent of developers’ time is spent on code generation, but the data shows AI can boost productivity and collaboration in nearly 60 percent of developers’ day-to-day work.

To realise AI’s full potential, it needs to be embedded across the software development lifecycle, allowing everyone involved in delivering secure software – not just developers – to benefit from the efficiency boost.” 

While AI holds immense promise for the software development industry, GitLab’s report makes it clear that addressing cybersecurity and privacy concerns, bridging the skills gap, and fostering collaboration between developers and security professionals are pivotal to successful AI adoption.

(Photo by Luca Bravo on Unsplash)

See also: UK government outlines AI Safety Summit plans

Experts debate whether GitHub’s latest AI tool violates copyright law

GitHub’s impressive new AI code assistant, Copilot, is receiving both praise and criticism.

Copilot draws context from the code a developer is working on and can suggest entire lines or functions. The underlying model – OpenAI’s Codex – is claimed to be “significantly more capable than GPT-3” at generating code, and can help even veteran programmers to discover new APIs or ways to solve problems.
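
As a purely hypothetical illustration (not actual Copilot output), the workflow looks something like this: the developer writes a signature and docstring, and the assistant proposes a body from that context.

```python
# Hypothetical example of comment-driven completion. The developer types
# the signature and docstring; a Copilot-style assistant suggests the body
# from that context. (Illustrative only – not actual Copilot output.)
from datetime import date

def days_between(start: str, end: str) -> int:
    """Return the number of whole days between two ISO-format dates."""
    # --- suggested completion ---
    return abs((date.fromisoformat(end) - date.fromisoformat(start)).days)
```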

Critics claim the system is trained on copyrighted code that GitHub then plans to charge developers to use.

However, Julia Reda, a researcher and former MEP, published a blog post arguing that “GitHub Copilot is not infringing your copyright”.

GitHub – and therefore its owner, Microsoft – is using the huge number of repositories it hosts under ‘copyleft’ licenses to build its tool. Copyleft licenses allow open-source software or documentation to be modified and redistributed, on the condition that derivative works are shared back to the community under the same terms.

Reda argues in her post that clamping down on tools such as GitHub’s through tighter copyright laws would harm copyleft and the benefits it offers.

One commenter isn’t entirely convinced:

“Lots of people have demonstrated that it pretty much regurgitates code verbatim from codebases with abandon. Putting GPL code inside a neural network does not remove the license if the output is the same as the input.

A large portion of what Copilot outputs is already full of copyright/license violations, even without extensions.”
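
Such claims are, in principle, testable. A crude, hypothetical check for near-verbatim matches between generated output and a corpus of known licensed snippets might look like the sketch below – real provenance detection across millions of repositories is a far harder problem, and `looks_regurgitated` and its threshold are invented for illustration.

```python
# A crude, hypothetical check for near-verbatim matches between generated
# code and a corpus of known licensed snippets. Real-world provenance
# detection over millions of repositories is a much harder problem.
import difflib

def looks_regurgitated(generated: str, corpus: list[str],
                       threshold: float = 0.9) -> bool:
    """Flag output that is suspiciously similar to any known snippet."""
    return any(
        difflib.SequenceMatcher(None, generated, snippet).ratio() >= threshold
        for snippet in corpus
    )
```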

Because the code is machine-generated, Reda also claims that it cannot be deemed a ‘derivative work’ that would face the wrath of intellectual property laws.

“Copyright law has only ever applied to intellectual creations – where there is no creator, there is no work,” says Reda. “This means that machine-generated code like that of GitHub Copilot is not a work under copyright law at all, so it is not a derivative work either.”

There is, of course, also a debate over whether the increasing amounts of machine-generated work should be covered under IP laws. We’ll let you decide your own position on the matter.

(Photo by Markus Winkler on Unsplash)
