legal Archives — AI News (https://www.artificialintelligence-news.com/tag/legal/)

Musk ends OpenAI lawsuit while slamming Apple’s ChatGPT plans
Wed, 12 Jun 2024 15:45:08 +0000

Elon Musk has dropped his lawsuit against OpenAI, the company he co-founded in 2015. Court filings from the Superior Court of California reveal that Musk called off the legal action on June 11th, just a day before an informal conference was scheduled to discuss the discovery process.

Musk had initially sued OpenAI in March 2024, alleging breach of contracts, unfair business practices, and failure in fiduciary duty. He claimed that his contributions to the company were made “in exchange for and in reliance on promises that those assets were irrevocably dedicated to building AI for public benefit, with only safety as a countervailing concern.”

The lawsuit sought remedies for “breach of contract, promissory estoppel, breach of fiduciary duty, unfair business practices, and accounting,” as well as specific performance, restitution, and damages.

However, Musk’s filings to withdraw the case provided no explanation for abandoning the lawsuit. OpenAI had previously called Musk’s claims “incoherent” and argued that his inability to produce a contract made his breach claims difficult to prove, stating that documents provided by Musk “contradict his allegations as to the alleged terms of the agreement.”

The withdrawal of the lawsuit comes at a time when Musk is strongly opposing Apple’s plans to integrate ChatGPT into its operating systems.

During Apple’s keynote event announcing Apple Intelligence for iOS 18, iPadOS 18, and macOS Sequoia, Musk threatened to ban Apple devices from his companies, calling the integration “an unacceptable security violation.”

Despite assurances from Apple and OpenAI that user data would only be shared with explicit consent and that interactions would be secure, Musk questioned Apple’s ability to ensure data security, stating, “Apple has no clue what’s actually going on once they hand your data over to OpenAI. They’re selling you down the river.”

Since bringing the lawsuit against OpenAI, Musk has also created his own AI company, xAI, and secured over $6 billion in funding for his plans to advance the Grok chatbot on his social network, X.

While Musk’s reasoning for dropping the OpenAI lawsuit remains unclear, his actions suggest a potential shift in focus towards advancing his own AI endeavours while continuing to vocalise his criticism of OpenAI through social media rather than the courts.

See also: DuckDuckGo releases portal giving private access to AI models

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

OpenAI faces complaint over fictional outputs
Mon, 29 Apr 2024 08:45:02 +0000

European data protection advocacy group noyb has filed a complaint against OpenAI over the company’s inability to correct inaccurate information generated by ChatGPT. The group alleges that OpenAI’s failure to ensure the accuracy of personal data processed by the service violates the General Data Protection Regulation (GDPR) in the European Union.

“Making up false information is quite problematic in itself. But when it comes to false information about individuals, there can be serious consequences,” said Maartje de Graaf, Data Protection Lawyer at noyb. 

“It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.”

The GDPR requires that personal data be accurate, and individuals have the right to rectification if data is inaccurate, as well as the right to access information about the data processed and its sources. However, OpenAI has openly admitted that it cannot correct incorrect information generated by ChatGPT or disclose the sources of the data used to train the model.

“Factual accuracy in large language models remains an area of active research,” OpenAI has argued.

The advocacy group highlights a New York Times report that found chatbots like ChatGPT “invent information at least 3 percent of the time – and as high as 27 percent.” In the complaint against OpenAI, noyb cites an example where ChatGPT repeatedly provided an incorrect date of birth for the complainant, a public figure, despite requests for rectification.

“Despite the fact that the complainant’s date of birth provided by ChatGPT is incorrect, OpenAI refused his request to rectify or erase the data, arguing that it wasn’t possible to correct data,” noyb stated.

OpenAI claimed it could filter or block data on certain prompts, such as the complainant’s name, but only by blocking all information about the individual. The company also failed to adequately respond to the complainant’s access request, which the GDPR requires companies to fulfil.

“The obligation to comply with access requests applies to all companies. It is clearly possible to keep records of training data that was used to at least have an idea about the sources of information,” said de Graaf. “It seems that with each ‘innovation,’ another group of companies thinks that its products don’t have to comply with the law.”

European privacy watchdogs have already scrutinised ChatGPT’s inaccuracies, with the Italian Data Protection Authority imposing a temporary restriction on OpenAI’s data processing in March 2023 and the European Data Protection Board establishing a task force on ChatGPT.

In its complaint, noyb is asking the Austrian Data Protection Authority to investigate OpenAI’s data processing and measures to ensure the accuracy of personal data processed by its large language models. The advocacy group also requests that the authority order OpenAI to comply with the complainant’s access request, bring its processing in line with the GDPR, and impose a fine to ensure future compliance.

You can read the full complaint here (PDF)

(Photo by Eleonora Francesca Grotto)

See also: Igor Jablokov, Pryon: Building a responsible AI future

IPPR: 8M UK careers at risk of ‘job apocalypse’ from AI
Wed, 27 Mar 2024 10:37:59 +0000

A report by the Institute for Public Policy Research (IPPR) sheds light on the potential impact of AI on the UK job market. The study warns of an imminent ‘job apocalypse’, threatening to engulf over eight million careers across the nation, unless swift government intervention is enacted.

The report identifies two key stages of generative AI adoption. The first wave, which is already underway, exposes 11 percent of tasks performed by UK workers to automation. Routine cognitive tasks like database management and organisational tasks like scheduling are most at risk.

However, in a potential second wave, AI could handle a staggering 59 percent of tasks—impacting higher-earning jobs and non-routine cognitive work like creating databases.

Bhargav Srinivasa Desikan, Senior Research Fellow at IPPR, said: “We could see roles such as copywriters, graphic designers, and personal assistants being heavily affected by AI. The question is how we can steer technological change in a way that allows for novel job opportunities, increased productivity, and economic benefits for all.”

“We are at a sliding doors moment, and policy makers urgently need to develop a strategy to make sure our labour market adapts to the 21st century, without leaving millions behind. It is crucial that all workers benefit from these technological advancements, and not just the big tech corporations.”

IPPR modelled three scenarios for the second wave’s impact:

  • Worst case: 7.9 million jobs lost with no GDP gains
  • Central case: 4.4 million jobs lost but 6.3 percent GDP growth (£144bn/year) 
  • Best case: No jobs lost and 13 percent GDP boost (£306bn/year) from augmenting at-risk jobs

IPPR warns the worst-case displacement is possible without government intervention, urging a “job-centric” AI strategy with fiscal incentives, regulation ensuring human oversight, and support for green jobs less exposed to automation.

The analysis underscores the disproportionate impact on certain demographics, with women and young people bearing the brunt of job displacement. Entry-level positions, predominantly occupied by these groups, face the gravest jeopardy as AI encroaches on roles such as secretarial and customer service positions.

Carsten Jung, Senior Economist at IPPR, said: “History shows that technological transition can be a boon if well managed, or can end in disruption if left to unfold without controls. Indeed, some occupations could be hard hit by generative AI, starting with back office jobs.

“But technology isn’t destiny and a jobs apocalypse is not inevitable – government, employers, and unions have the opportunity to make crucial design decisions now that ensure we manage this new technology well. If they don’t act soon, it may be too late.”

A full copy of the report can be found here (PDF)

(Photo by Cullan Smith)

See also: Stanhope raises £2.3m for AI that teaches machines to ‘make human-like decisions’

EU approves controversial AI Act to mixed reactions
Wed, 13 Mar 2024 16:39:55 +0000

The European Parliament today approved the AI Act, the first ever regulatory framework governing the use of AI systems. The legislation passed with an overwhelming majority of 523 votes in favour, 46 against and 49 abstentions.

“This is a historic day,” said Italian lawmaker Brando Benifei, co-lead on the AI Act. “We have the first regulation in the world which puts a clear path for safe and human-centric development of AI.”

The AI Act will categorise AI systems into four tiers based on their potential risk to society. High-risk applications like self-driving cars will face strict requirements before being allowed on the EU market. Lower-risk systems will have fewer obligations.

“The main point now will be implementation and compliance by businesses and institutions,” Benifei stated. “We are also working on further AI legislation for workplace conditions.”

His counterpart, Dragoş Tudorache of Romania, said the EU aims to promote these pioneering rules globally. “We have to be open to work with others on how to build governance with like-minded parties.”

The general AI rules take effect in May 2025, while obligations for high-risk systems kick in after three years. National oversight agencies will monitor compliance.

Differing viewpoints on impact

Reaction was mixed on whether the Act properly balances innovation with protecting rights.

Curtis Wilson, a data scientist at Synopsys, believes it will build public trust: “The strict rules and punishing fines will deter careless developers, and help customers be more confident in using AI systems…Ensuring all AI developers adhere to these standards is to everyone’s benefit.”

However, Mher Hakobyan from Amnesty International criticised the legislation as favouring industry over human rights: “It is disappointing that the EU chose to prioritise interests of industry and law enforcement over protecting people…It lacks proper transparency and accountability provisions, which will likely exacerbate abuses.”

Companies now face the challenge of overhauling practices to comply.

Marcus Evans, a data privacy lawyer, advised: “Businesses need to create and maintain robust AI governance to make the best use of the technology and ensure compliance with the new regime…They need to start preparing now to not fall foul of the rules.”

After years of negotiations, the AI Act signals the EU intends to lead globally on this transformative technology. But dissenting voices show challenges remain in finding the right balance.

(Photo by Tabrez Syed on Unsplash)

See also: OpenAI calls Elon Musk’s lawsuit claims ‘incoherent’

OpenAI calls Elon Musk’s lawsuit claims ‘incoherent’
Tue, 12 Mar 2024 16:36:27 +0000

OpenAI has hit back at Elon Musk’s lawsuit, saying his claims rest on “convoluted — often incoherent — factual premises.”

Musk’s lawsuit accuses OpenAI of breaching its non-profit status and reneging on a founding agreement to keep the organisation non-profit and release its AI technology publicly. However, OpenAI has refuted these allegations, stating that there is no such agreement with Musk and branding it as a mere “fiction.”

According to court filings, OpenAI asserts that there is no existing agreement with Musk, contradicting his assertions in the lawsuit.

The organisation further alleges that Musk had actually supported the idea of transitioning OpenAI into a for-profit entity under his control. It is claimed that Musk advocated for full control of the company as CEO, majority equity ownership, and even suggested tethering it to Tesla for financial backing. However, negotiations between Musk and OpenAI did not culminate in an agreement, leading to Musk’s withdrawal from the project.

OpenAI’s rebuttal highlights purported emails exchanged between Musk and the organisation, indicating his prior knowledge and support for its transition to a for-profit model. The company suggests that Musk’s lawsuit is driven by his desire to claim credit for OpenAI’s successes after he disengaged from the project.

In response to Musk’s legal action, OpenAI has portrayed his motives as self-serving rather than altruistic, asserting that his lawsuit is a bid to further his own commercial interests under the guise of championing humanity’s cause.

Meanwhile, Musk’s own foray into the realm of artificial intelligence with his company xAI has drawn attention.

Musk announced xAI’s intention to open-source its Grok chatbot shortly after OpenAI published emails purportedly demonstrating Musk’s prior awareness that the lab would not open-source all of its technology. While the move could be interpreted as a retaliatory gesture against OpenAI, it also gives xAI an opportunity to gather feedback from developers and improve its technology.

The legal clash between Musk and OpenAI underscores the complexities surrounding the development and governance of AI technologies, as well as the competing interests within the tech industry.

(Photo by Tim Mossholder on Unsplash)

See also: OpenAI announces new board lineup and governance structure

Google engineer stole AI tech for Chinese firms
Thu, 07 Mar 2024 17:04:05 +0000

A former Google engineer has been charged with stealing trade secrets related to the company’s AI technology and secretly working with two Chinese firms.

Linwei Ding, a 38-year-old Chinese national, was arrested on Wednesday in Newark, California, and faces four counts of federal trade secret theft, each punishable by up to 10 years in prison.

The indictment alleges that Ding, who was hired by Google in 2019 to develop software for the company’s supercomputing data centres, began transferring sensitive trade secrets and confidential information to his personal Google Cloud account in 2021.

“Ding continued periodic uploads until May 2, 2023, by which time Ding allegedly uploaded more than 500 unique files containing confidential information,” said the US Department of Justice in a statement.

Prosecutors claim that after stealing the trade secrets, Ding was offered a chief technology officer position at a startup AI company in China and participated in investor meetings for that firm. Additionally, Ding is alleged to have founded and served as CEO of a China-based startup focused on training AI models using supercomputing chips.

“Today’s charges are the latest illustration of the lengths affiliates of companies based in the People’s Republic of China are willing to go to steal American innovation,” said FBI Director Christopher Wray.

“The theft of innovative technology and trade secrets from American companies can cost jobs and have devastating economic and national security consequences.”

If convicted on all counts, Ding faces a maximum penalty of 40 years in prison and a fine of up to $1 million.

The case underscores the ongoing tensions between the US and China over intellectual property theft and the race to dominate emerging technologies like AI.

(Photo by Towfiqu Barbhuiya on Unsplash)

See also: OpenAI: Musk wanted us to merge with Tesla or take ‘full control’

OpenAI: Musk wanted us to merge with Tesla or take ‘full control’
Wed, 06 Mar 2024 12:52:15 +0000

Elon Musk, the billionaire CEO of Tesla and SpaceX, allegedly wanted the AI research company OpenAI to either merge with Tesla or give him full control of the organisation.

A blog post from OpenAI, in response to a lawsuit filed by Musk against the company, revealed email communications from 2015 to 2018 when Musk was still involved with the company’s operations. 

In one email from 2017 – as OpenAI was exploring a transition to a for-profit model to secure more funding – Musk reportedly wanted majority equity, control of the board of directors, and the CEO position. However, OpenAI felt this level of control by one individual would go against its mission.

“Elon wanted us to merge with Tesla or he wanted full control,” wrote OpenAI in their blog post. “Elon left OpenAI, saying there needed to be a relevant competitor to Google/DeepMind and that he was going to do it himself. He said he’d be supportive of us finding our own path.”

When the merger discussions stalled, Musk suggested in 2018 that OpenAI could become attached to Tesla as a path for the automaker to provide funding. “Tesla is the only path that could even hope to hold a candle to Google,” Musk wrote in an email forwarded to OpenAI.

The blog post indicates these merger or acquisition proposals from Musk did not ultimately succeed, and he soon left the company. In a final email cited, Musk said his “probability assessment of OpenAI being relevant to DeepMind/Google without a dramatic change in execution and resources is 0%.”

Musk’s lawsuit, filed in March 2024, accuses OpenAI of breach of contract, breach of fiduciary duty and unfair competition. It alleges the company has become a “closed-source de facto subsidiary” of Microsoft after taking $13 billion in investment from the tech giant.

OpenAI denies the claims, stating Musk was aware the “Open” in its name did not mean it had to open-source all its AI technology to the public. The company expressed sadness that the situation has devolved into litigation with someone they “deeply admired.”

Musk has not yet publicly responded to the blog post from OpenAI. The lawsuit seeks to compel OpenAI to make its research freely available and prohibit exclusive arrangements benefiting individual companies.

“We intend to move to dismiss all of Elon’s claims,” says OpenAI.

(Photo by Austin Ramsey on Unsplash)

See also: Anthropic’s latest AI model beats rivals and achieves industry first

AIs in India will need government permission before launching
Mon, 04 Mar 2024 17:03:13 +0000

In an advisory issued last Friday, India’s Ministry of Electronics and Information Technology (MeitY) declared that any AI technology still in development must obtain explicit government permission before being released to the public.

Developers will also only be able to deploy these technologies after labelling the potential fallibility or unreliability of the output generated.

Furthermore, the document outlines plans for implementing a “consent popup” mechanism to inform users about potential defects or errors produced by AI. It also mandates the labelling of deepfakes with permanent unique metadata or other identifiers to prevent misuse.

In addition to these measures, the advisory orders all intermediaries or platforms to ensure that any AI model product – including large language models (LLM) – does not permit bias, discrimination, or threaten the integrity of the electoral process.

Some industry figures have criticised India’s plans as going too far.

Developers are requested to comply with the advisory within 15 days of its issuance. It has been suggested that after compliance and application for permission to release a product, developers may be required to perform a demo for government officials or undergo stress testing.

Although the advisory is not legally binding at present, it signifies the government’s expectations and hints at the future direction of regulation in the AI sector.

“We are doing it as an advisory today asking you (the AI platforms) to comply with it,” said IT minister Rajeev Chandrasekhar. He added that this stance would eventually be encoded in legislation.

“Generative AI or AI platforms available on the internet will have to take full responsibility for what the platform does, and cannot escape the accountability by saying that their platform is under testing,” continued Chandrasekhar, as reported by local media.

(Photo by Naveed Ahmed on Unsplash)

See also: Elon Musk sues OpenAI over alleged breach of nonprofit agreement

Elon Musk sues OpenAI over alleged breach of nonprofit agreement
Fri, 01 Mar 2024 13:09:25 +0000

Elon Musk has filed a lawsuit against OpenAI and its CEO, Sam Altman, citing a violation of their nonprofit agreement.

The legal battle, unfolding in the Superior Court of California for the County of San Francisco, centres on OpenAI’s alleged departure from its foundational mission of advancing open-source artificial general intelligence (AGI) for the betterment of humanity.

Musk was a co-founder and early backer of OpenAI. According to Musk, Altman and Greg Brockman (another co-founder and current president of OpenAI) convinced him to bankroll the startup in 2015 on promises that it would remain a nonprofit.

In his legal challenge, Musk accuses OpenAI of straying from its principles through a collaboration with Microsoft—alleging that the partnership prioritises proprietary technology over the original ethos of open-source advancement.

Musk’s grievances include claims of contract breach, violation of fiduciary duty, and unfair business practices. He calls upon OpenAI to realign with its nonprofit objectives and seeks an injunction to halt the commercial exploitation of AGI technology.

At the heart of the dispute is OpenAI’s launch of GPT-4 in March 2023. Musk contends that, unlike its predecessors, GPT-4 represents a shift towards closed-source models—a move he believes favours Microsoft’s financial interests at the expense of OpenAI’s altruistic mission.

Founded in 2015 as a nonprofit AI research lab, OpenAI transitioned to a commercial model in 2020 and has since adopted a profit-driven approach, with revenues reportedly surpassing $2 billion annually.

Musk, who has long voiced concerns about the risks posed by AI, has called for robust government regulation and responsible AI development. He questions the technical expertise of OpenAI’s current board and highlights the removal and subsequent reinstatement of Altman in November 2023 as evidence of a profit-oriented agenda aligned with Microsoft’s interests.

See also: Mistral AI unveils LLM rivalling major players


The post Elon Musk sues OpenAI over alleged breach of nonprofit agreement appeared first on AI News.

UK announces over £100M to support ‘agile’ AI regulation
https://www.artificialintelligence-news.com/2024/02/06/uk-announces-over-100m-support-agile-ai-regulation/
Tue, 06 Feb 2024 11:56:31 +0000

The UK government has announced over £100 million in new funding to support an “agile” approach to AI regulation. This includes £10 million to prepare and upskill regulators to address the risks and opportunities of AI across sectors like telecoms, healthcare, and education. 

The investment comes at a vital time, as research from Thoughtworks shows 91% of British people believe government regulation must do more to hold businesses accountable for their AI systems. The public wants more transparency, with 82% of consumers favouring businesses that proactively communicate how they are regulating general AI.

In a government response published today to last year’s AI Regulation White Paper consultation, the UK outlined its context-based approach to regulation that empowers existing regulators to address AI risks in a targeted way, while avoiding rushed legislation that could stifle innovation.

However, the government for the first time set out its thinking on potential future binding requirements for developers building advanced AI systems, to ensure accountability for safety – a measure 68% of the public said was needed in AI regulation. 

The response also revealed that all key regulators will publish their approach to managing AI risks by 30 April, detailing their expertise and plans for the coming year. The aim is to give businesses and citizens confidence through greater transparency. However, 30% still don’t believe increased AI regulation is actually for their benefit, indicating that scepticism remains.

Additionally, nearly £90 million was announced to launch nine new research hubs across the UK and a US partnership focused on responsible AI development. Separately, £2 million in funding will support projects defining responsible AI in sectors such as policing; 56% of the public also want improved user education around AI.

Tom Whittaker, Senior Associate at independent UK law firm Burges Salmon, said: “The technology industry will welcome the large financial investment by the UK government to support regulators continuing what many see as an agile and sector-specific approach to AI regulation.

“The UK government is trying to position itself as pro-innovation for AI generally and across multiple sectors. This is notable at a time when the EU is pushing ahead with its own significant AI legislation that the EU consider will boost trustworthy AI but which some consider a threat to innovation.”

Science Minister Michelle Donelan said the UK’s “innovative approach to AI regulation” has made it a leader in both AI safety and development. She said the agile, sector-specific approach allows the UK to “grip the risks immediately”, paving the way for it to reap AI’s benefits safely.

The wide-ranging funding and initiatives aim to cement the UK as a pioneer in safe AI innovation while assuaging public concerns. This builds on previous commitments like the £100 million AI Safety Institute to evaluate emerging models. 

Greg Hanson, GVP and Head of Sales EMEA North at Informatica, commented: “Undoubtedly, greater AI regulation is coming to the UK. And demand for this is escalating – especially considering half (52%) of UK businesses are already forging ahead with generative AI, above the global average of 45%.

“Yet with the adoption of AI, comes new challenges. Nearly all businesses in the UK who have adopted AI admit to having encountered roadblocks. In fact, 43% say AI governance is the main obstacle, closely followed by AI ethics (42%).”

Overall, the package of measures amounts to over £100 million of new funding towards the UK’s mission to lead on safe and responsible AI progress, balancing the pursuit of AI’s potential economic and societal benefits with targeted regulation of its very real risks.

(Photo by Rocco Dipoppa on Unsplash)

See also: Bank of England Governor: AI won’t lead to mass job losses


The post UK announces over £100M to support ‘agile’ AI regulation appeared first on AI News.
