law Archives - AI News

Musk ends OpenAI lawsuit while slamming Apple’s ChatGPT plans (12 June 2024)

Elon Musk has dropped his lawsuit against OpenAI, the company he co-founded in 2015. Court filings from the Superior Court of California reveal that Musk called off the legal action on June 11th, just a day before an informal conference was scheduled to discuss the discovery process.

Musk had initially sued OpenAI in March 2024, alleging breach of contract, unfair business practices, and breach of fiduciary duty. He claimed that his contributions to the company were made “in exchange for and in reliance on promises that those assets were irrevocably dedicated to building AI for public benefit, with only safety as a countervailing concern.”

The lawsuit sought remedies for “breach of contract, promissory estoppel, breach of fiduciary duty, unfair business practices, and accounting,” as well as specific performance, restitution, and damages.

However, Musk’s filings to withdraw the case provided no explanation for abandoning the lawsuit. OpenAI had previously called Musk’s claims “incoherent” and argued that his inability to produce a contract made his breach claims difficult to prove, stating that documents provided by Musk “contradict his allegations as to the alleged terms of the agreement.”

The withdrawal of the lawsuit comes at a time when Musk is strongly opposing Apple’s plans to integrate ChatGPT into its operating systems.

During Apple’s keynote event announcing Apple Intelligence for iOS 18, iPadOS 18, and macOS Sequoia, Musk threatened to ban Apple devices from his companies, calling the integration “an unacceptable security violation.”

Despite assurances from Apple and OpenAI that user data would only be shared with explicit consent and that interactions would be secure, Musk questioned Apple’s ability to ensure data security, stating, “Apple has no clue what’s actually going on once they hand your data over to OpenAI. They’re selling you down the river.”

Since bringing the lawsuit against OpenAI, Musk has also created his own AI company, xAI, and secured over $6 billion in funding for his plans to advance the Grok chatbot on his social network, X.

While Musk’s reasoning for dropping the OpenAI lawsuit remains unclear, his actions suggest a potential shift in focus towards advancing his own AI endeavours while continuing to vocalise his criticism of OpenAI through social media rather than the courts.

See also: DuckDuckGo releases portal giving private access to AI models

OpenAI faces complaint over fictional outputs (29 April 2024)

European data protection advocacy group noyb has filed a complaint against OpenAI over the company’s inability to correct inaccurate information generated by ChatGPT. The group alleges that OpenAI’s failure to ensure the accuracy of personal data processed by the service violates the General Data Protection Regulation (GDPR) in the European Union.

“Making up false information is quite problematic in itself. But when it comes to false information about individuals, there can be serious consequences,” said Maartje de Graaf, Data Protection Lawyer at noyb. 

“It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.”

The GDPR requires that personal data be accurate, and individuals have the right to rectification if data is inaccurate, as well as the right to access information about the data processed and its sources. However, OpenAI has openly admitted that it cannot correct incorrect information generated by ChatGPT or disclose the sources of the data used to train the model.

“Factual accuracy in large language models remains an area of active research,” OpenAI has argued.

The advocacy group highlights a New York Times report that found chatbots like ChatGPT “invent information at least 3 percent of the time – and as high as 27 percent.” In the complaint against OpenAI, noyb cites an example where ChatGPT repeatedly provided an incorrect date of birth for the complainant, a public figure, despite requests for rectification.

“Despite the fact that the complainant’s date of birth provided by ChatGPT is incorrect, OpenAI refused his request to rectify or erase the data, arguing that it wasn’t possible to correct data,” noyb stated.

OpenAI claimed it could filter or block data on certain prompts, such as the complainant’s name, but not without blocking all information about the individual. The company also failed to adequately respond to the complainant’s access request, which the GDPR requires companies to fulfil.

“The obligation to comply with access requests applies to all companies. It is clearly possible to keep records of training data that was used to at least have an idea about the sources of information,” said de Graaf. “It seems that with each ‘innovation,’ another group of companies thinks that its products don’t have to comply with the law.”

European privacy watchdogs have already scrutinised ChatGPT’s inaccuracies, with the Italian Data Protection Authority imposing a temporary restriction on OpenAI’s data processing in March 2023 and the European Data Protection Board establishing a task force on ChatGPT.

In its complaint, noyb is asking the Austrian Data Protection Authority to investigate OpenAI’s data processing and measures to ensure the accuracy of personal data processed by its large language models. The advocacy group also requests that the authority order OpenAI to comply with the complainant’s access request, bring its processing in line with the GDPR, and impose a fine to ensure future compliance.

You can read the full complaint here (PDF).

(Photo by Eleonora Francesca Grotto)

See also: Igor Jablokov, Pryon: Building a responsible AI future

UK and South Korea to co-host AI Seoul Summit (12 April 2024)

The UK and South Korea are set to co-host the AI Seoul Summit on the 21st and 22nd of May. This summit aims to pave the way for the safe development of AI technologies, drawing on the cooperative framework laid down by the Bletchley Declaration.

The two-day event will feature a virtual leaders’ session, co-chaired by British Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol, and a subsequent in-person meeting among Digital Ministers. UK Technology Secretary Michelle Donelan and Korean Minister of Science and ICT Lee Jong-Ho will co-host the latter.

This summit builds upon the historic discussions held at Bletchley Park in the UK last year, emphasising AI safety, inclusion, and innovation. It aims to ensure that AI advancements benefit humanity while minimising potential risks and enhancing global governance on tech innovation.

“The summit we held at Bletchley Park was a generational moment,” stated Donelan. “If we continue to bring international governments and a broad range of voices together, I have every confidence that we can continue to develop a global approach which will allow us to realise the transformative potential of this generation-defining technology safely and responsibly.”

Echoing this sentiment, Minister Lee Jong-Ho highlighted the importance of the upcoming Seoul Summit in furthering global cooperation on AI safety and innovation.

“AI is advancing at an unprecedented pace that exceeds our expectations, and it is crucial to establish global norms and governance to harness such technological innovations to enhance the welfare of humanity,” explained Lee. “We hope that the AI Seoul Summit will serve as an opportunity to strengthen global cooperation on not only AI safety but also AI innovation and inclusion, and promote sustainable AI development.”

Innovation remains a focal point for the UK, evidenced by initiatives like the Manchester Prize and the formation of the AI Safety Institute: the first state-backed organisation dedicated to AI safety. This proactive approach mirrors the UK’s commitment to international collaboration on AI governance, underscored by a recent agreement with the US on AI safety measures.

Accompanying the Seoul Summit will be the release of the International Scientific Report on Advanced AI Safety. This report, independently led by Turing Award winner Yoshua Bengio, represents a collective effort to consolidate the best scientific research on AI safety. It underscores the summit’s role not only as a forum for discussion but as a catalyst for actionable insight into AI’s safe development.

The agenda of the AI Seoul Summit reflects the urgency of addressing the challenges and opportunities presented by AI, from model safety evaluations to fostering sustainable AI development. As the world embraces AI innovation, the summit embodies a concerted effort to shape a future where technology serves humanity safely and delivers prosperity and inclusivity for all.

See also: US and Japan announce sweeping AI and tech collaboration

IPPR: 8M UK careers at risk of ‘job apocalypse’ from AI (27 March 2024)

A report by the Institute for Public Policy Research (IPPR) sheds light on the potential impact of AI on the UK job market. The study warns of an imminent ‘job apocalypse’ threatening to engulf over eight million careers across the nation unless swift government intervention is enacted.

The report identifies two key stages of generative AI adoption. The first wave, which is already underway, exposes 11 percent of tasks performed by UK workers. Routine cognitive tasks like database management and organisational tasks like scheduling are most at risk. 

However, in a potential second wave, AI could handle a staggering 59 percent of tasks—impacting higher-earning jobs and non-routine cognitive work like creating databases.

Bhargav Srinivasa Desikan, Senior Research Fellow at IPPR, said: “We could see jobs such as copywriters, graphic designers, and personal assistants roles being heavily affected by AI. The question is how we can steer technological change in a way that allows for novel job opportunities, increased productivity, and economic benefits for all.”

“We are at a sliding doors moment, and policy makers urgently need to develop a strategy to make sure our labour market adapts to the 21st century, without leaving millions behind. It is crucial that all workers benefit from these technological advancements, and not just the big tech corporations.”

IPPR modelled three scenarios for the second wave’s impact:

  • Worst case: 7.9 million jobs lost with no GDP gains
  • Central case: 4.4 million jobs lost but 6.3 percent GDP growth (£144bn/year) 
  • Best case: No jobs lost and 13 percent GDP boost (£306bn/year) from augmenting at-risk jobs

IPPR warns the worst-case displacement is possible without government intervention, urging a “job-centric” AI strategy with fiscal incentives, regulation ensuring human oversight, and support for green jobs less exposed to automation.

The analysis underscores the disproportionate impact on certain demographics, with women and young people bearing the brunt of job displacement. Entry-level positions, predominantly occupied by these groups, face the gravest jeopardy as AI encroaches on roles such as secretarial and customer service positions.

Carsten Jung, Senior Economist at IPPR, said: “History shows that technological transition can be a boon if well managed, or can end in disruption if left to unfold without controls. Indeed, some occupations could be hard hit by generative AI, starting with back office jobs.

“But technology isn’t destiny and a jobs apocalypse is not inevitable – government, employers, and unions have the opportunity to make crucial design decisions now that ensure we manage this new technology well. If they don’t act soon, it may be too late.”

A full copy of the report can be found here (PDF).

(Photo by Cullan Smith)

See also: Stanhope raises £2.3m for AI that teaches machines to ‘make human-like decisions’

EU approves controversial AI Act to mixed reactions (13 March 2024)

The European Parliament today approved the AI Act, the first ever regulatory framework governing the use of AI systems. The legislation passed with an overwhelming majority of 523 votes in favour, 46 against and 49 abstentions.

“This is a historic day,” said Italian lawmaker Brando Benifei, co-lead on the AI Act. “We have the first regulation in the world which puts a clear path for safe and human-centric development of AI.”

The AI Act will categorise AI systems into four tiers based on their potential risk to society. High-risk applications like self-driving cars will face strict requirements before being allowed on the EU market. Lower-risk systems will have fewer obligations.

“The main point now will be implementation and compliance by businesses and institutions,” Benifei stated. “We are also working on further AI legislation for workplace conditions.”

His counterpart, Dragoş Tudorache of Romania, said the EU aims to promote these pioneering rules globally. “We have to be open to work with others on how to build governance with like-minded parties.”

The general AI rules take effect in May 2025, while obligations for high-risk systems kick in after three years. National oversight agencies will monitor compliance.

Differing viewpoints on impact

Reaction was mixed on whether the Act properly balances innovation with protecting rights.

Curtis Wilson, a data scientist at Synopsys, believes it will build public trust: “The strict rules and punishing fines will deter careless developers, and help customers be more confident in using AI systems…Ensuring all AI developers adhere to these standards is to everyone’s benefit.”

However, Mher Hakobyan from Amnesty International criticised the legislation as favouring industry over human rights: “It is disappointing that the EU chose to prioritise interests of industry and law enforcement over protecting people…It lacks proper transparency and accountability provisions, which will likely exacerbate abuses.”

Companies now face the challenge of overhauling practices to comply.

Marcus Evans, a data privacy lawyer, advised: “Businesses need to create and maintain robust AI governance to make the best use of the technology and ensure compliance with the new regime…They need to start preparing now to not fall foul of the rules.”

After years of negotiations, the AI Act signals the EU intends to lead globally on this transformative technology. But dissenting voices show challenges remain in finding the right balance.

(Photo by Tabrez Syed on Unsplash)

See also: OpenAI calls Elon Musk’s lawsuit claims ‘incoherent’

Google engineer stole AI tech for Chinese firms (7 March 2024)

A former Google engineer has been charged with stealing trade secrets related to the company’s AI technology and secretly working with two Chinese firms.

Linwei Ding, a 38-year-old Chinese national, was arrested on Wednesday in Newark, California, and faces four counts of federal trade secret theft, each punishable by up to 10 years in prison.

The indictment alleges that Ding, who was hired by Google in 2019 to develop software for the company’s supercomputing data centres, began transferring sensitive trade secrets and confidential information to his personal Google Cloud account in 2021.

“Ding continued periodic uploads until May 2, 2023, by which time Ding allegedly uploaded more than 500 unique files containing confidential information,” said the US Department of Justice in a statement.

Prosecutors claim that after stealing the trade secrets, Ding was offered a chief technology officer position at a startup AI company in China and participated in investor meetings for that firm. Additionally, Ding is alleged to have founded and served as CEO of a China-based startup focused on training AI models using supercomputing chips.

“Today’s charges are the latest illustration of the lengths affiliates of companies based in the People’s Republic of China are willing to go to steal American innovation,” said FBI Director Christopher Wray.

“The theft of innovative technology and trade secrets from American companies can cost jobs and have devastating economic and national security consequences.”

If convicted on all counts, Ding faces a maximum penalty of 40 years in prison and a fine of up to $1 million.

The case underscores the ongoing tensions between the US and China over intellectual property theft and the race to dominate emerging technologies like AI.

(Photo by Towfiqu Barbhuiya on Unsplash)

See also: OpenAI: Musk wanted us to merge with Tesla or take ‘full control’

OpenAI: Musk wanted us to merge with Tesla or take ‘full control’ (6 March 2024)

Elon Musk, the billionaire CEO of Tesla and SpaceX, allegedly wanted the AI research company OpenAI to either merge with Tesla or give him full control of the organisation.

A blog post from OpenAI, in response to a lawsuit filed by Musk against the company, revealed email communications from 2015 to 2018 when Musk was still involved with the company’s operations. 

In one email from 2017 – as OpenAI was exploring a transition to a for-profit model to secure more funding – Musk reportedly wanted majority equity, control of the board of directors, and the CEO position. However, OpenAI felt this level of control by one individual would go against its mission.

“Elon wanted us to merge with Tesla or he wanted full control,” wrote OpenAI in their blog post. “Elon left OpenAI, saying there needed to be a relevant competitor to Google/DeepMind and that he was going to do it himself. He said he’d be supportive of us finding our own path.”

When the merger discussions stalled, Musk suggested in 2018 that OpenAI could become attached to Tesla as a path for the automaker to provide funding. “Tesla is the only path that could even hope to hold a candle to Google,” Musk wrote in an email forwarded to OpenAI.

The blog post indicates these merger or acquisition proposals from Musk did not ultimately succeed, and he soon left the company. In a final email cited, Musk said his “probability assessment of OpenAI being relevant to DeepMind/Google without a dramatic change in execution and resources is 0%.”

Musk’s lawsuit, filed in March 2024, accuses OpenAI of breach of contract, breach of fiduciary duty and unfair competition. It alleges the company has become a “closed-source de facto subsidiary” of Microsoft after taking $13 billion in investment from the tech giant.

OpenAI denies the claims, stating Musk was aware the “Open” in its name did not mean it had to open-source all its AI technology to the public. The company expressed sadness that the situation has devolved into litigation with someone they “deeply admired.”

Musk has not yet publicly responded to the blog post from OpenAI. The lawsuit seeks to compel OpenAI to make its research freely available and prohibit exclusive arrangements benefiting individual companies.

“We intend to move to dismiss all of Elon’s claims,” says OpenAI.

(Photo by Austin Ramsey on Unsplash)

See also: Anthropic’s latest AI model beats rivals and achieves industry first

AIs in India will need government permission before launching (4 March 2024)

India’s Ministry of Electronics and Information Technology (MeitY) declared in an advisory issued last Friday that any AI technology still in development must acquire explicit government permission before being released to the public.

Developers will also only be able to deploy these technologies after labelling them to indicate the potential fallibility or unreliability of their generated output.

Furthermore, the document outlines plans for implementing a “consent popup” mechanism to inform users about potential defects or errors produced by AI. It also mandates the labelling of deepfakes with permanent unique metadata or other identifiers to prevent misuse.
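
The advisory reportedly stops short of prescribing a technical format for these identifiers. Purely as an illustrative sketch, the Python snippet below shows one way a platform could stamp a generated image with an AI-generated flag and a unique identifier; the metadata keys, the schema, and the use of the Pillow library are assumptions made for illustration, not anything specified by MeitY.

```python
# Illustrative sketch only: embedding a provenance label in a generated PNG.
# The metadata keys ("ai-generated", "provenance-id") are hypothetical;
# the MeitY advisory does not define a schema for such identifiers.
import uuid

from PIL import Image  # Pillow
from PIL.PngImagePlugin import PngInfo


def label_generated_image(src_path: str, dst_path: str) -> str:
    """Attach an AI-generated flag and a unique identifier to a PNG file."""
    provenance_id = str(uuid.uuid4())  # hypothetical unique identifier

    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai-generated", "true")
    metadata.add_text("provenance-id", provenance_id)

    # The text chunks are written into the saved PNG alongside the pixels.
    image.save(dst_path, pnginfo=metadata)
    return provenance_id


if __name__ == "__main__":
    print(label_generated_image("generated.png", "generated_labelled.png"))
```

Plain metadata chunks like these can be stripped when a file is re-encoded, so a labelling scheme intended to be permanent would more plausibly build on a tamper-evident provenance standard such as C2PA; the sketch above only shows the general shape of the labelling step.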

In addition to these measures, the advisory orders all intermediaries or platforms to ensure that any AI model product – including large language models (LLM) – does not permit bias, discrimination, or threaten the integrity of the electoral process.

Some industry figures have criticised India’s plans as going too far.

Developers are requested to comply with the advisory within 15 days of its issuance. It has been suggested that after compliance and application for permission to release a product, developers may be required to perform a demo for government officials or undergo stress testing.

Although the advisory is not legally binding at present, it signifies the government’s expectations and hints at the future direction of regulation in the AI sector.

“We are doing it as an advisory today asking you (the AI platforms) to comply with it,” said IT minister Rajeev Chandrasekhar. He added that this stance would eventually be encoded in legislation.

“Generative AI or AI platforms available on the internet will have to take full responsibility for what the platform does, and cannot escape the accountability by saying that their platform is under testing,” continued Chandrasekhar, as reported by local media.

(Photo by Naveed Ahmed on Unsplash)

See also: Elon Musk sues OpenAI over alleged breach of nonprofit agreement

UK announces over £100M to support ‘agile’ AI regulation (6 February 2024)

The UK government has announced over £100 million in new funding to support an “agile” approach to AI regulation. This includes £10 million to prepare and upskill regulators to address the risks and opportunities of AI across sectors like telecoms, healthcare, and education. 

The investment comes at a vital time, as research from Thoughtworks shows 91% of British people argue that government regulations must do more to hold businesses accountable for their AI systems. The public wants more transparency, with 82% of consumers favouring businesses that proactively communicate how they are regulating general AI.

In a government response published today to last year’s AI Regulation White Paper consultation, the UK outlined its context-based approach to regulation that empowers existing regulators to address AI risks in a targeted way, while avoiding rushed legislation that could stifle innovation.

However, the government for the first time set out its thinking on potential future binding requirements for developers building advanced AI systems, to ensure accountability for safety – a measure 68% of the public said was needed in AI regulation. 

The response also revealed all key regulators will publish their approach to managing AI risks by 30 April, detailing their expertise and plans for the coming year. This aims to provide confidence to businesses and citizens on transparency. However, 30% still don’t think increased AI regulation is actually for their benefit, indicating scepticism remains.

Additionally, nearly £90 million was announced to launch nine new research hubs across the UK and a US partnership focused on responsible AI development. Separately, £2 million in funding will support projects defining responsible AI across sectors like policing – with 56% of the public wanting improved user education around AI.

Tom Whittaker, Senior Associate at independent UK law firm Burges Salmon, said: “The technology industry will welcome the large financial investment by the UK government to support regulators continuing what many see as an agile and sector-specific approach to AI regulation.

“The UK government is trying to position itself as pro-innovation for AI generally and across multiple sectors.  This is notable at a time when the EU is pushing ahead with its own significant AI legislation that the EU consider will boost trustworthy AI but which some consider a threat to innovation.”

Science Minister Michelle Donelan said the UK’s “innovative approach to AI regulation” has made it a leader in both AI safety and development. She said the agile, sector-specific approach allows the UK to “grip the risks immediately”, paving the way for it to reap AI’s benefits safely.

The wide-ranging funding and initiatives aim to cement the UK as a pioneer in safe AI innovation while assuaging public concerns. This builds on previous commitments like the £100 million AI Safety Institute to evaluate emerging models. 

Greg Hanson, GVP and Head of Sales EMEA North at Informatica, commented: “Undoubtedly, greater AI regulation is coming to the UK. And demand for this is escalating – especially considering half (52%) of UK businesses are already forging ahead with generative AI, above the global average of 45%.

“Yet with the adoption of AI, comes new challenges. Nearly all businesses in the UK who have adopted AI admit to having encountered roadblocks. In fact, 43% say AI governance is the main obstacle, closely followed by AI ethics (42%).”

Overall, the package of measures amounts to over £100 million of new funding towards the UK’s mission to lead on safe and responsible AI progress. This balances safely harnessing AI’s potential economic and societal benefits with a targeted approach to regulating very real risks.

(Photo by Rocco Dipoppa on Unsplash)

See also: Bank of England Governor: AI won’t lead to mass job losses

OpenAI: Copyrighted data ‘impossible’ to avoid for AI training (9 January 2024)

OpenAI made waves this week with its bold assertion to a UK parliamentary committee that it would be “impossible” to develop today’s leading AI systems without using vast amounts of copyrighted data.

The company argued that advanced AI tools like ChatGPT require such broad training that adhering to copyright law would be utterly unworkable.

In written testimony, OpenAI stated that between expansive copyright laws and the ubiquity of protected online content, “virtually every sort of human expression” would be off-limits for training data. From news articles to forum comments to digital images, little online content can be utilised freely and legally.

According to OpenAI, attempts to create capable AI while avoiding copyright infringement would fail: “Limiting training data to public domain books and drawings created more than a century ago … would not provide AI systems that meet the needs of today’s citizens.”

While defending its practices as compliant, OpenAI conceded that partnerships and compensation schemes with publishers may be warranted to “support and empower creators.” But the company gave no indication that it intends to dramatically restrict its harvesting of online data, including paywalled journalism and literature.

This stance has opened OpenAI up to multiple lawsuits, including from media outlets like The New York Times alleging copyright breaches.

Nonetheless, OpenAI appears unwilling to fundamentally alter its data collection and training processes—given the “impossible” constraints self-imposed copyright limits would bring. The company instead hopes to rely on broad interpretations of fair use allowances to legally leverage vast swathes of copyrighted data.

As advanced AI continues to demonstrate uncanny abilities emulating human expression, legal experts expect vigorous courtroom battles around infringement by systems intrinsically designed to absorb enormous volumes of protected text, media, and other creative output. 

For now, OpenAI is betting against copyright maximalists in favour of near-boundless copying to drive ongoing AI development.

(Photo by Levart_Photographer on Unsplash)

See also: OpenAI’s GPT Store to launch next week after delays
