Legislation Archives - AI News

UK and South Korea to co-host AI Seoul Summit
12 April 2024

The UK and South Korea are set to co-host the AI Seoul Summit on the 21st and 22nd of May. This summit aims to pave the way for the safe development of AI technologies, drawing on the cooperative framework laid down by the Bletchley Declaration.

The two-day event will feature a virtual leaders’ session, co-chaired by British Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol, and a subsequent in-person meeting among Digital Ministers. UK Technology Secretary Michelle Donelan and Korean Minister of Science and ICT Lee Jong-Ho will co-host the latter.

This summit builds upon the historic Bletchley Park discussions held in the UK last year, emphasising AI safety, inclusion, and innovation. It aims to ensure that AI advancements benefit humanity while minimising potential risks and enhancing global governance of tech innovation.

“The summit we held at Bletchley Park was a generational moment,” stated Donelan. “If we continue to bring international governments and a broad range of voices together, I have every confidence that we can continue to develop a global approach which will allow us to realise the transformative potential of this generation-defining technology safely and responsibly.”

Echoing this sentiment, Minister Lee Jong-Ho highlighted the importance of the upcoming Seoul Summit in furthering global cooperation on AI safety and innovation.

“AI is advancing at an unprecedented pace that exceeds our expectations, and it is crucial to establish global norms and governance to harness such technological innovations to enhance the welfare of humanity,” explained Lee. “We hope that the AI Seoul Summit will serve as an opportunity to strengthen global cooperation on not only AI safety but also AI innovation and inclusion, and promote sustainable AI development.”

Innovation remains a focal point for the UK, evidenced by initiatives like the Manchester Prize and the formation of the AI Safety Institute: the first state-backed organisation dedicated to AI safety. This proactive approach mirrors the UK’s commitment to international collaboration on AI governance, underscored by a recent agreement with the US on AI safety measures.

Accompanying the Seoul Summit will be the release of the International Scientific Report on Advanced AI Safety. This report, independently led by Turing Award winner Yoshua Bengio, represents a collective effort to consolidate the best scientific research on AI safety. It underscores the summit’s role not only as a forum for discussion but as a catalyst for actionable insight into AI’s safe development.

The agenda of the AI Seoul Summit reflects the urgency of addressing the challenges and opportunities presented by AI, from model safety evaluations to fostering sustainable AI development. As the world embraces AI innovation, the summit embodies a concerted effort to shape a future where technology serves humanity safely and delivers prosperity and inclusivity for all.

See also: US and Japan announce sweeping AI and tech collaboration

IPPR: 8M UK careers at risk of ‘job apocalypse’ from AI
27 March 2024

A report by the Institute for Public Policy Research (IPPR) sheds light on the potential impact of AI on the UK job market. The study warns of an imminent ‘job apocalypse’ that could engulf over eight million careers across the nation unless swift government intervention is enacted.

The report identifies two key stages of generative AI adoption. The first wave, which is already underway, exposes 11 percent of tasks performed by UK workers. Routine cognitive tasks like database management and organisational tasks like scheduling are most at risk. 

However, in a potential second wave, AI could handle a staggering 59 percent of tasks—impacting higher-earning jobs and non-routine cognitive work like creating databases.

Bhargav Srinivasa Desikan, Senior Research Fellow at IPPR, said: “We could see jobs such as copywriters, graphic designers, and personal assistants roles being heavily affected by AI. The question is how we can steer technological change in a way that allows for novel job opportunities, increased productivity, and economic benefits for all.”

“We are at a sliding doors moment, and policy makers urgently need to develop a strategy to make sure our labour market adapts to the 21st century, without leaving millions behind. It is crucial that all workers benefit from these technological advancements, and not just the big tech corporations.”

IPPR modelled three scenarios for the second wave’s impact:

  • Worst case: 7.9 million jobs lost with no GDP gains
  • Central case: 4.4 million jobs lost but 6.3 percent GDP growth (£144bn/year) 
  • Best case: No jobs lost and 13 percent GDP boost (£306bn/year) from augmenting at-risk jobs
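As a quick consistency check on those figures (our own illustrative arithmetic, not part of the IPPR report), the percentage and cash GDP gains in the central and best cases each imply a UK GDP base of roughly £2.3 trillion, broadly in line with the actual size of the UK economy:

```python
# Back-of-the-envelope check of the IPPR scenario figures quoted above.
# Scenario numbers are transcribed from the article; the implied GDP base
# is our own derivation, not a figure from the report.
scenarios = {
    "central": {"jobs_lost_m": 4.4, "gdp_gain_pct": 6.3, "gdp_gain_bn": 144},
    "best":    {"jobs_lost_m": 0.0, "gdp_gain_pct": 13.0, "gdp_gain_bn": 306},
}

for name, s in scenarios.items():
    implied_gdp_tn = s["gdp_gain_bn"] / (s["gdp_gain_pct"] / 100) / 1000
    print(f"{name}: {s['jobs_lost_m']}m jobs lost, +{s['gdp_gain_pct']}% GDP "
          f"(~£{s['gdp_gain_bn']}bn/yr) -> implied GDP base ~£{implied_gdp_tn:.2f}tn")
```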

IPPR warns the worst-case displacement is possible without government intervention, urging a “job-centric” AI strategy with fiscal incentives, regulation ensuring human oversight, and support for green jobs less exposed to automation.

The analysis underscores the disproportionate impact on certain demographics, with women and young people bearing the brunt of job displacement. Entry-level positions, predominantly occupied by these groups, face the gravest jeopardy as AI encroaches on roles such as secretarial and customer service positions.

Carsten Jung, Senior Economist at IPPR, said: “History shows that technological transition can be a boon if well managed, or can end in disruption if left to unfold without controls. Indeed, some occupations could be hard hit by generative AI, starting with back office jobs.

“But technology isn’t destiny and a jobs apocalypse is not inevitable – government, employers, and unions have the opportunity to make crucial design decisions now that ensure we manage this new technology well. If they don’t act soon, it may be too late.”

A full copy of the report can be found here (PDF).

(Photo by Cullan Smith)

See also: Stanhope raises £2.3m for AI that teaches machines to ‘make human-like decisions’

EU approves controversial AI Act to mixed reactions
13 March 2024

The European Parliament today approved the AI Act, the first-ever regulatory framework governing the use of AI systems. The legislation passed with an overwhelming majority of 523 votes in favour, 46 against, and 49 abstentions.

“This is a historic day,” said Italian lawmaker Brando Benifei, co-lead on the AI Act. “We have the first regulation in the world which puts a clear path for safe and human-centric development of AI.”

The AI Act will categorise AI systems into four tiers based on their potential risk to society. High-risk applications like self-driving cars will face strict requirements before being allowed on the EU market. Lower-risk systems will have fewer obligations.
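The four tiers are widely summarised as unacceptable, high, limited, and minimal risk. As a schematic sketch only (the descriptions below are loose paraphrases of well-known examples, not the regulation’s wording), a compliance team’s first-pass triage might look like this:

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, loosely paraphrased (illustrative only)."""
    UNACCEPTABLE = "prohibited outright, e.g. social scoring by public authorities"
    HIGH = "strict pre-market duties: risk management, documentation, human oversight"
    LIMITED = "transparency duties, e.g. disclosing that content is AI-generated"
    MINIMAL = "few or no additional obligations"

def triage(system: str, tier: RiskTier) -> str:
    # A first-pass label for internal tracking; the real legal
    # analysis under the Act is considerably more involved.
    return f"{system}: {tier.name} -> {tier.value}"

print(triage("self-driving vehicle AI", RiskTier.HIGH))
print(triage("customer-service chatbot", RiskTier.LIMITED))
```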

“The main point now will be implementation and compliance by businesses and institutions,” Benifei stated. “We are also working on further AI legislation for workplace conditions.”

His counterpart, Dragoş Tudorache of Romania, said the EU aims to promote these pioneering rules globally. “We have to be open to work with others on how to build governance with like-minded parties.”

The general AI rules take effect in May 2025, while obligations for high-risk systems kick in after three years. National oversight agencies will monitor compliance.

Differing viewpoints on impact

Reaction was mixed on whether the Act properly balances innovation with protecting rights.

Curtis Wilson, a data scientist at Synopsys, believes it will build public trust: “The strict rules and punishing fines will deter careless developers, and help customers be more confident in using AI systems…Ensuring all AI developers adhere to these standards is to everyone’s benefit.”

However, Mher Hakobyan from Amnesty International criticised the legislation as favouring industry over human rights: “It is disappointing that the EU chose to prioritise interests of industry and law enforcement over protecting people…It lacks proper transparency and accountability provisions, which will likely exacerbate abuses.”

Companies now face the challenge of overhauling practices to comply.

Marcus Evans, a data privacy lawyer, advised: “Businesses need to create and maintain robust AI governance to make the best use of the technology and ensure compliance with the new regime…They need to start preparing now to not fall foul of the rules.”

After years of negotiations, the AI Act signals the EU intends to lead globally on this transformative technology. But dissenting voices show challenges remain in finding the right balance.

(Photo by Tabrez Syed on Unsplash)

See also: OpenAI calls Elon Musk’s lawsuit claims ‘incoherent’

AIs in India will need government permission before launching
4 March 2024

An advisory issued last Friday by India’s Ministry of Electronics and Information Technology (MeitY) declared that any AI technology still in development must acquire explicit government permission before being released to the public.

Developers will also only be able to deploy these technologies after labelling the output to flag its potential fallibility or unreliability.

Furthermore, the document outlines plans for implementing a “consent popup” mechanism to inform users about potential defects or errors produced by AI. It also mandates the labelling of deepfakes with permanent unique metadata or other identifiers to prevent misuse.
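The advisory does not prescribe a technical format for such identifiers. As a rough sketch of the labelling idea (the metadata keys are hypothetical, and Pillow is used purely for illustration), a provenance label could be embedded in a PNG’s text metadata:

```python
from PIL import Image, PngImagePlugin

def label_ai_image(img: Image.Image, dst: str, model_id: str) -> None:
    """Embed a simple AI-provenance label as PNG text metadata (illustrative)."""
    meta = PngImagePlugin.PngInfo()
    # Hypothetical key names; a real scheme would follow a standard such as
    # C2PA content credentials rather than ad-hoc fields.
    meta.add_text("ai-generated", "true")
    meta.add_text("ai-model", model_id)
    img.save(dst, "PNG", pnginfo=meta)

# Demo with a synthetic stand-in for a generated image:
generated = Image.new("RGB", (64, 64), color="grey")
label_ai_image(generated, "labelled.png", "example-model-v1")
print(Image.open("labelled.png").text)  # {'ai-generated': 'true', 'ai-model': ...}
```

Plain text chunks like these are trivially stripped, which is why the advisory’s call for ‘permanent’ identifiers points towards more robust techniques such as cryptographic watermarking.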

In addition to these measures, the advisory orders all intermediaries or platforms to ensure that any AI model product – including large language models (LLMs) – does not permit bias or discrimination, or threaten the integrity of the electoral process.

Some industry figures have criticised India’s plans as going too far.

Developers are requested to comply with the advisory within 15 days of its issuance. It has been suggested that after compliance and application for permission to release a product, developers may be required to perform a demo for government officials or undergo stress testing.

Although the advisory is not legally binding at present, it signifies the government’s expectations and hints at the future direction of regulation in the AI sector.

“We are doing it as an advisory today asking you (the AI platforms) to comply with it,” said IT minister Rajeev Chandrasekhar. He added that this stance would eventually be encoded in legislation.

“Generative AI or AI platforms available on the internet will have to take full responsibility for what the platform does, and cannot escape the accountability by saying that their platform is under testing,” continued Chandrasekhar, as reported by local media.

(Photo by Naveed Ahmed on Unsplash)

See also: Elon Musk sues OpenAI over alleged breach of nonprofit agreement

OpenAI: Copyrighted data ‘impossible’ to avoid for AI training
9 January 2024

OpenAI made waves this week with its bold assertion to a UK parliamentary committee that it would be “impossible” to develop today’s leading AI systems without using vast amounts of copyrighted data.

The company argued that advanced AI tools like ChatGPT require such broad training that adhering to copyright law would be utterly unworkable.

In written testimony, OpenAI stated that between expansive copyright laws and the ubiquity of protected online content, “virtually every sort of human expression” would be off-limits for training data. From news articles to forum comments to digital images, little online content can be utilised freely and legally.

According to OpenAI, attempts to create capable AI while avoiding copyright infringement would fail: “Limiting training data to public domain books and drawings created more than a century ago … would not provide AI systems that meet the needs of today’s citizens.”

While defending its practices as compliant, OpenAI conceded that partnerships and compensation schemes with publishers may be warranted to “support and empower creators.” But the company gave no indication that it intends to dramatically restrict its harvesting of online data, including paywalled journalism and literature.

This stance has opened OpenAI up to multiple lawsuits, including from media outlets like The New York Times alleging copyright breaches.

Nonetheless, OpenAI appears unwilling to fundamentally alter its data collection and training processes, given the “impossible” constraints that self-imposed copyright limits would bring. The company instead hopes to rely on broad interpretations of fair use allowances to legally leverage vast swathes of copyrighted data.

As advanced AI continues to demonstrate uncanny abilities emulating human expression, legal experts expect vigorous courtroom battles around infringement by systems intrinsically designed to absorb enormous volumes of protected text, media, and other creative output. 

For now, OpenAI is betting against copyright maximalists in favour of near-boundless copying to drive ongoing AI development.

(Photo by Levart_Photographer on Unsplash)

See also: OpenAI’s GPT Store to launch next week after delays

Is Europe killing itself financially with the AI Act?
18 September 2023

Europe is tinkering with legislation to regulate artificial intelligence. European regulators are delighted with this, but what does the rest of the world say about the AI Act?

Now that the outlines of the AI Act are known, a debate is beginning to erupt around its possible implications. One camp believes regulation is needed to curb the risks of powerful AI technology, while the other is convinced it will prove pernicious for the European economy. Or is it possible for safe AI products and economic prosperity to go hand in hand?

‘Industrial revolution’ without Europe

The EU “prevents the industrial revolution from happening” and portrays itself as “no part of the future world,” Joe Lonsdale told Bloomberg. He regularly appears in the US media as an outspoken advocate of AI. According to him, the technology has the potential to cause a third industrial revolution, and every company should already have implemented it in its organisation.

Lonsdale earned a bachelor’s degree in computer science in 2003 and went on to co-found several technology companies, including some that deploy artificial intelligence, before becoming a businessman and venture capitalist.

The only question is whether the concerns are well-founded. At the very least, caution seems necessary to avoid seeing major AI products disappear from Europe. Sam Altman, better known as the CEO of OpenAI, has previously spoken about AI companies possibly leaving Europe if the rules become too hard to comply with. He does not plan to pull ChatGPT out of Europe because of the AI Act, but he warns that other companies may act differently.

ChatGPT stays

The CEO is, in essence, a strong supporter of safety legislation for AI. He advocates clear safety requirements that AI developers must meet before the official release of a new product.

When a major player in the AI field calls for regulation of the technology he is working with, perhaps Europe should listen. That is what is happening with the AI Act, through which the EU is trying to be the first in the world to put out a set of rules for artificial intelligence. The EU is a pioneer, but it will also have to discover the pitfalls of such a policy without a working example elsewhere in the world.

The rules will be continuously tested by experts, who will publicly give their opinions on the law, until it officially comes into effect in 2025. It is a public testing period that AI developers should also value, Altman said. The European Union is also avoiding imposing rules from the top down on a field it does not know well itself: the legislation will be shaped bottom-up, with companies and developers already actively engaged in AI helping to set the standards.

Copying off the EU

Although the EU often proclaims that the AI Act will be the world’s first regulation of artificial intelligence, other places are tinkering with legal frameworks just as much. The United Kingdom, for example, is eager to embrace the technology but also wants certainty about its safety. To that end, it is immersing itself in the technology and gaining early access to DeepMind, OpenAI, and Anthropic’s models for research purposes.

However, Britain has no plans to punish companies that do not comply. The country limits itself to a framework of five principles that artificial intelligence should adhere to. The choice seems to trade guaranteed safety for investment: the country says it is deliberately avoiding a mandatory political framework in order to attract investment from AI companies to the UK. Safe AI products and economic prosperity, in other words, do not appear to fit well together in the UK’s view. It remains to be seen whether Europe’s AI Act proves otherwise.

(Editor’s note: This article first appeared on Techzine)

White House secures safety commitments from eight more AI companies
13 September 2023

The Biden-Harris Administration has announced that it has secured a second round of voluntary safety commitments from eight prominent AI companies.

Representatives from Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability attended the White House for the announcement. These eight companies have pledged to play a pivotal role in promoting the development of safe, secure, and trustworthy AI.

The Biden-Harris Administration is actively working on an Executive Order and pursuing bipartisan legislation to ensure the US leads the way in responsible AI development that unlocks its potential while managing its risks.

The commitments made by these companies revolve around three fundamental principles: safety, security, and trust. They have committed to:

  1. Ensure products are safe before introduction:

The companies commit to rigorous internal and external security testing of their AI systems before releasing them to the public. This includes assessments by independent experts, helping guard against significant AI risks such as biosecurity, cybersecurity, and broader societal effects.

They will also actively share information on AI risk management with governments, civil society, academia, and across the industry. This collaborative approach will include sharing best practices for safety, information on attempts to circumvent safeguards, and technical cooperation.

  2. Build systems with security as a top priority:

The companies have pledged to invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights. Recognising the critical importance of these model weights in AI systems, they commit to releasing them only when intended and when security risks are adequately addressed.

Additionally, the companies will facilitate third-party discovery and reporting of vulnerabilities in their AI systems. This proactive approach ensures that issues can be identified and resolved promptly even after an AI system is deployed.

  3. Earn the public’s trust:

To enhance transparency and accountability, the companies will develop robust technical mechanisms – such as watermarking systems – to indicate when content is AI-generated. This step aims to foster creativity and productivity while reducing the risks of fraud and deception.

They will also publicly report on their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use, covering both security and societal risks, including fairness and bias. Furthermore, these companies are committed to prioritising research on the societal risks posed by AI systems, including addressing harmful bias and discrimination.
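The commitments leave the watermarking mechanism itself unspecified. One family of approaches attaches a cryptographically verifiable provenance record to generated output; the sketch below, using only Python’s standard library, illustrates the idea under that assumption (the manifest fields are hypothetical):

```python
import hashlib
import hmac
import json

SECRET_KEY = b"provider-signing-key"  # in practice, a securely managed secret

def sign_output(text: str, model: str) -> dict:
    """Attach a verifiable provenance manifest to generated text (sketch)."""
    digest = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()
    return {"content": text, "model": model, "signature": digest}

def verify_output(manifest: dict) -> bool:
    expected = hmac.new(SECRET_KEY, manifest["content"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

manifest = sign_output("An AI-generated paragraph...", "example-model")
print(json.dumps(manifest, indent=2))
print(verify_output(manifest))  # True; any edit to the content breaks verification
```

A detached signature like this only proves provenance while the manifest travels with the content; watermarks embedded statistically in the content itself are the harder problem such commitments allude to.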

These leading AI companies will also develop and deploy advanced AI systems to address significant societal challenges, from cancer prevention to climate change mitigation, contributing to the prosperity, equality, and security of all.

The Biden-Harris Administration’s engagement with these commitments extends beyond the US, with consultations involving numerous international partners and allies. These commitments complement global initiatives, including the UK’s Summit on AI Safety, Japan’s leadership of the G-7 Hiroshima Process, and India’s leadership as Chair of the Global Partnership on AI.

The announcement marks a significant milestone in the journey towards responsible AI development, with industry leaders and the government coming together to ensure that AI technology benefits society while mitigating its inherent risks.

(Photo by Tabrez Syed on Unsplash)

See also: UK’s AI ecosystem to hit £2.4T by 2027, third in global race

UK government outlines AI Safety Summit plans
4 September 2023

The UK government has announced plans for the global AI Safety Summit on 1-2 November 2023.

The major event – set to be held at Bletchley Park, home of Alan Turing and other Allied codebreakers during the Second World War – aims to address the pressing challenges and opportunities presented by AI development on both national and international scales.

Secretary of State Michelle Donelan has officially launched the formal engagement process leading up to the summit. Jonathan Black and Matt Clifford – serving as the Prime Minister’s representatives for the AI Safety Summit – have also initiated discussions with various countries and frontier AI organisations.

This marks a crucial step towards fostering collaboration in the field of AI safety and follows a recent roundtable discussion hosted by Secretary Donelan, which involved representatives from a diverse range of civil society groups.

The AI Safety Summit will serve as a pivotal platform, bringing together not only influential nations but also leading technology organisations, academia, and civil society. Its primary objective is to facilitate informed discussions that can lead to sensible regulations in the AI landscape.

One of the core focuses of the summit will be on identifying and mitigating risks associated with the most powerful AI systems. These risks include the potential misuse of AI for activities such as undermining biosecurity through the proliferation of sensitive information. 

Additionally, the summit aims to explore how AI can be harnessed for the greater good, encompassing domains like life-saving medical technology and safer transportation.

The UK government claims to recognise the importance of diverse perspectives in shaping the discussions surrounding AI and says that it’s committed to working closely with global partners to ensure that it remains safe and that its benefits can be harnessed worldwide.

As part of this iterative and consultative process, the UK has shared five key objectives that will guide the discussions at the summit:

  1. Developing a shared understanding of the risks posed by AI and the necessity for immediate action.
  2. Establishing a forward process for international collaboration on AI safety, including supporting national and international frameworks.
  3. Determining appropriate measures for individual organisations to enhance AI safety.
  4. Identifying areas for potential collaboration in AI safety research, such as evaluating model capabilities and establishing new standards for governance.
  5. Demonstrating how the safe development of AI can lead to global benefits.

The growth potential of AI investment, deployment, and capabilities is staggering, with projections of up to $7 trillion in growth over the next decade alongside promises such as accelerated drug discovery. A report by Google in July suggests that, by 2030, AI could boost the UK economy alone by £400 billion, leading to an annual growth rate of 2.6 percent.

However, these opportunities come with significant risks that transcend national borders. Addressing these risks is now a matter of utmost urgency.

Earlier this month, DeepMind co-founder Mustafa Suleyman called on the US to enforce AI standards. However, Suleyman is far from the only leading industry figure who has expressed concerns and called for measures to manage the risks of AI.

In an open letter in March, over 1,000 experts famously called for a halt on “out of control” AI development over the “profound risks to society and humanity”.

Multiple stakeholders – including individual countries, international organisations, businesses, academia, and civil society – are already engaged in AI-related work. This includes efforts at the United Nations, the Organisation for Economic Co-operation and Development (OECD), the Global Partnership on Artificial Intelligence (GPAI), the Council of Europe, G7, G20, and standard development organisations.

The AI Safety Summit will build upon these existing initiatives by formulating practical next steps to mitigate risks associated with AI. These steps will encompass discussions on implementing risk-mitigation measures at relevant organisations, identifying key areas for international collaboration, and creating a roadmap for long-term action.

If successful, the AI Safety Summit at Bletchley Park promises to be a milestone event in the global dialogue on AI safety—seeking to strike a balance between harnessing the potential of AI for the benefit of humanity and addressing the challenges it presents.

(Photo by Hal Gatewood on Unsplash)

See also: UK Deputy PM: AI is the most ‘extensive’ industrial revolution yet

Beijing publishes its AI governance rules
14 July 2023

Chinese authorities have published rules governing generative AI that go substantially beyond current regulations in other parts of the world.

One notable requirement is that operators of generative AI must ensure that their services adhere to the core values of socialism, while also avoiding content that incites subversion of state power, secession, terrorism, or any actions undermining national unity and social stability.

Generative AI services within China are prohibited from promoting content that provokes ethnic hatred and discrimination, violence, obscenity, or false and harmful information. These content-related rules remain consistent with a draft released in April 2023.

Furthermore, the regulations reveal China’s interest in developing digital public goods for generative AI.

The document emphasises the promotion of public training data resource platforms and the collaborative sharing of model-making hardware to enhance utilisation rates. The authorities also aim to encourage the orderly opening up of public data by category and the expansion of high-quality public training data resources.

In terms of technology development, the rules stipulate that AI should be developed using secure and proven tools, including chips, software, tools, computing power, and data resources.

Intellectual property rights must be respected when using data for model development, and the consent of individuals must be obtained before incorporating personal information. There is also a focus on improving the quality, authenticity, accuracy, objectivity, and diversity of training data.

To ensure fairness and non-discrimination, developers are required to create algorithms that do not discriminate based on factors such as ethnicity, belief, country, region, gender, age, occupation, or health.

Moreover, operators of generative AI must obtain licenses for their services under most circumstances, adding a layer of regulatory oversight.

The new rules are scheduled to come into effect on 15 August 2023. China’s rules will not only have implications for domestic AI operators but will also serve as a benchmark for international discussions on AI governance and ethical practices.

You can find a full copy of the rules on the Cyberspace Administration of China’s website here.

(Photo by zhang kaiyv on Unsplash)

See also: OpenAI introduces team dedicated to stopping rogue AI

China’s deepfake laws come into effect today
10 January 2023

China will begin enforcing its strict new rules around the creation of deepfakes from today.

Deepfakes are increasingly being used for manipulation and humiliation. We’ve seen deepfakes of figures like disgraced FTX founder Sam Bankman-Fried used to commit fraud, of Ukrainian President Volodymyr Zelenskyy used to spread disinformation, and of US House Speaker Nancy Pelosi manipulated to make her appear drunk.

Last month, the Cyberspace Administration of China (CAC) announced rules to clamp down on deepfakes.

“In recent years, in-depth synthetic technology has developed rapidly. While serving user needs and improving user experiences, it has also been used by some criminals to produce, copy, publish, and disseminate illegal and bad information, defame, detract from the reputation and honour of others, and counterfeit others,” explains the CAC.

Providers of services for creating synthetic content will be obligated to ensure their AIs aren’t misused for illegal and/or harmful purposes. Furthermore, any content that was created using an AI must be clearly labelled with a watermark.

China’s new rules come into force today (10 January 2023) and will also require synthetic service providers to:

  • Not illegally process personal information
  • Periodically review, evaluate, and verify algorithms
  • Establish management systems and technical safeguards
  • Authenticate users with real identity information
  • Establish mechanisms for complaints and reporting

The CAC notes that effective governance of synthetic technologies is a multi-entity effort that will require the participation of government, enterprises, and citizens. Such participation, the CAC says, will promote the legal and responsible use of deep synthetic technologies while minimising the associated risks.

(Photo by Henry Chen on Unsplash)

Related: AI & Big Data Expo: Exploring ethics in AI and the guardrails required
