EU launches office to implement AI Act and foster innovation (AI News, 30 May 2024)

The European Union has launched a new office dedicated to overseeing the implementation of its landmark AI Act, which is regarded as one of the most comprehensive AI regulations in the world. This new initiative adopts a risk-based approach, imposing stringent regulations on higher-risk AI applications to ensure their safe and ethical deployment.

The primary goal of this office is to promote the “future development, deployment and use” of AI technologies, aiming to harness their societal and economic benefits while mitigating associated risks. By focusing on innovation and safety, the office seeks to position the EU as a global leader in AI regulation and development.

According to Margrethe Vestager, the EU competition chief, the new office will play a “key role” in implementing the AI Act, particularly with regard to general-purpose AI models. She stated: “Together with developers and a scientific community, the office will evaluate and test general-purpose AI to ensure that AI serves us as humans and upholds our European values.”

Sridhar Iyengar, Managing Director for Zoho Europe, welcomed the establishment of the AI office, noting, “The establishment of the AI office in the European Commission to play a key role with the implementation of the EU AI Act is a welcome sign of progress, and it is encouraging to see the EU positioning itself as a global leader in AI regulation. We hope to continue to see collaboration between governments, businesses, academics and industry experts to guide on safe use of AI to boost business growth.”

Iyengar highlighted the dual nature of AI’s impact on businesses, pointing out both its benefits and concerns. He emphasised the importance of adhering to best practice guidance and legislative guardrails to ensure safe and ethical AI adoption.

“AI can drive innovation in business tools, helping to improve fraud detection, forecasting, and customer data analysis to name a few. These benefits not only have the potential to elevate customer experience but can increase efficiency, present insights, and suggest actions to drive further success,” Iyengar said.

The office will be staffed by more than 140 individuals, including technology specialists, administrative assistants, lawyers, policy specialists, and economists. It will consist of various units focusing on regulation and compliance, as well as safety and innovation, reflecting the multifaceted approach needed to govern AI effectively.

Rachael Hays, Transformation Director for Definia, part of The IN Group, commented: “The establishment of a dedicated AI Office within the European Commission underscores the EU’s commitment to both innovation and regulation which is undoubtedly crucial in this rapidly evolving AI landscape.”

Hays also pointed out the potential for workforce upskilling that this initiative provides. She referenced findings from their Tech and the Boardroom research, which revealed that over half of boardroom leaders view AI as the biggest direct threat to their organisations.

“This initiative directly addresses these fears as employees across various sectors are given the opportunity to adapt and thrive in an AI-driven world. The AI Office offers promising hope and guidance in developing economic benefits while mitigating risks associated with AI technology, something we should all get on board with,” she added.

As the EU takes these steps towards comprehensive AI governance, the office’s work will be pivotal in driving forward both innovation and safety in the field.

(Photo by Sara Kurfeß)

See also: Elon Musk’s xAI secures $6B to challenge OpenAI in AI race

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

OpenAI faces complaint over fictional outputs (AI News, 29 April 2024)

European data protection advocacy group noyb has filed a complaint against OpenAI over the company’s inability to correct inaccurate information generated by ChatGPT. The group alleges that OpenAI’s failure to ensure the accuracy of personal data processed by the service violates the General Data Protection Regulation (GDPR) in the European Union.

“Making up false information is quite problematic in itself. But when it comes to false information about individuals, there can be serious consequences,” said Maartje de Graaf, Data Protection Lawyer at noyb. 

“It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.”

The GDPR requires that personal data be accurate, and individuals have the right to rectification if data is inaccurate, as well as the right to access information about the data processed and its sources. However, OpenAI has openly admitted that it cannot correct incorrect information generated by ChatGPT or disclose the sources of the data used to train the model.

“Factual accuracy in large language models remains an area of active research,” OpenAI has argued.

The advocacy group highlights a New York Times report that found chatbots like ChatGPT “invent information at least 3 percent of the time – and as high as 27 percent.” In the complaint against OpenAI, noyb cites an example where ChatGPT repeatedly provided an incorrect date of birth for the complainant, a public figure, despite requests for rectification.

“Despite the fact that the complainant’s date of birth provided by ChatGPT is incorrect, OpenAI refused his request to rectify or erase the data, arguing that it wasn’t possible to correct data,” noyb stated.

OpenAI claimed it could filter or block data on certain prompts, such as the complainant’s name, but not without preventing ChatGPT from displaying all information about the individual. The company also failed to adequately respond to the complainant’s access request, which the GDPR requires companies to fulfil.

“The obligation to comply with access requests applies to all companies. It is clearly possible to keep records of training data that was used to at least have an idea about the sources of information,” said de Graaf. “It seems that with each ‘innovation,’ another group of companies thinks that its products don’t have to comply with the law.”

European privacy watchdogs have already scrutinised ChatGPT’s inaccuracies, with the Italian Data Protection Authority imposing a temporary restriction on OpenAI’s data processing in March 2023 and the European Data Protection Board establishing a task force on ChatGPT.

In its complaint, noyb is asking the Austrian Data Protection Authority to investigate OpenAI’s data processing and measures to ensure the accuracy of personal data processed by its large language models. The advocacy group also requests that the authority order OpenAI to comply with the complainant’s access request, bring its processing in line with the GDPR, and impose a fine to ensure future compliance.

You can read the full complaint here (PDF)

(Photo by Eleonora Francesca Grotto)

See also: Igor Jablokov, Pryon: Building a responsible AI future

UAE set to help fund OpenAI’s in-house chips (AI News, 15 March 2024)

OpenAI’s ambitious plans to develop its own semiconductor chips for powering advanced AI models could receive a boost from the United Arab Emirates (UAE), according to a report by the Financial Times.

The report states that MGX — a state-backed group in Abu Dhabi — is in discussions to support OpenAI’s venture to build AI chips in-house. This information comes from two individuals with knowledge of the discussions.

In order to achieve its goal of creating semiconductor chips internally, OpenAI is reportedly seeking investments worth trillions of dollars from investors worldwide. By manufacturing its own chips, the San Francisco-based company aims to reduce its reliance on Nvidia, the current global leader in semiconductor chip technology.

As part of its funding efforts, OpenAI struck a deal with Thrive Capital in February 2024, which reportedly increased the company’s valuation to more than $80 billion, marking an almost threefold increase in under 10 months.

This comes as the UK semiconductor sector gains enhanced access to research funding through the country’s participation in the EU’s ‘Chips Joint Undertaking’.

The UK’s participation in the Chips Joint Undertaking provides the British semiconductor sector with enhanced access to a €1.3 billion pot of funds set aside from Horizon Europe to support research in semiconductor technologies up to 2027. The move is backed by an initial £5 million from the UK government this year, with an additional £30 million due to support UK participation in further research between 2025 and 2027.

“Our membership of the Chips Joint Undertaking will boost Britain’s strengths in semiconductor science and research to secure our position in the global chip supply chain,” said Technology Minister Saqib Bhatti. “This underscores our unwavering commitment to pushing the boundaries of technology and cements our important role in shaping the future of semiconductor technologies around the world.”

Back in the UAE, MGX — the group behind the potential investment in OpenAI — is an AI-focused fund launched earlier this week and headed by the UAE’s national security adviser, Sheikh Tahnoon Bin Zayed al-Nahyan. The fund was established in collaboration with G42 and Mubadala, with G42 having already entered into a partnership with OpenAI in October 2023 as part of the company’s Middle East expansion.

During the G42 partnership deal, OpenAI CEO Sam Altman stated that they plan to bring AI solutions to the Middle East that “resonate with the nuances of the region.”

One of the sources briefed on the MGX fund emphasised, “They’re looking at creating a structure that will put Abu Dhabi at the centre of this AI strategy with global partners around the world.”

As the race to develop cutting-edge semiconductor technologies intensifies, both the UAE and the UK-EU are positioning themselves as key players.

(Photo by Wael Hneini on Unsplash)

See also: EU approves controversial AI Act to mixed reactions

EU approves controversial AI Act to mixed reactions (AI News, 13 March 2024)

The European Parliament today approved the AI Act, the first ever regulatory framework governing the use of AI systems. The legislation passed with an overwhelming majority of 523 votes in favour, 46 against and 49 abstentions.

“This is a historic day,” said Italian lawmaker Brando Benifei, co-lead on the AI Act. “We have the first regulation in the world which puts a clear path for safe and human-centric development of AI.”

The AI Act will categorise AI systems into four tiers based on their potential risk to society. High-risk applications like self-driving cars will face strict requirements before being allowed on the EU market. Lower risk systems will have fewer obligations.
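The tiered structure described above can be sketched as a simple lookup. The four tier names follow the Act's risk-based approach as reported here, but the example systems and obligation summaries in this snippet are illustrative assumptions, not quotations from the legislation.

```python
# Illustrative sketch of the AI Act's four-tier, risk-based approach.
# Tier names reflect the Act's risk categories; the example systems and
# obligation wording are simplifications for illustration only.

RISK_TIERS = {
    "unacceptable": ["social scoring by public authorities"],
    "high": ["self-driving cars", "CV-screening tools"],
    "limited": ["chatbots subject to transparency rules"],
    "minimal": ["spam filters", "AI in video games"],
}

def obligations(tier: str) -> str:
    """Map a risk tier to its broad regulatory consequence."""
    return {
        "unacceptable": "prohibited on the EU market",
        "high": "strict requirements before market entry",
        "limited": "transparency obligations",
        "minimal": "few or no obligations",
    }[tier]

print(obligations("high"))  # strict requirements before market entry
```

A real compliance assessment would of course work from the Act's legal definitions rather than a keyword table; the point is only that obligations scale with the assessed risk tier.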

“The main point now will be implementation and compliance by businesses and institutions,” Benifei stated. “We are also working on further AI legislation for workplace conditions.”

His counterpart, Dragoş Tudorache of Romania, said the EU aims to promote these pioneering rules globally. “We have to be open to work with others on how to build governance with like-minded parties.”

The general AI rules take effect in May 2025, while obligations for high-risk systems kick in after three years. National oversight agencies will monitor compliance.

Differing viewpoints on impact

Reaction was mixed on whether the Act properly balances innovation with protecting rights.

Curtis Wilson, a data scientist at Synopsys, believes it will build public trust: “The strict rules and punishing fines will deter careless developers, and help customers be more confident in using AI systems…Ensuring all AI developers adhere to these standards is to everyone’s benefit.”

However, Mher Hakobyan from Amnesty International criticised the legislation as favouring industry over human rights: “It is disappointing that the EU chose to prioritise interests of industry and law enforcement over protecting people…It lacks proper transparency and accountability provisions, which will likely exacerbate abuses.”

Companies now face the challenge of overhauling practices to comply.

Marcus Evans, a data privacy lawyer, advised: “Businesses need to create and maintain robust AI governance to make the best use of the technology and ensure compliance with the new regime…They need to start preparing now to not fall foul of the rules.”

After years of negotiations, the AI Act signals the EU intends to lead globally on this transformative technology. But dissenting voices show challenges remain in finding the right balance.

(Photo by Tabrez Syed on Unsplash)

See also: OpenAI calls Elon Musk’s lawsuit claims ‘incoherent’

Mistral AI unveils LLM rivalling major players (AI News, 27 February 2024)

Mistral AI, a France-based startup, has introduced a new large language model (LLM) called Mistral Large that it claims can compete with several top AI systems on the market.  

Mistral AI stated that Mistral Large outscored most major LLMs except for OpenAI’s recently launched GPT-4 in tests of language understanding. It also performed strongly in maths and coding assessments.

Co-founder and Chief Scientist Guillaume Lample said Mistral Large represents a major advance over earlier Mistral models. The company also launched a chatbot interface named Le Chat to allow users to interact with the system, similar to ChatGPT.  

The proprietary model boasts fluency in English, French, Spanish, German, and Italian, with a vocabulary exceeding 20,000 words. While Mistral’s first model was open-source, Mistral Large’s code remains closed like systems from OpenAI and other firms.  

Mistral AI received nearly $500 million in funding late last year from backers such as Nvidia and Andreessen Horowitz. It also recently partnered with Microsoft to provide access to Mistral Large through Azure cloud services.  

Microsoft’s investment of €15 million into Mistral AI is set to face scrutiny from European Union regulators who are already analysing the tech giant’s ties to OpenAI, maker of market-leading models like GPT-3 and GPT-4. The European Commission said Tuesday it will review Microsoft’s deal with Mistral, which could lead to a formal probe jeopardising the partnership.

Microsoft has focused most of its AI efforts on OpenAI, having invested around $13 billion into the California company. Those links are now also under review in both the EU and UK for potential anti-competitive concerns. 

Pricing for the Mistral Large model starts at $8 per million tokens of input and $24 per million output tokens. The system will leverage Azure’s computing infrastructure for training and deployment needs as Mistral AI and Microsoft partner on AI research as well.
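Using the published rates above, the cost of a single API call is straightforward to estimate. The helper below is a hypothetical sketch (the function name and token counts are our own), applying the $8 per million input tokens and $24 per million output tokens quoted in the article.

```python
# Hypothetical per-request cost estimate for Mistral Large, using the
# article's published rates. Rates are USD per one million tokens.

INPUT_RATE_PER_M = 8.0    # $ per 1M input tokens
OUTPUT_RATE_PER_M = 24.0  # $ per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request at the quoted rates."""
    return (input_tokens * INPUT_RATE_PER_M
            + output_tokens * OUTPUT_RATE_PER_M) / 1_000_000

# e.g. a 2,000-token prompt producing a 500-token completion:
print(round(request_cost(2_000, 500), 4))  # 0.028
```

Note that output tokens cost three times as much as input tokens, so completion length dominates the bill for generation-heavy workloads.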

While third-party rankings have yet to fully assess Mistral Large, the firm’s earlier Mistral Medium ranked 6th out of over 60 language models. With the latest release, Mistral AI appears positioned to challenge dominant players in the increasingly crowded AI space.

(Photo by Joshua Golde on Unsplash)

See also: Stability AI previews Stable Diffusion 3 text-to-image model

AI regulation: A pro-innovation approach – EU vs UK (AI News, 31 July 2023)

In this article, the writers compare the United Kingdom’s plans for implementing a pro-innovation approach to regulation (“UK Approach”) versus the European Union’s proposed Artificial Intelligence Act (the “EU AI Act”).

Authors: Sean Musch, AI & Partners and Michael Borrelli, AI & Partners

AI – The opportunity and the challenge

AI currently delivers broad societal benefits, from medical advances to mitigating climate change. As an example, an AI technology developed by DeepMind, a UK-based business, can predict the structure of almost every protein known to science. Government frameworks consider the role of regulation in creating the environment for AI to flourish. AI technologies have not yet reached their full potential. Under the right conditions, AI will transform all areas of life and stimulate economies by unleashing innovation and driving productivity, creating new jobs and improving the workplace.

The UK has indicated a need to act quickly to continue to lead the international conversation on AI governance and to demonstrate the value of its pragmatic, proportionate regulatory approach. In its report, the UK government identifies a short time frame for intervention to provide a clear, pro-innovation regulatory environment in order to make the UK one of the top places in the world to build foundational AI companies. Not dissimilarly, EU legislators have signalled an intention to make the EU a global hub for AI innovation. On both fronts, responding to risk and building public trust are important drivers for regulation. Yet clear and consistent regulation can also support business investment and build confidence in innovation.

What remains critical for the industry is winning and retaining consumer trust, which is key to the success of innovation economies. Neither the EU nor the UK can afford to lack clear, proportionate approaches to regulation that enable the responsible application of AI to flourish. Without such consideration, they risk creating cumbersome rules that apply to all AI technologies.

What are the policy objectives and intended effects?

Similarities exist in terms of the overall aims. As shown in the table below, the core similarities revolve around growth, safety and economic prosperity:

EU AI Act:
- Ensure that AI systems placed on the market and used are safe and respect existing laws on fundamental rights and Union values.
- Enhance governance and effective enforcement of existing laws on fundamental rights and safety requirements applicable to AI systems.
- Ensure legal certainty to facilitate investment and innovation in AI.
- Facilitate the development of a single market for lawful, safe, and trustworthy AI applications and prevent market fragmentation.

UK Approach:
- Drive growth and prosperity by boosting innovation, investment, and public trust to harness the opportunities and benefits that AI technologies present.
- Strengthen the UK’s position as a global leader in AI, by ensuring the UK is the best place to develop and use AI technologies.

What are the problems being tackled?

Again, similarities exist in terms of a common focus: the end-user. AI’s involvement in multiple activities across the economy, from simple chatbots to biometric identification, inevitably means that end-users are affected. Protecting them at all costs seems to be the presiding theme:

EU AI Act:
- Safety risks. Increased risks to the safety and security of citizens caused by the use of AI systems.
- Fundamental rights risks. The use of AI systems poses an increased risk of violations of citizens’ fundamental rights and Union values.
- Legal uncertainty. Legal uncertainty and complexity on how to ensure compliance with rules applicable to AI systems dissuade businesses from developing and using the technology.
- Enforcement. Competent authorities do not have the powers and/or procedural framework to ensure compliance of AI use with fundamental rights and safety.
- Mistrust. Mistrust in AI would slow down AI development in Europe and reduce the global competitiveness of the EU economies.
- Fragmentation. Fragmented measures create obstacles for a cross-border AI single market and threaten the Union’s digital sovereignty.

UK Approach:
- Market failures. A number of market failures (information asymmetry, misaligned incentives, negative externalities, regulatory failure) mean AI risks are not being adequately addressed.
- Consumer risks. These risks include damage to physical and mental health, bias and discrimination, and infringements on privacy and individual rights.

What are the differences in policy options?

A variety of options have been considered by the respective policymakers. On the face of it, pro-innovation requires a holistic examination to account for the variety of challenges new ways of working generate. The EU sets the standard with Option 3:

EU AI Act (decided):
- Option 1 – EU voluntary labelling scheme: an EU act establishing a voluntary labelling scheme. One definition of AI, but applicable only on a voluntary basis.
- Option 2 – Ad-hoc sectoral approach: ad-hoc sectoral acts (revised or new). Each sector can adopt a definition of AI and determine the riskiness of the AI systems covered.
- Option 3 – Horizontal risk-based act on AI: a single binding horizontal act on AI. One horizontally applicable AI definition and a methodology for determining high risk (risk-based).
- Option 3+ – Option 3 plus industry-led codes of conduct for non-high-risk AI.
- Option 4 – Horizontal act for all AI: a single binding horizontal act on AI. One horizontal AI definition, but no methodology/gradation (all risks covered).

UK Approach (in process):
- Option 0 – Do nothing: assume the EU delivers the AI Act as drafted in April 2021; the UK makes no regulatory changes regarding AI.
- Option 1 – Delegate to existing regulators, guided by non-statutory advisory principles: a non-legislative option with existing regulators applying cross-sectoral AI governance principles within their remits.
- Option 2 (preferred) – Delegate to existing regulators with a duty to regard the principles, supported by central AI regulatory functions: existing regulators have a ‘duty to have due regard’ to the cross-sectoral AI governance principles, supported by central AI regulatory functions. No new mandatory obligations for businesses.
- Option 3 – Centralised AI regulator with new legislative requirements placed on AI systems: the UK establishes a central AI regulator, with mandatory requirements for businesses aligned to the EU AI Act.

What are the estimated direct compliance costs to firms?

Both the UK Approach and the EU AI Act regulatory framework will apply to all AI systems being designed or developed, made available or otherwise being used in the EU/UK, whether they are developed in the EU/UK or abroad. Both businesses that develop and deploy AI, “AI businesses”, and businesses that use AI, “AI adopting businesses”, are in the scope of the framework. These two types of firms have different expected costs per business under the respective frameworks.

UK Approach: Key assumptions for AI system costs

Key finding: Cost of compliance for HRS highest under Option 3

The assumptions below apply under Options 1, 2 and 3 (no figures are given for Option 0, the do-nothing option):

- Share of businesses that provide high-risk systems (HRS): 8.1% under each option
- Cost of compliance per HRS: £3,698 under Options 1 and 2; £36,981 under Option 3
- Share of businesses whose AI systems interact with natural persons (non-HRS): 39.0% under each option
- Cost of compliance per non-HRS: £330 under each option
- Assumed number of AI systems per AI business (2020): small – 2; medium – 5; large – 10
- Assumed number of AI systems per AI-adopting business (2020): small – 2; medium – 5; large – 10
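The key finding that Option 3 is the costliest for high-risk providers can be illustrated with a rough expected-cost calculation. Note the simplifying assumption here: we treat the business-level shares (8.1% HRS, 39.0% non-HRS) as per-system probabilities, which is our own reading for illustration, not the report's methodology.

```python
# Rough expected compliance cost for a small AI business (2 systems),
# assuming each system independently has an 8.1% chance of being
# high-risk (HRS) and a 39.0% chance of being a non-HRS system that
# interacts with natural persons. Cost figures are from the table above;
# the per-system probability framing is our simplifying assumption.

def expected_cost(n_systems: int, p_hrs: float, cost_hrs: float,
                  p_nonhrs: float, cost_nonhrs: float) -> float:
    """Expected compliance cost (GBP) across a business's AI systems."""
    return n_systems * (p_hrs * cost_hrs + p_nonhrs * cost_nonhrs)

option_1 = expected_cost(2, 0.081, 3_698, 0.390, 330)   # HRS cost £3,698
option_3 = expected_cost(2, 0.081, 36_981, 0.390, 330)  # HRS cost £36,981

print(round(option_1, 2), round(option_3, 2))  # 856.48 6248.32
```

Under these assumptions the tenfold jump in per-HRS cost under Option 3 translates into roughly a sevenfold rise in a small business's expected bill.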
EU AI Act: Total compliance cost of the five requirements for each AI product

Key finding: Information provision represents the highest cost incurred by firms.

Administrative activity (figures as given in the source table):
- Training data: €5,180.5
- Documents & record keeping: €2,231
- Information provision: €6,800
- Human oversight: €1,260
- Robustness and accuracy: €4,750
- Totals: 20,581.5 total minutes; €10,976.8 total administrative cost (hourly rate = €32); €29,276.8 total cost
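The totals can be sanity-checked: reading the 20,581.5 figure as minutes of administrative work and pricing it at the stated €32 hourly rate reproduces the €10,976.8 administrative cost exactly. Treating that figure as minutes rather than euros is our interpretation of the (garbled) source table.

```python
# Sanity check on the EU compliance-cost totals: 20,581.5 minutes of
# administrative work, converted to hours and priced at €32/hour,
# should match the quoted €10,976.8 administrative cost.

HOURLY_RATE_EUR = 32.0

def admin_cost_eur(total_minutes: float) -> float:
    """Convert administrative minutes into euros at the hourly rate."""
    return total_minutes / 60 * HOURLY_RATE_EUR

print(round(admin_cost_eur(20_581.5), 1))  # 10976.8
```

That the conversion lands on the quoted figure to the cent supports reading the first total as time rather than money.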

In light of these comparisons, the EU appears to estimate a lower cost of compliance than the UK. Lower costs do not imply a less rigid approach. Rather, they reflect an itemised approach to cost estimation and the use of a standard pricing metric, hours. In practice, firms are likely to make compliance more efficient by reducing the number of hours it requires.

Lessons from the UK Approach for the EU AI Act

The forthcoming EU AI Act is set to place the EU at the global forefront of regulating this emerging technology. Accordingly, models for the governance and mitigation of AI risk from outside the region can still provide insightful lessons for EU decision-makers to learn and issues to account for before the EU AI Act is passed.

This is certainly applicable to Article 9 of the EU AI Act, which requires developers to establish, implement, document, and maintain risk management systems for high-risk AI systems. There are three key ideas for EU decision-makers to consider from the UK Approach.

AI assurance techniques and technical standards

Under Article 17 of the EU AI Act, providers of high-risk AI systems must put in place a quality management system designed to ensure compliance. To do this, providers of high-risk AI systems must establish techniques, procedures, and systematic actions to be used for development, quality control, and quality assurance. The EU AI Act only briefly covers the concept of assurance; it could benefit from published assurance techniques and technical standards, which play a critical role in enabling the responsible adoption of AI by ensuring that potential harms at all levels of society are identified and documented.

To assure AI systems effectively, the UK government is calling for a toolbox of assurance techniques to measure, evaluate, and communicate the trustworthiness of AI systems across the development and deployment life cycle. These techniques include impact assessment, audit, and performance testing along with formal verification methods. To help innovators understand how AI assurance techniques can support wider AI governance, the government launched a ‘Portfolio of AI Assurance techniques’ in Spring 2023. This is an industry collaboration to showcase how these tools are already being applied by businesses to real-world use cases and how they align with the AI regulatory principles.

Similarly, assurance techniques need to be underpinned by available technical standards, which provide a common understanding across assurance providers. Technical standards and assurance techniques will also enable organisations to demonstrate that their systems are in line with the regulatory principles enshrined under the EU AI Act and the UK Approach. The two initiatives are also at a similar stage of development.

Specifically, the EU AI Act defines common mandatory requirements applicable to the design and development of certain AI systems before they are placed on the market, which will be further operationalised through harmonised technical standards. In a similar fashion, the UK government intends to play a leading role in the development of international technical standards, working with industry and with international and UK partners. It plans to continue supporting the role of technical standards in complementing its approach to AI regulation, including through the UK AI Standards Hub. These technical standards may help demonstrate firms' compliance with the EU AI Act.

A harmonised vocabulary

All relevant parties would benefit from reaching a consensus on the definitions of key terms related to the foundations of AI regulation. While the EU AI Act and the UK Approach are either under development or in the incubation stage, decision-makers for both initiatives should seize the opportunity to develop a shared understanding of core AI ideas, principles, and concepts, and codify these into a harmonised transatlantic vocabulary. The comparison below identifies where the two initiatives agree and where they diverge:

Principles | EU AI Act | UK Approach
Shared | Accountability; Safety; Privacy; Transparency; Fairness | Accountability; Safety; Privacy; Transparency; Fairness
Divergent | Data Governance; Diversity; Environmental and Social Well-Being; Human Agency and Oversight; Technical Robustness; Non-Discrimination | Governance; Security; Robustness; Explainability; Contestability; Redress
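The shared/divergent classification above amounts to a simple set comparison. The sketch below reproduces it; the principle lists are transcribed from the table, and the assignment of each divergent term to its framework follows the table:

```python
# Compare the two principle vocabularies as sets to find the shared and
# divergent terms, mirroring the comparison above.

eu_ai_act = {
    "Accountability", "Safety", "Privacy", "Transparency", "Fairness",
    "Data Governance", "Diversity", "Environmental and Social Well-Being",
    "Human Agency and Oversight", "Technical Robustness", "Non-Discrimination",
}
uk_approach = {
    "Accountability", "Safety", "Privacy", "Transparency", "Fairness",
    "Governance", "Security", "Robustness", "Explainability",
    "Contestability", "Redress",
}

shared = eu_ai_act & uk_approach   # terms both frameworks use
eu_only = eu_ai_act - uk_approach  # divergent: EU AI Act only
uk_only = uk_approach - eu_ai_act  # divergent: UK Approach only

print(sorted(shared))
```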

How AI & Partners can help

We can help you start assessing your AI systems using recognised metrics ahead of the expected changes brought about by the EU AI Act. Our leading practice is geared towards helping you identify, design, and implement appropriate metrics for your assessments.

 Website: https://www.ai-and-partners.com/

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post AI regulation: A pro-innovation approach – EU vs UK appeared first on AI News.

AI Act: The power of open-source in guiding regulations https://www.artificialintelligence-news.com/2023/07/26/ai-act-power-open-source-guiding-regulations/ https://www.artificialintelligence-news.com/2023/07/26/ai-act-power-open-source-guiding-regulations/#respond Wed, 26 Jul 2023 10:41:51 +0000 https://www.artificialintelligence-news.com/?p=13328 As the EU debates the AI Act, lessons from open-source software can inform the regulatory approach to open ML systems. The AI Act, set to be a global precedent, aims to address the risks associated with AI while encouraging the development of cutting-edge technology. One of the key aspects of this Act is its support... Read more »

The post AI Act: The power of open-source in guiding regulations appeared first on AI News.

As the EU debates the AI Act, lessons from open-source software can inform the regulatory approach to open ML systems.

The AI Act, set to be a global precedent, aims to address the risks associated with AI while encouraging the development of cutting-edge technology. One of the key aspects of this Act is its support for open-source, non-profit, and academic research and development in the AI ecosystem. Such support ensures the development of safe, transparent, and accountable AI systems that benefit all EU citizens.

Drawing from the success of open-source software development, policymakers can craft regulations that encourage open AI development while safeguarding user interests. By providing exemptions and proportional requirements for open ML systems, the EU can foster innovation and competition in the AI market while maintaining a thriving open-source ecosystem.

Representing both commercial and nonprofit stakeholders, several organisations – including GitHub, Hugging Face, EleutherAI, Creative Commons, and more – have banded together to release a policy paper calling on EU policymakers to protect open-source innovation.

The organisations have five proposals:

  1. Define AI components clearly: Clear definitions of AI components will help stakeholders understand their roles and responsibilities, facilitating collaboration and innovation in the open ecosystem.
  2. Clarify that collaborative development of open-source AI components is exempt from AI Act requirements: To encourage open-source development, the Act should clarify that contributors to public repositories are not subject to the same regulatory requirements as commercial entities.
  3. Support the AI Office’s coordination with the open-source ecosystem: The Act should encourage inclusive governance and collaboration between the AI Office and open-source developers to foster transparency and knowledge exchange.
  4. Ensure practical and effective R&D exception: Allow limited real-world testing in different conditions, combining aspects of the Council’s approach and the Parliament’s Article 2(5d), to facilitate research and development without compromising safety and accountability.
  5. Set proportional requirements for “foundation models”: Differentiate between various uses and development modalities of foundation models, including open source approaches, to ensure fair treatment and promote competition.

Open-source AI development offers several advantages, including transparency, inclusivity, and modularity. It allows stakeholders to collaborate and build on each other’s work, leading to more robust and diverse AI models. For instance, the EleutherAI community has become a leading open-source ML lab, releasing pre-trained models and code libraries that have enabled foundational research and reduced the barriers to developing large AI models.

Similarly, the BigScience project, which brought together over 1200 multidisciplinary researchers, highlights the importance of facilitating direct access to AI components across institutions and disciplines.

Such open collaborations have democratised access to large AI models, enabling researchers to fine-tune and adapt them to various languages and specific tasks—ultimately contributing to a more diverse and representative AI landscape.

Open research and development also promote transparency and accountability in AI systems. For example, LAION – a non-profit research organisation – released openCLIP models, which have been instrumental in identifying and addressing biases in AI applications. Open access to training data and model components has enabled researchers and the public to scrutinise the inner workings of AI systems and challenge misleading or erroneous claims.

The AI Act’s success depends on striking a balance between regulation and support for the open AI ecosystem. While openness and transparency are essential, regulation must also mitigate risks, ensure standards, and establish clear liability for AI systems’ potential harms.

As the EU sets the stage for regulating AI, embracing open source and open science will be critical to ensure that AI technology benefits all citizens.

By implementing the recommendations provided by organisations representing stakeholders in the open AI ecosystem, the AI Act can foster an environment of collaboration, transparency, and innovation, making Europe a leader in the responsible development and deployment of AI technologies.

(Photo by Nick Page on Unsplash)

European Parliament adopts AI Act position https://www.artificialintelligence-news.com/2023/06/14/european-parliament-adopts-ai-act-position/ https://www.artificialintelligence-news.com/2023/06/14/european-parliament-adopts-ai-act-position/#respond Wed, 14 Jun 2023 14:27:26 +0000 https://www.artificialintelligence-news.com/?p=13192 The European Parliament has taken a significant step towards the regulation of artificial intelligence by voting to adopt its position for the upcoming AI Act with an overwhelming majority.  The act aims to regulate AI based on its potential to cause harm and follows a risk-based approach, prohibiting applications that pose an unacceptable risk while... Read more »

The post European Parliament adopts AI Act position appeared first on AI News.

The European Parliament has taken a significant step towards the regulation of artificial intelligence by voting to adopt its position for the upcoming AI Act with an overwhelming majority. 

The act aims to regulate AI based on its potential to cause harm and follows a risk-based approach, prohibiting applications that pose an unacceptable risk while imposing strict regulations for high-risk use cases.

The timing of AI regulation has been a subject of debate, but Dragoș Tudorache, one of the European Parliament’s co-rapporteurs on the AI Act, emphasised that it is the right time to regulate AI due to its profound impact.

Dr Ventsislav Ivanov, AI Expert and Lecturer at Oxford Business College, said: “Regulating artificial intelligence is one of the most important political challenges of our time, and the EU should be congratulated for attempting to tame the risks associated with technologies that are already revolutionising our daily lives.

“As the chaos and controversy accompanying this vote show, this will not be an easy feat. Taking on the global tech companies and other interested parties will be akin to Hercules battling the seven-headed hydra.”

The adoption of the AI Act faced uncertainty as a political deal crumbled, leading to amendments from various political groups.

One of the main points of contention was the use of Remote Biometric Identification, with liberal and progressive lawmakers seeking to ban its real-time use except for ex-post investigations of serious crimes. The centre-right European People’s Party attempted to introduce exceptions for exceptional circumstances like terrorist attacks or missing persons, but their efforts were unsuccessful.

The act will introduce a tiered approach for AI models, with stricter regulations for foundation models and generative AI.

The European Parliament intends to introduce mandatory labelling for AI-generated content and mandate the disclosure of training data covered by copyright. This move comes as generative AI, exemplified by ChatGPT, gained widespread attention—prompting the European Commission to launch outreach initiatives to foster international alignment on AI rules.

MEPs made several significant changes to the AI Act, including expanding the list of prohibited practices to include subliminal techniques, biometric categorisation, predictive policing, internet-scraped facial recognition databases, and emotion recognition software.

MEPs also introduced an extra layer for high-risk AI applications and extended the list of high-risk areas and use cases to cover law enforcement, migration control, and the recommender systems of prominent social media platforms.

Robin Röhm, CEO of Apheris, commented: “The passing of the plenary vote on the EU’s AI Act marks a significant milestone in AI regulation, but raises more questions than it answers. It will make it more difficult for start-ups to compete and means that investors are less likely to deploy capital into companies operating in the EU.

“It is critical that we allow for capital to flow to businesses, given the cost of building AI technology, but the risk-based approach to regulation proposed by the EU is likely to lead to a lot of extra burden for the European ecosystem and will make investing less attractive.”

With the European Parliament’s adoption of its position on the AI Act, interinstitutional negotiations will commence with the EU Council of Ministers and the European Commission. The negotiations – known as trilogues – will address key points of contention such as high-risk categories, fundamental rights, and foundation models.

Spain, which assumes the rotating presidency of the Council in July, has made finalising the AI law its top digital priority. The aim is to reach a deal by November, with multiple trilogues planned as a backup.

The negotiations are expected to intensify in the coming months as the EU seeks to establish comprehensive regulations for AI, balancing innovation and governance while ensuring the protection of fundamental rights.

“The key to good regulation is ensuring that safety concerns are addressed while not stifling innovation. It remains to be seen whether the EU can achieve this,” concludes Röhm.

(Image Credit: European Union 2023 / Mathieu Cugnot)

Similar: UK will host global AI summit to address potential risks

EU committees green-light the AI Act https://www.artificialintelligence-news.com/2023/05/11/eu-committees-green-light-ai-act/ https://www.artificialintelligence-news.com/2023/05/11/eu-committees-green-light-ai-act/#respond Thu, 11 May 2023 12:09:27 +0000 https://www.artificialintelligence-news.com/?p=13048 The Internal Market Committee and the Civil Liberties Committee of the European Parliament have endorsed new transparency and risk-management rules for artificial intelligence systems known as the AI Act. This marks a major step in the development of AI regulation in Europe, as these are the first-ever rules for AI. The rules aim to ensure... Read more »

The post EU committees green-light the AI Act appeared first on AI News.

The Internal Market Committee and the Civil Liberties Committee of the European Parliament have endorsed new transparency and risk-management rules for artificial intelligence systems known as the AI Act.

This marks a major step in the development of AI regulation in Europe, as these are the first-ever rules for AI. The rules aim to ensure that AI systems are safe, transparent, traceable, and non-discriminatory.

After the vote, co-rapporteur Brando Benifei (S&D, Italy) said:

“We are on the verge of putting in place landmark legislation that must resist the challenge of time. It is crucial to build citizens’ trust in the development of AI, to set the European way for dealing with the extraordinary changes that are already happening, as well as to steer the political debate on AI at the global level.

We are confident our text balances the protection of fundamental rights with the need to provide legal certainty to businesses and stimulate innovation in Europe.”

Co-rapporteur Dragos Tudorache (Renew, Romania) added:

“Given the profound transformative impact AI will have on our societies and economies, the AI Act is very likely the most important piece of legislation in this mandate. It’s the first piece of legislation of this kind worldwide, which means that the EU can lead the way in making AI human-centric, trustworthy, and safe.

We have worked to support AI innovation in Europe and to give start-ups, SMEs and industry space to grow and innovate while protecting fundamental rights, strengthening democratic oversight, and ensuring a mature system of AI governance and enforcement.”

The rules are based on a risk-based approach and they establish obligations for providers and users depending on the level of risk that the AI system can generate. AI systems with an unacceptable level of risk to people’s safety would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities, or are used for social scoring.

MEPs also substantially amended the list of prohibited AI practices to ban intrusive and discriminatory uses of AI systems, including:

  1. real-time remote biometric identification systems in publicly accessible spaces;
  2. post-remote biometric identification systems (except for law enforcement purposes);
  3. biometric categorisation systems using sensitive characteristics;
  4. predictive policing systems;
  5. emotion recognition systems in law enforcement, border management, the workplace, and educational institutions; and
  6. indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases.

MEPs also expanded the classification of high-risk areas to include harm to people’s health, safety, fundamental rights, or the environment. They also added AI systems that influence voters in political campaigns and recommender systems used by social media platforms to the high-risk list.

To boost AI innovation, MEPs added exemptions to these rules for research activities and AI components provided under open-source licenses. The new law also promotes regulatory sandboxes – or controlled environments established by public authorities – to test AI before its deployment.

MEPs want to boost citizens’ right to file complaints about AI systems and receive explanations of decisions based on high-risk AI systems that significantly impact their rights. MEPs also reformed the role of the EU AI Office, which would be tasked with monitoring how the AI rulebook is implemented.

Tim Wright, Tech and AI Regulatory Partner at London-based law firm Fladgate, commented:

“US-based AI developers will likely steal a march on their European competitors given the news that the EU parliamentary committees have green-lit its groundbreaking AI Act, where AI systems will need to be categorised according to their potential for harm from the outset. 

The US tech approach is typically to experiment first and, once market and product fit is established, to retrofit to other markets and their regulatory framework. This approach fosters innovation whereas EU-based AI developers will need to take note of the new rules and develop systems and processes which may take the edge off their ability to innovate.

The UK is adopting a similar approach to the US, although the proximity of the EU market means that UK-based developers are more likely to fall into step with the EU ruleset from the outset. However, the potential to experiment in a safe space – a regulatory sandbox – may prove very attractive.”

Before negotiations with the Council on the final form of the law can begin, this draft negotiating mandate needs to be endorsed by the whole Parliament, with the vote expected during the 12-15 June session.

(Photo by Denis Sebastian Tamas on Unsplash)

Related: UK details ‘pro-innovation’ approach to AI regulation

​​Italy will lift ChatGPT ban if OpenAI fixes privacy issues https://www.artificialintelligence-news.com/2023/04/13/italy-lift-chatgpt-ban-openai-fixes-privacy-issues/ https://www.artificialintelligence-news.com/2023/04/13/italy-lift-chatgpt-ban-openai-fixes-privacy-issues/#respond Thu, 13 Apr 2023 15:18:41 +0000 https://www.artificialintelligence-news.com/?p=12944 Italy’s data protection authority has said that it’s willing to lift its ChatGPT ban if OpenAI meets specific conditions. The Guarantor for the Protection of Personal Data (GPDP) announced last month that it was blocking access to OpenAI’s ChatGPT. The move was part of an ongoing investigation into whether the chatbot violated Italy’s data privacy... Read more »

The post ​​Italy will lift ChatGPT ban if OpenAI fixes privacy issues appeared first on AI News.

Italy’s data protection authority has said that it’s willing to lift its ChatGPT ban if OpenAI meets specific conditions.

The Guarantor for the Protection of Personal Data (GPDP) announced last month that it was blocking access to OpenAI’s ChatGPT. The move was part of an ongoing investigation into whether the chatbot violated Italy’s data privacy laws and the EU’s infamous General Data Protection Regulation (GDPR).

The GPDP was concerned that ChatGPT could recall and emit personal information, such as phone numbers and addresses, from input queries. Additionally, officials were worried that the chatbot could expose minors to inappropriate answers that could potentially be harmful.

The GPDP says it will lift the ban on ChatGPT if its creator, OpenAI, enforces rules protecting minors and users’ personal data by 30th April 2023.

OpenAI has been asked to notify people on its website how ChatGPT stores and processes their data and require users to confirm that they are 18 and older before using the software.

An age verification process will be required when registering new users and children below the age of 13 must be prevented from accessing the software. People aged 13-18 must obtain consent from their parents to use ChatGPT.
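The age rules described above amount to a simple decision procedure. The following sketch is purely illustrative; the function name and logic are ours, not OpenAI's actual implementation:

```python
# Hypothetical age-gating check reflecting the GPDP's conditions:
# under-13s are blocked, 13-17s need parental consent, 18+ may register.

def may_register(age: int, parental_consent: bool = False) -> bool:
    """Return True if a user of the given age may access the service."""
    if age < 13:
        return False             # children below 13 must be prevented from access
    if age < 18:
        return parental_consent  # ages 13-17 require consent from their parents
    return True                  # users 18 and older may register

print(may_register(15, parental_consent=True))  # → True
```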

The company must also ask for explicit consent to use people’s data to train its AI models and allow anyone – whether they’re a user or not – to request any false personal information generated by ChatGPT to be corrected or deleted altogether.

The age verification requirements must be implemented by 30th September or the ban will be reinstated.

This move is part of a larger trend of increased scrutiny of AI technologies by regulators around the world. ChatGPT is not the only AI system that has faced regulatory challenges.

Regulators in Canada and France have also launched investigations into whether ChatGPT violates data privacy laws after receiving official complaints. Meanwhile, Spain has urged the EU’s privacy watchdog to launch a deeper investigation into ChatGPT.

The international scrutiny of ChatGPT and similar AI systems highlights the need for developers to be proactive in addressing privacy concerns and implementing safeguards to protect users’ personal data.

(Photo by Levart_Photographer on Unsplash)

Related: AI think tank calls GPT-4 a risk to public safety
