AI News – ‘european union’ tag archive

EU launches office to implement AI Act and foster innovation (30 May 2024)

The European Union has launched a new office dedicated to overseeing the implementation of its landmark AI Act, which is regarded as one of the most comprehensive AI regulations in the world. This new initiative adopts a risk-based approach, imposing stringent regulations on higher-risk AI applications to ensure their safe and ethical deployment.

The primary goal of this office is to promote the “future development, deployment and use” of AI technologies, aiming to harness their societal and economic benefits while mitigating associated risks. By focusing on innovation and safety, the office seeks to position the EU as a global leader in AI regulation and development.

According to Margrethe Vestager, the EU’s competition chief, the new office will play a “key role” in implementing the AI Act, particularly with regard to general-purpose AI models. She stated, “Together with developers and a scientific community, the office will evaluate and test general-purpose AI to ensure that AI serves us as humans and upholds our European values.”

Sridhar Iyengar, Managing Director for Zoho Europe, welcomed the establishment of the AI office, noting, “The establishment of the AI office in the European Commission to play a key role with the implementation of the EU AI Act is a welcome sign of progress, and it is encouraging to see the EU positioning itself as a global leader in AI regulation. We hope to continue to see collaboration between governments, businesses, academics and industry experts to guide on safe use of AI to boost business growth.”

Iyengar highlighted the dual nature of AI’s impact on businesses, pointing out both its benefits and concerns. He emphasised the importance of adhering to best practice guidance and legislative guardrails to ensure safe and ethical AI adoption.

“AI can drive innovation in business tools, helping to improve fraud detection, forecasting, and customer data analysis to name a few. These benefits not only have the potential to elevate customer experience but can increase efficiency, present insights, and suggest actions to drive further success,” Iyengar said.

The office will be staffed by more than 140 individuals, including technology specialists, administrative assistants, lawyers, policy specialists, and economists. It will consist of various units focusing on regulation and compliance, as well as safety and innovation, reflecting the multifaceted approach needed to govern AI effectively.

Rachael Hays, Transformation Director for Definia, part of The IN Group, commented: “The establishment of a dedicated AI Office within the European Commission underscores the EU’s commitment to both innovation and regulation which is undoubtedly crucial in this rapidly evolving AI landscape.”

Hays also pointed out the potential for workforce upskilling that this initiative provides. She referenced findings from their Tech and the Boardroom research, which revealed that over half of boardroom leaders view AI as the biggest direct threat to their organisations.

“This initiative directly addresses these fears as employees across various sectors are given the opportunity to adapt and thrive in an AI-driven world. The AI Office offers promising hope and guidance in developing economic benefits while mitigating risks associated with AI technology, something we should all get on board with,” she added.

As the EU takes these steps towards comprehensive AI governance, the office’s work will be pivotal in driving forward both innovation and safety in the field.

(Photo by Sara Kurfeß)

See also: Elon Musk’s xAI secures $6B to challenge OpenAI in AI race

OpenAI faces complaint over fictional outputs (29 April 2024)

European data protection advocacy group noyb has filed a complaint against OpenAI over the company’s inability to correct inaccurate information generated by ChatGPT. The group alleges that OpenAI’s failure to ensure the accuracy of personal data processed by the service violates the General Data Protection Regulation (GDPR) in the European Union.

“Making up false information is quite problematic in itself. But when it comes to false information about individuals, there can be serious consequences,” said Maartje de Graaf, Data Protection Lawyer at noyb. 

“It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.”

The GDPR requires that personal data be accurate, and individuals have the right to rectification if data is inaccurate, as well as the right to access information about the data processed and its sources. However, OpenAI has openly admitted that it cannot correct incorrect information generated by ChatGPT or disclose the sources of the data used to train the model.

“Factual accuracy in large language models remains an area of active research,” OpenAI has argued.

The advocacy group highlights a New York Times report that found chatbots like ChatGPT “invent information at least 3 percent of the time – and as high as 27 percent.” In the complaint against OpenAI, noyb cites an example where ChatGPT repeatedly provided an incorrect date of birth for the complainant, a public figure, despite requests for rectification.

“Despite the fact that the complainant’s date of birth provided by ChatGPT is incorrect, OpenAI refused his request to rectify or erase the data, arguing that it wasn’t possible to correct data,” noyb stated.

OpenAI claimed it could filter or block data on certain prompts, such as the complainant’s name, but not without preventing ChatGPT from filtering all information about the individual. The company also failed to adequately respond to the complainant’s access request, which the GDPR requires companies to fulfil.

“The obligation to comply with access requests applies to all companies. It is clearly possible to keep records of training data that was used to at least have an idea about the sources of information,” said de Graaf. “It seems that with each ‘innovation,’ another group of companies thinks that its products don’t have to comply with the law.”
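
De Graaf’s record-keeping point can be made concrete with a small sketch. The snippet below (Python) shows one way per-document provenance records might be kept so that access requests can at least identify sources; the schema, field names, and file format are illustrative assumptions, not anything OpenAI or noyb has specified.

    # Hypothetical sketch: per-document provenance records for training
    # data, kept so access requests can at least identify sources.
    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class ProvenanceRecord:
        document_id: str
        source_url: str
        retrieved_at: str  # ISO 8601 date the document was collected
        licence: str

    records = [
        ProvenanceRecord("doc-0001", "https://example.com/article",
                         "2023-01-15", "CC-BY-4.0"),
    ]

    # Append-only JSON Lines file that can later be searched when a data
    # subject asks which sources mention them.
    with open("provenance.jsonl", "a") as f:
        for record in records:
            f.write(json.dumps(asdict(record)) + "\n")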

European privacy watchdogs have already scrutinised ChatGPT’s inaccuracies, with the Italian Data Protection Authority imposing a temporary restriction on OpenAI’s data processing in March 2023 and the European Data Protection Board establishing a task force on ChatGPT.

In its complaint, noyb is asking the Austrian Data Protection Authority to investigate OpenAI’s data processing and measures to ensure the accuracy of personal data processed by its large language models. The advocacy group also requests that the authority order OpenAI to comply with the complainant’s access request, bring its processing in line with the GDPR, and impose a fine to ensure future compliance.

You can read the full complaint here (PDF).

(Photo by Eleonora Francesca Grotto)

See also: Igor Jablokov, Pryon: Building a responsible AI future

Mistral AI unveils LLM rivalling major players (27 February 2024)

Mistral AI, a France-based startup, has introduced a new large language model (LLM) called Mistral Large that it claims can compete with several top AI systems on the market.  

Mistral AI stated that Mistral Large outscored most major LLMs, except OpenAI’s GPT-4, in tests of language understanding. It also performed strongly in maths and coding assessments.

Co-founder and Chief Scientist Guillaume Lample said Mistral Large represents a major advance over earlier Mistral models. The company also launched a chatbot interface named Le Chat to allow users to interact with the system, similar to ChatGPT.  

The proprietary model boasts fluency in English, French, Spanish, German, and Italian, with a vocabulary exceeding 20,000 words. While Mistral’s first model was open-source, Mistral Large’s code remains closed like systems from OpenAI and other firms.  

Mistral AI received nearly $500 million in funding late last year from backers such as Nvidia and Andreessen Horowitz. It also recently partnered with Microsoft to provide access to Mistral Large through Azure cloud services.  

Microsoft’s investment of €15 million into Mistral AI is set to face scrutiny from European Union regulators who are already analysing the tech giant’s ties to OpenAI, maker of market-leading models like GPT-3 and GPT-4. The European Commission said Tuesday it will review Microsoft’s deal with Mistral, which could lead to a formal probe jeopardising the partnership.

Microsoft has focused most of its AI efforts on OpenAI, having invested around $13 billion into the California company. Those links are now also under review in both the EU and UK for potential anti-competitive concerns. 

Pricing for the Mistral Large model starts at $8 per million tokens of input and $24 per million output tokens. The system will leverage Azure’s computing infrastructure for training and deployment needs as Mistral AI and Microsoft partner on AI research as well.
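
At those list prices, usage costs are simple arithmetic. A quick illustrative calculation (Python) follows; the per-million rates come from the announcement, while the token volumes are made-up examples.

    # Cost estimate at Mistral Large's published list prices:
    # $8 per 1M input tokens, $24 per 1M output tokens.
    INPUT_RATE = 8.00    # USD per million input tokens
    OUTPUT_RATE = 24.00  # USD per million output tokens

    def usage_cost(input_tokens: int, output_tokens: int) -> float:
        """Return the USD cost for the given (hypothetical) token volumes."""
        return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000

    # Example: 50M input tokens and 10M output tokens in a month.
    print(f"${usage_cost(50_000_000, 10_000_000):,.2f}")  # -> $640.00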

While third-party rankings have yet to fully assess Mistral Large, the firm’s earlier Mistral Medium ranked 6th out of over 60 language models. With the latest release, Mistral AI appears positioned to challenge dominant players in the increasingly crowded AI space.

(Photo by Joshua Golde on Unsplash)

See also: Stability AI previews Stable Diffusion 3 text-to-image model

AI regulation: A pro-innovation approach – EU vs UK (31 July 2023)

In this article, the writers compare the United Kingdom’s plans for implementing a pro-innovation approach to regulation (“UK Approach”) versus the European Union’s proposed Artificial Intelligence Act (the “EU AI Act”).

Authors: Sean Musch, AI & Partners and Michael Borrelli, AI & Partners

AI – The opportunity and the challenge

AI currently delivers broad societal benefits, from medical advances to mitigating climate change. As an example, an AI technology developed by DeepMind, a UK-based business, can predict the structure of almost every protein known to science. Government frameworks consider the role of regulation in creating the environment for AI to flourish. AI technologies have not yet reached their full potential. Under the right conditions, AI will transform all areas of life and stimulate economies by unleashing innovation and driving productivity, creating new jobs and improving the workplace.

The UK has indicated a need to act quickly to continue to lead the international conversation on AI governance and to demonstrate the value of its pragmatic, proportionate regulatory approach. In its report, the UK government identifies a short window for intervention to provide a clear, pro-innovation regulatory environment and make the UK one of the top places in the world to build foundational AI companies. Not dissimilarly, EU legislators have signalled an intention to make the EU a global hub for AI innovation. On both fronts, responding to risk and building public trust are important drivers for regulation. Yet clear and consistent regulation can also support business investment and build confidence in innovation.

What remains critical for the industry is winning and retaining consumer trust, which is key to the success of innovation economies. Neither the EU nor the UK can afford to be without a clear, proportionate approach to regulation that enables the responsible application of AI to flourish. Without such consideration, they risk creating cumbersome rules that apply indiscriminately to all AI technologies.

What are the policy objectives and intended effects?

Similarities exist in terms of the overall aims. As shown in the table below, the core similarities revolve around growth, safety and economic prosperity:

EU AI Act:

  • Ensure that AI systems placed on the market and used are safe and respect existing laws on fundamental rights and Union values.
  • Enhance governance and effective enforcement of existing laws on fundamental rights and safety requirements applicable to AI systems.
  • Ensure legal certainty to facilitate investment and innovation in AI.
  • Facilitate the development of a single market for lawful, safe, and trustworthy AI applications and prevent market fragmentation.

UK Approach:

  • Drive growth and prosperity by boosting innovation, investment, and public trust to harness the opportunities and benefits that AI technologies present.
  • Strengthen the UK’s position as a global leader in AI, by ensuring the UK is the best place to develop and use AI technologies.

What are the problems being tackled?

Again, similarities exist in terms of a common focus: the end-user. AI’s involvement in multiple activities of the economy, from simple chatbots to biometric identification, inevitably means that end-users are affected. Protecting them at all costs seems to be the presiding theme:

EU AI Act:

  • Safety risks. Increased risks to the safety and security of citizens caused by the use of AI systems.
  • Fundamental rights risk. The use of AI systems poses an increased risk of violations of citizens’ fundamental rights and Union values.
  • Legal uncertainty. Legal uncertainty and complexity on how to ensure compliance with rules applicable to AI systems dissuade businesses from developing and using the technology.
  • Enforcement. Competent authorities do not have the powers and/or procedural framework to ensure compliance of AI use with fundamental rights and safety.
  • Mistrust. Mistrust in AI would slow down AI development in Europe and reduce the global competitiveness of the EU economies.
  • Fragmentation. Fragmented measures create obstacles for a cross-border AI single market and threaten the Union’s digital sovereignty.

UK Approach:

  • Market failures. A number of market failures (information asymmetry, misaligned incentives, negative externalities, regulatory failure) mean AI risks are not being adequately addressed.
  • Consumer risks. These risks include damage to physical and mental health, bias and discrimination, and infringements on privacy and individual rights.

What are the differences in policy options?

A variety of options have been considered by the respective policymakers. On the face of it, pro-innovation requires a holistic examination to account for the variety of challenges new ways of working generate. The EU sets the standard with Option 3:

EU AI Act (decided):

  • Option 1 – EU voluntary labelling scheme: an EU act establishing a voluntary labelling scheme. One definition of AI, however applicable only on a voluntary basis.
  • Option 2 – Ad-hoc sectoral approach: ad-hoc sectoral acts (revised or new). Each sector can adopt a definition of AI and determine the riskiness of the AI systems covered.
  • Option 3 – Horizontal risk-based act on AI: a single binding horizontal act on AI. One horizontally applicable AI definition and a methodology for the determination of high-risk systems (risk-based).
  • Option 3+ – Option 3 plus industry-led codes of conduct for non-high-risk AI.
  • Option 4 – Horizontal act for all AI: a single binding horizontal act on AI. One horizontal AI definition, but no methodology/gradation (all risks covered).

UK Approach (in process):

  • Option 0 – Do nothing: assume the EU delivers the AI Act as drafted in April 2021; the UK makes no regulatory changes regarding AI.
  • Option 1 – Delegate to existing regulators, guided by non-statutory advisory principles: a non-legislative option with existing regulators applying cross-sectoral AI governance principles within their remits.
  • Option 2 (preferred option) – Delegate to existing regulators with a duty to regard the principles, supported by central AI regulatory functions: existing regulators have a ‘duty to have due regard’ to the cross-sectoral AI governance principles, supported by central AI regulatory functions. No new mandatory obligations for businesses.
  • Option 3 – Centralised AI regulator with new legislative requirements placed on AI systems: the UK establishes a central AI regulator, with mandatory requirements for businesses aligned to the EU AI Act.

What are the estimated direct compliance costs to firms?

Both the UK Approach and the EU AI Act regulatory framework will apply to all AI systems being designed or developed, made available or otherwise being used in the EU/UK, whether they are developed in the EU/UK or abroad. Both businesses that develop and deploy AI, “AI businesses”, and businesses that use AI, “AI adopting businesses”, are in the scope of the framework. These two types of firms have different expected costs per business under the respective frameworks.

UK Approach: Key assumptions for AI system costs

Key finding: Cost of compliance for HRS highest under Option 3

  • Share of businesses that provide high-risk systems (HRS): 8.1% under Options 1, 2, and 3.
  • Cost of compliance per HRS: £3,698 under Options 1 and 2; £36,981 under Option 3.
  • Share of businesses with AI systems that interact with natural persons (non-HRS): 39.0% under Options 1, 2, and 3.
  • Cost of compliance per non-HRS system: £330 under Options 1, 2, and 3.
  • Assumed number of AI systems per AI business (2020): small – 2; medium – 5; large – 10.
  • Assumed number of AI systems per AI-adopting business (2020): small – 2; medium – 5; large – 10.

(Option 0, the do-nothing option, imposes no new compliance costs.)
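
Taken together, these assumptions imply a simple expected-cost model per business. The sketch below (Python) combines the figures above under Option 3; treating the HRS and non-HRS shares as independent per-system probabilities is our own simplifying assumption, not the methodology of the UK impact assessment.

    # Rough expected compliance cost per business under the UK's Option 3,
    # using the assumption figures above. Treating the HRS / non-HRS shares
    # as per-system probabilities is a simplifying assumption.
    P_HRS, COST_HRS = 0.081, 36_981        # high-risk systems, Option 3
    P_NON_HRS, COST_NON_HRS = 0.390, 330   # systems interacting with people
    SYSTEMS_PER_BUSINESS = {"small": 2, "medium": 5, "large": 10}

    for size, n_systems in SYSTEMS_PER_BUSINESS.items():
        expected = n_systems * (P_HRS * COST_HRS + P_NON_HRS * COST_NON_HRS)
        print(f"{size}: ~£{expected:,.0f}")
    # -> small: ~£6,248, medium: ~£15,621, large: ~£31,242
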
EU AI Act: Total compliance cost of the five requirements for each AI product

Key finding: Information provision represents the highest cost incurred by firms.

Administrative activity costs per AI product (hourly rate = €32):

  • Training data: €5,180.5
  • Documents and record keeping: €2,231
  • Information provision: €6,800
  • Human oversight: €1,260
  • Robustness and accuracy: €4,750
  • Total: €20,581.5, with a total administrative cost of €10,976.8 and an overall total cost of €29,276.8.

In light of these comparisons, the EU appears to estimate a lower cost of compliance than the UK. Lower costs do not imply a less rigid approach; rather, they reflect an itemised approach to cost estimation and the use of a standard pricing unit, hours. In practice, firms are likely to seek efficiencies by reducing the number of hours required to achieve compliance.

Lessons from the UK Approach for the EU AI Act

The forthcoming EU AI Act is set to place the EU at the global forefront of regulating this emerging technology. Accordingly, models for the governance and mitigation of AI risk from outside the region can still provide insightful lessons for EU decision-makers to learn and issues to account for before the EU AI Act is passed.

This is certainly applicable to Article 9 of the EU AI Act, which requires developers to establish, implement, document, and maintain risk management systems for high-risk AI systems. There are three key ideas for EU decision-makers to consider from the UK Approach.

AI assurance techniques and technical standards

Under Article 17 of the EU AI Act, the quality management system put in place by providers of high-risk AI systems is designed to ensure compliance. To do this, providers of high-risk AI systems must establish techniques, procedures, and systematic actions to be used for development, quality control, and quality assurance. The EU AI Act only briefly covers the concept of assurance, but it could benefit from publishing assurance techniques and technical standards, which play a critical role in enabling the responsible adoption of AI so that potential harms at all levels of society are identified and documented.

To assure AI systems effectively, the UK government is calling for a toolbox of assurance techniques to measure, evaluate, and communicate the trustworthiness of AI systems across the development and deployment life cycle. These techniques include impact assessment, audit, and performance testing along with formal verification methods. To help innovators understand how AI assurance techniques can support wider AI governance, the government launched a ‘Portfolio of AI Assurance techniques’ in Spring 2023. This is an industry collaboration to showcase how these tools are already being applied by businesses to real-world use cases and how they align with the AI regulatory principles.

Similarly, assurance techniques need to be underpinned by available technical standards, which provide a common understanding across assurance providers. Technical standards and assurance techniques will also enable organisations to demonstrate that their systems are in line with the regulatory principles enshrined under the EU AI Act and the UK Approach. Similarities exist in terms of the stage of development.

Specifically, the EU AI Act defines common mandatory requirements applicable to the design and development of certain AI systems before they are placed on the market that will be further operationalised through harmonised technical standards. In equal fashion, the UK government intends to have a leading role in the development of international technical standards, working with industry, international and UK partners. The UK government plans to continue to support the role of technical standards in complementing our approach to AI regulation, including through the UK AI Standards Hub. These technical standards may demonstrate firms’ compliance with the EU AI Act.

A harmonised vocabulary

All relevant parties would benefit from reaching a consensus on the definitions of key terms related to the foundations of AI regulation. While the EU AI Act and the UK Approach are either under development or in the incubation stage, decision-makers for both initiatives should seize the opportunity to develop a shared understanding of core AI ideas, principles, and concepts, and codify these into a harmonised transatlantic vocabulary. As shown below, identification of where both initiatives are in agreement, and where they diverge, has been undertaken:

Shared (both initiatives):

  • Accountability
  • Safety
  • Privacy
  • Transparency
  • Fairness

Divergent – EU AI Act:

  • Data Governance
  • Diversity
  • Environmental and Social Well-Being
  • Human Agency and Oversight
  • Technical Robustness
  • Non-Discrimination

Divergent – UK Approach:

  • Governance
  • Security
  • Robustness
  • Explainability
  • Contestability
  • Redress

How AI & Partners can help

We can help you start assessing your AI systems using recognised metrics ahead of the expected changes brought about by the EU AI Act. Our leading practice is geared towards helping you identify, design, and implement appropriate metrics for your assessments.

Website: https://www.ai-and-partners.com/

EU committees green-light the AI Act (11 May 2023)

The Internal Market Committee and the Civil Liberties Committee of the European Parliament have endorsed new transparency and risk-management rules for artificial intelligence systems known as the AI Act.

This marks a major step in the development of AI regulation in Europe, as these are the first-ever rules for AI. The rules aim to ensure that AI systems are safe, transparent, traceable, and non-discriminatory.

After the vote, co-rapporteur Brando Benifei (S&D, Italy) said:

“We are on the verge of putting in place landmark legislation that must resist the challenge of time. It is crucial to build citizens’ trust in the development of AI, to set the European way for dealing with the extraordinary changes that are already happening, as well as to steer the political debate on AI at the global level.

We are confident our text balances the protection of fundamental rights with the need to provide legal certainty to businesses and stimulate innovation in Europe.”

Co-rapporteur Dragos Tudorache (Renew, Romania) added:

“Given the profound transformative impact AI will have on our societies and economies, the AI Act is very likely the most important piece of legislation in this mandate. It’s the first piece of legislation of this kind worldwide, which means that the EU can lead the way in making AI human-centric, trustworthy, and safe.

We have worked to support AI innovation in Europe and to give start-ups, SMEs and industry space to grow and innovate while protecting fundamental rights, strengthening democratic oversight, and ensuring a mature system of AI governance and enforcement.”

The rules are based on a risk-based approach and they establish obligations for providers and users depending on the level of risk that the AI system can generate. AI systems with an unacceptable level of risk to people’s safety would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities, or are used for social scoring.

MEPs also substantially amended the list of prohibited AI practices to include bans on intrusive and discriminatory uses of AI systems, such as:

  • real-time remote biometric identification systems in publicly accessible spaces;
  • post-remote biometric identification systems (except for law enforcement purposes);
  • biometric categorisation systems using sensitive characteristics;
  • predictive policing systems;
  • emotion recognition systems in law enforcement, border management, workplaces, and educational institutions; and
  • indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases.

MEPs also expanded the classification of high-risk areas to include harm to people’s health, safety, fundamental rights, or the environment. They also added AI systems that influence voters in political campaigns and recommender systems used by social media platforms to the high-risk list.

To boost AI innovation, MEPs added exemptions to these rules for research activities and AI components provided under open-source licenses. The new law also promotes regulatory sandboxes – or controlled environments established by public authorities – to test AI before its deployment.

MEPs want to boost citizens’ right to file complaints about AI systems and receive explanations of decisions based on high-risk AI systems that significantly impact their rights. MEPs also reformed the role of the EU AI Office, which would be tasked with monitoring how the AI rulebook is implemented.

Tim Wright, Tech and AI Regulatory Partner at London-based law firm Fladgate, commented:

“US-based AI developers will likely steal a march on their European competitors given the news that the EU parliamentary committees have green-lit its groundbreaking AI Act, where AI systems will need to be categorised according to their potential for harm from the outset. 

The US tech approach is typically to experiment first and, once market and product fit is established, to retrofit to other markets and their regulatory framework. This approach fosters innovation whereas EU-based AI developers will need to take note of the new rules and develop systems and processes which may take the edge off their ability to innovate.

The UK is adopting a similar approach to the US, although the proximity of the EU market means that UK-based developers are more likely to fall into step with the EU ruleset from the outset. However, the potential to experiment in a safe space – a regulatory sandbox – may prove very attractive.”

Before negotiations with the Council on the final form of the law can begin, this draft negotiating mandate needs to be endorsed by the whole Parliament, with the vote expected during the 12-15 June session.

(Photo by Denis Sebastian Tamas on Unsplash)

Related: UK details ‘pro-innovation’ approach to AI regulation

AI think tank calls GPT-4 a risk to public safety (31 March 2023)

An AI think tank has filed a complaint with the FTC in a bid to stop OpenAI from further commercial deployments of GPT-4.

The Center for Artificial Intelligence and Digital Policy (CAIDP) claims OpenAI has violated section five of the FTC Act, accusing the company of deceptive and unfair practices.

Marc Rotenberg, Founder and President of the CAIDP, said:

“The FTC has a clear responsibility to investigate and prohibit unfair and deceptive trade practices. We believe that the FTC should look closely at OpenAI and GPT-4.

We are specifically asking the FTC to determine whether the company has complied with the guidance the federal agency has issued.”

The CAIDP claims that OpenAI’s GPT-4 is “biased, deceptive, and a risk to privacy and public safety”.

The think tank cited contents in the GPT-4 System Card that describe the model’s potential to reinforce biases and worldviews, including harmful stereotypes and demeaning associations for certain marginalised groups.

In the aforementioned System Card, OpenAI acknowledges that it “found that the model has the potential to reinforce and reproduce specific biases and worldviews, including harmful stereotypical and demeaning associations for certain marginalized groups.”

Furthermore, the document states: “AI systems will have even greater potential to reinforce entire ideologies, worldviews, truths and untruths, and to cement them or lock them in, foreclosing future contestation, reflection, and improvement.”

Other harmful outcomes that OpenAI says GPT-4 could lead to include:

  1. Advice or encouragement for self-harm behaviours
  2. Graphic material such as erotic or violent content
  3. Harassing, demeaning, and hateful content
  4. Content useful for planning attacks or violence
  5. Instructions for finding illegal content

The CAIDP claims that OpenAI released GPT-4 to the public without an independent assessment of its risks.

Last week, the FTC told American companies advertising AI products:

“Merely warning your customers about misuse or telling them to make disclosures is hardly sufficient to deter bad actors.

Your deterrence measures should be durable, built-in features and not bug corrections or optional features that third parties can undermine via modification or removal.”

With its filing, the CAIDP calls on the FTC to investigate the products of OpenAI and other operators of powerful AI systems, prevent further commercial releases of GPT-4, and ensure the establishment of necessary guardrails to protect consumers, businesses, and the commercial marketplace.

Merve Hickok, Chair and Research Director of the CAIDP, commented:

“We are at a critical moment in the evolution of AI products.

We recognise the opportunities and we support research. But without the necessary safeguards established to limit bias and deception, there is a serious risk to businesses, consumers, and public safety.

The FTC is uniquely positioned to address this challenge.”

The complaint was filed as Elon Musk, Steve Wozniak, and other AI experts signed a petition to “pause” development on AI systems more powerful than GPT-4.

However, other high-profile figures believe progress shouldn’t be slowed or halted.

Musk was a co-founder of OpenAI, which was originally created as a nonprofit with the mission of ensuring that AI benefits humanity. Musk resigned from OpenAI’s board in 2018 and has publicly questioned the company’s transformation.

Global approaches to AI regulation

As AI systems become more advanced and powerful, concerns over their potential risks and biases have grown. Organisations such as CAIDP, UNESCO, and the Future of Life Institute are pushing for ethical guidelines and regulations to be put in place to protect the public and ensure the responsible development of AI technology.

UNESCO (United Nations Educational, Scientific, and Cultural Organization) has called on countries to implement its “Recommendation on the Ethics of AI” framework.

Earlier today, Italy banned ChatGPT. The country’s data protection authority said the system would be investigated, as it lacks a proper legal basis for collecting personal information about the people using it.

The wider EU is establishing a strict regulatory environment for AI, in contrast to the UK’s relatively “light-touch” approach.

Tim Wright, Partner and specialist tech and AI regulation lawyer at law firm Fladgate, commented on the UK’s vision:

“The regulatory principles set out in the whitepaper simply confirm the Government’s preferred approach which they say will encourage innovation in the space without imposing an undue burden on businesses developing and adopting AI while encouraging fair and ethical use and protecting individuals.

Time will tell if this sector-by-sector approach has the desired effect. What it does do is put the UK on a completely different approach from the EU, which is pushing through a detailed rulebook backed up by a new liability regime and overseen by a single super AI regulator.”

As always, it’s a balancing act between regulation and innovation. Not enough regulation puts the public at risk while too much risks driving innovation elsewhere.

(Photo by Ben Sweet on Unsplash)

Related: What will AI regulation look like for businesses?

What will AI regulation look like for businesses? (24 March 2023)

Unlike food, medicine, and cars, we have yet to see clear regulations or laws to guide AI design in the US. Without standard guidelines, companies that design and develop ML models have historically worked off of their own perceptions of right and wrong. 

This is about to change. 

As the EU finalizes its AI Act and generative AI continues to rapidly evolve, we will see the artificial intelligence regulatory landscape shift from general, suggested frameworks to more permanent laws. 

The EU AI Act has spurred significant conversations among business leaders: How can we prepare for stricter AI regulations? Should I proactively design AI that meets this criterion? How soon will it be before similar regulation is passed in the US?

Continue reading to better understand what AI regulation may look like for companies in the near future.  

How the EU AI Act will impact your business 

Like the EU’s General Data Protection Regulation (GDPR) released in 2018, the EU AI Act is expected to become a global standard for AI regulation. Parliament is scheduled to vote on the draft by the end of March 2023, and if this timeline is met, the final AI Act could be adopted by the end of the year. 

It is widely predicted that the effects of the AI Act will be felt beyond the EU’s borders (the so-called Brussels effect), even though it is European regulation. Organizations operating on an international scale will be required to conform directly to the legislation. Meanwhile, US and other independently led companies will quickly realize that it is in their best interest to comply with this regulation.

We’re beginning to see this already with other similar legislation like Canada’s Artificial Intelligence & Data Act proposal and New York City’s automated employment regulation.

AI system risk categories

Under the AI Act, organizations’ AI systems will be classified into three risk categories, each with its own set of guidelines and consequences; a toy sketch of the tiering follows the list below.

  • Unacceptable risk. AI systems that meet this level will be banned. This includes manipulative systems that cause harm, real-time biometric identification systems used in public spaces for law enforcement, and all forms of social scoring. 
  • High risk. These AI systems include tools like job applicant scanning models and will be subject to specific legal requirements. 
  • Limited and minimal risk. This category encompasses many of the AI applications businesses use today, including chatbots and AI-powered inventory management tools, and will largely be left unregulated. Customer-facing limited-risk applications, however, will require disclosure that AI is being used. 
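
To make the tiering concrete, here is a toy lookup (Python); the use-case labels and their tier assignments are illustrative assumptions drawn from the descriptions above, not definitions from the Act.

    # Toy illustration of the AI Act's three risk tiers. The use-case
    # labels and tier assignments are illustrative, not from the Act.
    UNACCEPTABLE = {"social_scoring", "manipulative_harm",
                    "realtime_public_biometric_id"}
    HIGH_RISK = {"job_applicant_scanning"}

    def risk_tier(use_case: str) -> str:
        if use_case in UNACCEPTABLE:
            return "unacceptable risk: banned"
        if use_case in HIGH_RISK:
            return "high risk: specific legal requirements"
        return "limited/minimal risk: disclosure duties at most"

    print(risk_tier("job_applicant_scanning"))  # high risk: specific legal requirements
    print(risk_tier("chatbot"))                 # limited/minimal risk: disclosure duties at most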

What will AI regulation look like? 

Because the AI Act is still under draft, and its global effects are to be determined, we can’t say with certainty what regulation will look like for organizations. However, we do know that it will vary based on industry, the type of model you’re designing, and the risk category in which it falls. 

Regulation will likely include scrutiny by a third party, where your model is stress tested against the population you’re attempting to serve. These tests will evaluate questions including ‘Is the model performing within acceptable margins of error?’ and ‘Are you disclosing the nature and use of your model?’
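
As a sketch of what such a stress test might check, consider the snippet below (Python); the 5% error margin and the exact-match metric are placeholder assumptions, since the real criteria would be set by the regulator or auditor.

    # Sketch of a recurring reliability check: score a model on a held-out
    # sample of the population it serves and flag it when the error rate
    # exceeds an agreed margin. The 5% margin is a placeholder assumption.
    from typing import Callable, Sequence

    MAX_ERROR_RATE = 0.05  # assumed acceptable margin of error

    def reliability_check(predict: Callable, inputs: Sequence, labels: Sequence) -> bool:
        """Return True when the model stays within the agreed error margin."""
        errors = sum(predict(x) != y for x, y in zip(inputs, labels))
        error_rate = errors / len(labels)
        print(f"error rate: {error_rate:.1%} (limit {MAX_ERROR_RATE:.0%})")
        return error_rate <= MAX_ERROR_RATE

    # Example with a trivial stand-in model:
    assert reliability_check(lambda x: x >= 0, [-2, -1, 1, 2],
                             [False, False, True, True])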

For organizations with high-risk AI systems, the AI Act has already outlined several requirements: 

  • Implementation of a risk-management system. 
  • Data governance and management. 
  • Technical documentation.
  • Record keeping and logging. 
  • Transparency and provision of information to users.
  • Human oversight. 
  • Accuracy, robustness, and cybersecurity.
  • Conformity assessment. 
  • Registration with the EU-member-state government.
  • Post-market monitoring system. 

We can also expect regular reliability testing for models (similar to e-checks for cars) to become a more widespread service in the AI industry. 

How to prepare for AI regulations 

Many AI leaders have already been prioritizing trust and risk mitigation when designing and developing ML models. The sooner you accept AI regulation as our new reality, the more successful you will be in the future. 

Here are just a few steps organizations can take to prepare for stricter AI regulation: 

  • Research and educate your teams on the types of regulation that will exist, and how it impacts your company today and in the future.  
  • Audit your existing and planned models. Which risk category do they align with and which associated regulations will impact you most?
  • Develop and adopt a framework for designing responsible AI solutions.
  • Think through your AI risk mitigation strategy. How does it apply to existing models and ones designed in the future? What unexpected actions should you account for?  
  • Establish an AI governance and reporting strategy that ensures multiple checks before a model goes live. 

In light of the AI Act and inevitable future regulation, ethical and fair AI design is no longer a “nice to have”, but a “must have”. How can your organization prepare for success?

(Photo by ALEXANDRE LALLEMAND on Unsplash)

US and EU agree to collaborate on improving lives with AI (31 January 2023)

The US and EU have signed a landmark agreement to explore how AI can be used to improve lives.

The US Department of State and EU Commission’s Directorate-General for Communications Networks, Content and Technology (DG CONNECT) simultaneously held a virtual signing ceremony of the agreement in Washington and Brussels.

Roberto Viola, Director General of DG CONNECT, signed the ‘Administrative Arrangement on Artificial Intelligence for the Public Good’ on behalf of the EU.

“Today, we are strengthening our cooperation with the US on artificial intelligence and computing to address global challenges, from climate change to natural disasters,” commented Thierry Breton, EU Commissioner for the Internal Market.

“Based on common values and interests, EU and US researchers will join forces to develop societal applications of AI and will work with other international partners for a truly global impact.”

Jose W. Fernandez, Under Secretary of State for Economic Growth, Energy, and the Environment, signed the agreement on behalf of the US.

The arrangement will deepen transatlantic scientific and technological research through what many believe to be the fourth industrial revolution.

With rapid advances in AI, the IoT, distributed ledgers, autonomous vehicles, and more, it’s vital that fundamental principles are upheld.

In a statement, Fernandez’s office wrote:

“This arrangement presents an opportunity for joint scientific and technological research with our Transatlantic partners, for the benefit of the global scientific community. 

Furthermore, it offers a compelling vision for how to use AI in a way that serves our peoples and upholds our democratic values such as transparency, fairness, and privacy.”

Some of the specific research areas will include extreme weather and climate forecasting, emergency response management, health and medicine improvements, electric grid optimisation, and agriculture optimisation.

The latest agreement between the US and EU builds upon the Declaration for the Future of the Internet.

(Image Credit: European Commission)

Italy’s facial recognition ban exempts law enforcement (15 November 2022)

Italy has banned the use of facial recognition, except for law enforcement purposes.

On Monday, the country’s Data Protection Authority (Garante per la protezione dei dati personali) issued official stays to two municipalities – the southern Italian city of Lecce and the Tuscan city of Arezzo – over their experiments with biometrics technologies.

The agency banned facial recognition systems using biometric data until a specific law governing its use is adopted.

“The moratorium arises from the need to regulate eligibility requirements, conditions and guarantees relating to facial recognition, in compliance with the principle of proportionality,” the agency said in a statement.

However, an exception was added for biometric data technology that is being used “to fight crime” or in a judicial investigation.

In Lecce, the municipality’s authorities said they would begin using facial recognition technologies. Italy’s Data Protection Agency ordered Lecce’s authorities to explain what systems will be used, their purpose, and the legal basis.

As for the Arezzo case, the city’s police were to be equipped with infrared smart glasses that could recognise car license plates.

Facial recognition technology is a central concern in the EU’s proposed AI regulation. The proposal has been released but will need to pass consultations within the EU before it’s adopted into law.

(Photo by Mikita Yo on Unsplash)

MEPs back AI mass surveillance ban for the EU (7 October 2021)

MEPs from the European Parliament have adopted a resolution in favour of banning AI-powered mass surveillance and facial recognition in public spaces.

With a 71 vote majority, MEPs sided with Petar Vitanov’s report that argued AI must not be allowed to encroach on fundamental rights.

An S&D party member, Vitanov pointed out that AI has not yet proven to be a wholly reliable tool on its own.

He cited examples of individuals being denied social benefits because of faulty AI tools, or people being arrested due to inaccurate facial recognition, adding that “the victims are always the poor, immigrants, people of colour or Eastern Europeans. I always thought that only happens in the movies”.

Despite the report’s overall majority backing, all but seven members of the European People’s Party – the largest group in the European Parliament – voted against it.

Behind this dispute is a fundamental disagreement over what exactly constitutes encroaching on civil liberties when using AI surveillance tools.

On the left are politicians like Renew Europe MEP Karen Melchior, who believes that “predictive profiling, AI risk assessment, and automated decision making systems are weapons of ‘math destruction’… as dangerous to our democracy as nuclear bombs are for living creatures and life”.

“They will destroy the fundamental rights of each citizen to be equal before the law and in the eye of our authorities,” she said.

Meanwhile, centrist and conservative-leaning MEPs tend to have a more cautious approach to banning AI technologies outright.

Pointing to the July capture of Dutch journalist Peter R. de Vries’ suspected killers thanks to AI, home affairs commissioner Ylva Johansson described the case as an example of “smart digital technology used in defence of citizens and our fundamental rights”.

“Don’t put protection of fundamental rights in contradiction to the protection of human lives and of societies. It’s simply not true that we have to choose. We are capable of doing both,” she added.

The Commission published its proposal for a European Artificial Intelligence Act in April.

Global human rights charity, Fair Trials, welcomed the vote — calling it a “landmark result for fundamental rights and non-discrimination in the technological age”.
