gdpr Archives - AI News

OpenAI faces complaint over fictional outputs (29 April 2024)

European data protection advocacy group noyb has filed a complaint against OpenAI over the company’s inability to correct inaccurate information generated by ChatGPT. The group alleges that OpenAI’s failure to ensure the accuracy of personal data processed by the service violates the General Data Protection Regulation (GDPR) in the European Union.

“Making up false information is quite problematic in itself. But when it comes to false information about individuals, there can be serious consequences,” said Maartje de Graaf, Data Protection Lawyer at noyb. 

“It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.”

The GDPR requires that personal data be accurate, and individuals have the right to rectification if data is inaccurate, as well as the right to access information about the data processed and its sources. However, OpenAI has openly admitted that it cannot correct incorrect information generated by ChatGPT or disclose the sources of the data used to train the model.

“Factual accuracy in large language models remains an area of active research,” OpenAI has argued.

The advocacy group highlights a New York Times report that found chatbots like ChatGPT “invent information at least 3 percent of the time – and as high as 27 percent.” In the complaint against OpenAI, noyb cites an example where ChatGPT repeatedly provided an incorrect date of birth for the complainant, a public figure, despite requests for rectification.

“Despite the fact that the complainant’s date of birth provided by ChatGPT is incorrect, OpenAI refused his request to rectify or erase the data, arguing that it wasn’t possible to correct data,” noyb stated.

OpenAI claimed it could filter or block data for certain prompts, such as the complainant’s name, but only by blocking all information about that individual. The company also failed to adequately respond to the complainant’s access request, which the GDPR requires companies to fulfil.
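
To illustrate why such prompt-level blocking tends to be all-or-nothing, here is a minimal sketch of a name-based output filter. It is purely hypothetical (the blocked name, function, and wording are invented for illustration) and is not OpenAI’s actual moderation logic:

```python
# Hypothetical sketch: a naive name-based output filter. This is not OpenAI's
# implementation; it only illustrates why blocking on a person's name suppresses
# every generated statement about them, accurate or not.

BLOCKED_NAMES = {"Jane Example"}  # placeholder, not the real complainant's name

def filter_output(generated_text: str) -> str:
    """Return the model output unless it mentions a blocked individual."""
    for name in BLOCKED_NAMES:
        if name.lower() in generated_text.lower():
            # The filter cannot tell a wrong date of birth apart from a
            # correct biography, so every mention is withheld.
            return "I can't share information about this person."
    return generated_text

if __name__ == "__main__":
    print(filter_output("Jane Example was born on 1 January 1970."))  # suppressed
    print(filter_output("GDPR grants a right to rectification."))     # passes through
```

Because a filter like this keys only on the person’s name, it cannot distinguish an incorrect date of birth from any other statement about them, which is the trade-off OpenAI described.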

“The obligation to comply with access requests applies to all companies. It is clearly possible to keep records of training data that was used to at least have an idea about the sources of information,” said de Graaf. “It seems that with each ‘innovation,’ another group of companies thinks that its products don’t have to comply with the law.”

European privacy watchdogs have already scrutinised ChatGPT’s inaccuracies, with the Italian Data Protection Authority imposing a temporary restriction on OpenAI’s data processing in March 2023 and the European Data Protection Board establishing a task force on ChatGPT.

In its complaint, noyb is asking the Austrian Data Protection Authority to investigate OpenAI’s data processing and measures to ensure the accuracy of personal data processed by its large language models. The advocacy group also requests that the authority order OpenAI to comply with the complainant’s access request, bring its processing in line with the GDPR, and impose a fine to ensure future compliance.

You can read the full complaint here (PDF).

(Photo by Eleonora Francesca Grotto)

See also: Igor Jablokov, Pryon: Building a responsible AI future

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Italy will lift ChatGPT ban if OpenAI fixes privacy issues (13 April 2023)

Italy’s data protection authority has said that it’s willing to lift its ChatGPT ban if OpenAI meets specific conditions.

The Guarantor for the Protection of Personal Data (GPDP) announced last month that it was blocking access to OpenAI’s ChatGPT. The move was part of an ongoing investigation into whether the chatbot violated Italy’s data privacy laws and the EU’s infamous General Data Protection Regulation (GDPR).

The GPDP was concerned that ChatGPT could recall and emit personal information, such as phone numbers and addresses, from input queries. Additionally, officials were worried that the chatbot could expose minors to inappropriate and potentially harmful answers.

The GPDP says it will lift the ban on ChatGPT if its creator, OpenAI, enforces rules protecting minors and users’ personal data by 30th April 2023.

OpenAI has been asked to notify people on its website how ChatGPT stores and processes their data and require users to confirm that they are 18 and older before using the software.

An age verification process will be required when registering new users and children below the age of 13 must be prevented from accessing the software. People aged 13-18 must obtain consent from their parents to use ChatGPT.

The company must also ask for explicit consent to use people’s data to train its AI models and allow anyone – whether they’re a user or not – to request any false personal information generated by ChatGPT to be corrected or deleted altogether.

The full age verification system must be in place by 30th September or the ban will be reinstated.
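
As a rough illustration of the age-gating the GPDP is demanding, the sketch below encodes the three bands described above: under-13s blocked, 13-to-17-year-olds admitted only with parental consent, and adults required to confirm they are 18 or over. It is an assumption about what such a check might look like, not OpenAI’s actual registration code:

```python
# Illustrative age-gate matching the GPDP's requirements as described above.
# Hypothetical logic only; OpenAI's real registration flow is not public.

def may_register(age: int, has_parental_consent: bool = False,
                 confirmed_adult: bool = False) -> bool:
    """Decide whether a new user may access the service."""
    if age < 13:
        return False                 # under-13s must be blocked outright
    if age < 18:
        return has_parental_consent  # 13-17-year-olds need parental consent
    return confirmed_adult           # adults must confirm they are 18 or over

if __name__ == "__main__":
    print(may_register(12))                             # False
    print(may_register(15, has_parental_consent=True))  # True
    print(may_register(30, confirmed_adult=True))       # True
```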

This move is part of a larger trend of increased scrutiny of AI technologies by regulators around the world. ChatGPT is not the only AI system that has faced regulatory challenges.

Regulators in Canada and France have also launched investigations into whether ChatGPT violates data privacy laws after receiving official complaints. Meanwhile, Spain has urged the EU’s privacy watchdog to launch a deeper investigation into ChatGPT.

The international scrutiny of ChatGPT and similar AI systems highlights the need for developers to be proactive in addressing privacy concerns and implementing safeguards to protect users’ personal data.

(Photo by Levart_Photographer on Unsplash)

Related: AI think tank calls GPT-4 a risk to public safety

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

BCS, Chartered Institute for IT: Human reviews of AI decisions require legal protection (13 October 2021)

A leading IT industry body has warned that human reviews of AI decisions are in need of legal protection.

BCS, The Chartered Institute for IT, made the warning amid the launch of the ‘Data: A New Direction’ consultation launched by the Department for Digital, Culture, Media and Sport (DCMS).

The consultation aims to re-examine the UK’s data regulations post-Brexit. EU laws that were previously mandatory while the UK was part of the bloc – such as the much-criticised GDPR – will be looked at to determine whether a better balance can be struck between data privacy and ensuring that innovation is not stifled.

“There’s an opportunity for us to set world-leading, gold standard data regulation which protects privacy, but does so in as light-touch a way as possible,” said then-UK Culture Secretary Oliver Dowden earlier this year.

DCMS is considering the removal of Article 22 of GDPR. Article 22 focuses specifically on the right to review fully automated decisions.

Dr Sam De Silva, Chair of BCS’ Law Specialist Group and a partner at law firm CMS, explained: 

“Article 22 is not an easy provision to interpret and there is danger in interpreting it in isolation like many have done.

“We still do need clarity on the rights someone has in the scenario where there is fully automated decision making which could have a significant impact on that individual.”

AIs are being used for increasingly critical decisions, including whether to offer loans or grant insurance claims. Given the unsolved issues with bias, there’s a chance that discrimination could end up becoming automated.

One school of thought is that humans should always make final decisions, especially ones that impact people’s lives. BCS believes that human reviews of AI decisions should at least have legal protection.

“Protection of human review of fully automated decisions is currently in a piece of legislation dealing with personal data. If no personal data is involved the protection does not apply, but the decision could still have a life-changing impact on us,” added De Silva.

“For example, say an algorithm is created deciding whether you should get a vaccine. The data you need to enter into the system is likely to be DOB, ethnicity, and other things, but not name or anything which could identify you as the person.

“Based on the input, the decision could be that you’re not eligible for a vaccine. But any protections in the GDPR would not apply as there is no personal data.”
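
De Silva’s scenario can be made concrete with a short sketch: a fully automated eligibility decision that consumes only non-identifying attributes, yet still has a significant effect on the individual. The rule, field names, and thresholds below are invented purely for illustration; the point is that no name or other identifier ever enters the system, so protections tied to personal data would arguably not apply.

```python
from datetime import date

# Invented eligibility rule illustrating De Silva's scenario: the inputs do not
# identify the applicant, yet the automated decision could significantly affect them.

def vaccine_eligible(date_of_birth: date, has_priority_condition: bool) -> bool:
    """Fully automated decision based only on non-identifying attributes."""
    age_years = (date.today() - date_of_birth).days // 365
    return age_years >= 50 or has_priority_condition

if __name__ == "__main__":
    # No name, address, or other identifier is ever supplied.
    print(vaccine_eligible(date(1990, 6, 1), has_priority_condition=False))  # False for someone under 50
    print(vaccine_eligible(date(1950, 6, 1), has_priority_condition=False))  # True
```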

BCS welcomes that the government is consulting carefully prior to making any decision. The body says that it supports the consultation and will be gathering views from across its membership.

Related: UK sets out its 10-year plan to remain a global AI superpower

(Photo by Sergey Zolkin on Unsplash)

Find out more about Digital Transformation Week North America, taking place on 9-10 November 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

The UK is changing its data laws to boost its digital economy (26 August 2021)

Britain will diverge from EU data laws that have been criticised as being overly strict and driving investment and innovation out of Europe.

Culture Secretary Oliver Dowden has confirmed the UK Government’s intention to diverge from key parts of the infamous General Data Protection Regulation (GDPR). Estimates suggest that as much as £11 billion worth of trade goes unrealised around the world due to barriers associated with data transfers.

“Now that we have left the EU, I’m determined to seize the opportunity by developing a world-leading data policy that will deliver a Brexit dividend for individuals and businesses across the UK,” said Dowden.

When GDPR came into effect, it received its fair share of both praise and criticism. On the one hand, GDPR admirably sought to protect the data of consumers. On the other, “pointless” cookie popups, extra paperwork, and concerns about hefty fines have caused frustration and led many businesses to pack their bags and take their jobs, innovation, and services to less strict regimes.

GDPR is just one example. Another would be Articles 11 and 13 of the EU Copyright Directive, which some – including the inventor of the World Wide Web, Sir Tim Berners-Lee, and Wikipedia founder Jimmy Wales – have opposed as amounting to an “upload filter”, “link tax”, and “meme killer”. This blog post from YouTube explained why creators should care about Europe’s increasingly strict laws.

Mr Dowden said the new reforms would be “based on common sense, not box-ticking” but uphold the necessary safeguards to protect people’s privacy.

What will the impact be on the UK’s AI industry?

AI is, of course, powered by data—masses of it. The idea of mass data collection terrifies many people but is harmless so long as it’s truly anonymised. Arguably, it’s a lack of data that should be more concerning as biases in many algorithms today are largely due to limited datasets that don’t represent the full diversity of our societies.

Western facial recognition algorithms, for example, produce far more false positives for minorities than they do for white men—leading to automated racial profiling. A 2010 study (PDF) by researchers at NIST and the University of Texas found that algorithms designed and tested in East Asia are better at recognising East Asians.
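
One way to surface this kind of disparity is simply to break evaluation results down by demographic group and compare false positive rates. The sketch below uses invented numbers, not figures from the NIST study, to show how the metric is computed:

```python
from collections import defaultdict

# Toy face-matching evaluation records: (group, predicted_match, actually_same_person).
# The values are invented to illustrate the metric, not taken from any real benchmark.
records = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]

def false_positive_rates(rows):
    """False positive rate per group: wrongly declared matches / true non-matches."""
    fp, negatives = defaultdict(int), defaultdict(int)
    for group, predicted_match, same_person in rows:
        if not same_person:
            negatives[group] += 1
            if predicted_match:
                fp[group] += 1
    return {group: fp[group] / negatives[group] for group in negatives}

print(false_positive_rates(records))  # {'group_a': 0.33..., 'group_b': 0.66...}
```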

However, the data must be collected responsibly and checked as thoroughly as possible. Last year, MIT was forced to take offline its popular 80 Million Tiny Images dataset, created in 2008 to train AIs to detect objects, after discovering that images were labelled with misogynistic and racist terms.

While the UK is a European leader in AI, few people are under any illusion that it could become a world leader in pure innovation and deployment, because it is simply unable to match the funding and resources available to powers like the US and China. Instead, experts believe the UK should build on its academic and diplomatic strengths to set the “gold standard” in ethical artificial intelligence.

“There’s an opportunity for us to set world-leading, gold standard data regulation which protects privacy, but does so in as light touch a way as possible,” Mr Dowden said.

As it diverges from the EU’s laws in the first major regulatory shakeup since Brexit, the UK needs to show it can strike a fair balance between the EU’s strict regime and the arguably too lax protections in many other countries.

The UK also needs to promote and support innovation while avoiding the “Singapore-on-Thames”-style model of a race to the bottom in standards, rights, and taxes that many Remain campaigners feared would happen if the country left the EU. Similarly, it needs to prove that “Global Britain” is more than just a soundbite.

To that end, Britain’s data watchdog is getting a shakeup and John Edwards, New Zealand’s current privacy commissioner, will head up the regulator.

“It is a great honour and responsibility to be considered for appointment to this key role as a watchdog for the information rights of the people of the United Kingdom,” said Edwards.

“There is a great opportunity to build on the wonderful work already done and I look forward to the challenge of steering the organisation and the British economy into a position of international leadership in the safe and trusted use of data for the benefit of all.”

The UK is also seeking global data partnerships with six international partners: the United States, Australia, the Republic of Korea, Singapore, the Dubai International Finance Centre, and Colombia. Over the long term, it is hoped that agreements with fast-growing markets like India and Brazil will be struck to facilitate data flows in scientific research, law enforcement, and more.

Commenting on the UK’s global data plans, Andrew Dyson, Global Co-Chair of DLA Piper’s Data Protection, Privacy and Security Group, said:

“The announcements are the first evidence of the UK’s vision to establish a bold new regulatory landscape for digital Britain post-Brexit. Earlier in the year, the UK and EU formally recognised each other’s data protection regimes—that allowed data to continue to flow freely after Brexit.

“This announcement shows how the UK will start defining its own future regulatory pathways from here, with an expansion of digital trade a clear driver if you look at the willingness to consider potential recognition of data transfers to Australia, Singapore, India and the USA.

“It will be interesting to see the further announcements that are sure to follow on reforms to the wider policy landscape that are just hinted at here, and of course the changes in oversight we can expect from a new Information Commissioner.”

An increasingly punitive EU is unlikely to react kindly to the news; the bloc added clauses to its recent deal with the UK to prevent the country from diverging too far from its own standards.

Mr Dowden, however, said there was “no reason” the EU should react with too much animosity as the bloc has reached data agreements with many countries outside of its regulatory orbit and the UK must be free to “set our own path”.

(Photo by Massimiliano Morosinotto on Unsplash)

Find out more about Digital Transformation Week North America, taking place on 9-10 November 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

Macron wants Europeans to relax about data or be left behind in AI (21 March 2018)

Emmanuel Macron, President of France, is calling for Europeans to relax about the use of their data by AI companies to prevent those operating in the region falling behind their international counterparts.

Citizens are increasingly concerned about how their data is used, especially following the ongoing investigations into Facebook and Cambridge Analytica. AI companies, however, rely on the bulk collection of data for training machine learning models.

Macron wants to ensure France is a leader in AI but says his efforts are being held back by European attitudes to privacy.

The president’s comments contrast with the EU’s current and upcoming policies, which companies fear will put them at a disadvantage against competitors in less restrictive countries.

GDPR (General Data Protection Regulation) is the most frequently cited example, and is of particular concern to companies and researchers that rely on the bulk collection of data for their work.

In a piece for our sister publication IoT News, I spoke to Peter Wright, Solicitor and Managing Director of Digital Law UK, who highlighted these very concerns.

“It’s a particular problem when you’re looking at the US, where in places like California they are not under these same pressures,” said Wright. “You’ve got your Silicon Valley startup that can access large amounts of money from investors, access specialist knowledge in the field, and will not be fighting with one arm tied behind its back like a competitor in Europe.”

“Very often we hear ‘Where are the British and European Googles and Facebooks?’ Well, it’s because of barriers like this which stop organisations like that being possible to grow and develop.”

Speaking in Beijing, Macron said the European Union needs to ‘move fast’ to create ‘a single market that our big data actors can access.’ He said the EU must decide which model it wants to use to exploit data. His comments were made after witnessing the depth and scope of Chinese data collection.

This may be a hard sell to Europeans. In France, 70 percent of people are concerned about personal data collected when they use Internet search engines, according to a December survey.

Do you think Europeans should relax about their data privacy? Let us know in the comments.

Interested in hearing industry leaders discuss subjects like this and sharing their use-cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo and Cyber Security & Cloud Expo so you can explore the future of enterprise technology in one place.

Editorial: Facebook excluding the EU from AI advancement heralds a trend (28 November 2017)

EU regulations have forced Facebook to exclude citizens of member states from its new AI-powered suicide prevention tool, and it heralds a worrying trend.

Facebook’s new suicide prevention tool aims to use pattern recognition to detect posts or live videos where someone might be expressing thoughts of suicide, and to help respond to reports faster.

In a post announcing the feature, Facebook wrote: “We are starting to roll out artificial intelligence outside the US to help identify when someone might be expressing thoughts of suicide, including on Facebook Live. This will eventually be available worldwide, except the EU.”

EU data protection regulations

Facebook’s notable lack of support for its latest AI advancement in EU member countries is likely due to strict data protection regulations.

In recent months, I’ve spoken to lawyers, executives from leading companies, and even concerned members of the European Parliament itself about the EU’s stringent regulations stifling innovation across member states.

Julia Reda, an MEP, says: “When we’re trying to regulate the likes of Google, how do we ensure that we’re not also setting in stone that any European competitor that might be growing at the moment would never emerge?”

My discussions highlighted the fear that European businesses will struggle without the data their international counterparts have access to, and startups may look to non-EU countries to set up their operations.

However, the situation has taken a more serious turn with the potential for loss of life. Beyond the inability to launch potentially life-saving features like Facebook’s suicide prevention, the regulations will slow innovation in fields benefiting from AI such as healthcare.

We often cover medical developments on AI News, and most of these advancements rely on data collection to improve machine learning models. GDPR puts significant restrictions on how, when, and why firms can collect and use this data — which simply do not exist to such an extent anywhere else in the world.
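
In practical terms, that usually means a training pipeline has to check consent and purpose before a record is ever used. The sketch below is a simplification rather than a description of any particular firm’s process (real compliance also involves lawful basis, retention, and audit trails), but it shows the basic filtering step:

```python
from dataclasses import dataclass, field

# Simplified illustration of consent-gated selection of training data.
# The record structure and purpose labels are invented for this example.

@dataclass
class Record:
    features: dict
    consented_purposes: set = field(default_factory=set)

def training_subset(records, purpose: str):
    """Keep only records whose data subject consented to this purpose."""
    return [r.features for r in records if purpose in r.consented_purposes]

data = [
    Record({"age": 54, "diagnosis": "A"}, {"research", "model_training"}),
    Record({"age": 31, "diagnosis": "B"}, {"care_delivery"}),
]
print(training_subset(data, "model_training"))  # only the first record is used
```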

“You’ve got your Silicon Valley startup that can access large amounts of money from investors, access specialist knowledge in the field, and will not be fighting with one arm tied behind its back like a competitor in Europe,” comments Peter Wright, Solicitor and Managing Director of Digital Law UK. “Very often we hear ‘Where are the European Googles and Facebooks?’ Well, it’s because of barriers like this which stop organisations like that being possible to grow and develop.”

The issue stems from austere EU data protection regulations that are unsuitable for today’s world. There is little debate about the need to safeguard data, and to penalise where protection has been insufficient, but even companies with a history of protecting their users are concerned about the extent of this legislation.

“We deal with a very large amount of customer data at F-Secure and I don’t go a working day without hearing a GDPR discussion around me,” comments Sean Sullivan, Security Advisor at Finnish cyber security company F-Secure. “It’s a huge effort, and many people are involved within my part of the organisation.”

“And not just at the legal level; we have ‘data people’ working with our product developers on our software architecture. We’ve always been a privacy focused company, but the last year has been a whole new level in my experience.”

The penalties for non-compliance with GDPR are severe and could devastate a company. Startups in particular, especially in areas such as AI, will struggle because they are unable to collect anywhere near as much data as current leaders such as Google already hold. However, that doesn’t mean established companies have it easy.

“Fortunately, we have the people we need. I imagine Facebook is still in the position of needing to find California-based GDPR experts who can work with the local developer teams,” explains Sullivan. “I’m confident it has people in Europe who are working on the high level issues, but I doubt that all of the product teams will be able to find the needed resources to be confident of GDPR compliance.”

“There will be more tech innovations that won’t be rolled out in the EU. Hopefully not for long, but at least for the near future.”

With Facebook’s billions of users, there’s a good chance everyone has friends and family on the platform. I’m certain that if anyone expresses suicidal thoughts there, we’d all want them to receive help as soon as possible.

For many consumers, this situation will be the first to bring awareness to the negative impacts of the EU’s strict data regulations. For businesses, this serves as yet another example.

If you’re having thoughts of suicide or self-harm, please find a list of international helplines here.

What are your thoughts on the EU’s data protection regulations? Let us know in the comments.

Interested in hearing industry leaders discuss subjects like this and sharing their use-cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo and Cyber Security & Cloud Expo so you can explore the future of enterprise technology in one place.
