OpenAI faces complaint over fictional outputs

European data protection advocacy group noyb has filed a complaint against OpenAI over the company’s inability to correct inaccurate information generated by ChatGPT. The group alleges that OpenAI’s failure to ensure the accuracy of personal data processed by the service violates the General Data Protection Regulation (GDPR) in the European Union.

“Making up false information is quite problematic in itself. But when it comes to false information about individuals, there can be serious consequences,” said Maartje de Graaf, Data Protection Lawyer at noyb. 

“It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.”

The GDPR requires that personal data be accurate, and individuals have the right to rectification if data is inaccurate, as well as the right to access information about the data processed and its sources. However, OpenAI has openly admitted that it cannot correct incorrect information generated by ChatGPT or disclose the sources of the data used to train the model.

“Factual accuracy in large language models remains an area of active research,” OpenAI has argued.

The advocacy group highlights a New York Times report that found chatbots like ChatGPT “invent information at least 3 percent of the time – and as high as 27 percent.” In the complaint against OpenAI, noyb cites an example where ChatGPT repeatedly provided an incorrect date of birth for the complainant, a public figure, despite requests for rectification.

“Despite the fact that the complainant’s date of birth provided by ChatGPT is incorrect, OpenAI refused his request to rectify or erase the data, arguing that it wasn’t possible to correct data,” noyb stated.

OpenAI claimed it could filter or block responses to certain prompts, such as those containing the complainant’s name, but only by suppressing all information about the individual rather than correcting the inaccurate date of birth alone. The company also failed to adequately respond to the complainant’s access request, which the GDPR requires companies to fulfil.
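To make the granularity problem concrete: the kind of control OpenAI describes resembles a post-generation blocklist, which can refuse to say anything about a named individual but cannot reach into the model’s weights and correct a single wrong fact. A minimal sketch in Python (the name and the filtering approach are illustrative assumptions, not OpenAI’s actual mechanism):

```python
import re

# Hypothetical blocklist of names whose data must not appear in outputs.
# Illustrative only; this is not OpenAI's actual implementation.
BLOCKED_NAMES = {"Jane Doe"}

def filter_output(generated_text: str) -> str:
    """Suppress the whole response if it mentions a blocked name.

    Note the blunt granularity: the system can refuse to say anything
    about the person, but it cannot locate and fix one inaccurate fact
    (e.g. a wrong date of birth) inside the model itself.
    """
    for name in BLOCKED_NAMES:
        if re.search(re.escape(name), generated_text, re.IGNORECASE):
            return "I can't share information about this person."
    return generated_text

print(filter_output("Jane Doe was born on 1 January 1970."))
# -> "I can't share information about this person."
```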

“The obligation to comply with access requests applies to all companies. It is clearly possible to keep records of training data that was used to at least have an idea about the sources of information,” said de Graaf. “It seems that with each ‘innovation,’ another group of companies thinks that its products don’t have to comply with the law.”

European privacy watchdogs have already scrutinised ChatGPT’s inaccuracies, with the Italian Data Protection Authority imposing a temporary restriction on OpenAI’s data processing in March 2023 and the European Data Protection Board establishing a task force on ChatGPT.

In its complaint, noyb is asking the Austrian Data Protection Authority to investigate OpenAI’s data processing and measures to ensure the accuracy of personal data processed by its large language models. The advocacy group also requests that the authority order OpenAI to comply with the complainant’s access request, bring its processing in line with the GDPR, and impose a fine to ensure future compliance.

You can read the full complaint here (PDF).

(Photo by Eleonora Francesca Grotto)

See also: Igor Jablokov, Pryon: Building a responsible AI future

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Igor Jablokov, Pryon: Building a responsible AI future

As artificial intelligence continues to rapidly advance, ethical concerns around the development and deployment of these world-changing innovations are coming into sharper focus.

In an interview ahead of the AI & Big Data Expo North America, Igor Jablokov, CEO and founder of AI company Pryon, addressed these pressing issues head-on.

Critical ethical challenges in AI

“There’s not one, maybe there’s almost 20 plus of them,” Jablokov stated when asked about the most critical ethical challenges. He outlined a litany of potential pitfalls that must be carefully navigated—from AI hallucinations and the generation of falsehoods, to data privacy violations and intellectual property leaks from training on proprietary information.

Bias and adversarial content seeping into training data is another major worry, according to Jablokov. Security vulnerabilities like embedded agents and prompt injection attacks also rank highly on his list of concerns, as well as the extreme energy consumption and climate impact of large language models.

Pryon’s origins can be traced back to the earliest stirrings of modern AI over two decades ago. Jablokov previously led an advanced AI team at IBM where they designed a primitive version of what would later become Watson. “They didn’t greenlight it. And so, in my frustration, I departed, stood up our last company,” he recounted. That company, also called Pryon at the time, went on to become Amazon’s first AI-related acquisition, birthing what’s now Alexa.

The current incarnation of Pryon has aimed to confront AI’s ethical quandaries through responsible design focused on critical infrastructure and high-stakes use cases. “[We wanted to] create something purposely hardened for more critical infrastructure, essential workers, and more serious pursuits,” Jablokov explained.

A key element is offering enterprises flexibility and control over their data environments. “We give them choices in terms of how they’re consuming their platforms…from multi-tenant public cloud, to private cloud, to on-premises,” Jablokov said. This allows organisations to ring-fence highly sensitive data behind their own firewalls when needed.

Pryon also emphasises explainable AI and verifiable attribution of knowledge sources. “When our platform reveals an answer, you can tap it, and it always goes to the underlying page and highlights exactly where it learned a piece of information from,” Jablokov described. This allows human validation of the knowledge provenance.
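Pryon has not published its implementation, but the behaviour Jablokov describes maps onto a familiar retrieval-augmented pattern in which every answer carries pointers back to the source passages it was drawn from. A minimal sketch under that assumption (all class and function names here are hypothetical, not Pryon’s API):

```python
from dataclasses import dataclass

@dataclass
class SourcePassage:
    doc_id: str   # which document the passage came from
    page: int     # page to jump to when the user taps the answer
    text: str     # the passage itself, for highlighting

@dataclass
class AttributedAnswer:
    answer: str
    sources: list  # provenance a human can verify

def answer_with_attribution(question: str, passages: list) -> AttributedAnswer:
    """Toy retrieval: pick passages sharing words with the question,
    and return them alongside the answer so provenance is inspectable."""
    terms = set(question.lower().split())
    hits = [p for p in passages if terms & set(p.text.lower().split())]
    # A real system would synthesise the answer with a model constrained
    # to the retrieved passages; here we simply echo the top hit.
    answer = hits[0].text if hits else "No supported answer found."
    return AttributedAnswer(answer=answer, sources=hits)

corpus = [SourcePassage("manual-7", 42, "Valve V3 must be closed before maintenance.")]
result = answer_with_attribution("When is valve V3 closed before work?", corpus)
print(result.answer, "->", [(s.doc_id, s.page) for s in result.sources])
```

The design point is that the answer object never travels without its sources, which is what makes the “tap to see the underlying page” behaviour possible.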

In some realms like energy, manufacturing, and healthcare, Pryon has implemented human-in-the-loop oversight before AI-generated guidance goes to frontline workers. Jablokov pointed to one example where “supervisors can double-check the outcomes and essentially give it a badge of approval” before information reaches technicians.
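That workflow can be modelled as a simple review gate: AI-generated guidance sits in a queue until a supervisor signs it off, and nothing unreviewed reaches the frontline. A minimal sketch, again with hypothetical names rather than Pryon’s actual interface:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class Status(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class Guidance:
    text: str
    status: Status = Status.PENDING
    reviewer: Optional[str] = None

@dataclass
class ReviewQueue:
    items: list = field(default_factory=list)

    def submit(self, text: str) -> Guidance:
        g = Guidance(text)
        self.items.append(g)
        return g

    def approve(self, g: Guidance, reviewer: str) -> None:
        # The supervisor's "badge of approval": nothing is released
        # to frontline workers while an item is still PENDING.
        g.status, g.reviewer = Status.APPROVED, reviewer

    def released(self) -> list:
        return [g for g in self.items if g.status is Status.APPROVED]

queue = ReviewQueue()
draft = queue.submit("Vent line A before opening the pump housing.")
assert queue.released() == []  # held back until a human reviews it
queue.approve(draft, reviewer="shift-supervisor")
print([g.text for g in queue.released()])
```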

Ensuring responsible AI development

Jablokov strongly advocates for new regulatory frameworks to ensure responsible AI development and deployment. While welcoming the White House’s recent executive order as a start, he expressed concerns about risks around generative AI like hallucinations, static training data, data leakage vulnerabilities, lack of access controls, copyright issues, and more.  

Pryon has been actively involved in these regulatory discussions. “We’re back-channelling to a mess of government agencies,” Jablokov said. “We’re taking an active hand in terms of contributing our perspectives on the regulatory environment as it rolls out…We’re showing up by expressing some of the risks associated with generative AI usage.”

On the potential for an uncontrolled, existential “AI risk” – as has been warned about by some AI leaders – Jablokov struck a relatively sanguine tone about Pryon’s governed approach: “We’ve always worked towards verifiable attribution…extracting out of enterprises’ own content so that they understand where the solutions are coming from, and then they decide whether they make a decision with it or not.”

The CEO firmly distanced Pryon’s mission from the emerging crop of open-ended conversational AI assistants, some of which have raised controversy around hallucinations and lacking ethical constraints.

“We’re not a clown college. Our stuff is designed to go into some of the more serious environments on planet Earth,” Jablokov stated bluntly. “I think none of you would feel comfortable ending up in an emergency room and having the medical practitioners there typing in queries into a ChatGPT, a Bing, a Bard…”

He emphasised the importance of subject matter expertise and emotional intelligence when it comes to high-stakes, real-world decision-making. “You want somebody that has hopefully many years of experience treating things similar to the ailment that you’re currently undergoing. And guess what? You like the fact that there is an emotional quality that they care about getting you better as well.”

At the upcoming AI & Big Data Expo, Pryon will unveil new enterprise use cases showcasing its platform across industries like energy, semiconductors, pharmaceuticals, and government. Jablokov teased that they will also reveal “different ways to consume the Pryon platform” beyond the end-to-end enterprise offering, including potentially lower-level access for developers.

As AI’s domain rapidly expands from narrow applications to more general capabilities, addressing the ethical risks will become only more critical. Pryon’s sustained focus on governance, verifiable knowledge sources, human oversight, and collaboration with regulators could offer a template for more responsible AI development across industries.

You can watch our full interview with Igor Jablokov below:


Paul O’Sullivan, Salesforce: Transforming work in the GenAI era

In the wake of the generative AI (GenAI) revolution, UK businesses find themselves at a crossroads between unprecedented opportunities and inherent challenges.

Paul O’Sullivan, Senior Vice President of Solution Engineering (UKI) at Salesforce, sheds light on the complexities of this transformative landscape, urging businesses to tread cautiously while embracing the potential of artificial intelligence.

Unprecedented opportunities

Generative AI has stormed the scene with remarkable speed. ChatGPT, for example, amassed 100 million users in a mere two months.

“If you put that into context, it took 10 years to reach 100 million users on Netflix,” says O’Sullivan.

This rapid adoption signals a seismic shift, promising substantial economic growth. O’Sullivan estimates that generative AI has the potential to contribute a staggering £3.5 trillion ($4.4 trillion) to the global economy.

“Again, if you put that into context, that’s about as much tax as the entire US takes in,” adds O’Sullivan.

One of its key advantages lies in driving automation, with the prospect of automating up to 40 percent of the average workday—leading to significant productivity gains for businesses.

The AI trust gap

However, amid the excitement, there looms a significant challenge: the AI trust gap. 

O’Sullivan acknowledges that despite being a top priority for C-suite executives, over half of customers remain sceptical about the safety and security of AI applications.

Addressing this gap will require a multi-faceted approach including grappling with issues related to data quality and ensuring that AI systems are built on reliable, unbiased, and representative datasets. 

“Companies have struggled with data quality and data hygiene. So that’s a key area of focus,” explains O’Sullivan.

Safeguarding data privacy is also paramount, with stringent measures needed to prevent the misuse of sensitive customer information.

“Both customers and businesses are worried about data privacy—we can’t let large language models store and learn from sensitive customer data,” says O’Sullivan. “Over half of customers and their customers don’t believe AI is safe and secure today.”
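A common mitigation for the concern O’Sullivan raises is to redact obvious identifiers before a record ever reaches a third-party model. A minimal sketch using regular expressions (real deployments rely on dedicated PII-detection tooling; the patterns below are illustrative only):

```python
import re

# Illustrative patterns only; production systems use dedicated
# PII-detection services rather than a handful of regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{8,}\d"),
}

def redact(text: str) -> str:
    """Replace recognisable identifiers with placeholders so the
    original values are never sent to, or stored by, the model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Customer jane@example.com (+44 20 7946 0958) reports a billing error."
print(redact(prompt))
# -> "Customer [EMAIL] ([PHONE]) reports a billing error."
```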

Ethical considerations

AI also prompts ethical considerations. Concerns about hallucinations – where AI systems generate inaccurate or misleading information – must be addressed meticulously.

Businesses must confront biases and toxicities embedded in AI algorithms, ensuring fairness and inclusivity. Striking a balance between innovation and ethical responsibility is pivotal to gaining customer trust.

“A trustworthy AI should consistently meet expectations, adhere to commitments, and create a sense of dependability within the organisation,” explains O’Sullivan. “It’s crucial to address the limitations and the potential risks. We’ve got to be open here and lead with integrity.”

As businesses embrace AI, upskilling the workforce will also be imperative.

O’Sullivan advocates for a proactive approach, encouraging employees to master the art of prompt writing. Crafting effective prompts is vital, enabling faster and more accurate interactions with AI systems and enhancing productivity across various tasks.
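What “effective” means in practice is largely structure: stating a role, the task, constraints, and the desired output format rather than firing off a bare question. A hedged illustration of the difference (the template is a widely used pattern, not a Salesforce-specific one):

```python
# A vague prompt leaves the model guessing about audience and format.
vague = "Tell me about our Q3 numbers."

# A structured prompt pins down role, task, constraints, and format,
# which tends to yield faster, more accurate interactions.
structured = """\
You are a financial analyst writing for non-specialist managers.
Task: summarise the Q3 revenue figures pasted below.
Constraints: three bullet points, no jargon, flag any figure that
changed more than 10% versus Q2.
Output format: plain-text bullets.

Q3 data:
{q3_data}
"""

print(structured.format(q3_data="Revenue: £1.2m (Q2: £1.0m) ..."))
```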

Moreover, understanding AI lingo is essential to foster open conversations and enable informed decision-making within organisations.

A collaborative future

Crucially, O’Sullivan emphasises a collaborative future where AI serves as a co-pilot rather than a replacement for human expertise.

“AI, for now, lacks cognitive capability like empathy, reasoning, emotional intelligence, and ethics—and these are absolutely critical business skills that humans need to bring to the table,” says O’Sullivan.

This collaboration fosters a sense of trust, as humans act as a check and balance to ensure the responsible use of AI technology.

By addressing the AI trust gap, upskilling the workforce, and fostering a harmonious collaboration between humans and AI, businesses can harness the full potential of generative AI while building trust and confidence among customers.

You can watch our full interview with Paul O’Sullivan below:

Looking to revamp your intelligent automation strategy? Learn more about the Intelligent Automation Event & Conference to discover the latest insights surrounding unbiased algorithms, future trends, RPA, cognitive automation, and more!
