chatbots Archives - AI News https://www.artificialintelligence-news.com/tag/chatbots/ Artificial Intelligence News Wed, 12 Jun 2024 15:48:24 +0000 en-GB hourly 1 https://www.artificialintelligence-news.com/wp-content/uploads/sites/9/2020/09/ai-icon-60x60.png chatbots Archives - AI News https://www.artificialintelligence-news.com/tag/chatbots/ 32 32 Musk ends OpenAI lawsuit while slamming Apple’s ChatGPT plans https://www.artificialintelligence-news.com/2024/06/12/musk-ends-openai-lawsuit-slamming-apple-chatgpt-plans/ https://www.artificialintelligence-news.com/2024/06/12/musk-ends-openai-lawsuit-slamming-apple-chatgpt-plans/#respond Wed, 12 Jun 2024 15:45:08 +0000 https://www.artificialintelligence-news.com/?p=14988 Elon Musk has dropped his lawsuit against OpenAI, the company he co-founded in 2015. Court filings from the Superior Court of California reveal that Musk called off the legal action on June 11th, just a day before an informal conference was scheduled to discuss the discovery process. Musk had initially sued OpenAI in March 2024,... Read more »

The post Musk ends OpenAI lawsuit while slamming Apple’s ChatGPT plans appeared first on AI News.

]]>
Elon Musk has dropped his lawsuit against OpenAI, the company he co-founded in 2015. Court filings from the Superior Court of California reveal that Musk called off the legal action on June 11th, just a day before an informal conference was scheduled to discuss the discovery process.

Musk had initially sued OpenAI in March 2024, alleging breach of contracts, unfair business practices, and failure in fiduciary duty. He claimed that his contributions to the company were made “in exchange for and in reliance on promises that those assets were irrevocably dedicated to building AI for public benefit, with only safety as a countervailing concern.”

The lawsuit sought remedies for “breach of contract, promissory estoppel, breach of fiduciary duty, unfair business practices, and accounting,” as well as specific performance, restitution, and damages.

However, Musk’s filings to withdraw the case provided no explanation for abandoning the lawsuit. OpenAI had previously called Musk’s claims “incoherent” and that his inability to produce a contract made his breach claims difficult to prove, stating that documents provided by Musk “contradict his allegations as to the alleged terms of the agreement.”

The withdrawal of the lawsuit comes at a time when Musk is strongly opposing Apple’s plans to integrate ChatGPT into its operating systems.

During Apple’s keynote event announcing Apple Intelligence for iOS 18, iPadOS 18, and macOS Sequoia, Musk threatened to ban Apple devices from his companies, calling the integration “an unacceptable security violation.”

Despite assurances from Apple and OpenAI that user data would only be shared with explicit consent and that interactions would be secure, Musk questioned Apple’s ability to ensure data security, stating, “Apple has no clue what’s actually going on once they hand your data over to OpenAI. They’re selling you down the river.”

Since bringing the lawsuit against OpenAI, Musk has also created his own AI company, xAI, and secured over $6 billion in funding for his plans to advance the Grok chatbot on his social network, X.

While Musk’s reasoning for dropping the OpenAI lawsuit remains unclear, his actions suggest a potential shift in focus towards advancing his own AI endeavours while continuing to vocalise his criticism of OpenAI through social media rather than the courts.

See also: DuckDuckGo releases portal giving private access to AI models

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Musk ends OpenAI lawsuit while slamming Apple’s ChatGPT plans appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2024/06/12/musk-ends-openai-lawsuit-slamming-apple-chatgpt-plans/feed/ 0
OpenAI launches GPT Store for custom AI assistants https://www.artificialintelligence-news.com/2024/01/11/openai-launches-gpt-store-custom-ai-assistants/ https://www.artificialintelligence-news.com/2024/01/11/openai-launches-gpt-store-custom-ai-assistants/#respond Thu, 11 Jan 2024 16:47:47 +0000 https://www.artificialintelligence-news.com/?p=14175 OpenAI has launched its new GPT Store providing users with access to custom AI assistants. Since the announcement of custom ‘GPTs’ two months ago, OpenAI says users have already created over three million custom assistants. Builders can now share their creations in the dedicated store. The store features assistants focused on a wide range of... Read more »

The post OpenAI launches GPT Store for custom AI assistants appeared first on AI News.

]]>
OpenAI has launched its new GPT Store providing users with access to custom AI assistants.

Since the announcement of custom ‘GPTs’ two months ago, OpenAI says users have already created over three million custom assistants. Builders can now share their creations in the dedicated store.

The store features assistants focused on a wide range of topics including art, research, programming, education, lifestyle, and more. OpenAI is highlighting assistants it deems most useful, including:

  • Personal trail recommendations from AllTrails
  • Searching academic papers with Consensus
  • Expanding coding skills via Khan Academy’s Code Tutor
  • Designing presentations with Canva, book recommendations from Books
  • Maths help from CK-12 Flexi

OpenAI says making an assistant is simple and requires no coding knowledge. To share one, builders currently need to make it accessible to ‘Anyone with the link’ and verify their profile.

OpenAI introduced new usage policies and brand guidelines to ensure compliance. A review system combines human and automated checking before assistants are listed. Users can also flag concerning content.  

From Q1 2024, OpenAI will pay qualifying US-based builders for user engagement with their assistants. More details on exact payment criteria will be shared closer to launch.

For enterprise users, OpenAI announced ChatGPT Team plans for teams of all sizes. These provide access to a private store section containing company-specific assistants published securely to their workspace.

ChatGPT Enterprise customers will soon get admin controls for internal sharing and selecting which external assistants can be used by employees. As with all ChatGPT Team and Enterprise content, conversations are not used to improve OpenAI’s models.

Few apps have ever achieved the adoption rate of ChatGPT. OpenAI will be hoping its new stores and revenue opportunities will build upon this momentum by incentivising builders to create assistants that provide value to consumers and enterprises alike.

(Image Credit: OpenAI)

See also: OpenAI: Copyrighted data ‘impossible’ to avoid for AI training

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post OpenAI launches GPT Store for custom AI assistants appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2024/01/11/openai-launches-gpt-store-custom-ai-assistants/feed/ 0
OpenAI’s GPT Store to launch next week after delays https://www.artificialintelligence-news.com/2024/01/05/openai-gpt-store-launch-next-week-after-delays/ https://www.artificialintelligence-news.com/2024/01/05/openai-gpt-store-launch-next-week-after-delays/#respond Fri, 05 Jan 2024 14:08:32 +0000 https://www.artificialintelligence-news.com/?p=14155 OpenAI has announced that its GPT Store, a platform where users can sell and share custom AI agents created using OpenAI’s GPT-4 large language model, will finally launch next week. An email was sent to individuals enrolled as GPT Builders that urges them to ensure their GPT creations align with brand guidelines and advises them... Read more »

The post OpenAI’s GPT Store to launch next week after delays appeared first on AI News.

]]>
OpenAI has announced that its GPT Store, a platform where users can sell and share custom AI agents created using OpenAI’s GPT-4 large language model, will finally launch next week.

An email was sent to individuals enrolled as GPT Builders that urges them to ensure their GPT creations align with brand guidelines and advises them to make their models public:

The GPT Store was unveiled at OpenAI’s November developers conference, revealing the company’s plan to enable users to build AI agents using the powerful GPT-4 model. This feature is exclusively available to ChatGPT Plus and enterprise subscribers, empowering individuals to craft personalised versions of ChatGPT-style chatbots.

The upcoming store allows users to share and monetise their GPTs. OpenAI envisions compensating GPT creators based on the usage of their AI agents on the platform, although detailed information about the payment structure is yet to be disclosed.

Originally slated for a November launch, the GPT Store faced delays due to the company’s busy month—including the firing and subsequent rehiring of CEO Sam Altman. Initially pushed to December, the launch date experienced further postponements.

Now, with the official announcement of the imminent launch, users eagerly anticipate the opportunity to showcase and profit from their unique GPT creations.

(Photo by shark ovski on Unsplash)

See also: MyShell releases OpenVoice voice cloning AI

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post OpenAI’s GPT Store to launch next week after delays appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2024/01/05/openai-gpt-store-launch-next-week-after-delays/feed/ 0
NCSC: Chatbot ‘prompt injection’ attacks pose growing security risk https://www.artificialintelligence-news.com/2023/08/30/ncsc-chatbot-prompt-injection-attacks-growing-security-risk/ https://www.artificialintelligence-news.com/2023/08/30/ncsc-chatbot-prompt-injection-attacks-growing-security-risk/#respond Wed, 30 Aug 2023 10:50:59 +0000 https://www.artificialintelligence-news.com/?p=13544 The UK’s National Cyber Security Centre (NCSC) has issued a stark warning about the increasing vulnerability of chatbots to manipulation by hackers, leading to potentially serious real-world consequences. The alert comes as concerns rise over the practice of “prompt injection” attacks, where individuals deliberately create input or prompts designed to manipulate the behaviour of language... Read more »

The post NCSC: Chatbot ‘prompt injection’ attacks pose growing security risk appeared first on AI News.

]]>
The UK’s National Cyber Security Centre (NCSC) has issued a stark warning about the increasing vulnerability of chatbots to manipulation by hackers, leading to potentially serious real-world consequences.

The alert comes as concerns rise over the practice of “prompt injection” attacks, where individuals deliberately create input or prompts designed to manipulate the behaviour of language models that underpin chatbots.

Chatbots have become integral in various applications such as online banking and shopping due to their capacity to handle simple requests. Large language models (LLMs) – including those powering OpenAI’s ChatGPT and Google’s AI chatbot Bard – have been trained extensively on datasets that enable them to generate human-like responses to user prompts.

The NCSC has highlighted the escalating risks associated with malicious prompt injection, as chatbots often facilitate the exchange of data with third-party applications and services.

“Organisations building services that use LLMs need to be careful, in the same way they would be if they were using a product or code library that was in beta,” the NCSC explained.

“They might not let that product be involved in making transactions on the customer’s behalf, and hopefully wouldn’t fully trust it. Similar caution should apply to LLMs.”

If users input unfamiliar statements or exploit word combinations to override a model’s original script, the model can execute unintended actions. This could potentially lead to the generation of offensive content, unauthorised access to confidential information, or even data breaches.

Oseloka Obiora, CTO at RiverSafe, said: “The race to embrace AI will have disastrous consequences if businesses fail to implement basic necessary due diligence checks. 

“Chatbots have already been proven to be susceptible to manipulation and hijacking for rogue commands, a fact which could lead to a sharp rise in fraud, illegal transactions, and data breaches.”

Microsoft’s release of a new version of its Bing search engine and conversational bot drew attention to these risks.

A Stanford University student, Kevin Liu, successfully employed prompt injection to expose Bing Chat’s initial prompt. Additionally, security researcher Johann Rehberger discovered that ChatGPT could be manipulated to respond to prompts from unintended sources, opening up possibilities for indirect prompt injection vulnerabilities.

The NCSC advises that while prompt injection attacks can be challenging to detect and mitigate, a holistic system design that considers the risks associated with machine learning components can help prevent the exploitation of vulnerabilities.

A rules-based system is suggested to be implemented alongside the machine learning model to counteract potentially damaging actions. By fortifying the entire system’s security architecture, it becomes possible to thwart malicious prompt injections.

The NCSC emphasises that mitigating cyberattacks stemming from machine learning vulnerabilities necessitates understanding the techniques used by attackers and prioritising security in the design process.

Jake Moore, Global Cybersecurity Advisor at ESET, commented: “When developing applications with security in mind and understanding the methods attackers use to take advantage of the weaknesses in machine learning algorithms, it’s possible to reduce the impact of cyberattacks stemming from AI and machine learning.

“Unfortunately, speed to launch or cost savings can typically overwrite standard and future-proofing security programming, leaving people and their data at risk of unknown attacks. It is vital that people are aware that what they input into chatbots is not always protected.”

As chatbots continue to play an integral role in various online interactions and transactions, the NCSC’s warning serves as a timely reminder of the imperative to guard against evolving cybersecurity threats.

(Photo by Google DeepMind on Unsplash)

See also: OpenAI launches ChatGPT Enterprise to accelerate business operations

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post NCSC: Chatbot ‘prompt injection’ attacks pose growing security risk appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/08/30/ncsc-chatbot-prompt-injection-attacks-growing-security-risk/feed/ 0
Meta bets on AI chatbots to retain users https://www.artificialintelligence-news.com/2023/08/01/meta-bets-on-ai-chatbots-retain-users/ https://www.artificialintelligence-news.com/2023/08/01/meta-bets-on-ai-chatbots-retain-users/#respond Tue, 01 Aug 2023 11:44:17 +0000 https://www.artificialintelligence-news.com/?p=13411 Meta is planning to release AI chatbots that possess human-like personalities, a move aimed at enhancing user retention efforts. Insiders familiar with the matter revealed that prototypes of these advanced chatbots have been under development, with the final products capable of engaging in discussions with users on a human level. The diverse range of chatbots... Read more »

The post Meta bets on AI chatbots to retain users appeared first on AI News.

]]>
Meta is planning to release AI chatbots that possess human-like personalities, a move aimed at enhancing user retention efforts.

Insiders familiar with the matter revealed that prototypes of these advanced chatbots have been under development, with the final products capable of engaging in discussions with users on a human level. The diverse range of chatbots will showcase various personalities and are expected to be rolled out as early as next month.

Referred to as “personas” by Meta staff, these chatbots will take on the form of different characters, each embodying a distinct persona. For instance, insiders mentioned that Meta has explored the creation of a chatbot that mimics the speaking style of former US President Abraham Lincoln, as well as another designed to offer travel advice with the laid-back language of a surfer.

While the primary objective of these chatbots will be to offer personalised recommendations and improved search functionality, they are also being positioned as a source of entertainment for users to enjoy. The chatbots are expected to engage users in playful and interactive conversations, a move that could potentially increase user engagement and retention.

However, with such sophisticated AI capabilities, concerns arise about the potential for rule-breaking speech and inaccuracies. In response, sources mentioned that Meta may implement automated checks on the chatbots’ outputs to ensure accuracy and compliance with platform rules.

This strategic development comes at a time when Meta is doubling down on user retention efforts.

During the company’s 2023 second-quarter earnings call on July 26, CEO Mark Zuckerberg highlighted the positive response to the company’s latest product, Threads, which aims to rival X (formerly Twitter.)

Zuckerberg expressed satisfaction with the increased number of users returning to Threads daily and confirmed that Meta’s primary focus was on the platform’s user retention.

Meta’s chatbots venture raises concerns about data privacy and security. The company will gain access to a treasure trove of user data that has already led to legal challenges for AI companies such as OpenAI.

Whether these chatbots will revolutionise user experiences and boost Meta’s ailing user retention – or just present new challenges for data privacy – remains to be seen. For now, users and experts alike will be closely monitoring Meta’s next moves.

(Photo by Edge2Edge Media on Unsplash)

See also: Meta launches Llama 2 open-source LLM

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Meta bets on AI chatbots to retain users appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/08/01/meta-bets-on-ai-chatbots-retain-users/feed/ 0
Bill Gates: AI will be teaching kids literacy within 18 months https://www.artificialintelligence-news.com/2023/04/24/bill-gates-ai-teaching-kids-literacy-within-18-months/ https://www.artificialintelligence-news.com/2023/04/24/bill-gates-ai-teaching-kids-literacy-within-18-months/#respond Mon, 24 Apr 2023 15:35:06 +0000 https://www.artificialintelligence-news.com/?p=12985 AI chatbots could be used to improve children’s reading and writing skills within the next 18 months, according to Microsoft co-founder Bill Gates. In a fireside chat at the ASU+GSV Summit in San Diego, Gates explained that the “AIs will get to that ability, to be as good a tutor as any human ever could.”... Read more »

The post Bill Gates: AI will be teaching kids literacy within 18 months appeared first on AI News.

]]>
AI chatbots could be used to improve children’s reading and writing skills within the next 18 months, according to Microsoft co-founder Bill Gates.

In a fireside chat at the ASU+GSV Summit in San Diego, Gates explained that the “AIs will get to that ability, to be as good a tutor as any human ever could.”

AI chatbots such as OpenAI’s ChatGPT and Google’s Bard have developed rapidly in recent months and can now compete with human-level intelligence on some standardised tests.

Teaching writing skills has traditionally been difficult for computers, as they lack the cognitive ability to replicate human thought processes, Gates said. However, AI chatbots are able to recognise and recreate human-like language.

New York Times tech columnist Kevin Roose has already used ChatGPT to improve his writing, using the AI’s ability to quickly search through online style guides. Some academics have also been impressed by chatbots’ ability to summarise and offer feedback on text or even to write full essays.

The technology must improve before it can become a viable tutor, and Gates said that AI must get better at reading and recreating human language to better motivate students.

While it may be surprising that chatbots are expected to excel at reading and writing before maths, the latter is often used to develop AI technology and chatbots have difficulties with mathematical calculations.

If a solved math equation already exists within the datasets that the chatbot is trained on, it can provide the answer. However, calculating its own solution is more complex and requires improved reasoning abilities, Gates explained.

Gates is confident that the technology will improve within the next two years and he believes that it could help make private tutoring available to a wide range of students who may not otherwise be able to afford it.

While some free versions of chatbots already exist, Gates expects that more advanced versions will be available for a fee, although he believes that they will be more affordable and accessible than one-on-one tutoring with a human instructor.

You can watch the full talk with Bill Gates below:

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Bill Gates: AI will be teaching kids literacy within 18 months appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/04/24/bill-gates-ai-teaching-kids-literacy-within-18-months/feed/ 0
Omdia: The chatbot market will remain healthily diverse https://www.artificialintelligence-news.com/2022/10/12/omdia-chatbot-market-remain-healthily-diverse/ https://www.artificialintelligence-news.com/2022/10/12/omdia-chatbot-market-remain-healthily-diverse/#respond Wed, 12 Oct 2022 12:13:11 +0000 https://www.artificialintelligence-news.com/?p=12367 Omdia analysts have assessed that the chatbot market will remain “served by a robust, diverse ecosystem of vendors”. The report highlights that it’s contrary to the assessment of vendor assessments and traditional technology market trends. Mark Beccue, Principal Analyst at Omdia, commented: “There are several reasons for a robust chatbot solutions market. One, there is... Read more »

The post Omdia: The chatbot market will remain healthily diverse appeared first on AI News.

]]>
Omdia analysts have assessed that the chatbot market will remain “served by a robust, diverse ecosystem of vendors”.

The report highlights that it’s contrary to the assessment of vendor assessments and traditional technology market trends.

Mark Beccue, Principal Analyst at Omdia, commented:

“There are several reasons for a robust chatbot solutions market.

One, there is persistent market demand for solutions which address a broad spectrum of complexity, from pro developer Do It Yourself (DIY) tools and no code SaaS to bespoke end-to-end solutions.

Two, it’s likely there will be new market disruptors because of evolving technology, particularly the potential emergence of affordable NLU and training from open-source Large Language Models (LLM).

Three, the total addressable market is very large and very complex, led by broad market drivers for CX and workflow automation. The market opportunity is nowhere near saturated or commoditised, leaving the door open for a variety of vendors to succeed and prosper.”

Enterprise spending on chatbots and virtual digital assistants (VDAs) is set to continue growing at a healthy pace through 2026:

Omdia claims that increasing demand for chatbots in more complex roles, growing importance of Business Process Outsourcers (BPOs) in the ecosystem, and the legitimacy of the use of chatbots in messaging channels are driving their upwards trajectory.

(Photo by Jason Leung on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Omdia: The chatbot market will remain healthily diverse appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2022/10/12/omdia-chatbot-market-remain-healthily-diverse/feed/ 0
Why AI needs human intervention https://www.artificialintelligence-news.com/2022/01/19/why-ai-needs-human-intervention/ https://www.artificialintelligence-news.com/2022/01/19/why-ai-needs-human-intervention/#respond Wed, 19 Jan 2022 17:07:47 +0000 https://artificialintelligence-news.com/?p=11586 In today’s tight labour market and hybrid work environment, organizations are increasingly turning to AI to support various functions within their business, from delivering more personalized experiences to improving operations and productivity to helping organizations make better and faster decisions. That is why the worldwide market for AI software, hardware, and services is expected to... Read more »

The post Why AI needs human intervention appeared first on AI News.

]]>
In today’s tight labour market and hybrid work environment, organizations are increasingly turning to AI to support various functions within their business, from delivering more personalized experiences to improving operations and productivity to helping organizations make better and faster decisions. That is why the worldwide market for AI software, hardware, and services is expected to surpass $500 billion by 2024, according to IDC.

Yet, many enterprises aren’t ready to have their AI systems run independently and entirely without human intervention – nor should they do so. 

In many instances, enterprises simply don’t have sufficient expertise in the systems they use as AI technologies are extraordinarily complex. In other instances, rudimentary AI is built into enterprise software. These can be fairly static and remove control over the parameters of the data most organizations need. But even the most AI savvy organizations keep humans in the equation to avoid risks and reap the maximum benefits of AI. 

AI Checks and Balances

There are clear ethical, regulatory, and reputational reasons to keep humans in the loop. Inaccurate data can be introduced over time leading to poor decisions or even dire circumstances in some cases. Biases can also creep into the system whether it is introduced while training the AI model, as a result of changes in the training environment, or due to trending bias where the AI system reacts to recent activities more than previous ones. Moreover, AI is often incapable of understanding the subtleties of a moral decision. 

Take healthcare for instance. The industry perfectly illustrates how AI and humans can work together to improve outcomes or cause great harm if humans are not fully engaged in the decision-making process. For example, in diagnosing or recommending a care plan for a patient, AI is ideal for making the recommendation to the doctor, who then evaluates if that recommendation is sound and then gives the counsel to the patient.

Having a way for people to continually monitor AI responses and accuracy will avoid flaws that could lead to harm or catastrophe while providing a means for continuous training of the models so they get continuously better and better. That’s why IDC expects more than 70% of G2000 companies will have formal programs to monitor their digital trustworthiness by 2022.

Models for Human-AI Collaboration

Human-in-the-Loop (HitL) Reinforcement Learning and Conversational AI are two examples of how human intervention supports AI systems in making better decisions.

HitL allows AI systems to leverage machine learning to learn by observing humans dealing with real-life work and use cases. HitL models are like traditional AI models except they are continuously self-developing and improving based on human feedback while, in some cases, augmenting human interactions. It provides a controlled environment that limits the inherent risk of biases—such as the bandwagon effect—that can have devastating consequences, especially in crucial decision-making processes.

We can see the value of the HitL model in industries that manufacture critical parts for vehicles or aircraft requiring equipment that is up to standard. In situations like this, machine learning increases the speed and accuracy of inspections, while human oversight provides added assurances that parts are safe and secure for passengers.

Conversational AI, on the other hand, provides near-human-like communication. It can offload work from employees in handling simpler problems while knowing when to escalate an issue to humans for solving more complex issues. Contact centres provide a primary example.

When a customer reaches out to a contact centre, they have the option to call, text, or chat virtually with a representative. The virtual agent listens and understands the needs of the customer and engages back and forth in a conversation. It uses machine learning and AI to decide what needs to be done based on what it has learned from prior experience. Most AI systems within contact centres generate speech to help communicate with the customer and mimic the feeling of a human doing the typing or talking.

For most situations, a virtual agent is enough to help service customers and resolve their problems. However, there are cases where AI can stop typing or talking and then make a seamless transfer to a live representative to take over the call or chat.  Even in these examples, the AI system can shift from automation to augmentation, by still listening to the conversation and providing recommendations to the live representative to aid them in their decisions

Going beyond conversational AI with cognitive AI, these systems can learn to understand the emotional state of the other party, handle complex dialogue, provide real-time translation and even adjust based on the behaviour of the other person, taking human assistance to the next level of sophistication.

Blending Automation and Human Interaction Leads to Augmented Intelligence

AI is best applied when it is both monitored by and augments people. When that happens, people move up the skills continuum, taking on more complex challenges, while the AI continually learns, improves, and is kept in check, avoiding potentially harmful effects. Using models like HitL, conversational AI, and cognitive AI in collaboration with real people who possess expertise, ingenuity, empathy and moral judgment ultimately leads to augmented intelligence and more positive outcomes.

(Photo by Arteum.ro on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Why AI needs human intervention appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2022/01/19/why-ai-needs-human-intervention/feed/ 0
Editorial: Our predictions for the AI industry in 2022 https://www.artificialintelligence-news.com/2021/12/23/editorial-our-predictions-for-the-ai-industry-in-2022/ https://www.artificialintelligence-news.com/2021/12/23/editorial-our-predictions-for-the-ai-industry-in-2022/#respond Thu, 23 Dec 2021 11:59:08 +0000 https://artificialintelligence-news.com/?p=11547 The AI industry continued to thrive this year as companies sought ways to support business continuity through rapidly-changing situations. For those already invested, many are now doubling-down after reaping the benefits. As we wrap up the year, it’s time to look ahead at what to expect from the AI industry in 2022. Tackling bias Our... Read more »

The post Editorial: Our predictions for the AI industry in 2022 appeared first on AI News.

]]>
The AI industry continued to thrive this year as companies sought ways to support business continuity through rapidly-changing situations. For those already invested, many are now doubling-down after reaping the benefits.

As we wrap up the year, it’s time to look ahead at what to expect from the AI industry in 2022.

Tackling bias

Our ‘Ethics & Society’ category got more use than most others this year, and with good reason. AI cannot thrive when it’s not trusted.

Biases are present in algorithms that are already causing harm. They’ve been the subject of many headlines, including a number of ours, and must be addressed for the public to have confidence in wider adoption.

Explainable AI (XAI) is a partial solution to the problem. XAI is artificial intelligence in which the results of the solution can be understood by humans.

Robert Penman, Associate Analyst at GlobalData, comments:

“2022 will see the further rollout of XAI, enabling companies to identify potential discrimination in their systems’ algorithms. It is essential that companies correct their models to mitigate bias in data. Organisations that drag their feet will face increasing scrutiny as AI continues to permeate our society, and people demand greater transparency. For example, in the Netherlands, the government’s use of AI to identify welfare fraud was found to violate European human rights.

Reducing human bias present in training datasets is a huge challenge in XAI implementation. Even tech giant Amazon had to scrap its in-development hiring tool because it was claimed to be biased against women.

Further, companies will be desperate to improve their XAI capabilities—the potential to avoid a PR disaster is reason enough.”

To that end, expect a large number of acquisitions of startups specialising in synthetic data training in 2022.

Smoother integration

Many companies don’t know how to get started on their AI journeys. Around 30 percent of enterprises plan to incorporate AI into their company within the next few years, but 91 percent foresee significant barriers and roadblocks.

If the confusion and anxiety that surrounds AI can be tackled, it will lead to much greater adoption.

Dr Max Versace, PhD, CEO and Co-Founder of Neurala, explains:

“Similar to what happened with the introduction of WordPress for websites in early 2000, platforms that resemble a ‘WordPress for AI’ will simplify building and maintaining AI models. 

In manufacturing for example, AI platforms will provide integration hooks, hardware flexibility, ease of use by non-experts, the ability to work with little data, and, crucially, a low-cost entry point to make this technology viable for a broad set of customers.”

AutoML platforms will thrive in 2022 and beyond.

From the cloud to the edge

The migration of AI from the cloud to the edge will accelerate in 2022.

Edge processing has a plethora of benefits over relying on cloud servers including speed, reliability, privacy, and lower costs.

Versace commented:

“Increasingly, companies are realising that the way to build a truly efficient AI algorithm is to train it on their own unique data, which might vary substantially over time. To do that effectively, the intelligence needs to directly interface with the sensors producing the data. 

From there, AI should run at a compute edge, and interface with cloud infrastructure only occasionally for backups and/or increased functionality. No critical process – for example,  in a manufacturing plant – should exclusively rely on cloud AI, exposing the manufacturing floor to connectivity/latency issues that could disrupt production.”

Expect more companies to realise the benefits of migrating from cloud to edge AI in 2022.

Doing more with less

Among the early concerns about the AI industry is that it would be dominated by “big tech” due to the gargantuan amount of data they’ve collected.

However, innovative methods are now allowing algorithms to be trained with less information. Training using smaller but more unique datasets for each deployment could prove to be more effective.

We predict more startups will prove the world doesn’t have to rely on big tech in 2022.

Human-powered AI

While XAI systems will provide results which can be understood by humans, the decisions made by AIs will be more useful because they’ll be human-powered.

Varun Ganapathi, PhD, Co-Founder and CTO at AKASA, said:

“For AI to truly be useful and effective, a human has to be present to help push the work to the finish line. Without guidance, AI can’t be expected to succeed and achieve optimal productivity. This is a trend that will only continue to increase.

Ultimately, people will have machines report to them. In this world, humans will be the managers of staff – both other humans and AIs – that will need to be taught and trained to be able to do the tasks they’re needed to do.

Just like people, AI needs to constantly be learning to improve performance.”

Greater human input also helps to build wider trust in AI. Involving humans helps to counter narratives about AI replacing jobs and concerns that decisions about people’s lives could be made without human qualities such as empathy and compassion.

Expect human input to lead to more useful AI decisions in 2022.

Avoiding captivity

The telecoms industry is currently pursuing an innovation called Open RAN which aims to help operators avoid being locked to specific vendors and help smaller competitors disrupt the relative monopoly held by a small number companies.

Enterprises are looking to avoid being held in captivity by any AI vendor.

Doug Gilbert, CIO and Chief Digital Officer at Sutherland, explains:

“Early adopters of rudimentary enterprise AI embedded in ERP / CRM platforms are starting to feel trapped. In 2022, we’ll see organisations take steps to avoid AI lock-in. And for good reason. AI is extraordinarily complex.

When embedded in, say, an ERP system, control, transparency, and innovation is handed over to the vendor not the enterprise. AI shouldn’t be treated as a product or feature: it’s a set of capabilities. AI is also evolving rapidly, with new AI capabilities and continuously improved methods of training algorithms.

To get the most powerful results from AI, more enterprises will move toward a model of combining different AI capabilities to solve unique problems or achieve an outcome. That means they’ll be looking to spin up more advanced and customizable options and either deprioritising AI features in their enterprise platforms or winding down those expensive but basic AI features altogether.”

In 2022 and beyond, we predict enterprises will favour AI solutions that avoid lock-in.

Chatbots get smart

Hands up if you’ve ever screamed (internally or externally) that you just want to speak to a human when dealing with a chatbot—I certainly have, more often than I’d care to admit.

“Today’s chatbots have proven beneficial but have very limited capabilities. Natural language processing will start to be overtaken by neural voice software that provides near real time natural language understanding (NLU),” commented Gilbert.

“With the ability to achieve comprehensive understanding of more complex sentence structures, even emotional states, break down conversations into meaningful content, quickly perform keyword detection and named entity recognition, NLU will dramatically improve the accuracy and the experience of conversational AI.”

In theory, this will have two results:

  • Augmenting human assistance in real-time, such as suggesting responses based on behaviour or based on skill level.
  • Change how a customer or client perceives they’re being treated with NLU delivering a more natural and positive experience.  

In 2022, chatbots will get much closer to offering a human-like experience.

It’s not about size, it’s about the quality

A robust AI system requires two things: a functioning model and underlying data to train that model. Collecting huge amounts of data is a waste of time if it’s not of high quality and labeled correctly.

Gabriel Straub, Chief Data Scientist at Ocado Technology, said:

“Andrew Ng has been speaking about data-centric AI, about how improving the quality of your data can often lead to better outcomes than improving your algorithms (at least for the same amount of effort.)

So, how do you do this in practice? How do you make sure that you manage the quality of data at least as carefully as the quantity of data you collect?

There are two things that will make a big difference: 1) making sure that data consumers are always at the heart of your data thinking and 2) ensuring that data governance is a function that enables you to unlock the value in your data, safely, rather than one that focuses on locking down data.”

Expect the AI industry to make the quality of data a priority in 2022.

(Photo by Michael Dziedzic on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

The post Editorial: Our predictions for the AI industry in 2022 appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2021/12/23/editorial-our-predictions-for-the-ai-industry-in-2022/feed/ 0
Stefano Somenzi, Athics: On no-code AI and deploying conversational bots https://www.artificialintelligence-news.com/2021/11/12/stefano-somenzi-athics-no-code-ai-deploying-conversational-bots/ https://www.artificialintelligence-news.com/2021/11/12/stefano-somenzi-athics-no-code-ai-deploying-conversational-bots/#respond Fri, 12 Nov 2021 16:47:39 +0000 https://artificialintelligence-news.com/?p=11369 No-code AI solutions are helping more businesses to get started on their AI journeys than ever. Athics, through its Crafter.ai platform for deploying conversational bots, knows a thing or two about the topic. AI News caught up with Stefano Somenzi, CTO at Athics, to get his thoughts on no-code AI and the development of virtual... Read more »

The post Stefano Somenzi, Athics: On no-code AI and deploying conversational bots appeared first on AI News.

]]>
No-code AI solutions are helping more businesses to get started on their AI journeys than ever. Athics, through its Crafter.ai platform for deploying conversational bots, knows a thing or two about the topic.

AI News caught up with Stefano Somenzi, CTO at Athics, to get his thoughts on no-code AI and the development of virtual agents.

AI News: Do you think “no-code” will help more businesses to begin their AI journeys?

Stefano Somenzi: The real advantage of “no code” is not just the reduced effort required for businesses to get things done, it is also centered around changing the role of the user who will build the AI solution. In our case, a conversational AI agent.

“No code” means that the AI solution is built not by a data scientist but by the process owner. The process owner is best-suited to know what the AI solution should deliver and how. But, if you need coding, this means that the process owner needs to translate his/her requirements into a data scientist’s language.

This requires much more time and is affected by the “lost in translation” syndrome that hinders many IT projects. That’s why “no code” will play a major role in helping companies approach AI.

AN: Research from PwC found that 71 percent of US consumers would rather interact with a human than a chatbot or some other automated process. How can businesses be confident that bots created through your Crafter.ai platform will improve the customer experience rather than worsen it?

SS: Even the most advanced conversational AI agents, like ours, are not suited to replace a direct consumer-to-human interaction if what the consumer is looking for is the empathy that today only a human is able to show during a conversation.

At the same time, inefficiencies, errors, and lack of speed are among the most frequent causes for consumer dissatisfaction that hamper customer service performances.

Advanced conversational AI agents are the right tool to reduce these inefficiencies and errors while delivering strong customer service performances at light speed.

AN: What kind of real-time feedback is provided to your clients about their customers’ behaviour?

SS: Recognising the importance of a hybrid environment, where human and machine interaction are wisely mixed to leverage the best of both worlds, our Crafter.ai platform has been designed from the ground up with a module that manages the handover of the conversations between the bot and the call centre agents.

During a conversation, a platform user – with the right authorisation levels – can access an insights dashboard to check the key performance indicators that have been identified for the bot.

This is also true during the handover when agents and their supervisors receive real-time information on the customer behaviour during the company site navigation. Such information includes – and is not limited to – visited pages, form field contents, and clicked CTAs, and can be complemented with data collected from the company CRM.

AN: Europe is home to some of the strictest data regulations in the world. As a European organisation, do you think such regulations are too strict, not strict enough, or about right?

SS: We think that any company that wants to gain the trust of their customers should do their best to go beyond the strict regulations requirements.

AN: As conversational AIs progress to human-like levels, should it always be made clear that a person is speaking to an AI bot?

SS: Yes, a bot should always make clear that it is not human. In the end, this can help realise how amazing they can perform.

AN: What’s next for Athics?

SS: We have a solid roadmap for Crafter.ai with many new features and improvements that we bring every three months to our platform.

Our sole focus is on advanced conversational AI agents. We are currently working to include more and more domain specific capabilities to our bots.

Advanced profiling capabilities is a great area of interest where, thanks to our collaboration with universities and international research centres, we expect to deliver truly innovative solutions to our customers.

AN: Athics is sponsoring and exhibiting at this year’s AI & Big Data Expo Europe. What can attendees expect from your presence at the event? 

SS: Conversational AI agents allow businesses to obtain a balance between optimising resources and giving a top-class customer experience. Although there is no doubt regarding the benefits of adopting virtual agents, the successful integration across a company’s conversational streams needs to be correctly assessed, planned, and executed in order to leverage the full potential.

Athics will be at stand number 280 to welcome attending companies and give an overview of the advantages of integrating a conversational agent, explain how to choose the right product, and how to create a conversational vision that can scale and address organisational goals.

(Photo by Jason Leung on Unsplash)

Athics will be sharing their invaluable insights during this year’s AI & Big Data Expo Global which runs from 23-24 November 2021. Athics’ booth number is 280. Find out more about the event here.

The post Stefano Somenzi, Athics: On no-code AI and deploying conversational bots appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2021/11/12/stefano-somenzi-athics-no-code-ai-deploying-conversational-bots/feed/ 0