IBM and Tech Mahindra unveil new era of trustworthy AI with watsonx
https://www.artificialintelligence-news.com/2024/05/17/ibm-and-tech-mahindra-unveil-new-era-of-trustworthy-ai-with-watsonx/
17 May 2024

Tech Mahindra, a global provider of technology consulting and digital solutions, has collaborated with IBM to help organisations sustainably accelerate generative AI use worldwide.

This collaboration combines Tech Mahindra’s range of AI offerings, TechM amplifAI0->∞, and IBM’s watsonx AI and data platform with AI Assistants.

Customers can now combine IBM watsonx’s capabilities with Tech Mahindra’s AI consulting and engineering skills to access a variety of new generative AI services, frameworks, and solution architectures. This enables the development of AI apps in which organisations can use their trusted data to automate processes. It also provides a basis for businesses to create trustworthy AI models, promotes explainability to help manage risk and bias, and enables scalable AI adoption across hybrid cloud and on-premises environments.
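The announcement does not detail the underlying architecture, but the pattern it describes, grounding generated output in an organisation's own trusted data, is commonly implemented as retrieval-augmented generation. The sketch below is a vendor-neutral illustration of that pattern only; the `retrieve` and `generate` functions are toy stand-ins and do not represent the watsonx or TechM amplifAI APIs.

```python
# Vendor-neutral sketch: ground answers in an organisation's own "trusted data"
# before generating a response. The retrieval and generation steps below are
# toy stand-ins, not the watsonx API.

from dataclasses import dataclass


@dataclass
class Document:
    doc_id: str
    text: str


TRUSTED_DOCS = [
    Document("policy-001", "Refunds are processed within 14 days of a return request."),
    Document("policy-002", "Customer data is retained for 24 months unless deletion is requested."),
]


def retrieve(query: str, docs: list[Document], k: int = 1) -> list[Document]:
    """Rank documents by naive keyword overlap with the query."""
    def score(doc: Document) -> int:
        return len(set(query.lower().split()) & set(doc.text.lower().split()))
    return sorted(docs, key=score, reverse=True)[:k]


def generate(query: str, context: list[Document]) -> str:
    """Stand-in for a governed model call: answer only from retrieved context."""
    sources = "; ".join(f"[{d.doc_id}] {d.text}" for d in context)
    return f"Answer based on trusted sources: {sources}"


if __name__ == "__main__":
    question = "How long do refunds take?"
    print(generate(question, retrieve(question, TRUSTED_DOCS)))
```

In a production setting, `retrieve` would query a governed index of enterprise documents and `generate` would call the chosen foundation model with the retrieved passages as context, which is where explainability and bias controls would sit.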

According to Kunal Purohit, Tech Mahindra’s chief digital services officer, organisations are focusing on responsible AI practices and incorporating generative AI technologies to revitalise their enterprises.

“Our work with IBM can help advance digital transformation for organisations, adoption of GenAI, modernisation, and ultimately foster business growth for our global customers,” Purohit added.

To further enhance business capabilities in AI, Tech Mahindra has established a virtual watsonx Centre of Excellence (CoE), which is already operational. This CoE functions as a co-innovation centre, with a dedicated team tasked with maximising synergies between the two companies and producing unique offerings and solutions based on their combined capabilities.

The collaborative offerings and solutions developed through this partnership could help enterprises achieve their goals of constructing machine learning models using open-source frameworks, while also enabling them to scale and accelerate the impact of generative AI. These AI-driven solutions have the potential to help organisations enhance efficiency and productivity responsibly.

Kate Woolley, GM of IBM Ecosystem, emphasised the collaboration’s potential, adding that generative AI may serve as a catalyst for innovation, unlocking new market opportunities when built on a foundation of explainability, transparency, and trust. 

Woolley said: “Our work with Tech Mahindra is expected to expand the reach of watsonx, allowing even more customers to build trustworthy AI as we seek to combine our technology and expertise to support enterprise use cases such as code modernisation, digital labour, and customer service.”

This collaboration aligns with Tech Mahindra’s continuous endeavour to transform enterprises with advanced AI-led offerings and solutions, including their recent additions like Vision amplifAIer, Ops amplifAIer, Email amplifAIer, Enterprise Knowledge Search offering, Evangelize Pair Programming, and Generative AI Studio.

It is worth mentioning that the two companies have previously collaborated. Earlier this year, Tech Mahindra announced the opening of a Synergy Lounge in conjunction with IBM on the company’s Singapore campus. This Lounge seeks to accelerate digital adoption for APAC organisations. It aids in operationalising and leveraging next-generation technologies such as AI, intelligent automation, hybrid cloud, 5G, edge computing, and cybersecurity.

Beyond Tech Mahindra, IBM watsonx has been used in other collaborations to speed up the deployment of generative AI. Also earlier this year, the GSMA and IBM announced a new partnership to support the use and capabilities of generative AI in the telecom industry by launching GSMA Advance’s AI Training program and the GSMA Foundry Generative AI program.

In addition, there is a digital version of the program that covers both the commercial strategy and technology fundamentals of generative AI. This initiative uses IBM watsonx to provide hands-on training for architects and developers seeking in-depth practical generative AI knowledge.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Large language models could ‘revolutionise the finance sector within two years’
https://www.artificialintelligence-news.com/2024/03/27/large-language-models-could-revolutionsise-the-finance-sector-within-two-years/
27 March 2024

Large Language Models (LLMs) have the potential to improve efficiency and safety in the finance sector by detecting fraud, generating financial insights and automating customer service, according to research by The Alan Turing Institute.

Because LLMs can analyse large amounts of data quickly and generate coherent text, there is a growing understanding of their potential to improve services across a range of sectors, including healthcare, law, education and financial services such as banking, insurance and financial planning.

This report, the first to explore the adoption of LLMs across the finance ecosystem, shows that people working in this area have already begun to use LLMs to support a variety of internal processes, such as the review of regulations, and are assessing their potential for supporting external activity such as the delivery of advisory and trading services.

Alongside a literature survey, researchers held a workshop with 43 professionals from major high street and investment banks, regulators, insurers, payment service providers, government and the legal profession.

The majority of workshop participants (52%) are already using these models to enhance performance in information-orientated tasks, from the management of meeting notes to cyber security and compliance insight, while 29% use them to boost critical thinking skills, and another 16% employ them to break down complex tasks.

The sector is also already establishing systems to enhance productivity through rapid analysis of large amounts of text, with the aim of simplifying decision-making processes and risk profiling and improving investment research and back-office operations.
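As a concrete, if simplified, illustration of the text-analysis workflow described above, the sketch below batches documents through a risk-flagging prompt. The `call_llm` function is a hypothetical placeholder (a trivial keyword check here, so the example runs end to end); a real deployment would route the prompt to the firm's approved model behind its compliance controls.

```python
# Sketch of the text-triage pattern described above: batch documents through a
# prompt that extracts risk flags for human review. `call_llm` is a hypothetical
# stand-in; a real deployment would call the firm's approved model.

RISK_PROMPT = (
    "You are assisting a risk analyst. List any credit, liquidity or fraud "
    "indicators in the following text, or reply 'none found':\n\n{text}"
)


def call_llm(prompt: str) -> str:
    # Placeholder: a trivial keyword check so the sketch runs without a model.
    keywords = [w for w in ("default", "lawsuit", "restatement") if w in prompt.lower()]
    return ", ".join(keywords) if keywords else "none found"


def triage(documents: dict[str, str]) -> dict[str, str]:
    """Return a per-document summary of flagged risk indicators."""
    return {name: call_llm(RISK_PROMPT.format(text=text)) for name, text in documents.items()}


if __name__ == "__main__":
    filings = {
        "q3-report": "Revenue grew 4%; a pending lawsuit was disclosed in note 12.",
        "meeting-notes": "Discussed hiring plans and the office move.",
    }
    for doc, flags in triage(filings).items():
        print(f"{doc}: {flags}")
```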

When asked about the future of LLMs in the finance sector, participants felt that LLMs would be integrated into services like investment banking and venture capital strategy development within two years.

They also thought it likely that LLMs would be integrated to improve interactions between people and machines; for example, dictation and embedded AI assistants could reduce the complexity of knowledge-intensive tasks such as the review of regulations.

But participants also acknowledged that the technology poses risks which will limit its usage. Financial institutions are subject to extensive regulatory standards and obligations, which limit their ability to use AI systems that they cannot explain and that do not generate output predictably, consistently or without risk of error.
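One common way institutions address the predictability concern described above is to treat model output as untrusted until it passes strict validation before it reaches any downstream system. The sketch below illustrates that idea; the schema and field names are assumptions for illustration, not drawn from the Turing report.

```python
# Illustrative mitigation for the predictability concern above: never act on
# free-form model output; validate it against a strict schema first. The field
# names here are hypothetical, not taken from the Turing research.

import json

REQUIRED_FIELDS = {"decision": str, "confidence": float, "rationale": str}


def validate_output(raw: str) -> dict:
    """Parse and check model output; raise rather than act on malformed results."""
    data = json.loads(raw)  # raises a ValueError subclass on non-JSON output
    for name, expected_type in REQUIRED_FIELDS.items():
        if not isinstance(data.get(name), expected_type):
            raise ValueError(f"missing or mistyped field: {name}")
    if not 0.0 <= data["confidence"] <= 1.0:
        raise ValueError("confidence out of range")
    return data


if __name__ == "__main__":
    good = '{"decision": "escalate", "confidence": 0.82, "rationale": "unusual payee"}'
    print(validate_output(good))
```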

Based on their findings, the authors recommend that financial services professionals, regulators and policymakers collaborate across the sector to share and develop knowledge about implementing and using LLMs, particularly around safety concerns. They also suggest that the growing interest in open-source models should be explored, and that such models could be used and maintained effectively, but that mitigating security and privacy concerns would be a high priority.

Professor Carsten Maple, lead author and Turing Fellow at The Alan Turing Institute, said: “Banks and other financial institutions have always been quick to adopt new technologies to make their operations more efficient and the emergence of LLMs is no different. By bringing together experts across the finance ecosystem, we have managed to create a common understanding of the use cases, risks, value and timeline for implementation of these technologies at scale.”

Professor Lukasz Szpruch, programme director for Finance and Economics at The Alan Turing Institute, said: “It’s really positive that the financial sector is benefiting from the emergence of large language models and their implementation into this highly regulated sector has the potential to provide best practices for other sectors. This study demonstrates the benefit of research institutes and industry working together to assess the vast opportunities as well as the practical and ethical challenges of new technologies to ensure they are implemented safely.”

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Stefano Somenzi, Athics: On no-code AI and deploying conversational bots
https://www.artificialintelligence-news.com/2021/11/12/stefano-somenzi-athics-no-code-ai-deploying-conversational-bots/
12 November 2021

No-code AI solutions are helping more businesses to get started on their AI journeys than ever. Athics, through its Crafter.ai platform for deploying conversational bots, knows a thing or two about the topic.

AI News caught up with Stefano Somenzi, CTO at Athics, to get his thoughts on no-code AI and the development of virtual agents.

AI News: Do you think “no-code” will help more businesses to begin their AI journeys?

Stefano Somenzi: The real advantage of “no code” is not just the reduced effort required for businesses to get things done; it is also centred on changing the role of the user who builds the AI solution. In our case, a conversational AI agent.

“No code” means that the AI solution is built not by a data scientist but by the process owner. The process owner is best-suited to know what the AI solution should deliver and how. But, if you need coding, this means that the process owner needs to translate his/her requirements into a data scientist’s language.

This requires much more time and is affected by the “lost in translation” syndrome that hinders many IT projects. That’s why “no code” will play a major role in helping companies approach AI.
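To make the "process owner, not data scientist" point more concrete, the sketch below shows how a conversational flow might be expressed purely as declarative data that a platform interprets. The configuration format and matching logic are hypothetical illustrations, not Crafter.ai's actual schema.

```python
# Hypothetical illustration of the "no code" idea: the process owner declares
# intents and responses as data, and the platform (simulated by `respond` here)
# interprets them. This is not Crafter.ai's actual configuration format.

BOT_DEFINITION = {
    "order_status": {
        "triggers": ["where is my order", "order status", "track my parcel"],
        "response": "Let me check that order for you. Could you share the order number?",
    },
    "refund": {
        "triggers": ["refund", "money back"],
        "response": "Refunds are processed within 14 days. Shall I start one for you?",
    },
}


def respond(message: str, definition: dict) -> str:
    """Match the user's message against declared triggers; the process owner writes no code."""
    text = message.lower()
    for intent in definition.values():
        if any(trigger in text for trigger in intent["triggers"]):
            return intent["response"]
    return "I'm not sure yet; let me hand you over to a colleague."


if __name__ == "__main__":
    print(respond("Hi, where is my order?", BOT_DEFINITION))
```

The point of the declarative shape is that the person who owns the process edits the data, while the platform owns the matching and dialogue logic.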

AN: Research from PwC found that 71 percent of US consumers would rather interact with a human than a chatbot or some other automated process. How can businesses be confident that bots created through your Crafter.ai platform will improve the customer experience rather than worsen it?

SS: Even the most advanced conversational AI agents, like ours, are not suited to replace a direct consumer-to-human interaction if what the consumer is looking for is the empathy that today only a human is able to show during a conversation.

At the same time, inefficiencies, errors, and lack of speed are among the most frequent causes of consumer dissatisfaction that hamper customer service performance.

Advanced conversational AI agents are the right tool to reduce these inefficiencies and errors while delivering strong customer service performance at speed.

AN: What kind of real-time feedback is provided to your clients about their customers’ behaviour?

SS: Recognising the importance of a hybrid environment, where human and machine interaction are wisely mixed to leverage the best of both worlds, our Crafter.ai platform has been designed from the ground up with a module that manages the handover of the conversations between the bot and the call centre agents.

During a conversation, a platform user – with the right authorisation levels – can access an insights dashboard to check the key performance indicators that have been identified for the bot.

This is also true during the handover when agents and their supervisors receive real-time information on the customer behaviour during the company site navigation. Such information includes – and is not limited to – visited pages, form field contents, and clicked CTAs, and can be complemented with data collected from the company CRM.
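As an illustration of the kind of context an agent might receive at handover, the sketch below defines a hypothetical payload covering the signals mentioned above (visited pages, form fields, clicked CTAs, CRM notes). The field names and summary format are assumptions, not the platform's actual data model.

```python
# Hypothetical schema for the real-time context handed to a human agent at
# handover, covering the signals mentioned above. Field names are illustrative,
# not Crafter.ai's data model.

from dataclasses import dataclass, field


@dataclass
class HandoverContext:
    session_id: str
    visited_pages: list[str] = field(default_factory=list)
    form_fields: dict[str, str] = field(default_factory=dict)
    clicked_ctas: list[str] = field(default_factory=list)
    crm_notes: str = ""


def summarise_for_agent(ctx: HandoverContext) -> str:
    """One-line briefing an agent could see when the bot hands the chat over."""
    return (
        f"Session {ctx.session_id}: viewed {len(ctx.visited_pages)} pages, "
        f"clicked {', '.join(ctx.clicked_ctas) or 'no CTAs'}; "
        f"form so far: {ctx.form_fields or 'empty'}."
    )


if __name__ == "__main__":
    ctx = HandoverContext(
        session_id="abc-123",
        visited_pages=["/pricing", "/contact"],
        form_fields={"email": "user@example.com"},
        clicked_ctas=["request-demo"],
    )
    print(summarise_for_agent(ctx))
```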

AN: Europe is home to some of the strictest data regulations in the world. As a European organisation, do you think such regulations are too strict, not strict enough, or about right?

SS: We think that any company that wants to gain the trust of their customers should do their best to go beyond strict regulatory requirements.

AN: As conversational AIs progress to human-like levels, should it always be made clear that a person is speaking to an AI bot?

SS: Yes, a bot should always make clear that it is not human. In the end, this can help people realise just how well bots can perform.

AN: What’s next for Athics?

SS: We have a solid roadmap for Crafter.ai, with many new features and improvements that we bring to our platform every three months.

Our sole focus is on advanced conversational AI agents. We are currently working to add more and more domain-specific capabilities to our bots.

Advanced profiling capabilities are a great area of interest where, thanks to our collaboration with universities and international research centres, we expect to deliver truly innovative solutions to our customers.

AN: Athics is sponsoring and exhibiting at this year’s AI & Big Data Expo Europe. What can attendees expect from your presence at the event? 

SS: Conversational AI agents allow businesses to obtain a balance between optimising resources and giving a top-class customer experience. Although there is no doubt regarding the benefits of adopting virtual agents, the successful integration across a company’s conversational streams needs to be correctly assessed, planned, and executed in order to leverage the full potential.

Athics will be at stand number 280 to welcome attending companies and give an overview of the advantages of integrating a conversational agent, explain how to choose the right product, and how to create a conversational vision that can scale and address organisational goals.

(Photo by Jason Leung on Unsplash)

Athics will be sharing their invaluable insights during this year’s AI & Big Data Expo Global which runs from 23-24 November 2021. Athics’ booth number is 280. Find out more about the event here.
