ChatGPT Prompt Generator: Unleashing the power of AI conversations

In the ever-evolving digital landscape, where AI is rapidly transforming the way we interact and communicate, WebUtility’s ChatGPT Prompt Generator emerges as a game-changer. This innovative tool empowers users to harness the full potential of ChatGPT, one of the most advanced language models developed by OpenAI.

At its core, the ChatGPT Prompt Generator is designed to simplify the process of crafting tailored prompts for ChatGPT. By leveraging the tool’s intuitive interface, users can effortlessly create prompts that align with their specific needs, whether they’re seeking assistance with customer support, content creation, or creative writing endeavors.

ChatGPT prompt generator tool features and benefits

The beauty of this tool lies in its user-friendly approach. With just a few clicks, users can select the desired action, such as ‘Create’, ‘Explain’, ‘Analyse’ or ‘Write’, and then specify the focus area. This level of customization ensures that the generated prompts are contextually relevant and tailored to the user’s requirements.
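As a rough illustration of the mechanics, a generator of this kind can be reduced to a template that splices the chosen action verb and focus area together. The sketch below assumes hypothetical template wording and a hypothetical function name, not WebUtility's actual implementation:

```python
# Minimal sketch of an action + focus prompt generator.
# The template wording and function name are illustrative assumptions.
ACTIONS = {"Create", "Explain", "Analyse", "Write"}

def generate_prompt(action: str, focus: str, audience: str = "a general reader") -> str:
    """Compose a ChatGPT prompt from a chosen action and focus area."""
    if action not in ACTIONS:
        raise ValueError(f"unsupported action: {action}")
    return (
        f"{action} {focus}. Tailor the response for {audience}, "
        "keep it concise, and structure it with clear headings."
    )

print(generate_prompt("Explain", "how neural networks learn", "a curious beginner"))
```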

But the true power of the ChatGPT Prompt Generator extends beyond mere convenience. By automating the prompt creation process, the tool saves users valuable time and effort, enabling them to engage with ChatGPT in a more efficient and productive manner. Gone are the days of generic or irrelevant responses – every conversation is now tailored to the user’s specific needs.

One of the standout features of this tool is its ability to understand natural language and adapt to various contexts. Powered by cutting-edge AI technology, the ChatGPT Prompt Generator ensures that the generated prompts are thoughtful, contextually appropriate, and designed to elicit meaningful responses from ChatGPT.

Whether you’re a business professional seeking to streamline customer interactions, a content creator looking to generate engaging material, or a writer exploring new creative avenues, the ChatGPT Prompt Generator is your ultimate companion. By harnessing the power of AI, this tool empowers you to unlock the limitless potential of ChatGPT and elevate your conversations to new heights.

For those seeking to explore the vast realm of AI tools further, the AI Tools Directory at AI Parabellum is a treasure trove of resources. This comprehensive directory curates a wide range of AI-powered tools, spanning various domains and applications, ensuring that users can find the perfect solution for their specific needs.

Final words

In the rapidly evolving world of AI, WebUtility’s ChatGPT Prompt Generator stands as a beacon of innovation, empowering users to harness the power of cutting-edge technology and unlock new realms of possibility. Embrace the future of AI-driven conversations.

Gil Pekelman, Atera: How businesses can harness the power of AI

TechForge recently caught up with Gil Pekelman, CEO of all-in-one IT management platform, Atera, to discuss how AI is becoming the IT professionals’ number one companion.

Can you tell us a little bit about Atera and what it does?

We launched the Atera all-in-one platform for IT management in 2016, so quite a few years ago. And it’s very broad. It’s everything from technical things like patching and security to ongoing support, alerts, automations, ticket management, reports, and analytics, etc. 

Atera is a single platform that manages all your IT in a single pane of glass. The power of it – and we’re the only company that does this – is it’s a single codebase and single database for all of that. The alternative, for many years now, has been to buy four or five different products, and have them all somehow connected, which is usually very difficult. 

Here, the fact is it’s a single codebase and a single database. Everything is connected and streamlined and very intuitive. So, in essence, you sign up or start a trial and within five minutes, you’re already running with it and onboarding. It’s that intuitive.

We have 12,000+ customers in 120 countries around the world. The UK is our second-largest country in terms of business, currently. The US is the first, but the UK is right behind them.

What are the latest trends you’re seeing develop in AI this year?

From the start, we’ve been dedicated to integrating AI into our company’s DNA. Our goal has always been to use data to identify problems and alert humans so they can fix or avoid issues. Initially, we focused on leveraging data to provide solutions.

Over the past nine years, we’ve aimed to let AI handle mundane IT tasks, freeing up professionals for more engaging work. With early access to ChatGPT and OpenAI tools a year and a half ago, we’ve been pioneering a new trend we call Action AI.

Unlike generic Generative AI, which creates content like songs or emails, Action AI operates in the real world, interacting with hardware and software to perform tasks autonomously. Our AI can understand IT problems and resolve them on its own, moving beyond mere dialogue to real-world action.

Atera offers Copilot and Autopilot. Could you explain what these are?

Autopilot is autonomous. It understands a problem you might have on your computer. It’s a widget on your computer, and it will communicate with you and fix the problem autonomously. However, it has boundaries on what it’s allowed to fix and what it’s not allowed to fix. And everything it’s allowed to deal with has to be bulletproof. 100% secure or private. No opportunity to do any damage or anything like that. 

So if a ticket is opened up, or a complaint is raised, and it’s outside of these boundaries, it will then activate the Copilot. The Copilot augments the IT professional.
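As a rough sketch of how that bounded-autonomy hand-off might look in code – the scope list, function names and messages below are illustrative assumptions, not Atera's implementation:

```python
# Hypothetical sketch of autonomous resolution within safe boundaries,
# with escalation to a human-assisting Copilot outside them.
AUTOPILOT_SCOPE = {"password_reset", "printer_issue", "software_install"}

def run_diagnostics(category: str) -> str:
    """Stand-in for the network and device tests a Copilot might run."""
    return f"suggested fix for '{category}' (awaiting IT approval)"

def handle_ticket(ticket_id: int, category: str) -> str:
    if category in AUTOPILOT_SCOPE:
        # Within the pre-approved, bulletproof boundaries: fix autonomously.
        return f"Autopilot resolved ticket {ticket_id} ({category})"
    # Outside the boundaries: gather evidence and hand off to a human.
    return f"Copilot on ticket {ticket_id}: {run_diagnostics(category)}"

print(handle_ticket(101, "password_reset"))   # handled autonomously
print(handle_ticket(102, "server_outage"))    # escalated with a drafted fix
```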

They’re both companions. The Autopilot is a companion that takes away password resets, printer issues, installs software, etc. – mundane and repetitive issues – and the Copilot is a companion that will help the IT professional deal with the issues they deal with on a day-to-day basis. And it has all kinds of different tools. 

The Copilot is very elaborate. If you have a problem, you can ask it and it will not only give you an answer like ChatGPT, but it will research and run all kinds of tests on the network, the computer, and the printer, and it will come to a conclusion, and create the action that is required to solve it. But it won’t solve it. It will still leave that to the IT professional to think about the different information and decide what they want to do. 

Copilot can save IT professionals nearly half of their workday. While it’s been tested in the field for some time, we’re excited to officially launch it now. Meanwhile, Autopilot is still in the beta phase.

What advice would you give to any companies that are thinking about integrating AI technologies into their business operations?

I strongly recommend that companies begin integrating AI technologies immediately, but it is crucial to research and select the right and secure generative AI tools. Incorporating AI offers numerous advantages: it automates routine tasks, enhances efficiency and productivity, improves accuracy by reducing human error, and speeds up problem resolution. That being said, it’s important to pick the right generative AI tool to help you reap the benefits without compromising on security. For example, with our collaboration with Microsoft, our customers’ data is secure—it stays within the system, and the AI doesn’t use it for training or expanding its database. This ensures safety while delivering substantial benefits.

Our incorporation of AI into our product focuses on two key aspects. First, your IT team no longer has to deal with mundane, frustrating tasks. Second, for end users, issues like non-working printers, forgotten passwords, or slow internet are resolved in seconds or minutes instead of hours. This provides a measurable and significant improvement in efficiency.

There are all kinds of AIs out there. Some of them are more beneficial, some are less. Some are just ChatGPT in disguise, and it’s a very thin layer. What we do literally changes the whole interaction with IT. And we know, when IT has a problem, things stop working, and you stop working. Our solution ensures everything keeps running smoothly.

What can we expect from AI over the next few years?

AI is set to become significantly more intelligent and aware. One remarkable development is its growing ability to reason, predict, and understand data. This capability enables AI to foresee issues and autonomously resolve them, showcasing an astonishing level of reasoning.

We anticipate a dual advancement: a rapid acceleration in AI’s intelligence and a substantial enhancement in its empathetic interactions, as demonstrated in the latest OpenAI release. This evolution will transform how humans engage with AI.

Our work exemplifies this shift. When non-technical users interact with our software to solve problems, AI responds with a highly empathetic, human-like approach. Users feel as though they are speaking to a real IT professional, ensuring a seamless and comforting experience.

As AI continues to evolve, it will become increasingly powerful and capable. Recent breakthroughs in understanding AI’s mechanisms will not only enhance its functionality but also ensure its security and ethical use, reinforcing its role as a force for good.

What plans does Atera have for the next year?

We are excited to announce the upcoming launch of Autopilot, scheduled for release in a few months. While Copilot, our comprehensive suite of advanced tools designed specifically for IT professionals, has already been instrumental in enhancing efficiency and effectiveness, Autopilot represents the next significant advancement.

Currently in beta – so anyone who wants to try it already can – Autopilot directly interacts with end users, automating and resolving common IT issues that typically burden IT staff, such as password resets and printer malfunctions. By addressing these routine tasks, Autopilot allows IT professionals to focus on more strategic and rewarding activities, ultimately improving overall productivity and job satisfaction.

For more information, visit atera.com

Atera is a sponsor of TechEx North America 2024 on June 5-6 in Santa Clara, US. Visit the Atera team at booth 237 for a personalised demo, or to test your IT skills with the company’s first-of-its-kind AIT game, APOLLO IT, for a chance to win a prize.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Is AI going to upend the face of gambling?

AI has made some – to put it mildly – big changes in our world in recent years, and although nobody can say for sure what it’s going to do next or what kind of impact it’s going to have on many aspects of our lives, people are eagerly looking out for those alterations (some with slight trepidation). What does it mean for those who love the buzz and thrill of the casino world?

Precise predictions are, of course, challenging to make, but it’s likely that AI is going to make major alterations soon to the way the gambling world works. Now, we’re going to caveat the whole article by reminding people that AI is a long way from perfect at the moment (and that’s generating some serious concerns and major complaints), but it’s also pretty powerful and has a lot of potential… so let’s unpack that!

What’s AI Already Doing?

You might (or might not) be surprised to learn that AI is already doing a lot of work in the online casino; it’s responsible for a whole lot of the computing that goes on behind the scenes (where, let’s be honest, most players aren’t looking), and it helps casinos across the planet function effectively, ensuring their games work – and work well.

We all know that AI does a lot of the legwork when it comes to making online games play – it’s got to be able to respond effectively to player behavior, make intelligent decisions, and present even the best players with worthy opponents that can test their skills. And does it do so? Yes… staggeringly well at times.

AI does some other major jobs too, one of them being to analyze player behavior for both safety and personalization reasons. In terms of safety, it’s got some hard-hitting advantages: it can learn how an individual player generally behaves when gambling online and look for discrepancies that could indicate fraud is occurring. The account can then be paused while an investigation takes place – potentially saving players from losing large amounts of money to scammers! Sounds pretty good to us.
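As a toy illustration of the idea – a real fraud system is far more sophisticated, and the ten-sample minimum and three-sigma threshold below are arbitrary assumptions – behavioural anomaly detection can be as simple as flagging activity that deviates sharply from a player's learned baseline:

```python
import statistics

def is_anomalous(bet_history: list[float], new_bet: float, threshold: float = 3.0) -> bool:
    """Flag a bet that deviates sharply from the player's usual behaviour."""
    if len(bet_history) < 10:                        # too little data to judge
        return False
    mean = statistics.mean(bet_history)
    spread = statistics.stdev(bet_history) or 1e-9   # guard against zero spread
    return abs(new_bet - mean) / spread > threshold

# A player who usually stakes around £10 suddenly bets £500:
history = [9, 11, 10, 12, 8, 10, 11, 9, 10, 12]
print(is_anomalous(history, 500))   # True – pause the account and investigate
```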

Next, AI bumps up the relevance of offers and bonuses, again by watching what players do online. It will note what promos and deals you click on, respond to your preferences, and create offers that have been honed to exactly match your tastes! This is a great way to enhance the casino experience for players; you’re not just getting any old offer in your inbox, but a proper, useful offer that aligns with the games you play and the things you like (well, most of the time; we all know this isn’t perfect yet!).

The new power casinos have to analyze player behavior can also lead to better regulations and more responsible gambling, creating win-wins for everyone involved.

What’s Coming Next?

Next, let’s turn our sights to that exciting horizon, and the anticipation of what we might see in the future. Well, firstly, it’s going to become better and better at playing games, as mentioned above – the more we train AIs, the better they become at their various tasks. So even the games where they currently don’t excel are likely to become better in the future.

We’re also likely to see better graphics hitting the scene, as AIs improve their ability to generate images well. Got a favorite online live poker game, for example? The graphics might already seem swish but wait until a few years down the line… they’ll be unbelievable, especially if they get coupled with VR at some point! We’re looking forward to seeing how Australian online casino games (and other casino games, for that matter!) develop as time passes.

Their player analysis is likely to improve too. We’ve all laughed at how bad computers seem to be at predicting our likes and dislikes at times (we’re sure you’ve experienced that moment where you get a dozen ads for a product you’ve already purchased, for example), but it is getting better… and we don’t see that trend going away any time soon. Of course, the more data the AI has, the better its predictions become, so it’s likely that casinos are going to bump up their focus on customer loyalty even more, which is no bad thing. It often leads to increased offers and bonuses to encourage you to stick with them, and nobody is complaining about that!

It’s hard to predict what else may come out of these exciting trends, but one thing is for sure: this isn’t slowing down or going away, and we’re expecting to see some intriguing developments in the next few years as more and more companies adopt and invest in AI. Casinos are just one small element, but they’re a fantastic example of where AI is now and what we might see coming as time progresses!

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

SAS aims to make AI accessible regardless of skill set with packaged AI models

SAS, a specialist in data and AI solutions, has unveiled what it describes as a “game-changing approach” for organisations to tackle business challenges head-on.

Introducing lightweight, industry-specific AI models for individual licence, SAS hopes to equip organisations with readily deployable AI technology to productionise real-world use cases with unparalleled efficiency.

Chandana Gopal, research director, Future of Intelligence, IDC, said: “SAS is evolving its portfolio to meet wider user needs and capture market share with innovative new offerings,

“An area that is ripe for SAS is productising models built on SAS’ core assets, talent and IP from its wealth of experience working with customers to solve industry problems.”

In today’s market, the consumption of models is primarily focused on large language models (LLMs) for generative AI. In reality, LLMs are a very small part of the modelling needs of real-world production deployments of AI and decision making for businesses. With the new offering, SAS is moving beyond LLMs and delivering industry-proven deterministic AI models spanning use cases such as fraud detection, supply chain optimisation, entity management, document conversation, health care payment integrity and more.

Unlike traditional AI implementations that can be cumbersome and time-consuming, SAS’ industry-specific models are engineered for quick integration, enabling organisations to operationalise trustworthy AI technology and accelerate the realisation of tangible benefits and trusted results.

Expanding market footprint

Organisations are facing pressure to compete effectively and are looking to AI to gain an edge. At the same time, staffing data science teams has never been more challenging due to AI skills shortages. Consequently, businesses are demanding agility in using AI to solve problems and require flexible AI solutions to quickly drive business outcomes. SAS’ easy-to-use, yet powerful models tuned for the enterprise enable organisations to benefit from a half-century of SAS’ leadership across industries.

Delivering industry models as packaged offerings is one outcome of SAS’ commitment of $1 billion to AI-powered industry solutions. As outlined in the May 2023 announcement, the investment in AI builds on SAS’ decades-long focus on providing packaged solutions to address industry challenges in banking, government, health care and more.

Udo Sglavo, VP for AI and Analytics, SAS, said: “Models are the perfect complement to our existing solutions and SAS Viya platform offerings and cater to diverse business needs across various audiences, ensuring that innovation reaches every corner of our ecosystem. 

“By tailoring our approach to understanding specific industry needs, our frameworks empower businesses to flourish in their distinctive environments.”

Bringing AI to the masses

SAS is democratising AI by offering out-of-the-box, lightweight AI models – making AI accessible regardless of skill set – starting with an AI assistant for warehouse space optimisation. Leveraging technology like large language models, these assistants cater to non-technical users, translating interactions into optimised workflows seamlessly and aiding in faster planning decisions.

Sglavo said: “SAS Models provide organisations with flexible, timely and accessible AI that aligns with industry challenges.

“Whether you’re embarking on your AI journey or seeking to accelerate the expansion of AI across your enterprise, SAS offers unparalleled depth and breadth in addressing your business’s unique needs.”

The first SAS Models are expected to be generally available later this year.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

80% of AI decision makers are worried about data privacy and security

Organisations are enthusiastic about generative AI’s potential for increasing their business and people productivity, but lack of strategic planning and talent shortages are preventing them from realising its true value.

This is according to a study conducted in early 2024 by Coleman Parkes Research and sponsored by data analytics firm SAS, which surveyed 300 US GenAI strategy or data analytics decision makers to pulse check major areas of investment and the hurdles organisations are facing.

Marinela Profi, strategic AI advisor at SAS, said: “Organisations are realising that large language models (LLMs) alone don’t solve business challenges. 

“GenAI should be treated as an ideal contributor to hyper automation and the acceleration of existing processes and systems rather than the new shiny toy that will help organisations realise all their business aspirations. Time spent developing a progressive strategy and investing in technology that offers integration, governance and explainability of LLMs are crucial steps all organisations should take before jumping in with both feet and getting ‘locked in.’”

Organisations are hitting stumbling blocks in four key areas of implementation:

• Increasing trust in data usage and achieving compliance. Only one in 10 organisations has a reliable system in place to measure bias and privacy risk in LLMs. Moreover, 93% of U.S. businesses lack a comprehensive governance framework for GenAI, and the majority are at risk of noncompliance when it comes to regulation.

• Integrating GenAI into existing systems and processes. Organisations reveal they’re experiencing compatibility issues when trying to combine GenAI with their current systems.

• Talent and skills. In-house GenAI is lacking. As HR departments encounter a scarcity of suitable hires, organisational leaders worry they don’t have access to the necessary skills to make the most of their GenAI investment.

• Predicting costs. Leaders cite prohibitive direct and indirect costs associated with using LLMs. Model creators provide a token cost estimate (which organisations now realise is prohibitive), but estimating the costs of private knowledge preparation, training and ModelOps management is a lengthy and complex exercise.

Profi added: “It’s going to come down to identifying real-world use cases that deliver the highest value and solve human needs in a sustainable and scalable manner. 

“Through this study, we’re continuing our commitment to helping organisations stay relevant, invest their money wisely and remain resilient. In an era where AI technology evolves almost daily, competitive advantage is highly dependent on the ability to embrace the resiliency rules.”

Details of the study were unveiled today at SAS Innovate in Las Vegas, SAS Software’s AI and analytics conference for business leaders, technical users and SAS partners.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Kamal Ahluwalia, Ikigai Labs: How to take your business to the next level with generative AI

AI News caught up with president of Ikigai Labs, Kamal Ahluwalia, to discuss all things gen AI, including top tips on how to adopt and utilise the tech, and the importance of embedding ethics into AI design.

Could you tell us a little bit about Ikigai Labs and how it can help companies?

Ikigai is helping organisations transform sparse, siloed enterprise data into predictive and actionable insights with a generative AI platform specifically designed for structured, tabular data.  

A significant portion of enterprise data is structured, tabular data, residing in systems like SAP and Salesforce. This data drives the planning and forecasting for an entire business. While there is a lot of excitement around Large Language Models (LLMs), which are great for unstructured data like text, Ikigai’s patented Large Graphical Models (LGMs), developed out of MIT, are focused on solving problems using structured data.  

Ikigai’s solution focuses particularly on time-series datasets, as enterprises run on four key time series: sales, products, employees, and capital/cash. Understanding how these time series come together in critical moments, such as launching a new product or entering a new geography, is crucial for making better decisions that drive optimal outcomes. 

How would you describe the current generative AI landscape, and how do you envision it developing in the future? 

The technologies that have captured the imagination, such as LLMs from OpenAI, Anthropic, and others, come from a consumer background. They were trained on internet-scale data, and the training datasets are only getting larger, which requires significant computing power and storage. It took $100m to train GPT-4, and GPT-5 is expected to cost $2.5bn.

This reality works in a consumer setting, where costs can be shared across a very large user set, and some mistakes are just part of the training process. But in the enterprise, mistakes cannot be tolerated, hallucinations are not an option, and accuracy is paramount. Additionally, the cost of training a model on internet-scale data is just not affordable, and companies that leverage a foundational model risk exposure of their IP and other sensitive data.  

While some companies have gone the route of building their own tech stack so LLMs can be used in a safe environment, most organisations lack the talent and resources to build it themselves. 

In spite of the challenges, enterprises want the kind of experience that LLMs provide. But the results need to be accurate – even when the data is sparse – and there must be a way to keep confidential data out of a foundational model. It’s also critical to find ways to lower the total cost of ownership, including the cost to train and upgrade the models, reliance on GPUs, and other issues related to governance and data retention. All of this leads to a very different set of solutions than what we currently have. 

How can companies create a strategy to maximise the benefits of generative AI? 

While much has been written about Large Language Models (LLMs) and their potential applications, many customers are asking “how do I build differentiation?”  

With LLMs, nearly everyone will have access to the same capabilities, such as chatbot experiences or generating marketing emails and content – if everyone has the same use cases, it’s not a differentiator. 

The key is to shift the focus from generic use cases to finding areas of optimisation and understanding specific to your business and circumstances. For example, if you’re in manufacturing and need to move operations out of China, how do you plan for uncertainty in logistics, labour, and other factors? Or, if you want to build more eco-friendly products, materials, vendors, and cost structures will change. How do you model this? 

These use cases are some of the ways companies are attempting to use AI to run their business and plan in an uncertain world. Finding specificity and tailoring the technology to your unique needs is probably the best way to use AI to find true competitive advantage.  

What are the main challenges companies face when deploying generative AI and how can these be overcome? 

Listening to customers, we’ve learned that while many have experimented with generative AI, only a fraction have pushed things through to production due to prohibitive costs and security concerns. But what if your models could be trained just on your own data, running on CPUs rather than requiring GPUs, with accurate results and transparency around how you’re getting those results? What if all the regulatory and compliance issues were addressed, leaving no questions about where the data came from or how much data is being retrained? This is what Ikigai is bringing to the table with Large Graphical Models.  

One challenge we’ve helped businesses address is the data problem. Nearly 100% of organisations are working with limited or imperfect data, and in many cases, this is a barrier to doing anything with AI. Companies often talk about data clean-up, but in reality, waiting for perfect data can hinder progress. AI solutions that can work with limited, sparse data are essential, as they allow companies to learn from what they have and account for change management. 

The other challenge is how internal teams can partner with the technology for better outcomes. Especially in regulated industries, human oversight, validation, and reinforcement learning are necessary. Adding an expert in the loop ensures that AI is not making decisions in a vacuum, so finding solutions that incorporate human expertise is key. 

To what extent do you think adopting generative AI successfully requires a shift in company culture and mindset? 

Successfully adopting generative AI requires a significant shift in company culture and mindset, with strong commitment from executives and continuous education. I saw this firsthand at Eightfold when we were bringing our AI platform to companies in over 140 countries. I always recommend that teams first educate executives on what’s possible, how to do it, and how to get there. They need to have the commitment to see it through, which involves some experimentation and some committed course of action. They must also understand the expectations placed on colleagues, so they can be prepared for AI becoming a part of daily life.

Top-down commitment and communication from executives go a long way, as there’s a lot of fear-mongering suggesting that AI will take jobs, and executives need to set the tone that, while AI won’t eliminate jobs outright, everyone’s job is going to change in the next couple of years, not just for people at the bottom or middle levels, but for everyone. Ongoing education throughout the deployment is key for teams learning how to get value from the tools, and adapt the way they work to incorporate the new skillsets.

It’s also important to adopt technologies that play to the reality of the enterprise. For example, you have to let go of the idea that you need to get all your data in order to take action. In time-series forecasting, by the time you’ve taken four quarters to clean up data, there’s more data available, and it’s probably a mess. If you keep waiting for perfect data, you won’t be able to use your data at all. So AI solutions that can work with limited, sparse data are crucial, as you have to be able to learn from what you have. 

Another important aspect is adding an expert in the loop. It would be a mistake to assume AI is magic. There are a lot of decisions, especially in regulated industries, where you can’t have AI just make the decision. You need oversight, validation, and reinforcement learning – this is exactly how consumer solutions became so good.  

Are there any case studies you could share with us regarding companies successfully utilising generative AI? 

One interesting example is a Marketplace customer that is using us to rationalise their product catalogue. They’re looking to understand the optimal number of SKUs to carry, so they can reduce their inventory carrying costs while still meeting customer needs. Another partner does workforce planning, forecasting, and scheduling, using us for labour balancing in hospitals, retail, and hospitality companies. In their case, all their data is sitting in different systems, and they must bring it into one view so they can balance employee wellness with operational excellence. But because we can support a wide variety of use cases, we work with clients doing everything from forecasting product usage as part of a move to a consumption-based model, to fraud detection. 

You recently launched an AI Ethics Council. What kind of people are on this council and what is its purpose? 

Our AI Ethics Council is all about making sure that the AI technology we’re building is grounded in ethics and responsible design. It’s a core part of who we are as a company, and I’m humbled and honoured to be a part of it alongside such an impressive group of individuals. Our council includes luminaries like Dr. Munther Dahleh, the Founding Director of the Institute for Data, Systems, and Society (IDSS) and a Professor at MIT; Aram A. Gavoor, Associate Dean at George Washington University and a recognised scholar in administrative law and national security; Dr. Michael Kearns, the National Center Chair for Computer and Information Science at the University of Pennsylvania; and Dr. Michael I. Jordan, a Distinguished Professor at UC Berkeley in the Departments of Electrical Engineering and Computer Science, and Statistics.

The purpose of our AI Ethics Council is to tackle pressing ethical and security issues impacting AI development and usage. As AI rapidly becomes central to consumers and businesses across nearly every industry, we believe it is crucial to prioritise responsible development and cannot ignore the need for ethical considerations. The council will convene quarterly to discuss important topics such as AI governance, data minimisation, confidentiality, lawfulness, accuracy and more. Following each meeting, the council will publish recommendations for actions and next steps that organisations should consider moving forward. As part of Ikigai Labs’ commitment to ethical AI deployment and innovation, we will implement the action items recommended by the council. 

Ikigai Labs raised $25m funding in August last year. How will this help develop the company, its offerings and, ultimately, your customers? 

We have a strong foundation of research and innovation coming out of our core team with MIT, so the funding this time is focused on making the solution more robust, as well as bringing on the team that works with the clients and partners.  

We can solve a lot of problems but are staying focused on solving just a few meaningful ones through time-series super apps. We know that every company runs on four time series, so the goal is covering these in depth and with speed: things like sales forecasting, consumption forecasting, discount forecasting, how to sunset products, catalogue optimisation, etc. We’re excited and looking forward to putting GenAI for tabular data into the hands of as many customers as possible. 

Kamal will take part in a panel discussion titled ‘Barriers to Overcome: People, Processes and Technology’ at the AI & Big Data Expo in Santa Clara on June 5, 2024. You can find all the details here.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Large language models could ‘revolutionise the finance sector within two years’

Large Language Models (LLMs) have the potential to improve efficiency and safety in the finance sector by detecting fraud, generating financial insights and automating customer service, according to research by The Alan Turing Institute.

Because LLMs are able to analyse large amounts of data quickly and generate coherent text, there is growing understanding of their potential to improve services across a range of sectors including healthcare, law, education and financial services, including banking, insurance and financial planning.

This report, which is the first to explore the adoption of LLMs across the finance ecosystem, shows that people working in this area have already begun to use LLMs to support a variety of internal processes, such as the review of regulations, and are assessing their potential for supporting external activity like the delivery of advisory and trading services.

Alongside a literature survey, researchers held a workshop of 43 professionals from major high street and investment banks, regulators, insurers, payment service providers, government and legal professions.

The majority of workshop participants (52%) are already using these models to enhance performance in information-orientated tasks, from the management of meeting notes to cyber security and compliance insight, while 29% use them to boost critical thinking skills, and another 16% employ them to break down complex tasks.

The sector is also already establishing systems to enhance productivity through rapid analysis of large amounts of text to simplify decision-making processes, risk profiling and to improve investment research and back-office operations.

When asked about the future of LLMs in the finance sector, participants felt that LLMs would be integrated into services like investment banking and venture capital strategy development within two years.

They also thought it likely that LLMs would be integrated to improve interactions between people and machines; for example, dictation and embedded AI assistants could reduce the complexity of knowledge-intensive tasks such as the review of regulations.

But participants also acknowledged that the technology poses risks which will limit its usage. Financial institutions are subject to extensive regulatory standards and obligations which limits their ability to use AI systems that they cannot explain and do not generate output predictably, consistently or without risk of error.

Based on their findings, the authors recommend that financial services professionals, regulators and policy makers collaborate across the sector to share and develop knowledge about implementing and using LLMs, particularly related to safety concerns. They also suggest that the growing interest in open-source models should be explored and could be used and maintained effectively, but that mitigating security and privacy concerns would be a high priority.

Professor Carsten Maple, lead author and Turing Fellow at The Alan Turing Institute, said: “Banks and other financial institutions have always been quick to adopt new technologies to make their operations more efficient and the emergence of LLMs is no different. By bringing together experts across the finance ecosystem, we have managed to create a common understanding of the use cases, risks, value and timeline for implementation of these technologies at scale.”

Professor Lukasz Szpruch, programme director for Finance and Economics at The Alan Turing Institute, said: “It’s really positive that the financial sector is benefiting from the emergence of large language models and their implementation into this highly regulated sector has the potential to provide best practices for other sectors. This study demonstrates the benefit of research institutes and industry working together to assess the vast opportunities as well as the practical and ethical challenges of new technologies to ensure they are implemented safely.”

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

STFC Hartree Centre signs agreement with Lenovo for state-of-the-art supercomputer

The Science and Technology Facilities Council (STFC), a UK government agency that carries out research in science and engineering, has signed an agreement with Lenovo for the installation of a powerful new supercomputer for the STFC Hartree Centre.

Ten times more powerful than its predecessor, but using less electricity thanks to Lenovo’s direct water cooling, the new supercomputer will power AI research for UK industry.

The new supercomputer is part of the Hartree Centre’s £210 million Hartree National Centre for Digital Innovation (HNCDI) programme, which provides UK industry access to state-of-the-art digital technologies and expertise and is complementary to investments in the wider AI Research Resource (AIRR). 

It will support the HNCDI’s rapidly expanding supercomputing and AI activities, and will be installed later this year at its new £30 million supercomputing centre, currently under construction.   

A leap in supercomputing processing power

A 44.7 petaflop system, the Lenovo ThinkSystem Neptune will perform more than 44 quadrillion floating point operations (calculations) per second.

To put this into context, if you were to carry out one calculation per second, it would take nearly 1400 million years to reach this number.
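That comparison follows directly from the headline figure – a quick back-of-the-envelope check:

```python
# Replaying one second of the machine's work at one calculation per second.
ops_per_second = 44.7e15                       # 44.7 petaflops
seconds_per_year = 60 * 60 * 24 * 365.25
years = ops_per_second / seconds_per_year
print(f"{years / 1e6:,.0f} million years")     # ~1,417 million, i.e. about 1.4 billion
```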

The new system, based on graphics processing units (GPUs), is ideal for AI workloads, and marks a significant leap for the Hartree Centre’s capabilities, with ten times the processing power of its current system, Scafell Pike. Furthermore, it will be more power-efficient, taking up less space and using less electricity per unit of performance.

The new supercomputer uses innovative warm water cooling which can reduce energy demands by up to 40% while boosting performance by up to 10%.

Powering UK Industry with AI

Located at STFC’s Daresbury Laboratory, at Sci-Tech Daresbury in the Liverpool City Region, the Hartree Centre is the UK’s only supercomputing centre dedicated to industry engagement. This capability will drive forward innovation in industry use cases and applications.

Its HNCDI programme plays a vital role in equipping businesses with skills and technical knowledge to adopt emerging digital technologies, including supercomputing, quantum computing, and AI.

It is enabling productivity, innovation and growth in UK organisations through access to these advanced supercomputing technologies, which are typically available only to academia and large-scale industry.

Solving global challenges

At the HNCDI, the new supercomputer will be strategically positioned to contribute to discovery-led industrial research, focusing on solutions to global challenges in areas such as:

  • weather and climate modelling
  • cleaner energy initiatives
  • drug discovery
  • health technologies
  • new materials
  • automotive advancements
  • legal applications

This includes the Hartree Centre’s continued collaboration with the UK Atomic Energy Authority, which is using the Centre to research new reactors for clean nuclear fusion energy.

Ultimately, the new supercomputer will reduce the time and cost associated with making research breakthroughs, including for organisations such as the Met Office, Unilever and Rolls-Royce, which the Hartree Centre has continued to work with over the last decade.

Kate Royse, director, STFC Hartree Centre, said: “We are very excited to be working with Lenovo on our next generation of supercomputer at the Hartree Centre. Our mission is to equip UK industry with the knowledge, skills and compute needed to fully unlock the potential of advanced digital technologies. With our new supercomputer we will be able to support UK industry in the use of big data and AI technologies to enable UK businesses to take a leading role internationally on the responsible adoption and exploitation of AI technology.”

Noam Rosen, EMEA director HPC/AI, Lenovo, said: “Lenovo is equally enthusiastic about our collaboration with the Hartree Centre on its ambitious journey to revolutionize HPC and AI capabilities in the UK. Our collaboration is not just about delivering a state-of-the-art supercomputer; it’s about building a versatile, robust, and powerful system tailored to meet the Centre’s diverse and evolving needs. From advanced modeling and simulation in various scientific disciplines to pioneering work in AI and machine learning, this new power-efficient supercomputer will be a cornerstone for innovation, pushing the boundaries of big data and AI technologies to bolster the UK industry’s global leadership in responsible and ethical technology adoption.”

Mark Thomson, Executive Chair, STFC, said: “STFC’s agreement with Lenovo is an exciting milestone in our mission to provide UK businesses with access to the vital infrastructure and expertise that will help them to grow and succeed on a global scale, which in turn will drive productivity and job creation. By enabling UK industry to adopt advanced digital technologies, we are supporting the government ambitions to build a competitive and innovative digital economy that will both turbo drive economic growth and reap societal benefits for the UK, as well as for the UK to be a global AI superpower.”

The HNCDI new Lenovo supercomputer in numbers:

  • The ThinkSystem Neptune can perform the same number of calculations as 20,790 top-of-the-range smartphones
  • A calculation that takes an hour on a market-leading smartphone would take just 0.17 seconds
  • It can hold 4500 hours of 4k video in its working memory
  • It can hold 60,000 hours of 4k video in its hard disks
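Taking the published 44.7 petaflop figure at face value, the smartphone comparison implies a per-phone rate of roughly 2.15 teraflops – plausible for a flagship handset, though the per-phone figure is inferred here, not published. The sketch below cross-checks the two headline equivalences:

```python
# Cross-checking the headline equivalences from the 44.7 petaflop figure.
system_flops = 44.7e15
phones = 20_790

implied_phone_flops = system_flops / phones
print(f"implied phone speed: {implied_phone_flops / 1e12:.2f} teraflops")  # ~2.15

# One smartphone-hour of computation, replayed across the whole system:
print(f"{3600 / phones:.2f} seconds")                                      # ~0.17
```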

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Stanhope raises £2.3m for AI that teaches machines to ‘make human-like decisions’

Stanhope AI – a company applying decades of neuroscience research to teach machines how to make human-like decisions in the real world – has raised £2.3m in seed funding led by the UCL Technology Fund.

Creator Fund also participated, along with MMC Ventures, Moonfire Ventures, Rockmount Capital and leading angel investors.

Stanhope AI was founded as a spinout from University College London, supported by UCL Business, by three of the most eminent names in neuroscience and AI research – CEO Professor Rosalyn Moran (former Deputy Director of King’s Institute for Artificial Intelligence), Director Karl Friston, Professor at the UCL Queen Square Institute of Neurology and Technical Advisor Dr Biswa Sengupta (MD of AI and Cloud products at JP Morgan Chase). 

By using key neuroscience principles and applying them to AI and mathematics, Stanhope AI is at the forefront of the new generation of AI technology known as ‘agentic’ AI. The team has built algorithms that, like the human brain, are always trying to guess what will happen next; learning from any discrepancies between predicted and actual events to continuously update their “internal models of the world.” Instead of training vast LLMs to make decisions based on seen data, Stanhope’s agentic AI models are in charge of their own learning. They autonomously decode their environments and rebuild and refine their “world models” using real-time data, continuously fed to them via onboard sensors.

The rise of agentic AI

This approach, and Stanhope AI’s technology, are based on the neuroscience principle of Active Inference – the idea that our brains, in order to minimise free energy, are constantly making predictions about incoming sensory data around us. As this data changes, our brains adapt and update our predictions in response to rebuild and refine our world view. 
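
For readers who want the formal version, the Active Inference literature standardly defines variational free energy as below; this formulation comes from that literature, not from the article itself.

```latex
% Variational free energy, as standardly defined in the Active
% Inference literature. q(s) is the agent's approximate belief over
% hidden states s; p(o, s) is its generative model of observations
% o and states s.
F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
  = D_{\mathrm{KL}}\big[q(s) \,\|\, p(s \mid o)\big] - \ln p(o)
```

Because the KL divergence term is non-negative, minimising F simultaneously pulls the agent’s beliefs towards the true posterior and maximises the evidence for its observations, i.e. minimises surprise.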

This is very different from the traditional machine learning methods used to train today’s AI systems such as LLMs. Today’s models can only operate within the bounds of the training they are given, and can only make best-guess decisions based on the information they have; they can’t learn on the go. They also require enormous amounts of processing power and energy to train and run, as well as vast amounts of previously seen data.

By contrast, Stanhope AI’s Active Inference models are truly autonomous: they constantly rebuild and refine their predictions. Uncertainty is minimised by default, which curbs the risk of the AI hallucinating about what it thinks is true and moves Stanhope’s models towards reasoning and human-like decision-making. What’s more, by drastically reducing the size of the models and the energy required to run them, Stanhope AI’s models can operate on small devices such as drones.

“The most all-encompassing idea since natural selection”

Stanhope AI’s approach is possible because of its founding team’s extensive research into the neuroscience principles of Active Inference, as well as free energy. Director Indeed Professor Friston, a world-renowned neuroscientist at UCL whose work has been cited twice as many times as Albert Einstein, is the inventor of the Free Energy Theory Principle. 

Friston’s principle theory centres on how our brains minimise surprise and uncertainty. It explains that all living things are driven to minimise free energy, and thus the energy needed to predict and perceive the world. Such is its impact, the Free Energy Theory Principle has been described as the “most all-encompassing idea since the theory of natural selection.” Active Inference sits within this theory to explain the process our brains use in order to minimise this energy. This idea infuses Stanhope AI’s work, led by Professor Moran, a specialist in Active Inference and its application through AI; and Dr Biswa Sengupta, whose doctoral research was in dynamical systems, optimisation and energy efficiency from the University of Cambridge. 

Real-world application

In the immediate term, the technology is being tested with delivery drones and autonomous machines used by partners including Germany’s Federal Agency for Disruptive Innovation and the Royal Navy. In the long term, the technology holds huge promise in the realms of manufacturing, industrial robotics and embodied AI. The investment will be used to further the company’s development of its agentic AI models and the practical application of its research.  

Professor Rosalyn Moran, CEO and co-founder of Stanhope AI, said: “Our mission at Stanhope AI is to bridge the gap between neuroscience and artificial intelligence, creating a new generation of AI systems that can think, adapt, and decide like humans. We believe this technology will transform the capabilities of AI and robotics and make them more impactful in real-world scenarios. We trust the math, and we’re delighted to have the backing of investors like UCL Technology Fund, who deeply understand the science behind this technology; their support will be significant on our journey to revolutionise AI technology.”

David Grimm, partner at UCL Technology Fund, said: “AI startups may be some of the hottest investments right now, but few have the calibre and deep scientific and technical know-how of the Stanhope AI team. This is emblematic of their unique approach, combining neuroscience insights with advanced AI, which presents a groundbreaking opportunity to advance the field and address some of the most challenging problems in AI today. We can’t wait to see what this team achieves.”

Marina Santilli, Associate Director at UCL Business, added: “The promise offered by Stanhope AI’s approach to artificial intelligence is hugely exciting, providing hope for powerful yet energy-light models. UCLB is delighted to have been able to support the formation of a company built on decades of fundamental research at UCL led by Professor Friston, developing the Free Energy Principle.”

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Jaromir Dzialo, Exfluency: How companies can benefit from LLMs
Fri, 20 Oct 2023

Can you tell us a little bit about Exfluency and what the company does?

Exfluency is a tech company providing hybrid intelligence solutions for multilingual communication. By harnessing AI and blockchain technology, we provide tech-savvy companies with access to modern language tools. Our goal is to make linguistic assets as precious as any other corporate asset.

What tech trends have you noticed developing in the multilingual communication space?

As in every other walk of life, AI in general, and ChatGPT specifically, is dominating the agenda. Companies operating in the language space are either panicking or scrambling to play catch-up. The main challenge is the size of the tech deficit in this vertical: innovation, and especially AI innovation, is not a plug-in.

What are some of the benefits of using LLMs?

Off-the-shelf LLMs (ChatGPT, Bard, etc.) have a quick-fix attraction. Magically, it seems, well-formulated answers appear on your screen. One cannot fail to be impressed.

The true benefits of LLMs will be realised by the players who can provide immutable data with which to feed the models. They are what we feed them.

What do LLMs rely on when learning language?

Overall, LLMs learn language by analysing vast amounts of text data, understanding patterns and relationships, and using statistical methods to generate contextually appropriate responses. Their ability to generalise from data and generate coherent text makes them versatile tools for various language-related tasks.

Large Language Models (LLMs) like GPT-4 rely on a combination of data, pattern recognition, and statistical relationships to learn language. Here are the key components they rely on:

  1. Data: LLMs are trained on vast amounts of text data from the internet. This data includes a wide range of sources, such as books, articles, websites, and more. The diverse nature of the data helps the model learn a wide variety of language patterns, styles, and topics.
  2. Patterns and Relationships: LLMs learn language by identifying patterns and relationships within the data. They analyse the co-occurrence of words, phrases, and sentences to understand how they fit together grammatically and semantically.
  3. Statistical Learning: LLMs use statistical techniques to learn the probabilities of word sequences. They estimate the likelihood of a word appearing given the previous words in a sentence, which enables them to generate coherent and contextually relevant text (a toy version of this idea is sketched after this list).
  4. Contextual Information: LLMs focus on contextual understanding. They consider not only the preceding words but also the entire context of a sentence or passage. This contextual information helps them disambiguate words with multiple meanings and produce more accurate and contextually appropriate responses.
  5. Attention Mechanisms: Many LLMs, including GPT-4, employ attention mechanisms. These mechanisms allow the model to weigh the importance of different words in a sentence based on the context. This helps the model focus on relevant information while generating responses.
  6. Transfer Learning: LLMs use a technique called transfer learning. They are pretrained on a large dataset and then fine-tuned on specific tasks. This allows the model to leverage its broad language knowledge from pretraining while adapting to perform specialised tasks like translation, summarisation, or conversation.
  7. Encoder-Decoder Architecture: In certain tasks like translation or summarisation, LLMs use an encoder-decoder architecture. The encoder processes the input text and converts it into a context-rich representation, which the decoder then uses to generate the output text in the desired language or format.
  8. Feedback Loop: LLMs can learn from user interactions. When a user provides corrections or feedback on generated text, the model can adjust its responses based on that feedback over time, improving its performance.
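
As a toy illustration of the statistical learning described in item 3, the Python snippet below estimates next-word probabilities from bigram counts. The corpus is invented for the example; real LLMs learn far richer patterns with neural networks over vastly larger data, not raw counts.

```python
from collections import Counter, defaultdict

# Toy next-word model: estimate P(next word | current word) from
# bigram counts over a tiny invented corpus. Illustrative only.

corpus = "the cat sat on the mat the cat ran".split()

bigram_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    bigram_counts[current][nxt] += 1

def next_word_probs(word):
    """Return the empirical distribution over words following `word`."""
    counts = bigram_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))  # {'cat': 0.666..., 'mat': 0.333...}
```

An LLM does the same job at scale: given the preceding context, it outputs a probability distribution over the next token and samples from it.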

What are some of the challenges of using LLMs?

A fundamental issue, which has been there ever since we started giving away data to Google, Facebook and the like, is that “we” are the product. The big players are earning untold billions on our rush to feed their apps with our data. ChatGPT, for example, is enjoying the fastest-growing onboarding in history. Just think how Microsoft has benefitted from the millions of prompts people have already thrown at it.

Open LLMs also hallucinate and, because their answers to prompts are so well formulated, one can easily be duped into believing what they tell you. To make matters worse, there are no references or links to tell you where they sourced their answers.

How can these challenges be overcome?

LLMs are what we feed them. Blockchain technology allows us to create an immutable audit trail and, with it, immutable, clean data. There is no need to trawl the internet. In this manner we are in complete control of what data goes in, can keep it confidential, and can support it with a wealth of useful metadata. It can also be multilingual!

Secondly, as this data is stored in our databases, we can also provide the necessary source links. If you can’t quite believe the answer to your prompt, open the source data directly to see who wrote it, when, in which language and in which context.
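
As an illustration of the kind of tamper-evident, metadata-rich audit trail described here, the sketch below chains records together with SHA-256 hashes, the basic mechanism behind blockchain-style immutability. It is a minimal sketch with invented field names, not Exfluency’s actual implementation.

```python
import hashlib
import json
import time

# Hash-chained audit trail: each record stores the hash of the
# previous record, so editing any earlier entry invalidates every
# hash after it. Field names are hypothetical, for illustration.

def make_record(prev_hash, text, author, language):
    record = {
        "prev_hash": prev_hash,   # links this entry to the previous one
        "text": text,
        "author": author,         # source metadata travels with the data
        "language": language,
        "timestamp": time.time(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    return record

genesis = make_record("0" * 64, "Initial glossary entry", "alice", "en")
entry2 = make_record(genesis["hash"], "Eintrag auf Deutsch", "bob", "de")

# Verification: recompute each hash and compare it to the stored one.
for rec in (genesis, entry2):
    body = {k: v for k, v in rec.items() if k != "hash"}
    recomputed = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    assert recomputed == rec["hash"]
print("audit trail intact")
```

Because author, timestamp and language ride along in each record, the same chain that proves immutability can also answer the “who wrote it, when, and in which context” question raised above.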

What advice would you give to companies that want to utilise private, anonymised LLMs for multilingual communication?

Make sure your data is immutable, multilingual, of a high quality – and stored for your eyes only. LLMs then become a true game changer.

What do you think the future holds for multilingual communication?

As in many other walks of life, language will embrace forms of hybrid intelligence. For example, in the Exfluency ecosystem, the AI-driven workflow takes care of 90% of the translation – our fantastic bilingual subject matter experts then only need to focus on the final 10%. This balance will change over time – AI will take an ever-increasing proportion of the workload. But the human input will remain crucial. The concept is encapsulated in our strapline: Powered by technology, perfected by people.

What plans does Exfluency have for the coming year?

Lots! We aim to roll out the tech to new verticals and build communities of SMEs to serve them. There is also great interest in our Knowledge Mining app, designed to leverage the information hidden away in the millions of linguistic assets. 2024 is going to be exciting!

  • Jaromir Dzialo is the co-founder and CTO of Exfluency, which offers affordable AI-powered language and security solutions with global talent networks for organisations of all sizes.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.
