James Bourne, Author at AI News
https://www.artificialintelligence-news.com

The rise of intelligent automation as a strategic differentiator
https://www.artificialintelligence-news.com/2024/05/17/the-rise-of-intelligent-automation-as-a-strategic-differentiator/
Fri, 17 May 2024 09:33:27 +0000

The post The rise of intelligent automation as a strategic differentiator appeared first on AI News.

Intelligent automation (IA) technologies are graduating from operational tools to strategic assets. The impact on the bottom line is even more impressive.

A study from SS&C Blue Prism, conducted by Forrester Consulting and published in April, put together a composite organisation representative of five customers interviewed. The conclusion was that, over three years, there were key gains in IA from greater productivity to compliance cost avoidance, to improved employee experience and retention. This represented an overall net present value of $53.4 million (£42.5m) per customer.

Yet this may just be the tip of the iceberg. Dan Segura, enterprise sales manager at SS&C Blue Prism, notes one healthcare client who, in what is described as a conservative estimate, delivered savings of more than $140m overall on cost avoidance and recoup. Another healthcare client delivered a use case with a claimed $43m benefit on its own; a bot which recouped overtime pay for nurses and staff during the pandemic.

“They built it in an afternoon,” Segura explains. “It’s a perfect example of being in the right place at the right time; and having the right skills and technology being ready.”

Many of the technologies which comprise intelligent automation have been around for a long time, such as classic RPA (robotic process automation) or OCR (optical character recognition). SS&C Blue Prism’s document automation, which forms part of the latter, is described as a ‘game-changer’ by Segura. “There’s a lot of these processes, whether it’s going to be executed by a robot or a human,” he says. “First things first, we’ve got to get data off documents.

“Automation is not just doing simple tasks anymore thanks to the introduction of AI and generative AI,” he adds. “There’s now more understanding, whether it’s assessing information from documents, information from a message, structuring things that are semi-structured or unstructured, to drive the process or complete the process.”
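The extraction step Segura describes, getting data off documents and structuring what is semi-structured, can be sketched in a few lines of Python. This is an illustrative toy only: the field names, regex patterns, and sample text are invented for the example and are not part of SS&C Blue Prism's actual document automation.

```python
import re

def extract_invoice_fields(raw_text: str) -> dict:
    """Pull key fields out of semi-structured OCR output (hypothetical fields)."""
    patterns = {
        "invoice_number": r"Invoice\s*(?:No\.?|#)\s*:?\s*([\w-]+)",
        "total": r"Total\s*:?\s*\$?([\d,]+\.\d{2})",
        "date": r"Date\s*:?\s*(\d{4}-\d{2}-\d{2})",
    }
    fields = {}
    for name, pattern in patterns.items():
        match = re.search(pattern, raw_text, re.IGNORECASE)
        # Missing fields stay None so a downstream process can route exceptions
        fields[name] = match.group(1) if match else None
    return fields

sample = "Invoice No: INV-4471\nDate: 2024-03-01\nTotal: $1,250.00"
print(extract_invoice_fields(sample))
```

Once fields are structured like this, either a robot or a human can execute the rest of the process on clean data.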

Segura describes wider business process management (BPM) and process orchestration tool Chorus, meanwhile, as ‘one of the world’s best kept secrets.’ Or, at least, it was; in November analyst Everest Group named the tool as a leader and star performer in its Process Orchestration Products PEAK Matrix.

The tool is now getting leverage outside the traditional finance and insurance fields. “It is how millions and millions of transactions and pieces of work are getting done every day,” says Segura. “We’re now seeing adoption [elsewhere] alongside automation to orchestrate their work and give them that end-to-end work orchestration, visibility, and efficiency gains with whatever they have going on.”

So how does a use case come to life? It is often a mixture of inspiration and perspiration. Where SS&C Blue Prism comes in is to ‘help customers catch lightning’, as Segura puts it. “We’ve all been in that situation where it’s like ‘oh if I were running this place, here’s what I would do’,” he says. “Intelligent automation gives you the opportunity to reimagine your processes and transform how you get work done. Once that light switch turns on, and the initial use case is built, that’s really the secret sauce of SS&C Blue Prism; it’s that realisation and awareness of what intelligent automation can deliver.

“We’re always learning from our customers,” adds Segura. “It’s at their direction because they know their business and processes better than anybody. Combine their business expertise with the transformational power of intelligent automation and its digital workforce, then that’s where the magic happens.”

Any organisation, argues Segura, regardless of the industry, has change agents and citizen builders in waiting. And that is not a slip of the pen; the term is deliberately ‘builder’, not ‘developer’.

“I hear about these citizen developer programmes, and they’ll say, ‘here we have 500, 1000 citizen developers.’ What I don’t hear is, ‘and with this army of citizen developers we’ve achieved this’,” says Segura. “Whereas I have customers where two people have basically become citizen builders with more of a robust type of approach.” The $43m healthcare single use case is a case in point. “It is the whole mantra of SS&C Blue Prism,” adds Segura. “We’re designed to go after those higher value chain automations that can have a tangible impact on some of the company’s key objectives.”

So, you have the idea, the value proposition, and the capability to build it out. How do you make it stick? Every organisation is different, though if your company has a continuous process improvement department, that can be a good place to start. Segura likens it to offshoring processes. “You don’t just wave it goodbye and never think about it again,” he explains. “At the end of the day, it still has to function.

“You’re not just ‘digital-shoring’ [automation] and it will essentially be taken care of by digital. Someone has to continuously improve the process; someone has to mind when something changes with the business rules or regulatory compliance; somebody has to be responsible for making sure that those changes are kept up in an agile way.”

SS&C Blue Prism has a longstanding, large US retail customer that combines that lightning capture with the right internal culture around automation. This is a company that has 72,000 employees, as well as 60 ‘digital workers’ executing more than 150 automations. One such automation, through using OCR technology, lets the company automate the processing of inbound customer orders received by digital fax.

The overall result is 6.2 million transactions processed to date, and 250,000 hours of work returned to the business. But there is one extra ingredient required, particularly for a big company: discipline.

“It took them a while to get to that point in maturity,” explains Segura. “They do have a very central function when it comes to the intelligent automation team, [but] keep in mind one of those processes is in supply chain. That process is regularly reviewing 4.2 million purchase orders; it’s minding 50 million inventory case volume; it’s going through two million SKUs for 8000 suppliers.

“This is highly iterative, but it’s that process of having that lightning rod to capture the requirements and give people who are not necessarily technical a platform and a methodology to iterate very closely with the intelligent automation team,” adds Segura.

Think of what SS&C Blue Prism does therefore as providing a superhero cape for those who don’t otherwise get the chance to step into the limelight. It is a message the company will look to broadcast at the Intelligent Automation event in Santa Clara on 5-6 June.

“SS&C Blue Prism opens up that door to enable your citizen builders [to] really make an impact and deliver strategic benefits to the company,” says Segura. “You’re not just playing with a pilot, not just fooling around with something; you’re really getting into the strategic objectives of the company.”

Photo by Tara Winstead

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including the Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Pace of innovation in AI is fierce – but is ethics able to keep up?
https://www.artificialintelligence-news.com/2024/03/07/pace-of-innovation-in-ai-is-fierce-but-is-ethics-able-to-keep-up/
Thu, 07 Mar 2024 09:41:57 +0000

The post Pace of innovation in AI is fierce – but is ethics able to keep up? appeared first on AI News.

If a week is traditionally a long time in politics, it is a yawning chasm when it comes to AI. The pace of innovation from the leading providers is one thing; the ferocity of innovation as competition hots up is quite another. But are the ethical implications of AI technology being left behind by this fast pace?

Anthropic, creators of Claude, released Claude 3 this week and claimed it to be a ‘new standard for intelligence’, surging ahead of competitors such as ChatGPT and Google’s Gemini. The company says it has also achieved ‘near human’ proficiency in various tasks. Indeed, as Anthropic prompt engineer Alex Albert pointed out, during the testing phase of Claude 3 Opus, the most potent LLM (large language model) variant, the model exhibited signs of awareness that it was being evaluated.

Moving to text-to-image, Stability AI announced an early preview of Stable Diffusion 3 at the end of February, just days after OpenAI unveiled Sora, a brand new AI model capable of generating almost realistic, high definition videos from simple text prompts.

While progress marches on, perfection remains difficult to attain. Google’s Gemini model was criticised for producing historically inaccurate images which, as this publication put it, ‘reignited concerns about bias in AI systems.’

Getting this right is a key priority for everyone. Google responded to the Gemini concerns by, for the time being, pausing the image generation of people. In a statement, the company said that Gemini’s AI image generation ‘does generate a wide range of people… and that’s generally a good thing because people around the world use it. But it’s missing the mark here.’ Stability AI, in previewing Stable Diffusion 3, noted that the company believed in safe, responsible AI practices. “Safety starts when we begin training our model and continues throughout the testing, evaluation, and deployment,” as a statement put it. OpenAI is adopting a similar approach with Sora; in January, the company announced an initiative to promote responsible AI usage among families and educators.

That is from the vendor perspective – but how are major organisations tackling this issue? Take a look at how the BBC is looking to utilise generative AI and ensure it puts its values first. In October, Rhodri Talfan Davies, the BBC’s director of nations, noted a three-pronged strategy: always acting in the best interests of the public; always prioritising talent and creativity; and being open and transparent.

Last week, more meat was put on these bones with the BBC outlining a series of pilots based on these principles. One example is reformatting existing content in a way to widen its appeal, such as taking a live sport radio commentary and changing it rapidly to text. In addition, editorial guidance on AI has been updated to note that ‘all AI usage has active human oversight.’

It is worth noting as well that the BBC does not believe that its data should be scraped without permission in order to train other generative AI models, therefore banning crawlers from the likes of OpenAI and Common Crawl. This will be another point of convergence on which stakeholders need to agree going forward.

Another major company which takes its responsibilities for ethical AI seriously is Bosch. The appliance manufacturer has five guidelines in its code of ethics. The first is that all Bosch AI products should reflect the ‘invented for life’ ethos, which combines a quest for innovation with a sense of social responsibility. The second echoes the BBC: AI decisions that affect people should not be made without a human arbiter. The other three principles, meanwhile, cover safe, robust and explainable AI products; trust; and observing legal requirements while orienting to ethical principles.

When the guidelines were first announced, the company hoped its AI code of ethics would contribute to public debate around artificial intelligence. “AI will change every aspect of our lives,” said Volkmar Denner, Bosch’s CEO at the time. “For this reason, such a debate is vital.”

It is in this spirit that the free virtual AI World Solutions Summit, brought to you by TechForge Media, takes place on March 13. Sudhir Tiku, VP, Singapore Asia Pacific region at Bosch, is a keynote speaker whose session at 1245 GMT will explore the intricacies of safely scaling AI, navigating the ethical considerations, responsibilities, and governance surrounding its implementation. Another session, at 1445 GMT, explores the longer-term impact on society and how business culture and mindset can be shifted to foster greater trust in AI.

Book your free pass to access the live virtual sessions today.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Photo by Jonathan Chng on Unsplash

Wolfram Research: Injecting reliability into generative AI
https://www.artificialintelligence-news.com/2023/11/15/wolfram-research-injecting-reliability-into-generative-ai/
Wed, 15 Nov 2023 10:30:00 +0000

The post Wolfram Research: Injecting reliability into generative AI appeared first on AI News.

The hype surrounding generative AI and the potential of large language models (LLMs), spearheaded by OpenAI’s ChatGPT, appeared at one stage to be practically insurmountable. It was certainly inescapable. More than one in four dollars invested in US startups this year went to an AI-related company, while OpenAI revealed at its recent developer conference that ChatGPT continues to be one of the fastest-growing services of all time.

Yet something continues to be amiss. Or rather, something amiss continues to be added in.

One of the biggest issues with LLMs is their tendency to hallucinate: in other words, to make things up. Figures vary, but one frequently cited rate is 15%-20%. One Google system notched up 27%. This would not be so bad if the models did not come across so assertively while doing so. Jon McLoone, Director of Technical Communication and Strategy at Wolfram Research, likens it to the ‘loudmouth know-it-all you meet in the pub.’ “He’ll say anything that will make him seem clever,” McLoone tells AI News. “It doesn’t have to be right.”

The truth is, however, that such hallucinations are an inevitability when dealing with LLMs. As McLoone explains, it is all a question of purpose. “I think one of the things people forget, in this idea of the ‘thinking machine’, is that all of these tools are designed with a purpose in mind, and the machinery executes on that purpose,” says McLoone. “And the purpose was not to know the facts.

“The purpose that drove its creation was to be fluid; to say the kinds of things that you would expect a human to say; to be plausible,” McLoone adds. “Saying the right answer, saying the truth, is a very plausible thing, but it’s not a requirement of plausibility.

“So you get these fun things where you can say ‘explain why zebras like to eat cacti’ – and it’s doing its plausibility job,” says McLoone. “It says the kinds of things that might sound right, but of course it’s all nonsense, because it’s just being asked to sound plausible.”

What is needed, therefore, is a kind of intermediary which is able to inject a little objectivity into proceedings – and this is where Wolfram comes in. In March, the company released a ChatGPT plugin, which aims to ‘make ChatGPT smarter by giving it access to powerful computation, accurate math[s], curated knowledge, real-time data and visualisation’. Alongside being a general extension to ChatGPT, the Wolfram plugin can also synthesise code.

“It teaches the LLM to recognise the kinds of things that Wolfram|Alpha might know – our knowledge engine,” McLoone explains. “Our approach on that is completely different. We don’t scrape the web. We have human curators who give the data meaning and structure, and we lay computation on that to synthesise new knowledge, so you can ask questions of data. We’ve got a few thousand data sets built into that.”
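The division of labour McLoone describes can be sketched as a simple router: queries that look computational get delegated to a curated knowledge engine, while everything else stays with the LLM. The keyword heuristic and the names below are assumptions made for illustration, not the actual mechanism of the Wolfram plugin.

```python
# Hypothetical hints that a query needs computation rather than fluent prose
COMPUTATIONAL_HINTS = ("integrate", "solve", "convert", "population", "distance")

def route(query: str) -> str:
    """Decide which backend should answer a query (toy heuristic)."""
    if any(hint in query.lower() for hint in COMPUTATIONAL_HINTS):
        return "knowledge_engine"  # curated data plus computation
    return "llm"  # fluent, plausible language generation is fine here

print(route("Solve x^2 = 9"))        # routed to the knowledge engine
print(route("Write me a limerick"))  # stays with the LLM
```

In practice the LLM itself learns when to call the tool, which is why the prompt engineering discussed later matters so much.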

Wolfram has always been on the side of computational technology, with McLoone, who describes himself as a ‘lifelong computation person’, having been with the company for almost 32 years of its 36-year history. When it comes to AI, Wolfram therefore sits on the symbolic side of the fence, which suits logical reasoning use cases, rather than statistical AI, which suits pattern recognition and object classification.

The two approaches appear directly opposed, but have more in common than you might think. “Where I see it, [approaches to AI] all share something in common, which is all about using the machinery of computation to automate knowledge,” says McLoone. “What’s changed over that time is the concept of at what level you’re automating knowledge.

“The good old fashioned AI world of computation is humans coming up with the rules of behaviour, and then the machine is automating the execution of those rules,” adds McLoone. “So in the same way that the stick extends the caveman’s reach, the computer extends the brain’s ability to do these things, but we’re still solving the problem beforehand.

“With generative AI, it’s no longer saying ‘let’s focus on a problem and discover the rules of the problem.’ We’re now starting to say, ‘let’s just discover the rules for the world’, and then you’ve got a model that you can try and apply to different problems rather than specific ones.

“So as the automation has gone higher up the intellectual spectrum, the things have become more general, but in the end, it’s all just executing rules,” says McLoone.

What’s more, as the differing approaches to AI share a common goal, so do the companies on either side. As OpenAI was building out its plugin architecture, Wolfram was asked to be one of the first providers. “As the LLM revolution started, we started doing a bunch of analysis on what they were really capable of,” explains McLoone. “And then, as we came to this understanding of what the strengths or weaknesses were, it was about that point that OpenAI were starting to work on their plugin architecture.

“They approached us early on, because they had a little bit longer to think about this than us, since they’d seen it coming for two years,” McLoone adds. “They understood exactly this issue themselves already.”

McLoone will be demonstrating the plugin with examples at the upcoming AI & Big Data Expo Global event in London on November 30-December 1, where he is speaking. Yet he is keen to stress that there are more varied use cases out there which can benefit from the combination of ChatGPT’s mastery of unstructured language and Wolfram’s mastery of computational mathematics.

One such example is performing data science on unstructured GP medical records. This ranges from correcting peculiar transcriptions on the LLM side – replacing ‘peacemaker’ with ‘pacemaker’ as one example – to using old-fashioned computation and looking for correlations within the data. “We’re focused on chat, because it’s the most amazing thing at the moment that we can talk to a computer. But the LLM is not just about chat,” says McLoone. “They’re really great with unstructured data.”
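A toy version of that records workflow, with the LLM-style correction step followed by old-fashioned counting, might look like the sketch below. The correction map and record texts are invented for the example and are not taken from the GP-records project McLoone describes.

```python
from collections import Counter

# Hypothetical transcription slips of the kind McLoone mentions
CORRECTIONS = {"peacemaker": "pacemaker", "anaemea": "anaemia"}

def clean_record(text: str) -> str:
    """Fix known transcription slips before any analysis runs."""
    for wrong, right in CORRECTIONS.items():
        text = text.replace(wrong, right)
    return text

records = [
    "patient fitted with peacemaker in 2019",
    "peacemaker check normal",
    "routine anaemea screen",
]
cleaned = [clean_record(r) for r in records]
# Classic computation on the cleaned data: term frequencies for correlation work
term_counts = Counter(word for r in cleaned for word in r.split())
print(term_counts["pacemaker"])  # 2
```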

How does McLoone see LLMs developing in the coming years? There will be various incremental improvements, and training best practices will see better results, not to mention potentially greater speed with hardware acceleration. “Where the big money goes, the architectures follow,” McLoone notes. A sea-change on the scale of the last 12 months, however, can likely be ruled out, partly because of crippling compute costs, but also because we may have peaked in terms of training sets. If copyright rulings go against LLM providers, then training sets will shrink going forward.

The reliability problem for LLMs, however, will be forefront in McLoone’s presentation. “Things that are computational are where it’s absolutely at its weakest, it can’t really follow rules beyond really basic things,” he explains. “For anything where you’re synthesising new knowledge, or computing with data-oriented things as opposed to story-oriented things, computation really is the way still to do that.”

Yet while responses may vary – one has to account for ChatGPT’s degree of randomness after all – the combination seems to be working, so long as you give the LLM strong instructions. “I don’t know if I’ve ever seen [an LLM] actually override a fact I’ve given it,” says McLoone. “When you’re putting it in charge of the plugin, it often thinks ‘I don’t think I’ll bother calling Wolfram for this, I know the answer’, and it will make something up.

“So if it’s in charge you have to give really strong prompt engineering,” he adds. “Say ‘always use the tool if it’s anything to do with this, don’t try and go it alone’. But when it’s the other way around – when computation generates the knowledge and injects it into the LLM – I’ve never seen it ignore the facts.

“It’s just like the loudmouth guy at the pub – if you whisper the facts in his ear, he’ll happily take credit for them.”
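The ‘really strong prompt engineering’ McLoone recommends might look something like the sketch below. The prompt wording, function, and tool name are illustrative assumptions, not Wolfram's actual system prompt; in a real deployment this string would be passed as the system message of a chat-completion call.

```python
def build_system_prompt(tool_name: str, domains: list[str]) -> str:
    """Build an instruction telling the model to always defer to the tool."""
    return (
        f"You have access to the tool '{tool_name}'. "
        f"ALWAYS use the tool for anything to do with {', '.join(domains)}. "
        "Do not try and go it alone. Never override a fact the tool returns."
    )

prompt = build_system_prompt("wolfram", ["mathematics", "unit conversion", "live data"])
print(prompt)
```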

Wolfram will be at AI & Big Data Expo Global. Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

OutSystems: How AI-based development reduces backlogs
https://www.artificialintelligence-news.com/2023/09/08/outsystems-how-ai-based-development-reduces-backlogs/
Fri, 08 Sep 2023 11:56:57 +0000

The post OutSystems: How AI-based development reduces backlogs appeared first on AI News.

OutSystems may be best known for its low-code development platform expertise. But the company has steadily been moving to a specialism in AI-assisted software development – and the parallels between the two are evident.

In June, the company unveiled its generative AI roadmap, codenamed ‘Project Morpheus,’ with benefits including instant app generation using conversational prompts and an AI-powered app editor offering suggestions across the stack.  The mission remains clear: ‘developer productivity without trade-offs’, as founder and CEO Paulo Rosado puts it.

Project Morpheus, in the words of Nuno Carneiro, OutSystems AI product manager, is ‘the next generation of software development.’ “What we’re doing is building a completely new development experience, based on this premise that AI will give you suggestions. You do not have to code practically anything, and the AI is suggesting what to do,” says Carneiro.

“You have a What-You-See-Is-What-You-Get visual experience in terms of software development where you can change the application directly in your development environment. On top of that, AI gives you suggestions about what you might want to change so that you don’t need to code things manually.”

This means the artificial intelligence is there to tweak, rather than take over. The company’s main offering in the space to date has been the OutSystems AI Mentor System. From code, to architecture, to performance, the developer is in control, but always has an on-call expert to hand.

Scepticism is naturally there, as it was with the rise of low-code platforms. But having slain the dragon once before, is the job easier this time? “We see the same patterns of people being sceptical of AI in software development,” explains Carneiro. “We’ve been through this process of educating and showing the value of automation in software development before. We now feel like we’re in a good spot to communicate the current transformation in the industry due to the rise of AI.”

The key factor is that the OutSystems platform guards against some of the less salubrious aspects of artificial intelligence technology. Hallucination – where an AI confidently gives an incorrect response – and creating code riddled with vulnerabilities are just two of the pitfalls which could result if given full control. This is where the parallels between low-code and AI-assisted software development are especially striking; even if the code has been generated by AI, you can visually understand what you are building.

“The solutions we see out there at the moment still don’t solve this problem,” says Carneiro. “Because if AI is just writing a bunch of code automatically, and the person in charge of seeing the code and building it doesn’t understand what’s behind it, that’s not going to be a solution for any serious organisation to use. Low-code solves this problem with its visual development experience and the AI Mentor System constantly checks for security vulnerabilities, no matter who, or what, wrote the code.”
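The kind of automated check described, constantly scanning code for vulnerabilities regardless of who or what wrote it, can be sketched as a toy rule scanner. The rule list below is invented for illustration and is not the AI Mentor System's actual rule set.

```python
import re

# Hypothetical risk rules; a real scanner would use far more robust analysis
RISK_PATTERNS = {
    "hardcoded secret": re.compile(r"(?:password|api_key)\s*=\s*['\"]"),
    "SQL string concatenation": re.compile(r"SELECT .* \+"),
    "eval of user input": re.compile(r"\beval\("),
}

def scan_generated_code(code: str) -> list[str]:
    """Return the names of every rule the code trips."""
    return [name for name, pattern in RISK_PATTERNS.items() if pattern.search(code)]

snippet = 'api_key = "sk-123"\nquery = "SELECT * FROM users WHERE id=" + user_id'
print(scan_generated_code(snippet))
```

The point of the visual, low-code layer is that a human can also see what the AI built; the scanner is just the automated half of that safety net.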

The bottom line for businesses is that AI-based development with a low-code platform will allow them to complete projects in weeks that would otherwise take months, or even years, to develop. Carneiro gives a theoretical example of a company that wants to do a proof of concept for a new piece of software managing HR internally; a project which could take a week with OutSystems. For wider transformational projects, such as rebuilding an entire supply chain, it would take a few months at most.

There is another benefit too for larger firms. “We’ve also seen a lot of clients build Centres of Excellence around low-code software development that they then export to their organisations around the world,” says Carneiro. “Using the AI Mentor System means they can then export this and innovate quickly across their whole business.”

Improving the process of software development is only one aspect of a digital transformation journey, however, with OutSystems committed to enabling businesses to adopt AI themselves. Image recognition is one such use case, or using cognitive services that users can add to their applications to solve business problems from unstructured data. This was factored into one part of the generative AI roadmap update, with a new connector announced for Azure OpenAI, built in partnership with Microsoft, to enable the use of large language models in development. “Part of our roadmap here is to help customers build the foundations for AI adoption in their businesses, so they’re not caught off guard,” notes Carneiro.

OutSystems is participating at AI & Big Data Expo Europe, in Amsterdam on September 26-27, and AI and wider digital transformation journeys will be a major part of the agenda. “A typical digital transformation challenge is to connect different data sources, and that’s another place where we believe OutSystems comes in. We’re at the right spot to help businesses solve this,” explains Carneiro. “We naturally help you connect with different data sources, and it’s something we’ve been optimising over the years to help our customers bring in all types of databases and sources – we have tools that help customers connect to integrations and integrate different data sources.

“These challenges might not be obvious before you embark on an AI adoption journey,” Carneiro adds. “But I’m pretty sure anyone who’s tried will recognise them – and we hope they also recognise that OutSystems is a good partner for that.”

Photo by Marc Sendra Martorell on Unsplash

Looking to revamp your intelligent automation strategy? Learn more about the Intelligent Automation Event & Conference to discover the latest insights surrounding unbiased algorithms, future trends, RPA, cognitive automation and more!

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Basil Faruqui, BMC: Why DataOps needs orchestration to make it work
https://www.artificialintelligence-news.com/2023/08/29/basil-faruqui-bmc-why-data-operationalisation-needs-orchestration-to-make-it-work/
Tue, 29 Aug 2023 14:21:59 +0000

The post Basil Faruqui, BMC: Why DataOps needs orchestration to make it work appeared first on AI News.

Data has long been the currency on which the enterprise operates – and it goes right to the very top. Analysts and thought leaders almost universally urge the importance of the CEO being actively involved in data initiatives. But what gets buried in the small print is the acknowledgement that many data projects never make it to production. In 2016, Gartner assessed it at only 15%.

The operationalisation of data projects has been a key factor in helping organisations turn a data deluge into a workable digital transformation strategy, and DataOps picks up where DevOps left off. But there is a further Gartner warning: organisations that lack a sustainable data and analytics operationalisation framework by 2024 will see their initiatives set back by up to two years.

Operationalisation needs good orchestration to make it work, as Basil Faruqui, director of solutions marketing at BMC, explains. “If you think about building a data pipeline, whether you’re doing a simple BI project or a complex AI or machine learning project, you’ve got data ingestion, data storage, data processing, and data insight – and underneath all of those four stages, there’s a variety of different technologies being used,” says Faruqui. “And everybody agrees that in production, this should be automated.”
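Faruqui’s four stages can be pictured as a chain of dependent steps, each unable to start until the one before it completes – which is exactly the hand-off an orchestrator automates. The sketch below is a hypothetical, minimal stand-in in plain Python; the function names and toy data are illustrative, not Control-M’s actual interfaces.

```python
# Hypothetical sketch: the four pipeline stages as dependent steps.
# Stands in for an orchestrator such as Control-M; names are illustrative.

def ingest(sources):
    # Pull records from several systems of record into one raw batch.
    return [record for source in sources for record in source]

def store(raw):
    # Land the batch in a (here, in-memory) data lake keyed by record id.
    return {i: record for i, record in enumerate(raw)}

def process(lake):
    # The "AI/ML" step - here just a trivial aggregation.
    return {"total": sum(lake.values()), "count": len(lake)}

def insight(metrics):
    # Feed the analytics layer a report for decision-makers.
    return f"{metrics['count']} records, total {metrics['total']}"

def run_pipeline(sources):
    # The orchestrator's job: run each stage only after its upstream completes.
    return insight(process(store(ingest(sources))))

print(run_pipeline([[1, 2], [3, 4]]))  # -> 4 records, total 10
```

In production each stage would be a separate job on different technologies; the orchestrator’s value is enforcing the ordering and retries across all of them.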

This is where Control-M from BMC, and in particular BMC Helix Control-M, comes in. Control-M has been an integral part of many organisations for upwards of three decades, enabling businesses to run hundreds of thousands of batch jobs daily and helping optimise complex operations such as supply chain management. But an increasingly complex technology landscape, spanning on-premises and cloud, together with greater demand for SaaS-based orchestration and consumption models, made it a no-brainer to launch BMC Helix Control-M in 2020.

“CRMs and ERPs had been going the SaaS route for a while, but we started seeing more demands from the operations world for SaaS consumption models,” explains Faruqui.

The upshot of being a mature company – BMC was founded in 1980 – is that many customers have simply extended Control-M into more modern use cases. One example of a large organisation – and long-standing BMC customer – running an extremely complex supply chain is food manufacturer Hershey’s.

Apart from the time-sensitive necessity of running a business with perishable, delicate goods, the company has significantly adopted Azure, moving some existing ETL applications to the cloud, while Hershey’s operations are built on a complex SAP environment. Amid this infrastructure, Control-M, in the words of Hershey’s analyst Todd Lightner, ‘literally runs our business.’

Faruqui returns to the stages of data ingestion, storage, processing, and insight to explain how Hershey’s would tackle a significant holiday campaign, or decide where to ship product. “It’s all data driven,” Faruqui explains. “They’re ingesting data from lots of systems of record, that are ingesting data from outside of the company; they’re pulling all that into massive data lakes where they’re running AI and ML algorithms to figure out a lot of these outcomes, and feeding into the analytics layer where business executives can look at dashboards and reports to make important decisions.

“They’re a really good example of somebody who has used orchestration and automation with Control-M as a strategic option for them,” adds Faruqui.

Yet this leads into another important point. DataOps is an important part of BMC’s strategy, but it is not the only part. “Data pipelines are dependent on a layer of applications both above and below them,” says Faruqui. “If you think about Hershey’s, trying to figure out what kind of promotion they should run, a lot of that data may be coming from SAP. And SAP is not a static system; it’s a system that is constantly being updated with workflows.

“So how does the data pipeline know that SAP is actually done and the data is ready for the data pipeline to start? And when they figure out the strategy, all that information needs to go back to SAP because the ordering of raw materials and everything is not going to happen in the data pipeline, it’s going to happen in ERPs,” adds Faruqui.

“So Control-M is able to connect across this layer, which is different from many of the tools that exist in the DataOps space.”
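The SAP hand-off Faruqui describes is essentially a readiness gate: the pipeline should start only once the upstream system signals it is done. A hypothetical polling sketch in plain Python follows – the `sap_done` signal is invented for illustration, and real orchestrators typically model this as a declared job dependency rather than a loop.

```python
import time

# Hypothetical readiness gate: poll an upstream "done" signal (here a simple
# callable) before starting ingestion. Illustrative only - not how Control-M
# or SAP actually exchange completion events.

def wait_for_upstream(is_ready, timeout=5.0, interval=0.05):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if is_ready():
            return True   # upstream finished; the pipeline may start
        time.sleep(interval)
    return False          # timed out; raise an alert instead of running

# Simulate an upstream system that reports done on the third check.
checks = {"n": 0}
def sap_done():
    checks["n"] += 1
    return checks["n"] >= 3

print(wait_for_upstream(sap_done))  # -> True
```

The return leg Faruqui mentions – pushing results back so ordering happens in the ERP – would be a mirror-image step at the end of the pipeline.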

Faruqui is speaking at the AI & Big Data Expo Europe in Amsterdam in September on how orchestration and operationalisation are the next step in organisations’ DataOps journeys. So expect not only stories and best practices on what a successful journey looks like, and how to create data pipeline orchestration across hybrid environments combining multiple clouds with on-prem, but also a look at the future – and according to Faruqui, the complexity is only going one way.

“I think one of the things that will continue to be challenging is there’s just lots of different tools and capabilities that are coming up in the AI and ML space,” he explains. “If you look at AWS, Azure, Google, and you go to their website, and you click on their AI/ML offerings, it is quite extensive, and every event they do, they announce new capabilities and services. So that’s on the vendor side.

“On the customer side, what we’re seeing is they want to rapidly test and figure out which [tools] are going to be of use to them,” Faruqui adds. “So as an orchestration vendor, and orchestration in general within DataOps, this is both the challenge and the opportunity.

“The challenge is you’re going to have to keep up with this because orchestration doesn’t work if you can’t integrate into something new – but the opportunity here is that our customers are asking for this.

“They don’t want to have to reinvent the orchestration wheel every time they go and adopt new technology.”

Photo by Larisa Birta on Unsplash

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Basil Faruqui, BMC: Why DataOps needs orchestration to make it work appeared first on AI News.

Meta calls up generative AI squad as Snap releases ChatGPT-powered bot https://www.artificialintelligence-news.com/2023/02/28/meta-calls-up-generative-ai-squad-as-snap-releases-chatgpt-powered-bot/ Tue, 28 Feb 2023 15:39:04 +0000

Generative AI is firmly in the sights of social media. Meta is forming a new product group around generative AI to focus on ‘building delightful experiences’ into all of the company’s products, while Snap has unveiled a new chatbot running on OpenAI’s GPT technology.

CEO Mark Zuckerberg confirmed the seat-shuffling in a Facebook post, stating that teams currently working on generative AI will be pulled together. Detail was light on the scale of projects, but Zuckerberg noted a focus on ‘developing AI personas’ longer term, and experiments taking place with chat in WhatsApp and Messenger, as well as with images, such as creative Instagram filters and advertising formats. The team will report to chief product officer Chris Cox, as reported by multiple sources.

“In the short term, we’ll focus on building creative and expressive tools,” wrote Zuckerberg. “Over the longer term, we’ll focus on developing AI personas that can help people in a variety of ways.

“We have a lot of foundational work to do before getting to the really futuristic experiences, but I’m excited about all of the new things we’ll build along the way,” he added.

Meanwhile, Snap this week announced the launch of My AI for Snapchat, a chatbot powered by the latest version of ChatGPT. The bot is available as an ‘experimental’ feature for Snapchat+ paid subscribers, and potential use cases include recommendations, organisation, and content creation.

The announcements from Meta and Snap serve as another tinder bundle with which to ignite the positioning taking place from big tech around generative AI. As this publication has explored, many of the major players are making moves, from Microsoft, to Google, to Amazon. Not everything has gone smoothly to say the least, but this remains the hottest of hot spaces right now. At MWC, taking place this week in Barcelona, one analyst said AI was being ‘mentioned in relation to pretty much everything.’

The recent blunders experienced by Microsoft and Alphabet – the latter wiping a cool $120 billion off the company’s value – were top of mind for Snap, who took the unusual step of apologising in advance for any foot-in-mouth moments users may experience.

The choice quote reads: “As with all AI-powered chatbots, My AI is prone to hallucination and can be tricked into saying just about anything. Please be aware of its many deficiencies and sorry in advance! While My AI is designed to avoid biased, incorrect, harmful or misleading information, mistakes may occur. Please do not share any secrets with My AI and do not rely on it for advice.”

Picture credit: Pixabay

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Meta calls up generative AI squad as Snap releases ChatGPT-powered bot appeared first on AI News.

AWS and Hugging Face expand partnership to make AI more accessible https://www.artificialintelligence-news.com/2023/02/23/aws-and-hugging-face-expand-partnership-to-make-ai-more-accessible/ Thu, 23 Feb 2023 15:39:25 +0000

Amazon Web Services (AWS) and Hugging Face have announced an expanded collaboration to accelerate the training and deployment of models for generative AI applications.

Hugging Face’s stated mission is ‘to democratise good machine learning, one commit at a time.’ The company is best known for its Transformers library for PyTorch, TensorFlow and JAX, which can support tasks ranging from natural language processing, to computer vision, to audio.

There are more than 100,000 free and accessible machine learning models on Hugging Face, which are collectively downloaded more than one million times per day by researchers, data scientists, and machine learning engineers.

In terms of the partnership, AWS will become the preferred cloud provider for Hugging Face, meaning developers can access tools from Amazon SageMaker, to AWS Trainium, to AWS Inferentia, and optimise the performance of their models for specific use cases at a lower cost.

The need to make AI open and accessible to all is at the heart of this announcement, as both companies noted. Hugging Face said that the two companies will ‘contribute next-generation models to the global AI community and democratise machine learning.’

“Building, training, and deploying large language and vision models is an expensive and time-consuming process that requires deep expertise in machine learning,” an AWS blog noted. “Since the models are very complex and can contain hundreds of billions of parameters, generative AI is largely out of reach for many developers.”

“The future of AI is here, but it’s not evenly distributed,” said Clement Delangue, CEO of Hugging Face, in a company blog. “Accessibility and transparency are the keys to sharing progress and creating tools to use these new capabilities wisely and responsibly.”

Readers of AI News will know of the democratisation of machine learning from the AWS perspective. Speaking in September, Felipe Chies outlined the proposition:

“Many of our API services require no machine learning for customers, and in some cases, end users may not even realise machine learning is being used to power experiences. The services make it really easy to incorporate AI into applications without having to build and train ML algorithms.

“If we want machine learning to be as expansive as we really want it to be, we need to make it much more accessible to people who aren’t machine learning practitioners. So when we built [for example] Amazon SageMaker, we designed it as a fully managed service that removes the heavy lifting, complexity, and guesswork from each step of the machine learning process, empowering everyday developers and scientists to successfully use machine learning.”

This announcement can be seen not just in the context of democratising the technology, but from a competitive standpoint. Microsoft’s moves in the market with OpenAI, and its ChatGPT-influenced Bing – albeit with the odd hiccup – have created waves; likewise Google with Bard, again not entirely error-free. Either way, the stakes for the biggest of big tech have increased and the battleground for the ‘AI wars’ has intensified. Hugging Face has an existing relationship with Microsoft, having announced an endpoints service to securely deploy and scale Transformer models on Azure in May.

Picture credit: Hugging Face

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post AWS and Hugging Face expand partnership to make AI more accessible appeared first on AI News.

Lucy 4 is moving ahead with generative AI for knowledge management https://www.artificialintelligence-news.com/2023/02/03/lucy-4-is-moving-ahead-generative-ai-for-knowledge-management/ Fri, 03 Feb 2023 17:00:21 +0000

When it comes to workplace bugbears, wasting time fruitlessly searching shared drives for a particular resource has to be up there. Yet would it not be easier to lighten the workload through an answer engine with a sprinkling of generative AI?  

Machine learning software, by definition, is self-learning. As users ask more questions of an AI, and the AI provides answers, feedback loops are developed which help the product get stronger and the return on investment become greater. 

“It’s really cool that a proper AI solution is self-learning,” Scott Litman, founder and chief operating officer of AI-powered answer engine Lucy, explains. “The AI is growing with them. If the AI misses, it’s a teachable moment, and [it] will be smarter tomorrow.” 

With generative AI, the stakes are now so much higher. Generative AI is defined as algorithms which can be used to create new content, from text, to code, to audio. ChatGPT, from OpenAI, has understandably garnered a fleet of headlines because it appears to have opened up a world of possibility for content creation.  

Yet it is not all plain sailing. For one, users have delighted in pointing out the fallibilities of ChatGPT, which is fine – it is always learning after all. But other users have spotted the software’s tendency to make up a response if it is unsure. “The smug confidence with which [the] AI asserts totally incorrect information is striking,” the writer Ted Gioia noted. “A con artist could not do better.” 

Lucy’s job is not to make incorrect assertions, but to ‘liberate corporate knowledge’: put simply, get the right answer to the right person at the right time in seconds, regardless of where that answer lives. Much of this will involve sifting through reams of PDFs, PowerPoints and Word documents and pointing to the most relevant detail, but this liberation can turn up insights in previously forgotten places, such as video training courses.
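Lucy’s actual engine is proprietary, but the ‘right answer, right person, right time’ idea can be illustrated with a toy retrieval sketch – an entirely hypothetical keyword-overlap ranker, far simpler than what a production answer engine would use.

```python
# Toy illustration of answer retrieval: score each stored passage by keyword
# overlap with the question and surface the best match. Hypothetical stand-in,
# not Lucy's actual approach.

def score(question, passage):
    # Count how many words the question and passage share (case-insensitive).
    q_words = set(question.lower().split())
    p_words = set(passage.lower().split())
    return len(q_words & p_words)

def best_passage(question, passages):
    # Return the passage with the highest overlap score.
    return max(passages, key=lambda p: score(question, p))

docs = [
    "Expense reports are due on the fifth of each month.",
    "The onboarding checklist lives in the HR shared drive.",
    "Video training courses cover the new CRM workflow.",
]
print(best_passage("When are expense reports due?", docs))
# -> Expense reports are due on the fifth of each month.
```

Self-learning enters when user feedback (clicks, thumbs-up) adjusts the scoring over time – the ‘teachable moment’ Litman describes.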

With the recent release of Lucy 4, the next generation of its platform, and Lucy Synopsis, there is a further push towards generative AI – but without the drawbacks. Lucy can not only point a user to an answer, but provide a unique two-to-three sentence summary which directly answers the question. Crucially, as Steve Frederickson, director of product management, points out, Lucy’s summations are there solely to help the user, not offer a spurious alternative.

One of the key elements of Lucy 4, again involving the generative AI element, is expanded integration with Microsoft Teams and Slack, where users can mention Lucy in a chat. This reflects not just greater ease of use for employees, but a wider trend around search.  

“One of the things we realised last year was that, along with the inefficiency of searching, people in some cases have given up on the idea of searching,” explains Litman. The result is that users are more likely to fire out a message on the chat apps than waste time on a frustrating scavenger hunt. “Which sometimes works – human intelligence is a great thing,” says Litman. “But if you’re the subject matter expert answering all the questions, you’re constantly being disrupted.” 

“We come at it from our own perspective – we have a core value of experimentation,” adds Frederickson. “Lucy has always had the tenet of going above and beyond search. We hold ourselves to that higher standard.” 

It is best to think of Lucy as like a new employee. No matter how glittering your recruit’s CV is, it will take time for a new starter to get used to the role, the systems, and the culture. But they will get better. Unlike human employees though, Lucy can hit the ground running. Frederickson notes that Lucy’s goal is ultimately to ‘give time back to the world’, and a more intuitive user interface and improved navigation help with this.  

Enhanced collaboration is another important aspect of Lucy 4, and again relates to user behaviour. “What do users do once they’ve found the answer?” notes Frederickson. “Do they grab a quote? Do they share it with co-workers? Do they put it in their deck? What is the destination for this knowledge?” Annotating and adding context within the tool all help to retain the knowledge which has been liberated.  

Ultimately, companies survive and thrive on their data literacy. While it is easy to be attracted to big, expansive projects and technologies, adding generative AI to a slick answer engine will help employees, continually improve ROI – and represents the next generation of knowledge management. 

Find out more about Lucy 4 here.

The post Lucy 4 is moving ahead with generative AI for knowledge management appeared first on AI News.

Bill Gates calls AI ‘quite revolutionary’ – but is less sure about the metaverse https://www.artificialintelligence-news.com/2023/01/13/bill-gates-calls-ai-quite-revolutionary-but-is-less-sure-about-the-metaverse/ Fri, 13 Jan 2023 17:07:56 +0000

Bill Gates has given his verdict on some of tech’s biggest buzzwords – and proffered that while he is lukewarm on the metaverse, AI is ‘quite revolutionary.’

The Microsoft co-founder was participating in his annual Reddit Ask Me Anything (AMA) session and was asked about major technology shifts. AI, Gates noted, was in his opinion ‘the big one.’ 

“I don’t think Web3 was that big or that metaverse stuff alone was revolutionary, but AI is quite revolutionary,” Gates wrote.

With regard to generative AI, a specific kind of AI focused on generating new content, from text, to images, to music, Gates was particularly interested. “I am quite impressed with the rate of improvement in these AIs. I think they will have a huge impact,” he wrote.  

Gates added he continues to work with Microsoft so is following this area ‘very closely.’ “Thinking of it in the Gates Foundation context we want to have tutors that help kids learn math and stay interested. We want medical help for people in Africa who can’t access a doctor,” he added. 

Previous missives from Gates have been more optimistic in terms of the impact of the metaverse. At the end of 2021, in his personal blog, Gates noted he was ‘super impressed’ by the improvements with regard to spatial audio in particular. This enables more immersive meetings, where the sound is coming from the direction of a colleague as per face-to-face discussion. “There’s still some work to do, but we’re approaching a threshold where the technology begins to truly replicate the experience of being together in the office,” he wrote at the time

Microsoft has been gradually exploring the metaverse as part of its strategy to ‘bridge the digital and physical worlds.’ October saw a partnership with Meta on platform and software to ‘deliver immersive experiences for the future of work and play.’ The company cited Work Trend Index data which showed half of Gen Z and millennials surveyed envisioned doing some of their work in the metaverse in the next two years.

(Image Credit: Kuhlmann / MSC under CC BY 3.0 DE license)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Bill Gates calls AI ‘quite revolutionary’ – but is less sure about the metaverse appeared first on AI News.

AI & Big Data Expo: Exploring ethics in AI and the guardrails required https://www.artificialintelligence-news.com/2022/12/16/ai-big-data-expo-exploring-ethics-in-ai-and-the-guardrails-required/ Fri, 16 Dec 2022 11:14:27 +0000

The tipping point between acceptability and antipathy when it comes to the ethical implications of artificial intelligence has long been thrashed out. Recently, the lines feel increasingly blurred; AI-generated art, or photography, not to mention the possibilities of OpenAI’s ChatGPT, reveals a greater sophistication of the technology. But at what cost?

A recent panel session at the AI & Big Data Expo in London explored these ethical grey areas, from beating inbuilt bias to corporate mechanisms and mitigating the risk of job losses.

James Fletcher leads the responsible application of AI at the BBC. His job is to, as he puts it, ‘make sure what [the BBC] is doing with AI aligns with our values.’  He says that AI’s purpose, within the context of the BBC, is automating decision making. Yet ethics are a serious challenge and one that is easier to talk about than act upon – partly down to the pace of change. Fletcher took three months off for parental leave, and the changes upon his return, such as Stable Diffusion, ‘blew his mind [as to] how quickly this technology is progressing.’ 

“I kind of worry that the train is pulling away a bit in terms of technological advancement, from the effort required in order to solve those difficult problems,” said Fletcher. “This is a socio-technical challenge, and it is the socio part of it that is really hard. We have to engage not just as technologists, but as citizens.” 

Daniel Gagar of PA Consulting, who moderated the session, noted the importance of ‘where the buck stops’ in terms of responsibility, and for more serious consequences such as law enforcement. Priscila Chaves Martinez, director at the Transformation Management Office, was keen to point out inbuilt inequalities which would be difficult to solve.  

“I think it’s a great improvement, the fact we’ve been able to progress from a principled standpoint,” she said. “What concerns me the most is that this wave of principles will be diluted without a basic sense that it applies differently for every community and every country.” In other words, what works in Europe or the US may not apply to the global south. “Everywhere we incorporate humans into the equation, we will get bias,” she added, referring to the socio-technical argument. “So social first, technical afterwards.” 

“There is need for concern and need for having an open dialogue,” commented Elliot Frazier, head of AI infrastructure at the AI for Good Foundation, adding there needed to be introduction of frameworks and principles into the broader AI community. “At the moment, we’re significantly behind in having standard practices, standard ways of doing risk assessments,” Frazier added.  

“I would advocate [that] as a place to start – actually sitting down at the start of any AI project, assessing the potential risks.” Frazier noted that the foundation is looking along these lines with an AI ethics audit programme where organisations can get help on how they construct the correct leading questions of their AI, and to ensure the right risk management is in place. 

For Ghanasham Apte, lead AI developer behaviour analytics and personalisation at BT Group, it is all about guardrails. “We need to realise that AI is a tool – it is a dangerous tool if you apply it in the wrong way,” said Apte. Yet with steps such as explainable AI, or ensuring bias in the data is taken care of, multiple guardrails are ‘the only way we will overcome this problem,’ Apte added.  

Chaves Martinez, to an extent, disagreed. “I don’t think adding more guardrails is sufficient,” she commented. “It’s certainly the right first step, but it’s not sufficient. It’s not a conversation between data scientists and users, or policymakers and big companies; it’s a conversation of the entire ecosystem, and not all the ecosystem is well represented.” 

Guardrails may be a useful step, but Fletcher, to his original point, noted the goalposts continue to shift. “We need to be really conscious of the processes that need to be in place to ensure AI is accountable and contestable; that this is not just a framework where we can tick things off, but ongoing, continual engagement,” said Fletcher. 

“If you think about things like bias, what we think now is not what we thought of it five, 10 years ago. There’s a risk if we take the solutionist approach, we bake a type of bias into AI, then we have problems [and] we would need to re-evaluate our assumptions.” 

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post AI & Big Data Expo: Exploring ethics in AI and the guardrails required  appeared first on AI News.
