xai Archives - AI News
https://www.artificialintelligence-news.com/tag/xai/

Elon Musk’s xAI open-sources Grok
https://www.artificialintelligence-news.com/2024/03/18/elon-musk-xai-open-sources-grok/
Mon, 18 Mar 2024

The post Elon Musk’s xAI open-sources Grok appeared first on AI News.

Elon Musk’s startup xAI has made its large language model Grok available as open source software. The 314 billion parameter model can now be freely accessed, modified, and distributed by anyone under an Apache 2.0 license.

The release fulfils Musk’s promise to open source Grok in an effort to accelerate AI development and adoption.

xAI announced the move in a blog post, stating: “We are releasing the base model weights and network architecture of Grok-1, our large language model. Grok-1 is a 314 billion parameter Mixture-of-Experts model trained from scratch by xAI.”

Grok had previously only been available through Musk’s social network X as part of the paid X Premium+ subscription. By open sourcing it, xAI has empowered developers, companies, and enthusiasts worldwide to leverage the advanced language model’s capabilities.

The model’s release includes its weights, which represent the strength of connections between its artificial neurons, as well as documentation and code. However, it omits the original training data and access to real-time data streams that gave the proprietary version an advantage.

Named after a term coined by Robert A. Heinlein in Stranger in a Strange Land, meaning to understand something deeply and intuitively, Grok has been positioned as a more open and humorous alternative to OpenAI’s ChatGPT. The move aligns with Musk’s campaign against censorship, the “woke” ideology he attributes to models like Gemini, and his recent lawsuit claiming OpenAI violated its nonprofit principles.

While xAI’s open source release earned praise from open source advocates, some critics raised concerns about potential misuse facilitated by unrestricted access to powerful AI capabilities.

You can find Grok-1 on GitHub here.

(Image Credit: xAI)

See also: Anthropic says Claude 3 Haiku is the fastest model in its class

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

OpenAI calls Elon Musk’s lawsuit claims ‘incoherent’
https://www.artificialintelligence-news.com/2024/03/12/openai-calls-elon-musk-lawsuit-claims-incoherent/
Tue, 12 Mar 2024

The post OpenAI calls Elon Musk’s lawsuit claims ‘incoherent’ appeared first on AI News.

OpenAI has hit back at Elon Musk’s lawsuit, saying it rests on “convoluted — often incoherent — factual premises.”

Musk’s lawsuit accuses OpenAI of breaching its non-profit status and reneging on a founding agreement to keep the organisation non-profit and release its AI technology publicly. In court filings, however, OpenAI denies these allegations, asserting that no such agreement with Musk ever existed and branding it a mere “fiction.”

The organisation further alleges that Musk had actually supported the idea of transitioning OpenAI into a for-profit entity under his control. It is claimed that Musk advocated for full control of the company as CEO, majority equity ownership, and even suggested tethering it to Tesla for financial backing. However, negotiations between Musk and OpenAI did not culminate in an agreement, leading to Musk’s withdrawal from the project.

OpenAI’s rebuttal highlights purported emails exchanged between Musk and the organisation, indicating his prior knowledge and support for its transition to a for-profit model. The company suggests that Musk’s lawsuit is driven by his desire to claim credit for OpenAI’s successes after he disengaged from the project.

In response to Musk’s legal action, OpenAI has portrayed his motives as self-serving rather than altruistic, asserting that his lawsuit is a bid to further his own commercial interests under the guise of championing humanity’s cause.

Meanwhile, Musk’s own foray into the realm of artificial intelligence with his company xAI has drawn attention.

Musk announced xAI’s intention to open source its Grok chatbot shortly after OpenAI’s publication of emails purportedly demonstrating Musk’s prior awareness of its non-open source intentions. While this move could be interpreted as a retaliatory gesture against OpenAI, it also presents an opportunity for xAI to garner feedback from developers and enhance its technology.

The legal clash between Musk and OpenAI underscores the complexities surrounding the development and governance of AI technologies, as well as the competing interests within the tech industry.

(Photo by Tim Mossholder on Unsplash)

See also: OpenAI announces new board lineup and governance structure

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Justin Swansburg, DataRobot: On combining human and machine intelligence
https://www.artificialintelligence-news.com/2022/10/04/justin-swansburg-datarobot-on-combining-human-and-machine-intelligence/
Tue, 04 Oct 2022

The post Justin Swansburg, DataRobot: On combining human and machine intelligence appeared first on AI News.

Advancements in AI are providing transformational benefits to enterprises, but keeping risks in check and improving consumer sentiment is paramount.

Explainable AI (XAI) is the idea that an AI should always provide reasoning for its decisions in a way that makes it easy for humans to comprehend. XAI helps to build trust and ensures that issues can be more quickly identified before they cause wider damage.
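One widely used XAI technique is ranking which input features most influenced a model’s predictions. The sketch below illustrates the idea using scikit-learn’s built-in feature importances; the model, data, and feature names are synthetic placeholders for illustration, not any particular vendor’s system.

```python
# A minimal sketch of one common XAI technique: ranking which input
# features drove a model's predictions. Data and feature names are
# synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
# The label depends only on feature 0, so a faithful explanation
# should rank it as the most important input.
y = (X[:, 0] > 0).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

feature_names = ["income", "age", "tenure"]  # hypothetical names
importances = dict(zip(feature_names, model.feature_importances_))
for name, score in sorted(importances.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")
```

Because the synthetic label depends only on the first feature, a faithful explanation ranks it first; more sophisticated methods such as permutation importance or SHAP follow the same principle of attributing a prediction back to its inputs.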

AI News caught up with Justin Swansburg, VP of Americas Data Science Practice at DataRobot, to discuss how the company is driving AI adoption using concepts like XAI to combine the strengths of human and machine intelligence.

AI News: Can you give us a brief overview of DataRobot’s core solutions?

Justin Swansburg: DataRobot’s AI Cloud platform is uniquely built to democratise and accelerate the use of AI while delivering critical insights that drive clear business results. 

DataRobot helps organisations across industries harness the transformational power of AI, from restoring supply chain resiliency to accelerating the treatment and prevention of disease and enhancing patient care to combating the climate crisis.

As one of the most widely deployed and proven AI platforms in the market today, DataRobot AI Cloud brings together a broad range of data, giving businesses comprehensive insights to drive revenue growth, manage operations, and reduce risk.

DataRobot has delivered over 1.4 trillion predictions for customers around the world, including the U.S. Army, CBS Interactive, and CVS.

AN: What is “augmented intelligence” and how does it differ from artificial intelligence?

JS: Artificial intelligence and augmented intelligence share the same objective but have different ways of accomplishing it.

Augmented intelligence brings together the qualities of human intuition and experience with the efficiency and power of machine learning, whereas artificial intelligence is often used as a replacement or substitute for human processes and decision-making.

AN: Do you need machine learning or programming experience to build predictive analytics with DataRobot?  

JS: DataRobot is a unified platform designed to democratise and accelerate the use of AI. This means that anyone in an organisation – with or without specialist knowledge of AI – can use DataRobot to build, deploy, and manage AI applications to transform their products, services, and operations.

AN: How does DataRobot support the idea of explainable AI and why is that important?

JS: DataRobot Explainable AI helps organisations understand the behaviour of models and gain confidence in their results. When AI is not transparent, it can be difficult to trust the system and translate insights and predictions into business outcomes.

With Explainable AI, users can easily understand the model inputs while bridging the gap between development and actionable results.

AN: DataRobot recently earned a coveted spot among Forrester’s leading AI/ML platforms – what makes you stand out from rivals?

JS: We’re very proud of this achievement. We believe that our innovative platform and customer loyalty set us apart from competitors.

Over the last year, we’ve focused on improving our AI platform through new tooling and functionality, as well as several acquisitions.

Our main goal is to provide customers with the best possible technology to help solve their business problems and we’ve heard that our platform’s ease of use, model documentation, and explainability have been appreciated by customers. 

AN: Your report, AI and the Power of Perception, found that 72 percent of businesses are positively impacted by AI but consumer scepticism remains – how do you think that can be addressed?

JS: That’s a great question. While there is significant scepticism, we believe that this can be addressed with some form of increased regulatory guidance and education on the benefits of AI for both businesses and consumers.

We believe that increased training for businesses would help to demonstrate to consumers a commitment to higher standards. It would also give consumers more confidence that responsible data practices were being followed.

Other consumer concerns, like the potential of AI to replace jobs, will take longer to address. But, it is too early to make a call on the extent to which these concerns are warranted, overblown, or somewhere in between.

We’re interested to see how perceptions change over time and are hopeful that more and more people will start to realise the great benefits AI has to offer. 

Justin Swansburg and the DataRobot team will be sharing their invaluable insights at this year’s AI & Big Data Expo North America. You can find out more about Justin’s sessions here and be sure to swing by DataRobot’s booth at stand #176.

Democrats renew push for ‘algorithmic accountability’
https://www.artificialintelligence-news.com/2022/02/04/democrats-renew-push-for-algorithmic-accountability/
Fri, 04 Feb 2022

The post Democrats renew push for ‘algorithmic accountability’ appeared first on AI News.

Democrats have reintroduced their Algorithmic Accountability Act that seeks to hold tech firms accountable for bias in their algorithms.

The bill is an updated version of one first introduced by Senator Ron Wyden (D-OR) in 2019, which never passed the House or Senate. It was reintroduced this week by Wyden alongside Senator Cory Booker (D-NJ) and Representative Yvette Clarke (D-NY).

Concern about bias in algorithms is growing as they are used for ever more critical decisions. Biased algorithms would automate inequality, giving some people more opportunities than others.

“As algorithms and other automated decision systems take on increasingly prominent roles in our lives, we have a responsibility to ensure that they are adequately assessed for biases that may disadvantage minority or marginalised communities,” said Booker.

A human can always be held accountable for a decision to, say, reject a mortgage/loan application. There’s currently little-to-no accountability for algorithmic decisions.

Representative Yvette Clarke explained:

“When algorithms determine who goes to college, who gets healthcare, who gets a home, and even who goes to prison, algorithmic discrimination must be treated as the highly significant issue that it is.

These large and impactful decisions, which have become increasingly void of human input, are forming the foundation of our American society that generations to come will build upon. And yet, they are subject to a wide range of flaws from programming bias to faulty datasets that can reinforce broader societal discrimination, particularly against women and people of colour.

It is long past time Congress act to hold companies and software developers accountable for their discrimination by automation.

With our renewed Algorithmic Accountability Act, large companies will no longer be able to turn a blind eye towards the deleterious impact of their automated systems, intended or not. We must ensure that our 21st Century technologies become tools of empowerment, rather than marginalisation and seclusion.”

The bill would force audits of AI systems, with findings reported to the Federal Trade Commission. A public database would be created so that decisions can be reviewed, giving consumers confidence.

“If someone decides not to rent you a house because of the colour of your skin, that’s flat-out illegal discrimination. Using a flawed algorithm or software that results in discrimination and bias is just as bad,” commented Wyden.

“Our bill will pull back the curtain on the secret algorithms that can decide whether Americans get to see a doctor, rent a house, or get into a school. Transparency and accountability are essential to give consumers choice and provide policymakers with the information needed to set the rules of the road for critical decision systems.”

In our predictions for the AI industry in 2022, we predicted an increased focus on Explainable AI (XAI). XAI is artificial intelligence in which the results of the solution can be understood by humans and is seen as a partial solution to algorithmic bias.

“Too often, Big Tech’s algorithms put profits before people, from negatively impacting young people’s mental health, to discriminating against people based on race, ethnicity, or gender, and everything in between,” said Senator Tammy Baldwin (D-Wis), who is co-sponsoring the bill.

“It is long past time for the American public and policymakers to get a look under the hood and see how these algorithms are being used and what next steps need to be taken to protect consumers.”

Joining Baldwin in co-sponsoring the Algorithmic Accountability Act are Senators Brian Schatz (D-Hawaii), Mazie Hirono (D-Hawaii), Ben Ray Luján (D-NM), Bob Casey (D-Pa), and Martin Heinrich (D-NM).

A copy of the full bill is available here (PDF).

(Photo by Darren Halstead on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Editorial: Our predictions for the AI industry in 2022
https://www.artificialintelligence-news.com/2021/12/23/editorial-our-predictions-for-the-ai-industry-in-2022/
Thu, 23 Dec 2021

The post Editorial: Our predictions for the AI industry in 2022 appeared first on AI News.

The AI industry continued to thrive this year as companies sought ways to support business continuity through rapidly-changing situations. For those already invested, many are now doubling-down after reaping the benefits.

As we wrap up the year, it’s time to look ahead at what to expect from the AI industry in 2022.

Tackling bias

Our ‘Ethics & Society’ category got more use than most others this year, and with good reason. AI cannot thrive when it’s not trusted.

Biases are present in algorithms that are already causing harm. They’ve been the subject of many headlines, including a number of ours, and must be addressed for the public to have confidence in wider adoption.

Explainable AI (XAI) is a partial solution to the problem. XAI is artificial intelligence in which the results of the solution can be understood by humans.

Robert Penman, Associate Analyst at GlobalData, comments:

“2022 will see the further rollout of XAI, enabling companies to identify potential discrimination in their systems’ algorithms. It is essential that companies correct their models to mitigate bias in data. Organisations that drag their feet will face increasing scrutiny as AI continues to permeate our society, and people demand greater transparency. For example, in the Netherlands, the government’s use of AI to identify welfare fraud was found to violate European human rights.

Reducing human bias present in training datasets is a huge challenge in XAI implementation. Even tech giant Amazon had to scrap its in-development hiring tool because it was claimed to be biased against women.

Further, companies will be desperate to improve their XAI capabilities—the potential to avoid a PR disaster is reason enough.”

To that end, expect a large number of acquisitions of startups specialising in synthetic data training in 2022.

Smoother integration

Many companies don’t know how to get started on their AI journeys. Around 30 percent of enterprises plan to incorporate AI into their company within the next few years, but 91 percent foresee significant barriers and roadblocks.

If the confusion and anxiety that surrounds AI can be tackled, it will lead to much greater adoption.

Dr Max Versace, PhD, CEO and Co-Founder of Neurala, explains:

“Similar to what happened with the introduction of WordPress for websites in early 2000, platforms that resemble a ‘WordPress for AI’ will simplify building and maintaining AI models. 

In manufacturing for example, AI platforms will provide integration hooks, hardware flexibility, ease of use by non-experts, the ability to work with little data, and, crucially, a low-cost entry point to make this technology viable for a broad set of customers.”

AutoML platforms will thrive in 2022 and beyond.

From the cloud to the edge

The migration of AI from the cloud to the edge will accelerate in 2022.

Edge processing has a plethora of benefits over relying on cloud servers, including speed, reliability, privacy, and lower costs.

Versace commented:

“Increasingly, companies are realising that the way to build a truly efficient AI algorithm is to train it on their own unique data, which might vary substantially over time. To do that effectively, the intelligence needs to directly interface with the sensors producing the data. 

From there, AI should run at a compute edge, and interface with cloud infrastructure only occasionally for backups and/or increased functionality. No critical process – for example, in a manufacturing plant – should exclusively rely on cloud AI, exposing the manufacturing floor to connectivity/latency issues that could disrupt production.”

Expect more companies to realise the benefits of migrating from cloud to edge AI in 2022.

Doing more with less

Among the early concerns about the AI industry is that it would be dominated by “big tech” due to the gargantuan amount of data they’ve collected.

However, innovative methods are now allowing algorithms to be trained with less information. Training using smaller but more unique datasets for each deployment could prove to be more effective.

We predict more startups will prove the world doesn’t have to rely on big tech in 2022.

Human-powered AI

While XAI systems will provide results which can be understood by humans, the decisions made by AIs will be more useful because they’ll be human-powered.

Varun Ganapathi, PhD, Co-Founder and CTO at AKASA, said:

“For AI to truly be useful and effective, a human has to be present to help push the work to the finish line. Without guidance, AI can’t be expected to succeed and achieve optimal productivity. This is a trend that will only continue to increase.

Ultimately, people will have machines report to them. In this world, humans will be the managers of staff – both other humans and AIs – that will need to be taught and trained to be able to do the tasks they’re needed to do.

Just like people, AI needs to constantly be learning to improve performance.”

Greater human input also helps to build wider trust in AI. Involving humans helps to counter narratives about AI replacing jobs and concerns that decisions about people’s lives could be made without human qualities such as empathy and compassion.

Expect human input to lead to more useful AI decisions in 2022.

Avoiding captivity

The telecoms industry is currently pursuing an innovation called Open RAN, which aims to help operators avoid being locked to specific vendors and help smaller competitors disrupt the relative monopoly held by a small number of companies.

Enterprises are looking to avoid being held in captivity by any AI vendor.

Doug Gilbert, CIO and Chief Digital Officer at Sutherland, explains:

“Early adopters of rudimentary enterprise AI embedded in ERP / CRM platforms are starting to feel trapped. In 2022, we’ll see organisations take steps to avoid AI lock-in. And for good reason. AI is extraordinarily complex.

When embedded in, say, an ERP system, control, transparency, and innovation is handed over to the vendor not the enterprise. AI shouldn’t be treated as a product or feature: it’s a set of capabilities. AI is also evolving rapidly, with new AI capabilities and continuously improved methods of training algorithms.

To get the most powerful results from AI, more enterprises will move toward a model of combining different AI capabilities to solve unique problems or achieve an outcome. That means they’ll be looking to spin up more advanced and customizable options and either deprioritising AI features in their enterprise platforms or winding down those expensive but basic AI features altogether.”

In 2022 and beyond, we predict enterprises will favour AI solutions that avoid lock-in.

Chatbots get smart

Hands up if you’ve ever screamed (internally or externally) that you just want to speak to a human when dealing with a chatbot—I certainly have, more often than I’d care to admit.

“Today’s chatbots have proven beneficial but have very limited capabilities. Natural language processing will start to be overtaken by neural voice software that provides near real time natural language understanding (NLU),” commented Gilbert.

“With the ability to achieve comprehensive understanding of more complex sentence structures, even emotional states, break down conversations into meaningful content, quickly perform keyword detection and named entity recognition, NLU will dramatically improve the accuracy and the experience of conversational AI.”

In theory, this will have two results:

  • Augmenting human assistance in real time, such as suggesting responses based on behaviour or skill level.
  • Changing how a customer or client perceives they’re being treated, with NLU delivering a more natural and positive experience.

In 2022, chatbots will get much closer to offering a human-like experience.
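Gilbert’s mention of keyword detection can be illustrated with a toy sketch. The intent names and keyword lists below are invented for illustration, and real NLU goes far beyond this kind of pattern matching:

```python
# A toy sketch of keyword detection, the simplest of the chatbot
# capabilities mentioned above; real NLU systems go far beyond
# pattern matching like this.
import re

INTENT_KEYWORDS = {  # hypothetical intent labels and trigger phrases
    "refund": ["refund", "money back", "return"],
    "agent": ["human", "agent", "representative"],
}

def detect_intents(message: str) -> list[str]:
    """Return every intent whose keywords appear in the message."""
    found = []
    for intent, keywords in INTENT_KEYWORDS.items():
        pattern = "|".join(re.escape(k) for k in keywords)
        if re.search(rf"\b(?:{pattern})\b", message, re.IGNORECASE):
            found.append(intent)
    return found

print(detect_intents("I want my money back, let me talk to a human!"))
```

A keyword spotter like this cannot distinguish “I don’t want a refund” from “I want a refund”, which is exactly the gap that comprehensive natural language understanding is meant to close.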

It’s not about size, it’s about the quality

A robust AI system requires two things: a functioning model and underlying data to train that model. Collecting huge amounts of data is a waste of time if it is not of high quality and labelled correctly.
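As a concrete illustration of what quality means in practice, here is a minimal sketch of two automated checks, missing labels and contradictory labels, run over a tiny hypothetical CSV dataset (the column names and rows are invented for illustration):

```python
# A minimal sketch of automated data-quality checks on a labelled
# dataset; column names and rows are hypothetical placeholders.
import csv
import io

raw = io.StringIO(
    "text,label\n"
    "great product,positive\n"
    "terrible support,negative\n"
    "okay I guess,\n"           # missing label
    "great product,negative\n"  # contradicts the first row
)

rows = list(csv.DictReader(raw))

# Check 1: rows with no label at all.
missing = [r for r in rows if not r["label"]]

# Check 2: identical inputs carrying contradictory labels.
seen = {}
conflicts = []
for r in rows:
    if r["label"]:
        prev = seen.setdefault(r["text"], r["label"])
        if prev != r["label"]:
            conflicts.append(r["text"])

print(f"{len(missing)} rows missing labels, {len(conflicts)} label conflicts")
```

Checks like these are deliberately simple; the point of data-centric AI is that running them systematically, before training, often pays off more than tweaking the model afterwards.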

Gabriel Straub, Chief Data Scientist at Ocado Technology, said:

“Andrew Ng has been speaking about data-centric AI, about how improving the quality of your data can often lead to better outcomes than improving your algorithms (at least for the same amount of effort).

So, how do you do this in practice? How do you make sure that you manage the quality of data at least as carefully as the quantity of data you collect?

There are two things that will make a big difference: 1) making sure that data consumers are always at the heart of your data thinking and 2) ensuring that data governance is a function that enables you to unlock the value in your data, safely, rather than one that focuses on locking down data.”

Expect the AI industry to make the quality of data a priority in 2022.

(Photo by Michael Dziedzic on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.
