AGI - AI News

Elon Musk sues OpenAI over alleged breach of nonprofit agreement

Elon Musk has filed a lawsuit against OpenAI and its CEO, Sam Altman, citing a violation of their nonprofit agreement.

The legal battle, unfolding in the Superior Court of California for the County of San Francisco, revolves around OpenAI’s departure from its foundational mission of advancing open-source artificial general intelligence (AGI) for the betterment of humanity.

Musk was a co-founder and early backer of OpenAI. According to Musk, Altman and Greg Brockman (another co-founder and current president of OpenAI) convinced him to bankroll the startup in 2015 on promises that it would remain a nonprofit.

In his legal challenge, Musk accuses OpenAI of straying from its principles through a collaboration with Microsoft—alleging that the partnership prioritises proprietary technology over the original ethos of open-source advancement.

Musk’s grievances include claims of breach of contract, breach of fiduciary duty, and unfair business practices. He calls on OpenAI to realign with its nonprofit objectives and seeks an injunction to halt the commercial exploitation of AGI technology.

At the heart of the dispute is OpenAI’s launch of GPT-4 in March 2023. Musk contends that, unlike its predecessors, GPT-4 represents a shift towards closed-source models—a move he believes favours Microsoft’s financial interests at the expense of OpenAI’s altruistic mission.

Founded in 2015 as a nonprofit AI research lab, OpenAI began transitioning to a commercial, “capped-profit” structure in 2019. The company has since adopted a profit-driven approach, with revenues reportedly surpassing $2 billion annually.

Musk, who has long voiced concerns about the risks posed by AI, has called for robust government regulation and responsible AI development. He questions the technical expertise of OpenAI’s current board and highlights the removal and subsequent reinstatement of Altman in November 2023 as evidence of a profit-oriented agenda aligned with Microsoft’s interests.

See also: Mistral AI unveils LLM rivalling major players

Amazon trains 980M parameter LLM with ‘emergent abilities’

Researchers at Amazon have trained a new large language model (LLM) for text-to-speech that they claim exhibits “emergent” abilities. 

The 980 million parameter model, called BASE TTS, is the largest text-to-speech model yet created. The researchers trained models of various sizes on up to 100,000 hours of public domain speech data to see if they would observe the same performance leaps that occur in natural language processing models once they grow past a certain scale. 

They found that their medium-sized 400 million parameter model – trained on 10,000 hours of audio – showed a marked improvement in versatility and robustness on tricky test sentences.

The test sentences contained complex lexical, syntactic, and paralinguistic features like compound nouns, emotions, foreign words, and punctuation that normally trip up text-to-speech systems. While BASE TTS did not handle them perfectly, it made significantly fewer errors in stress, intonation, and pronunciation than existing models.

“These sentences are designed to contain challenging tasks—none of which BASE TTS is explicitly trained to perform,” explained the researchers. 

The largest 980 million parameter version of the model – trained on 100,000 hours of audio – did not demonstrate further abilities beyond the 400 million parameter version.

Although the work remains experimental, the creation of BASE TTS demonstrates that these models can reach new versatility thresholds as they scale—an encouraging sign for conversational AI. The researchers plan further work to identify the optimal model size for emergent abilities.

The model is also designed to be lightweight and streamable, packaging emotional and prosodic data separately. This could allow the natural-sounding spoken audio to be transmitted across low-bandwidth connections.
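
As a rough illustration of that idea, the sketch below (in Python) shows how compact speech codes and lightweight prosody metadata might be packaged into small, separately decodable chunks for streaming. The packet fields, sizes, and the decoder hook are assumptions made for illustration and are not drawn from the BASE TTS paper.

# Illustrative only: a hypothetical packet layout for streaming TTS output in
# which compact acoustic codes and prosody/emotion metadata travel separately.
import json
from dataclasses import dataclass, asdict

@dataclass
class SpeechChunk:
    chunk_id: int
    speech_codes: list   # compact acoustic tokens for this slice of audio (assumed)
    prosody: dict        # e.g. {"emotion": "calm", "pitch": 0.9} (assumed fields)

def stream_chunks(codes, prosody, chunk_size=64):
    """Yield small serialised packets that a client can decode as they arrive."""
    for i in range(0, len(codes), chunk_size):
        chunk = SpeechChunk(i // chunk_size, codes[i:i + chunk_size], prosody)
        yield json.dumps(asdict(chunk)).encode("utf-8")

# Usage sketch: a receiver decodes each packet incrementally over a slow link.
for packet in stream_chunks(list(range(200)), {"emotion": "calm", "pitch": 0.9}):
    chunk = json.loads(packet)
    # hypothetical_decoder(chunk["speech_codes"], chunk["prosody"])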

You can find the full BASE TTS paper on arXiv here.

(Photo by Nik on Unsplash)

See also: OpenAI rolls out ChatGPT memory to select users

OpenAI introduces team dedicated to stopping rogue AI

The potential dangers of highly intelligent AI systems have been a topic of concern for experts in the field.

Recently, Geoffrey Hinton – the so-called “Godfather of AI” – expressed his worries about the possibility of superintelligent AI surpassing human capabilities and causing catastrophic consequences for humanity.

Similarly, Sam Altman, CEO of OpenAI, the company behind the popular ChatGPT chatbot, admitted to being fearful of the potential effects of advanced AI on society.

In response to these concerns, OpenAI has announced the establishment of a new unit called Superalignment.

The primary goal of this initiative is to ensure that superintelligent AI does not lead to chaos or even human extinction. OpenAI acknowledges the immense power that superintelligence can possess and the potential dangers it presents to humanity.

While the development of superintelligent AI may still be some years away, OpenAI believes it could be a reality by 2030. Currently, there is no established system for controlling and guiding a potentially superintelligent AI, making the need for proactive measures all the more crucial.

Superalignment aims to build a team of top machine learning researchers and engineers who will work on developing a “roughly human-level automated alignment researcher.” This researcher will be responsible for conducting safety checks on superintelligent AI systems. 

OpenAI acknowledges that this is an ambitious goal and that success is not guaranteed. However, the company remains optimistic that with a focused and concerted effort, the problem of superintelligence alignment can be solved.

The rise of AI tools like OpenAI’s ChatGPT and Google’s Bard has already brought significant changes to the workplace and society. Experts predict that these changes will only intensify in the near future, even before the advent of superintelligent AI.

Recognising the transformative potential of AI, governments worldwide are racing to establish regulations to ensure its safe and responsible deployment. However, the lack of a unified international approach poses challenges. Varying regulations across countries could lead to different outcomes and make achieving Superalignment’s goal even more difficult.

By proactively working towards aligning AI systems with human values and developing necessary governance structures, OpenAI aims to mitigate the dangers that could arise from the immense power of superintelligence.

While the task at hand is undoubtedly complex, OpenAI’s commitment to addressing these challenges and involving top researchers in the field marks a serious effort towards responsible and beneficial AI development.

(Photo by Zac Wolff on Unsplash)

See also: OpenAI’s first global office will be in London

OpenAI’s first global office will be in London

OpenAI has announced that it will establish its first international office in London.

The strategic move demonstrates OpenAI’s commitment to expanding its operations, embracing diverse perspectives, and accelerating its mission of ensuring that artificial general intelligence (AGI) benefits all of humanity.

London, renowned for its exceptional talent pool, was chosen as the ideal location for OpenAI’s international office. The city’s vibrant technology ecosystem, welcoming regulatory environment, and thriving community of innovators make it the perfect hub for OpenAI to advance its cutting-edge research and engineering capabilities.

The London teams will work closely with local communities and policymakers, fostering collaboration on OpenAI’s mission to create and promote safe AGI.

“We are thrilled to extend our research and development footprint into London,” said Diane Yoon, OpenAI’s VP of People.

“We are eager to build dynamic teams in Research, Engineering, and Go-to-Market functions, as well as other areas, to reinforce our efforts in creating and promoting safe AGI.”

OpenAI has been at the forefront of AI research, creating breakthroughs in natural language processing, reinforcement learning, and other areas. With the establishment of its international office, OpenAI intends to tap into the diverse expertise and perspectives available in London, further bolstering its capabilities and amplifying its impact.

Sam Altman, CEO of OpenAI, also shared his excitement about the future prospects of the London office.

“We see this expansion as an opportunity to attract world-class talent and drive innovation in AGI development and policy,” Altman stated.

“We’re excited about what the future holds and to see the contributions our London office will make towards building and deploying safe AI.”

By establishing a physical presence in London, OpenAI can forge closer partnerships with local institutions, universities, and industry experts, fostering a collaborative environment that propels AI innovation forward.

(Photo by Andrew Neel on Unsplash)

Mozilla.ai picks up OpenAI’s founding mission

Mozilla’s new startup will build “trustworthy” AI that benefits humanity. If that sounds familiar, it was OpenAI’s founding mission.

The startup, Mozilla.ai, aims to create an independent and open-source AI ecosystem that addresses society’s most pressing concerns about the rapidly advancing technology.

Mark Surman, President of the Mozilla Foundation, wrote in a blog post:

“This new wave of AI has generated excitement, but also significant apprehension. We aren’t just wondering ‘What’s possible?’ and ‘How can people benefit?’ We’re also wondering ‘What could go wrong?’ and ‘How can we address it?’ Two decades of social media, smartphones and their consequences have made us leery.

Mozilla has been asking these questions about AI for a while now — sketching out a vision for trustworthy AI, mobilizing our community to document what’s broken and investing in startups that are trying to create more responsible AI.”

The rush to get AI solutions to market has been likened to a new “arms race,” in reference to the dangerous period when the US, Soviet Union, and their respective allies raced to achieve nuclear supremacy.

OpenAI was founded as a nonprofit with the stated mission of ensuring that its research makes positive long-term contributions to humanity. Many believe the company has strayed from this mission.

Just today, a ChatGPT glitch leaked users’ conversation histories. OpenAI’s chief executive tweeted that there would be a “technical postmortem” soon.

Mike Kiser, Director of Strategy and Standards at SailPoint, commented:

“Sharing information with ChatGPT is not like talking to another adult, it is much more like sharing sensitive details with an overly-chatty three-year-old. If you don’t want your organisation’s secrets used to train the platform and then be reused by ChatGPT, discretion is recommended.

In addition, ChatGPT generates content, but it is difficult to prove its veracity. Even when references are used or cited, ChatGPT is learning from these links and assuming that information is true. It then uses well-written language and formatting to give those “facts” more weight. This trust in the written word may have implications for disinformation or other phishing-related attacks.”

Microsoft has invested tens of billions in OpenAI and has rolled out integrations with its popular products at a rapid pace. That partnership appears to have led OpenAI to take more risks, and the company is now firmly a for-profit (with investor returns capped at 100 times any investment).
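
As a back-of-the-envelope illustration of what that cap means in practice, here is a minimal sketch; the dollar figures are hypothetical, and only the 100x multiple comes from the reporting above.

# Hypothetical illustration of a 100x profit cap: an investor keeps at most
# 100 times the original investment, with any excess reverting to the nonprofit.
def capped_return(investment, gross_return, cap_multiple=100):
    """Portion of gross_return an investor keeps under the profit cap."""
    return min(gross_return, investment * cap_multiple)

print(capped_return(10_000_000, 2_000_000_000))  # 1000000000: a $10M stake keeps at most $1B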

Elon Musk was one of OpenAI’s founders but resigned from its board in 2018, and he has publicly questioned OpenAI’s transformation.

Hopefully, Mozilla.ai won’t forget its founding principles.

“The vision for Mozilla.ai is to make it easy to develop trustworthy AI products. We will build things and hire/collaborate with people that share our vision: AI that has agency, accountability, transparency and openness at its core,” added Mozilla Foundation President Mark Surman.

“Mozilla.ai will be a space outside big tech and academia for like-minded founders, developers, scientists, product managers and builders to gather. We believe that this group of people, working collectively, can turn the tide to create an independent, decentralized and trustworthy AI ecosystem — a real counterweight to the status quo.”

Mozilla.ai will be led by Managing Director Moez Draief.

(Photo by Astrid Schaffner on Unsplash)

Gary Marcus criticises Elon Musk’s AGI prediction

Gary Marcus has criticised a prediction by Elon Musk that AGI (Artificial General Intelligence) will be achieved by 2029 and challenged him to a $100,000 bet.

Marcus founded Robust.AI and Geometric Intelligence (acquired by Uber), is Professor Emeritus of Psychology and Neural Science at NYU, and co-authored Rebooting AI. His views on AGI are worth listening to.

AGI is the kind of artificial intelligence depicted in films like 2001: A Space Odyssey (‘HAL’) and Iron Man (‘J.A.R.V.I.S.’). Unlike current AIs, which are trained for a specific task, AGIs are more like the human brain and can learn to perform a wide range of tasks.

Most experts believe AGI will take decades to achieve, while some even think it will never be possible. In a survey of leading experts in the field, the average estimate was that there is a 50 percent chance AGI will be developed by 2099.

Elon Musk is far more optimistic, expecting AGI to arrive by 2029.

Musk’s tweet drew a response from Marcus, who challenged the SpaceX and Tesla founder to a $100,000 bet that he’s wrong about the timing of AGI.

AI expert Melanie Mitchell of the Santa Fe Institute suggested the bet be placed on longbets.org. Marcus says he’s up for the bet on the platform – where the loser donates the money to a philanthropic effort – but he’s yet to receive a response from Musk.

In a post on his Substack, Marcus explained why he’s calling Musk out on his prediction.

“Your track record on betting on precise timelines for things is, well, spotty,” wrote Marcus. “You said, for instance in 2015, that (truly) self-driving cars were two years away; you’ve pretty much said the same thing every year since. It still hasn’t happened.”

Marcus argues that pronouncements like those Musk is famous for can be dangerous, drawing attention away from the kinds of questions that first need answering.

“People are very excited about the big data and what it’s giving them right now, but I’m not sure it’s taking us closer to the deeper questions in artificial intelligence, like how we understand language or how we reason about the world,” said Marcus in 2016 in an Edge.org interview.

Marcus points to an incident in April – in which a Tesla on Autopilot crashed into a $3 million private jet at a mostly empty airport – as an example of why the focus needs to be on solving serious issues with today’s AI systems before rushing towards AGI:

“It’s easy to convince yourself that AI problems are much easier than they actually are, because of the long tail problem,” argues Marcus.

“For everyday stuff, we get tons and tons of data that current techniques readily handle, leading to a misleading impression; for rare events, we get very little data, and current techniques struggle there.”
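
To make that long-tail argument concrete, here is a small illustrative sketch (not from Marcus or the article) that samples synthetic driving-event types from a Zipf-like distribution: a handful of common situations dominate the data, while a long tail of rare situations is barely represented.

# Illustration of the long-tail problem with synthetic data: common event types
# dominate the dataset, while rare event types appear only a handful of times.
import collections
import numpy as np

rng = np.random.default_rng(0)
events = rng.zipf(a=2.0, size=100_000)   # event type IDs; low IDs = common events
counts = collections.Counter(events.tolist())

common_share = sum(counts[i] for i in range(1, 11)) / len(events)
rare_types = sum(1 for c in counts.values() if c < 5)

print(f"The 10 most common event types cover {common_share:.1%} of observations")
print(f"{rare_types} distinct event types were seen fewer than 5 times")

A model trained on such data can look impressive on the common cases while having almost nothing to learn from for the outliers – which, Marcus argues, is exactly the gap separating today’s systems from AGI.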

Marcus says he can guarantee that Musk won’t be shipping fully autonomous ‘Level 5’ cars this year or next, despite what Musk said at TED 2022. Unexpected outlier circumstances, like the appearance of a private jet in the path of a car, will continue to pose a problem for AI for the foreseeable future.

“Seven years is a long time, but the field is going to need to invest in other ideas if we are going to get to AGI before the end of the decade,” explains Marcus. “Or else outliers alone might be enough to keep us from getting there.”

Marcus believes outliers aren’t an unsolvable problem, but there’s currently no known solution. Making any predictions about AGI being achievable by the end of the decade before that issue is anywhere near solved is premature.

Along the same lines, Marcus points out that deep learning is “pretty decent” at recognising objects but nowhere near as adept at human brain-like activities such as planning, reading, or language comprehension.

Marcus also shares a pie chart he has used to illustrate the kinds of capabilities an AGI would need to achieve.

Marcus points out that he’s been using the chart for around five years and the situation has barely changed: we “still don’t have anything like stable or trustworthy solutions for common sense, reasoning, language, or analogy.”

Tesla is currently building a robot that it claims will be able to perform mundane tasks around the home. Marcus is sceptical, given the problems that Tesla is having with its cars on the roads.

“The AGI that you would need for a general-purpose domestic robot (where every home is different, and each poses its own safety risks) is way beyond what you would need for a car that drives on roads that are more or less engineered the same way from one town to the next,” he reasons.

Because AGI is still a somewhat vague term that’s open to interpretation, Marcus has set out five specific predictions of his own – tasks he expects AI will still be unable to do by 2029, the year Musk claims AGI will be achieved.

Well then, Musk—do you accept Marcus’ challenge? Can’t say I would, even if I had anywhere near Musk’s disposable income.

(Photo by Kenny Eliason on Unsplash)
