DeepMind - AI News
https://www.artificialintelligence-news.com/categories/ai-companies/deepmind/
Mon, 18 Mar 2024 11:51:17 +0000

Google launches Gemini 1.5 with ‘experimental’ 1M token context
https://www.artificialintelligence-news.com/2024/02/16/google-launches-gemini-1-5-experimental-1m-token-context/
Fri, 16 Feb 2024 13:42:49 +0000

The post Google launches Gemini 1.5 with ‘experimental’ 1M token context appeared first on AI News.

Google has unveiled its latest AI model, Gemini 1.5, which features what the company calls an “experimental” one million token context window. 

The new capability allows Gemini 1.5 to process extremely long inputs – up to one million tokens – to understand context and meaning. This dwarfs previous AI systems such as Claude 2.1 and GPT-4 Turbo, which max out at 200,000 and 128,000 tokens respectively:

“Gemini 1.5 Pro achieves near-perfect recall on long-context retrieval tasks across modalities, improves the state-of-the-art in long-document QA, long-video QA and long-context ASR, and matches or surpasses Gemini 1.0 Ultra’s state-of-the-art performance across a broad set of benchmarks,” said Google researchers in a technical paper (PDF).

The efficiency of Google’s latest model is attributed to its innovative Mixture-of-Experts (MoE) architecture.

“While a traditional Transformer functions as one large neural network, MoE models are divided into smaller ‘expert’ neural networks,” explained Demis Hassabis, CEO of Google DeepMind.

“Depending on the type of input given, MoE models learn to selectively activate only the most relevant expert pathways in its neural network. This specialisation massively enhances the model’s efficiency.”
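Hassabis’s description can be illustrated with a toy sketch. This is not Google’s implementation – the gating scheme, sizes, and function names below are illustrative assumptions – but it shows the core idea: a learned router scores the experts and only the top-k experts actually run for a given input.

```python
import numpy as np

def moe_forward(x, router_w, experts_w, k=2):
    """Toy Mixture-of-Experts layer: route input x to the top-k experts only.

    x:         (d,) input vector
    router_w:  (n_experts, d) router weights producing one score per expert
    experts_w: (n_experts, d, d) one weight matrix per expert network
    """
    scores = router_w @ x                      # one gating score per expert
    top_k = np.argsort(scores)[-k:]            # indices of the k best experts
    gates = np.exp(scores[top_k])
    gates /= gates.sum()                       # softmax over the selected experts
    # Only the chosen experts are evaluated -- the source of the efficiency gain.
    return sum(g * (experts_w[i] @ x) for g, i in zip(gates, top_k))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
out = moe_forward(rng.normal(size=d),
                  rng.normal(size=(n_experts, d)),
                  rng.normal(size=(n_experts, d, d)))
print(out.shape)  # (8,)
```

With k=2 of 4 experts active, only half of the expert weights are touched per input, which is why the same parameter count costs far less compute per token.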

To demonstrate the power of the 1M token context window, Google showed how Gemini 1.5 could ingest the entire 326,914-token Apollo 11 flight transcript and then accurately answer specific questions about it. It also summarised key details from a 684,000-token silent film when prompted.

Google is initially providing developers and enterprises free access to a limited Gemini 1.5 preview with the one million token context window. A 128,000-token general release for the public will come later, along with pricing details.

For now, the one million token capability remains experimental. But if it lives up to its early promise, Gemini 1.5 could set a new standard for AI’s ability to understand complex, real-world text.

Developers interested in testing Gemini 1.5 Pro can sign up in AI Studio. Google says that enterprise customers can reach out to their Vertex AI account team.

(Image Credit: Google)

See also: Amazon trains 980M parameter LLM with ’emergent abilities’

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

DeepMind framework offers breakthrough in LLMs’ reasoning
https://www.artificialintelligence-news.com/2024/02/08/deepmind-framework-offers-breakthrough-llm-reasoning/
Thu, 08 Feb 2024 11:28:05 +0000

The post DeepMind framework offers breakthrough in LLMs’ reasoning appeared first on AI News.

A breakthrough approach in enhancing the reasoning abilities of large language models (LLMs) has been unveiled by researchers from Google DeepMind and the University of Southern California.

Their new ‘SELF-DISCOVER’ prompting framework – published this week on arXiv and Hugging Face – represents a significant leap beyond existing techniques, potentially revolutionising the performance of leading models such as OpenAI’s GPT-4 and Google’s PaLM 2.

The framework delivers substantial gains on challenging reasoning tasks, with up to a 32% performance increase over traditional methods such as Chain-of-Thought (CoT) prompting. The approach revolves around LLMs autonomously uncovering task-intrinsic reasoning structures to navigate complex problems.

At its core, the framework empowers LLMs to self-discover and utilise various atomic reasoning modules – such as critical thinking and step-by-step analysis – to construct explicit reasoning structures.

By mimicking human problem-solving strategies, the framework operates in two stages:

  • Stage one involves composing a coherent reasoning structure intrinsic to the task, leveraging a set of atomic reasoning modules and task examples.
  • During decoding, LLMs then follow this self-discovered structure to arrive at the final solution.
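The two stages above can be sketched as plain prompt assembly. This is a hypothetical simplification: in the real framework an LLM performs the module selection and composition itself, whereas here a stand-in function does, and the module texts and prompt wording are invented for illustration.

```python
# Toy sketch of the SELF-DISCOVER two-stage flow. In the real framework an LLM
# selects and adapts the modules; here select_modules is a simple stand-in.
ATOMIC_MODULES = [
    "Use critical thinking to analyse the problem",
    "Break the problem into step-by-step sub-problems",
    "Propose an answer and verify it with a simple example",
]

def compose_structure(task_description, select_modules):
    """Stage 1: pick relevant atomic modules and compose a reasoning structure."""
    chosen = select_modules(task_description, ATOMIC_MODULES)
    steps = "\n".join(f"{i}. {m}" for i, m in enumerate(chosen, 1))
    return f"Reasoning structure for: {task_description}\n{steps}"

def solve_with_structure(structure, problem):
    """Stage 2 (decoding): the model follows the self-discovered structure."""
    return f"{structure}\n\nNow solve, following each step:\n{problem}"

# Stand-in selector: keep modules mentioning 'step' for a multi-step task.
structure = compose_structure(
    "multi-step arithmetic word problem",
    lambda task, mods: [m for m in mods if "step" in m],
)
prompt = solve_with_structure(structure, "A train travels 60 km in 1.5 hours; what is its speed?")
print(prompt)
```

The key design point is that the structure is composed once per task, not per instance, so the discovery cost is amortised across every problem of that type.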

In extensive testing across various reasoning tasks – including Big-Bench Hard, Thinking for Doing, and Math – the self-discover approach consistently outperformed traditional methods. Notably, it achieved an accuracy of 81%, 85%, and 73% across the three tasks with GPT-4, surpassing chain-of-thought and plan-and-solve techniques.

However, the implications of this research extend far beyond mere performance gains.

By equipping LLMs with enhanced reasoning capabilities, the framework paves the way for tackling more challenging problems and brings AI closer to achieving general intelligence. Transferability studies conducted by the researchers further highlight the universal applicability of the composed reasoning structures, aligning with human reasoning patterns.

As the landscape evolves, breakthroughs like the SELF-DISCOVER prompting framework represent crucial milestones in advancing the capabilities of language models and offering a glimpse into the future of AI.

(Photo by Victor on Unsplash)

See also: The UK is outpacing the US for AI hiring


DeepMind AlphaGeometry solves complex geometry problems
https://www.artificialintelligence-news.com/2024/01/18/deepmind-alphageometry-solves-complex-geometry-problems/
Thu, 18 Jan 2024 14:13:17 +0000

The post DeepMind AlphaGeometry solves complex geometry problems appeared first on AI News.

DeepMind, the UK-based AI lab owned by Google’s parent company Alphabet, has developed an AI system called AlphaGeometry that can solve complex geometry problems close to human Olympiad gold medalists. 

In a new paper in Nature, DeepMind revealed that AlphaGeometry was able to solve 25 out of 30 benchmark geometry problems from past International Mathematical Olympiad (IMO) competitions within the standard time limits. This nearly matches the average score of 26 problems solved by human gold medalists on the same tests.

The AI system combines a neural language model with a rule-bound deduction engine, providing a synergy that enables the system to find solutions to complex geometry theorems.
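The “rule-bound deduction engine” half of that pairing can be illustrated with a toy forward-chaining prover. The rules and fact encoding here are invented for illustration – AlphaGeometry’s engine is far richer, and its language model steps in to propose auxiliary constructions whenever the symbolic engine gets stuck, a part not shown here.

```python
# Toy forward-chaining deduction engine: apply rules until no new facts appear.
def deduce(facts, rules):
    """facts: set of tuples; rules: list of (premises, conclusion) templates
    over variable names. Returns the closure of facts under the rules."""
    facts = set(facts)
    while True:
        new_facts = set()
        for premises, conclusion in rules:
            for binding in _match_all(premises, facts, {}):
                new = tuple(binding.get(t, t) for t in conclusion)
                if new not in facts:
                    new_facts.add(new)
        if not new_facts:
            return facts
        facts |= new_facts

def _match_all(premises, facts, binding):
    """Yield every variable binding that satisfies all premises."""
    if not premises:
        yield binding
        return
    head, rest = premises[0], premises[1:]
    for fact in facts:
        if len(fact) != len(head) or fact[0] != head[0]:
            continue
        b = dict(binding)
        if all(b.setdefault(t, v) == v for t, v in zip(head[1:], fact[1:])):
            yield from _match_all(rest, facts, b)

# Transitivity of parallelism: par(A,B) and par(B,C) imply par(A,C).
rules = [((("par", "x", "y"), ("par", "y", "z")), ("par", "x", "z"))]
facts = deduce({("par", "l1", "l2"), ("par", "l2", "l3")}, rules)
print(("par", "l1", "l3") in facts)  # True
```

Running rules to a fixed point like this is fast but can only restate consequences of what is already on the diagram; the neural model’s job is to add the new points and lines that unlock further deductions.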

AlphaGeometry took a revolutionary approach to synthetic data generation by creating one billion random diagrams of geometric objects and deriving relationships between points and lines in each diagram. This process – termed “symbolic deduction and traceback” – resulted in a final training dataset of 100 million unique examples, providing a rich source for training the AI system.

According to DeepMind, AlphaGeometry represents a breakthrough in mathematical reasoning for AI, bringing it closer to the level of human mathematicians. Developing these skills is seen as essential for advancing artificial general intelligence.

Evan Chen, a maths coach and former Olympiad gold medalist, evaluated a sample of AlphaGeometry’s solutions. He said its output was not just correct but also clean: human-readable proofs using standard geometry techniques, unlike the messy numerical solutions often produced when AI systems brute-force maths problems.

While AlphaGeometry only handles the geometry portions of Olympiad tests so far, its skills alone would have been enough to earn a bronze medal on some past exams. DeepMind hopes to keep improving its maths reasoning abilities to the point where it could pass the entire multi-subject Olympiad.

Advancing AI’s understanding of mathematics and logic is a key goal for DeepMind and Google. The researchers believe mastering Olympiad problems brings them one step closer towards more generalised artificial intelligence that can automatically discover new knowledge.

(Photo by Dustin Humes on Unsplash)

See also: Stability AI releases Stable Code 3B for enhanced coding assistance


Google creates new AI division to challenge OpenAI
https://www.artificialintelligence-news.com/2023/04/21/google-creates-new-ai-division-to-challenge-openai/
Fri, 21 Apr 2023 12:08:13 +0000

The post Google creates new AI division to challenge OpenAI appeared first on AI News.

Google has consolidated its AI research labs, Google Brain and DeepMind, into a new unit named Google DeepMind.

The move is seen as a strategic way for Google to maintain its edge in the competitive AI industry and compete with OpenAI. By combining the talent and resources of both entities, Google DeepMind aims to accelerate AI advancements while maintaining ethical standards.

The new unit will be responsible for spearheading groundbreaking AI products and advancements, and it will work closely with other Google product areas to deliver AI research and products.

Google Research, the former parent division of Google Brain, will remain an independent division focused on “fundamental advances in computer science across areas such as algorithms and theory, privacy and security, quantum computing, health, climate and sustainability, and responsible AI.”

Demis Hassabis, CEO of DeepMind, believes that the consolidation of the two AI research labs will bring together world-class talent in AI with the computing power, infrastructure, and resources to create the next generation of AI breakthroughs and products boldly and responsibly.

Hassabis claims that the research accomplishments of Google Brain and DeepMind have formed the foundation of the current AI industry—ranging from deep reinforcement learning to transformers. The newly consolidated unit will build upon this foundation to create the next generation of groundbreaking AI products and advancements that will shape the world.

Over the years, Google and DeepMind have jointly developed several groundbreaking innovations. The duo’s achievements include AlphaGo – which famously beat professional human Go players – and AlphaFold, an exceptional tool that accurately predicts protein structures.

Other noteworthy achievements include word2vec, WaveNet, sequence-to-sequence models, distillation, deep reinforcement learning, and distributed systems and software frameworks like TensorFlow and JAX. These cutting-edge tools have proven highly effective for expressing, training, and deploying large-scale ML models.

Google’s acquisition of DeepMind for $500 million in 2014 paved the way for a fruitful collaboration between the two entities. With the consolidation of Google Brain and DeepMind into Google DeepMind, Google hopes to further advance its AI research and development capabilities.

Google’s chief scientist, Jeff Dean, will take on an elevated role as chief scientist for both Google Research and Google DeepMind. He has been tasked with setting the future direction of AI research at the company, as well as heading up the most critical and strategic technical projects related to AI, including a series of powerful multimodal AI models.

The creation of Google DeepMind underscores Google and parent company Alphabet’s commitment to furthering the pioneering research of both DeepMind and Google Brain. With the race to dominate the AI space intensifying, the new unit is positioned to accelerate AI advancements and deliver the next generation of groundbreaking products.

(Image Credit: Google DeepMind)


DeepMind AlphaCode rivals the abilities of human programmers
https://www.artificialintelligence-news.com/2022/02/03/deepmind-alphacode-rivals-abilities-human-programmers/
Thu, 03 Feb 2022 17:07:20 +0000

The post DeepMind AlphaCode rivals the abilities of human programmers appeared first on AI News.

DeepMind’s AI coder AlphaCode has proven capable of rivalling the abilities of a standard human programmer.

The company selected 10 contests that were hosted on Codeforces – a programming competition platform with thousands of participants – to evaluate the performance of AlphaCode.

Following the simulations, AlphaCode ranked in the top 54 percent of competitors. That means it could not yet beat leading human programmers, but it performed on par with the average competitor.

Mike Mirzayanov, Founder of Codeforces, said:

“I can safely say the results of AlphaCode exceeded my expectations. I was sceptical because even in simple competitive problems it is often required not only to implement the algorithm, but also – and this is the most difficult part – to invent it.

AlphaCode managed to perform at the level of a promising new competitor. I can’t wait to see what lies ahead!”

AlphaCode uses transformer-based language models to generate code “at an unprecedented scale”. A preprint paper detailing AlphaCode is available here (PDF).
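Per the preprint, that large-scale generation is paired with filtering: AlphaCode samples a huge number of candidate programs and keeps only those that pass the example tests given in the problem statement. A toy sketch of that filter step, with a hand-written stand-in for the model’s samples:

```python
# Toy version of AlphaCode's sample-then-filter loop: draw many candidate
# programs, keep only those that pass the problem's example tests. The
# "sampled" candidates here are hand-written stand-ins for model output.

def filter_candidates(candidates, example_tests):
    """candidates: source strings defining solve(x); example_tests: (input, expected) pairs."""
    survivors = []
    for src in candidates:
        namespace = {}
        try:
            exec(src, namespace)                     # compile the candidate
            solve = namespace["solve"]
            if all(solve(i) == o for i, o in example_tests):
                survivors.append(src)
        except Exception:
            pass                                     # crashing candidates are discarded
    return survivors

# Three "sampled" candidates for the task: return the doubled input.
candidates = [
    "def solve(x): return x + x",        # correct
    "def solve(x): return x * x",        # wrong (only passes x=2)
    "def solve(x): return undefined_y",  # crashes
]
good = filter_candidates(candidates, [(2, 4), (3, 6)])
print(len(good))  # 1
```

Filtering is what makes massive sampling viable: most candidates are wrong, but the example tests cheaply cut the pool down to a handful worth submitting.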

Petr Mitrichev, Software Engineer at Google, commented:

“Solving competitive programming problems is a really hard thing to do, requiring both good coding skills and problem-solving creativity in humans.

I was very impressed that AlphaCode could make progress in this area and excited to see how the model uses its statement understanding to produce code and guide its random exploration to create solutions.”

DeepMind has released its dataset of competitive programming problems and solutions on GitHub to help others build on their results.

Ian Funnell, Manager of Developer Relations at Matillion, said:

“Advancements like AlphaCode are welcomed and represent huge progress in designing algorithms more effectively. AI coding in general empowers developers to pursue innovation and creativity in setting the parameters and goals, leaving the AI to actually execute them.

Ultimately, this is a catalyst for innovation—helping humans rather than replacing them. Developers are extremely capable individuals, and businesses will continue to count on them to reap valuable insights from their data to differentiate and compete.”

DeepMind has set up an interactive site to view some of AlphaCode’s solutions and dive into the model at alphacode.deepmind.com.

(Image Credit: DeepMind)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

