natural language Archives - AI News
https://www.artificialintelligence-news.com/tag/natural-language/

NLEPs: Bridging the gap between LLMs and symbolic reasoning
https://www.artificialintelligence-news.com/2024/06/14/nleps-bridging-the-gap-between-llms-symbolic-reasoning/
Fri, 14 Jun 2024

Researchers have introduced a novel approach called natural language embedded programs (NLEPs) to improve the numerical and symbolic reasoning capabilities of large language models (LLMs). The technique involves prompting LLMs to generate and execute Python programs to solve user queries, then output solutions in natural language.

While LLMs like ChatGPT have demonstrated impressive performance on various tasks, they often struggle with problems requiring numerical or symbolic reasoning.

NLEPs follow a four-step problem-solving template: calling necessary packages, importing natural language representations of required knowledge, implementing a solution-calculating function, and outputting results as natural language with optional data visualisation.

This approach offers several advantages, including improved accuracy, transparency, and efficiency. Users can investigate generated programs and fix errors directly, avoiding the need to rerun entire models for troubleshooting. Additionally, a single NLEP can be reused for multiple tasks by replacing certain variables.
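To make the four-step template concrete, here is a hand-written sketch of what an NLEP-style generated program might look like for a toy query. The example is illustrative only and is not taken from the paper:

```python
# Hypothetical NLEP-style program for the query below (illustrative sketch).

# Step 1: call the necessary packages
import calendar

# Step 2: import natural-language representations of the required knowledge
QUERY = "Which of the following years are leap years: 1900, 2000, 2023, 2024?"
CANDIDATE_YEARS = [1900, 2000, 2023, 2024]

# Step 3: implement a function that calculates the solution
def solve(years):
    return [year for year in years if calendar.isleap(year)]

# Step 4: output the result as natural language
leap_years = solve(CANDIDATE_YEARS)
print("Of the years listed, the leap years are: "
      + ", ".join(str(y) for y in leap_years) + ".")
```

Because the reasoning lives in the program rather than in the model's weights, swapping `CANDIDATE_YEARS` for a new list reuses the same NLEP for a different query, and a mistake can be fixed by editing the code directly instead of rerunning the model.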

The researchers found that NLEPs enabled GPT-4 to achieve over 90% accuracy on various symbolic reasoning tasks, outperforming task-specific prompting methods by 30%.

Beyond accuracy improvements, NLEPs could enhance data privacy by running programs locally, eliminating the need to send sensitive user data to external companies for processing. The technique may also boost the performance of smaller language models without costly retraining.

However, NLEPs rely on a model’s program generation capability and may not work as well with smaller models trained on limited datasets. Future research will explore methods to make smaller LLMs generate more effective NLEPs and investigate the impact of prompt variations on reasoning robustness.

The research, supported in part by the Center for Perceptual and Interactive Intelligence of Hong Kong, will be presented at the Annual Conference of the North American Chapter of the Association for Computational Linguistics later this month.

(Photo by Alex Azabache)

See also: Apple is reportedly getting free ChatGPT access

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Baidu to launch powerful ChatGPT rival
https://www.artificialintelligence-news.com/2023/01/30/baidu-to-launch-powerful-chatgpt-rival/
Mon, 30 Jan 2023

Chinese web giant Baidu is preparing to launch a powerful ChatGPT rival in March.

Baidu is often called the “Google of China” because it offers similar services, including search, maps, email, ads, cloud storage, and more. Baidu, like Google, also invests heavily in AI and machine learning.

Earlier this month, AI News reported that Google was changing its AI review processes to speed up the release of new solutions. One of the first products to be released under Google’s new process is set to be a ChatGPT rival, due to be announced during the company’s I/O developer conference in May.

However, Baidu looks set to beat Google by a couple of months.

Bloomberg reports that Baidu will reveal its own AI-powered chatbot in March. The currently unnamed tool will be integrated into the company’s search product.

Powering the Baidu ChatGPT competitor is ‘ERNIE’ (Enhanced Language RepresentatioN with Informative Entities), a powerful AI model with 10 billion parameters.

Researchers have found that deep-learning models trained on text alone – like OpenAI’s GPT-3 or Google’s T5 – perform well for numerous problems, but can fall short on some natural language understanding (NLU) tasks when the knowledge is not present in the input text.

The first version of ERNIE was introduced and open-sourced in 2019 by researchers at Tsinghua University to demonstrate the NLU capabilities of a model that combines both text and knowledge graph data.

Later that year, Baidu released ERNIE 2.0, which became the first model to score higher than 90 on the GLUE benchmark for evaluating NLU systems.

In 2021, Baidu’s researchers posted a paper on ERNIE 3.0 in which they claim the model exceeds human performance on the SuperGLUE natural language benchmark. ERNIE 3.0 set a new top score on SuperGLUE and displaced efforts from Google and Microsoft.

Most of the world’s attention until now has been on language model advancements from the likes of OpenAI, Google, Facebook, and Microsoft. However, Baidu will likely get its time in the spotlight in just a couple of months.

(Image Credit: N509FZ under CC BY-SA 4.0 license)

Esteemed consortium launch AI natural language processing benchmark
https://www.artificialintelligence-news.com/2019/08/15/consortium-benchmark-ai-natural-language-processing/
Thu, 15 Aug 2019

A research consortium featuring some of the greatest minds in AI are launching a benchmark to measure natural language processing (NLP) abilities.

The consortium includes Google DeepMind, Facebook AI, New York University, and the University of Washington. Each of the consortium’s members believes a more comprehensive benchmark is needed for NLP than current solutions offer.

The result is a benchmarking platform called SuperGLUE which replaces an older platform called GLUE with a “much harder benchmark with comprehensive human baselines,” according to Facebook AI. 

SuperGLUE puts NLP abilities to the test where previous benchmarks were proving too simple for the latest systems.

“Within one year of release, several NLP models have already surpassed human baseline performance on the GLUE benchmark. Current models have advanced a surprisingly effective recipe that combines language model pretraining on huge text data sets with simple multitask and transfer learning techniques,” Facebook said.

In 2018, Google released BERT (Bidirectional Encoder Representations from Transformers) which Facebook calls one of the biggest breakthroughs in NLP. Facebook took Google’s open-source work and identified changes to improve its effectiveness which led to RoBERTa (Robustly Optimized BERT Pretraining Approach).

RoBERTa basically “smashed it,” as the kids would say, on commonly-used benchmarks; it is among the models that surpassed human baseline performance on GLUE within a year of that benchmark’s release.

For the SuperGLUE benchmark, the consortium decided on tasks which meet four criteria:

  1. Have varied formats.
  2. Use more nuanced questions.
  3. Are not yet solved by state-of-the-art methods.
  4. Can be easily solved by people.

The new benchmark includes eight diverse and challenging tasks, including a Choice of Plausible Alternatives (COPA) causal reasoning task. This task provides the system with the premise of a sentence, and it must determine either the cause or the effect of the premise from two possible choices. Humans achieve 100 percent accuracy on COPA, while BERT achieves just 74 percent.
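For illustration, a COPA item pairs a premise with two candidate causes or effects, and a system is scored on whether it picks the labelled choice. Below is a minimal sketch of the data format and accuracy metric; the item is a well-known COPA example, but the scoring code is a hand-written sketch, not the official evaluation script:

```python
# A COPA-style item: given a premise, pick the more plausible cause (or effect).
copa_item = {
    "premise": "The man broke his toe.",
    "question": "cause",
    "choice1": "He got a hole in his sock.",
    "choice2": "He dropped a hammer on his foot.",
    "label": 1,  # 0-based index of the correct choice: choice2
}

def copa_accuracy(predictions, items):
    """Fraction of items where the predicted choice index matches the label."""
    correct = sum(pred == item["label"] for pred, item in zip(predictions, items))
    return correct / len(items)

# A system that always picks the first choice gets this item wrong.
print(copa_accuracy([0], [copa_item]))  # 0.0
```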

Across SuperGLUE’s tasks, RoBERTa is currently the leading NLP system and isn’t far behind the human baseline.

You can find a full breakdown of SuperGLUE and its various benchmarking tasks in a Facebook AI blog post here.

Interested in hearing industry leaders discuss subjects like this and their use cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo.
