development Archives - AI News
https://www.artificialintelligence-news.com/tag/development/

NLEPs: Bridging the gap between LLMs and symbolic reasoning
14 June 2024
https://www.artificialintelligence-news.com/2024/06/14/nleps-bridging-the-gap-between-llms-symbolic-reasoning/

Researchers have introduced a novel approach called natural language embedded programs (NLEPs) to improve the numerical and symbolic reasoning capabilities of large language models (LLMs). The technique involves prompting LLMs to generate and execute Python programs to solve user queries, then output solutions in natural language.

While LLMs like ChatGPT have demonstrated impressive performance on various tasks, they often struggle with problems requiring numerical or symbolic reasoning.

NLEPs follow a four-step problem-solving template: calling necessary packages, importing natural language representations of required knowledge, implementing a solution-calculating function, and outputting results as natural language with optional data visualisation.
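As an illustration (hypothetical, since the paper's exact prompt template may differ), an NLEP generated for a simple factual query could look like the following, with the four steps marked:

```python
# Hypothetical NLEP-style program for the query:
# "Which of these US presidents were born before 1900?"

# Step 1: call necessary packages (none needed for this simple query)

# Step 2: import natural language representations of required knowledge,
# embedded as an ordinary data structure
presidents = {
    "Theodore Roosevelt": 1858,
    "John F. Kennedy": 1917,
    "Ronald Reagan": 1911,
    "Barack Obama": 1961,
}

# Step 3: implement a function that calculates the solution
def born_before(people: dict, year: int) -> list:
    return sorted(name for name, born in people.items() if born < year)

# Step 4: output the result as natural language
answer = born_before(presidents, 1900)
print(f"Presidents born before 1900: {', '.join(answer)}")
```

Because the embedded knowledge and the threshold year are ordinary variables, the same program can be reused for a different query by swapping them out, which matches the reuse property described below.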

This approach offers several advantages, including improved accuracy, transparency, and efficiency. Users can investigate generated programs and fix errors directly, avoiding the need to rerun entire models for troubleshooting. Additionally, a single NLEP can be reused for multiple tasks by replacing certain variables.

The researchers found that NLEPs enabled GPT-4 to achieve over 90% accuracy on various symbolic reasoning tasks, outperforming task-specific prompting methods by 30%.

Beyond accuracy improvements, NLEPs could enhance data privacy by running programs locally, eliminating the need to send sensitive user data to external companies for processing. The technique may also boost the performance of smaller language models without costly retraining.

However, NLEPs rely on a model’s program generation capability and may not work as well with smaller models trained on limited datasets. Future research will explore methods to make smaller LLMs generate more effective NLEPs and investigate the impact of prompt variations on reasoning robustness.

The research, supported in part by the Center for Perceptual and Interactive Intelligence of Hong Kong, will be presented at the Annual Conference of the North American Chapter of the Association for Computational Linguistics later this month.

(Photo by Alex Azabache)

See also: Apple is reportedly getting free ChatGPT access

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

GPT-4o delivers human-like AI interaction with text, audio, and vision integration
14 May 2024
https://www.artificialintelligence-news.com/2024/05/14/gpt-4o-human-like-ai-interaction-text-audio-vision-integration/

OpenAI has launched its new flagship model, GPT-4o, which seamlessly integrates text, audio, and visual inputs and outputs, promising to enhance the naturalness of machine interactions.

GPT-4o, where the “o” stands for “omni,” is designed to cater to a broader spectrum of input and output modalities. “It accepts as input any combination of text, audio, and image and generates any combination of text, audio, and image outputs,” OpenAI announced.

GPT-4o can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is comparable to human conversational response times.

Pioneering capabilities

The introduction of GPT-4o marks a leap from its predecessors by processing all inputs and outputs through a single neural network. This approach enables the model to retain critical information and context that were previously lost in the separate model pipeline used in earlier versions.

Prior to GPT-4o, ‘Voice Mode’ could handle audio interactions with latencies of 2.8 seconds for GPT-3.5 and 5.4 seconds for GPT-4. The previous setup involved three distinct models: one for transcribing audio to text, another for textual responses, and a third for converting text back to audio. This segmentation led to loss of nuances such as tone, multiple speakers, and background noise.
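That three-stage pipeline can be sketched as follows (function names and outputs are illustrative stand-ins, not OpenAI's actual components):

```python
# Illustrative sketch of the pre-GPT-4o 'Voice Mode' pipeline: three
# separate models chained together. Anything not captured in the
# intermediate transcript (tone, speakers, background noise) is lost.

def transcribe(audio: bytes) -> str:
    """Model 1: speech-to-text (a Whisper-style model)."""
    return "what's the weather like"      # stand-in output

def respond(text: str) -> str:
    """Model 2: text-in, text-out LLM (GPT-3.5 or GPT-4)."""
    return "It looks sunny today."        # stand-in output

def synthesise(text: str) -> bytes:
    """Model 3: text-to-speech."""
    return text.encode("utf-8")           # stand-in output

def voice_mode(audio: bytes) -> bytes:
    # Each hand-off is lossy: only the plain transcript survives.
    transcript = transcribe(audio)
    reply = respond(transcript)
    return synthesise(reply)

print(voice_mode(b"...raw audio..."))
```

GPT-4o collapses all three stages into one model that maps audio, text, and image tokens directly to audio, text, and image tokens, so nothing is dropped at a hand-off.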

As an integrated solution, GPT-4o boasts notable improvements in vision and audio understanding. It can perform more complex tasks such as harmonising songs, providing real-time translations, and even generating outputs with expressive elements like laughter and singing. Examples of its broad capabilities include preparing for interviews, translating languages on the fly, and generating customer service responses.

Nathaniel Whittemore, Founder and CEO of Superintelligent, commented: “Product announcements are going to inherently be more divisive than technology announcements because it’s harder to tell if a product is going to be truly different until you actually interact with it. And especially when it comes to a different mode of human-computer interaction, there is even more room for diverse beliefs about how useful it’s going to be.

“That said, the fact that there wasn’t a GPT-4.5 or GPT-5 announced is also distracting people from the technological advancement that this is a natively multimodal model. It’s not a text model with a voice or image addition; it is a multimodal token in, multimodal token out. This opens up a huge array of use cases that are going to take some time to filter into the consciousness.”

Performance and safety

GPT-4o matches GPT-4 Turbo's performance on English text and coding tasks but significantly outperforms it in non-English languages, making it a more inclusive and versatile model. It sets a new benchmark in reasoning with a score of 88.7% on 0-shot CoT MMLU (general knowledge questions) and 87.2% on 5-shot no-CoT MMLU.

The model also excels in audio and translation benchmarks, surpassing previous state-of-the-art models like Whisper-v3. In multilingual and vision evaluations, it demonstrates superior performance, enhancing OpenAI’s multilingual, audio, and vision capabilities.

OpenAI has built robust safety measures into GPT-4o by design, using techniques to filter training data and refining behaviour through post-training safeguards. The model has been assessed through a Preparedness Framework and complies with OpenAI's voluntary commitments. Evaluations in areas like cybersecurity, persuasion, and model autonomy indicate that GPT-4o does not exceed a 'Medium' risk level in any category.

Further safety assessments involved extensive external red teaming with over 70 experts in various domains, including social psychology, bias, fairness, and misinformation. This comprehensive scrutiny aims to mitigate risks introduced by the new modalities of GPT-4o.

Availability and future integration

Starting today, GPT-4o’s text and image capabilities are available in ChatGPT—including a free tier and extended features for Plus users. A new Voice Mode powered by GPT-4o will enter alpha testing within ChatGPT Plus in the coming weeks.

Developers can access GPT-4o through the API for text and vision tasks, benefiting from its doubled speed, halved price, and enhanced rate limits compared to GPT-4 Turbo.
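A minimal sketch of such a request, following the Chat Completions message format with mixed text and image content parts (the snippet only constructs the payload; actually sending it requires an API key and an HTTP client):

```python
# Builds (but does not send) a multimodal Chat Completions request for
# GPT-4o, mixing text and image inputs in a single user message.

import json

payload = {
    "model": "gpt-4o",
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
}

print(json.dumps(payload, indent=2))
```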

OpenAI plans to expand GPT-4o’s audio and video functionalities to a select group of trusted partners via the API, with broader rollout expected in the near future. This phased release strategy aims to ensure thorough safety and usability testing before making the full range of capabilities publicly available.

“It’s hugely significant that they’ve made this model available for free to everyone, as well as making the API 50% cheaper. That is a massive increase in accessibility,” explained Whittemore.

OpenAI invites community feedback to continuously refine GPT-4o, emphasising the importance of user input in identifying and closing gaps where GPT-4 Turbo might still outperform.

(Image Credit: OpenAI)

See also: OpenAI takes steps to boost AI-generated content transparency


Chuck Ros, SoftServe: Delivering transformative AI solutions responsibly
3 May 2024
https://www.artificialintelligence-news.com/2024/05/03/chuck-ros-softserve-delivering-transformative-ai-solutions-responsibly/

As the world embraces the transformative potential of AI, SoftServe is at the forefront of developing cutting-edge AI solutions while prioritising responsible deployment.

Ahead of AI & Big Data Expo North America – where the company will showcase its expertise – Chuck Ros, Industry Success Director at SoftServe, provided valuable insights into the company’s AI initiatives, the challenges faced, and its future strategy for leveraging this powerful technology.

Highlighting a recent AI project that exemplifies SoftServe’s innovative approach, Ros discussed the company’s unique solution for a software company in the field service management industry. The vision was to create an easy-to-use, language model-enabled interface that would allow field technicians to access service histories, equipment documentation, and maintenance schedules seamlessly, enhancing productivity and operational efficiency.

“Our AI engineers built a prompt evaluation pipeline that seamlessly considers cost, processing time, semantic similarity, and the likelihood of hallucinations,” Ros explained. “It proved to be an extremely effective architecture that led to improved operational efficiencies for the customer, increased productivity for users in the field, competitive edge for the software company and for their clients, and—perhaps most importantly—a spark for additional innovation.”
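The article doesn't describe the pipeline's internals, but a weighted multi-criteria scorer along those lines might look like the sketch below; the weights, metric values, and candidate names are illustrative assumptions, and real implementations of the metrics (embedding-based similarity, hallucination detection) are stubbed out:

```python
# Hypothetical prompt-evaluation scorer combining the four criteria
# mentioned in the interview: cost, processing time, semantic
# similarity, and likelihood of hallucinations.

from dataclasses import dataclass

@dataclass
class PromptResult:
    cost_usd: float            # API cost of the completion
    latency_s: float           # processing time
    similarity: float          # semantic similarity to a reference, 0..1
    hallucination_risk: float  # estimated hallucination likelihood, 0..1

def score(result: PromptResult,
          w_cost: float = 0.2, w_time: float = 0.2,
          w_sim: float = 0.4, w_hall: float = 0.2) -> float:
    """Higher is better: reward similarity; penalise cost, time, risk."""
    return (w_sim * result.similarity
            - w_cost * result.cost_usd
            - w_time * result.latency_s
            - w_hall * result.hallucination_risk)

candidates = {
    "prompt_a": PromptResult(0.01, 0.8, 0.91, 0.05),
    "prompt_b": PromptResult(0.04, 1.2, 0.88, 0.30),
}
best = max(candidates, key=lambda name: score(candidates[name]))
print(best)  # prompt_a: similar accuracy, but cheaper, faster, lower risk
```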

While the potential of AI is undeniable, Ros acknowledged the key mistakes businesses often make when deploying AI solutions, emphasising the importance of having a robust data strategy, building adequate data pipelines, and thoroughly testing the models. He also cautioned against rushing to deploy generative AI solutions without properly assessing feasibility and business viability, stating, “We need to pay at least as much attention to whether it should be built as we do to whether it can be built.”

Recognising the critical concern of ethical AI development, Ros stressed the significance of human oversight throughout the entire process. “Managing dynamic data quality, testing and detecting for bias and inaccuracies, ensuring high standards of data privacy, and ethical use of AI systems all require human oversight,” he said. SoftServe’s approach to AI development involves structured engagements that evaluate data and algorithms for suitability, assess potential risks, and implement governance measures to ensure accountability and data traceability.

Looking ahead, Ros envisions AI playing an increasingly vital role in SoftServe's business strategy, with ongoing refinements to AI-assisted software development lifecycles and the introduction of new tools and processes to boost productivity further. SoftServe's findings suggest that GenAI can accelerate programming productivity by as much as 40 percent.

“I see more models assisting us on a daily basis, helping us write emails and documentation and helping us more and more with the simple, time-consuming mundane tasks we still do,” Ros said. “In the next five years I see ongoing refinement of that view to AI in SDLCs and the regular introduction of new tools, new models, new processes that push that 40 percent productivity hike to 50 percent and 60 percent.”

When asked how SoftServe is leveraging AI for social good, Ros explained the company is delivering solutions ranging from machine learning models to help students discover their passions and aptitudes, enabling personalised learning experiences, to assisting teachers in their daily tasks and making their jobs easier.

“I love this question because one of SoftServe’s key strategic tenets is to power our social purpose and make the world a better place. It’s obviously an ambitious goal, but it’s important to our employees and it’s important to our clients,” explained Ros.

“It’s why we created the Open Eyes Foundation and have collected more than $15 million with the support of the public, our clients, our partners, and of course our employees. We naturally support the Open Eyes Foundation with all manner of technology needs, including AI.”

At the AI & Big Data Expo North America, SoftServe plans to host a keynote presentation titled “Revolutionizing Learning: Unleashing the Power of Generative AI in Education and Beyond,” which will explore the transformative impact of generative AI and large language models in the education sector.

“As we explore the mechanisms through which generative AI leverages data – including training methodologies like fine-tuning and Retrieval Augmented Generation (RAG) – we will pinpoint high-value, low-risk applications that promise to redefine the educational landscape,” said Ros.

“The journey from a nascent idea to a fully operational AI solution is fraught with challenges, including ethical considerations and risks inherent in deploying AI solutions. Through the lens of a success story at Mesquite ISD, where generative AI was leveraged to help students uncover their passions and aptitudes enabling the delivery of personalised learning experiences, this presentation will illustrate the practical benefits and transformative potential of generative AI in education.”

Additionally, the company will participate in panel discussions on topics such as “Getting to Production-Ready – Challenges and Best Practices for Deploying AI” and “Navigating the Data & AI Landscape – Ensuring Safety, Security, and Responsibility in Big Data and AI Systems.” These sessions will provide attendees with valuable insights from SoftServe’s experts on overcoming deployment challenges, ensuring data quality and user acceptance, and mitigating risks associated with AI implementation.

As a key sponsor of the event, SoftServe aims to contribute to the discourse surrounding the responsible and ethical development of AI solutions, while sharing its expertise and vision for leveraging this powerful technology to drive innovation, enhance productivity, and address global challenges. 

“We are, of course, always interested in both sharing and hearing about the diversity of business cases for applications in AI and big data: the concept of the rising tide lifting all boats is definitely relevant in AI and GenAI in particular, and we’re proud to be a part of the AI technology community,” Ros concludes.


Mixtral 8x22B sets new benchmark for open models
18 April 2024
https://www.artificialintelligence-news.com/2024/04/18/mixtral-8x22b-sets-new-benchmark-open-models/

Mistral AI has released Mixtral 8x22B, which sets a new benchmark for open source models in performance and efficiency. The model boasts robust multilingual capabilities and superior mathematical and coding prowess.

Mixtral 8x22B operates as a Sparse Mixture-of-Experts (SMoE) model, utilising just 39 billion of its 141 billion parameters when active.
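The mechanism behind that figure can be sketched with toy top-k expert routing; the sizes below are schematic, not Mixtral's actual configuration:

```python
# Schematic top-k expert routing, the mechanism behind sparse
# Mixture-of-Experts models such as Mixtral. For each token, a router
# picks k of the n experts, so only those experts' weights are used.

import numpy as np

rng = np.random.default_rng(0)
n_experts, k, d_model = 8, 2, 16   # toy sizes for illustration

router_w = rng.normal(size=(d_model, n_experts))
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_layer(x: np.ndarray) -> np.ndarray:
    logits = x @ router_w                 # router score per expert
    top_k = np.argsort(logits)[-k:]       # indices of the k best experts
    weights = np.exp(logits[top_k])
    weights /= weights.sum()              # softmax over the chosen k
    # Only k of the n expert weight matrices are ever touched:
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top_k))

token = rng.normal(size=d_model)
out = moe_layer(token)
print(out.shape)                          # (16,)
```

With k = 2 of 8 experts selected per token, only a fraction of the expert parameters participate in each forward pass; Mixtral 8x22B's 39-of-141-billion figure reflects the same principle at scale.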

Beyond its efficiency, Mixtral 8x22B is fluent in multiple major languages, including English, French, Italian, German, and Spanish, and its strengths extend into technical domains with strong mathematical and coding capabilities. Notably, the model supports native function calling paired with a ‘constrained output mode,’ facilitating large-scale application development and tech upgrades.

With a substantial 64K tokens context window, Mixtral 8x22B ensures precise information recall from voluminous documents, further appealing to enterprise-level utilisation where handling extensive data sets is routine.

In line with fostering a collaborative and innovative AI research environment, Mistral AI has released Mixtral 8x22B under the Apache 2.0 license. This highly permissive open-source license allows unrestricted use and enables widespread adoption.

Statistically, Mixtral 8x22B outclasses many existing models. In head-to-head comparisons on standard industry benchmarks, ranging from common sense and reasoning to subject-specific knowledge, Mistral's new model excels. Figures released by Mistral AI show that Mixtral 8x22B significantly outperforms the LLaMA 2 70B model across critical reasoning and knowledge benchmarks in varied linguistic contexts.

Furthermore, in coding and maths, Mixtral continues its dominance among open models. Updated results show an impressive performance improvement on mathematical benchmarks following the release of an instructed version of the model.

Prospective users and developers are urged to explore Mixtral 8x22B on La Plateforme, Mistral AI’s interactive platform. Here, they can engage directly with the model.

In an era where AI’s role is ever-expanding, Mixtral 8x22B’s blend of high performance, efficiency, and open accessibility marks a significant milestone in the democratisation of advanced AI tools.

(Photo by Joshua Golde)

See also: SAS aims to make AI accessible regardless of skill set with packaged AI models


OpenAI makes GPT-4 Turbo with Vision API generally available
10 April 2024
https://www.artificialintelligence-news.com/2024/04/10/openai-gpt-4-turbo-with-vision-api-generally-available/

OpenAI has announced that its powerful GPT-4 Turbo with Vision model is now generally available through the company’s API, opening up new opportunities for enterprises and developers to integrate advanced language and vision capabilities into their applications.

The launch of GPT-4 Turbo with Vision on the API follows the initial release of GPT-4’s vision and audio upload features last September and the unveiling of the turbocharged GPT-4 Turbo model at OpenAI’s developer conference in November.

GPT-4 Turbo promises significant speed improvements, larger input context windows of up to 128,000 tokens (equivalent to about 300 pages), and increased affordability for developers.

A key enhancement is that API requests can now use the model’s vision recognition and analysis capabilities alongside JSON mode and function calling. This allows developers to generate JSON snippets that can automate actions within connected apps, such as sending emails, making purchases, or posting online. However, OpenAI strongly recommends building user confirmation flows before taking actions that affect the real world.
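One way to follow that recommendation is to gate execution of a model-proposed function call behind an explicit confirmation step. In the hypothetical sketch below, `send_email` and the hand-written tool call stand in for a real action and for what the API would return in a response's tool-call field:

```python
# Minimal user-confirmation gate for model-proposed actions, applied
# before letting a function call affect the real world.

import json

def send_email(to: str, subject: str, body: str) -> str:
    return f"email sent to {to}"          # stub side effect

ACTIONS = {"send_email": send_email}

# Hand-written stand-in for a model-proposed function call.
proposed_call = {
    "name": "send_email",
    "arguments": json.dumps({"to": "alice@example.com",
                             "subject": "Hi", "body": "Hello!"}),
}

def execute_with_confirmation(call: dict, confirm) -> str:
    args = json.loads(call["arguments"])
    summary = f"{call['name']}({args})"
    if not confirm(summary):              # ask the user first
        return "action cancelled by user"
    return ACTIONS[call["name"]](**args)

# In a real app `confirm` would prompt the user; here we auto-approve.
print(execute_with_confirmation(proposed_call, confirm=lambda s: True))
```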

Several startups are already leveraging GPT-4 Turbo with Vision, including Cognition, whose AI coding agent Devin relies on the model to automatically generate full code.

Healthify, a health and fitness app, uses the model to provide nutritional analysis and recommendations based on photos of meals.

TLDraw, a UK-based startup, employs GPT-4 Turbo with Vision to power its virtual whiteboard and convert user drawings into functional websites.

Despite facing stiff competition from newer models such as Anthropic’s Claude 3 Opus and Google’s Gemini Advanced, the API launch should help solidify OpenAI’s position in the enterprise market as developers await the company’s next large language model.

(Photo by v2osk)

See also: Stability AI unveils 12B parameter Stable LM 2 model and updated 1.6B variant


Stability AI unveils 12B parameter Stable LM 2 model and updated 1.6B variant
9 April 2024
https://www.artificialintelligence-news.com/2024/04/09/stability-ai-unveils-12b-parameter-stable-lm-2-model-updated-1-6b-variant/

Stability AI has introduced the latest additions to its Stable LM 2 language model series: a 12 billion parameter base model and an instruction-tuned variant. These models were trained on an impressive two trillion tokens across seven languages: English, Spanish, German, Italian, French, Portuguese, and Dutch.

The 12 billion parameter model aims to strike a balance between strong performance, efficiency, memory requirements, and speed. It follows the established framework of Stability AI’s previously released Stable LM 2 1.6B technical report. This new release extends the company’s model range, offering developers a transparent and powerful tool for innovating with AI language technology.

Alongside the 12B model, Stability AI has also released a new version of its Stable LM 2 1.6B model. This updated 1.6B variant improves conversation abilities across the same seven languages while maintaining remarkably low system requirements.

Stable LM 2 12B is designed as an efficient open model tailored for multilingual tasks with smooth performance on widely available hardware.

According to Stability AI, this model can handle tasks typically feasible only for significantly larger models, which often require substantial computational and memory resources, such as large Mixture-of-Experts (MoE) models. The instruction-tuned version is particularly well-suited to a variety of uses, including as a central part of retrieval-augmented generation (RAG) systems, thanks to its high performance in tool usage and function calling.
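As a toy illustration of the RAG pattern mentioned above (real systems retrieve with embeddings rather than keyword overlap, and the documents here are invented):

```python
# Toy retrieval-augmented generation (RAG) flow: retrieve the most
# relevant document for a query, then build a grounded prompt for the
# model. Keyword overlap keeps the sketch self-contained; production
# systems score with embedding similarity instead.

documents = [
    "Stable LM 2 12B was trained on two trillion tokens.",
    "The Eiffel Tower is located in Paris.",
    "Sparse MoE models activate only some experts per token.",
]

def retrieve(query: str, docs: list) -> str:
    q_words = set(query.lower().split())
    # Pick the document sharing the most words with the query.
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(query: str, docs: list) -> str:
    context = retrieve(query, docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How many tokens was Stable LM 2 trained on?", documents))
```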

In performance comparisons with popular language models such as Mixtral, Llama 2, Qwen 1.5, Gemma, and Mistral, Stable LM 2 12B delivers solid results on zero-shot and few-shot tasks across the general benchmarks in the Open LLM Leaderboard.

With this new release, Stability AI extends the StableLM 2 family into the 12B category, providing an open and transparent model without compromising power and accuracy. The company is confident that this release will enable developers and businesses to continue developing the future while retaining full control over their data.

Developers and businesses can use Stable LM 2 12B now for commercial and non-commercial purposes with a Stability AI Membership.

(Photo by Muha Ajjan)

See also: ML Olympiad returns with over 20 challenges


ML Olympiad returns with over 20 challenges
8 April 2024
https://www.artificialintelligence-news.com/2024/04/08/ml-olympiad-returns-with-over-20-challenges/

The popular ML Olympiad is back for its third round with over 20 community-hosted machine learning competitions on Kaggle.

The ML Olympiad – organised by groups including ML GDE, TFUG, and other ML communities – aims to provide developers with hands-on opportunities to learn and practice machine learning skills by tackling real-world challenges.

Over the previous two rounds, an impressive 605 teams participated across 32 competitions, generating 105 discussions and 170 notebooks.

This year’s lineup includes challenges spanning areas like healthcare, sustainability, natural language processing (NLP), computer vision, and more. Competitions are hosted by expert groups and developers from around the world.

Here are this year’s challenges:

  • Smoking Detection in Patients

Hosted by Rishiraj Acharya (AI/ML GDE) in collaboration with TFUG Kolkata, this competition tasks participants with predicting smoking status using bio-signal ML models.

  • TurtleVision Challenge

Organised by Anas Lahdhiri under MLAct, this challenge calls for the development of a classification model to differentiate between jellyfish and plastic pollution in ocean imagery.

  • Detect Hallucinations in LLMs

Luca Massaron (AI/ML GDE) presents a unique challenge of identifying hallucinations in answers provided by a Mistral 7B instruct model.

  • ZeroWasteEats

Anushka Raj, alongside TFUG Hajipur, seeks ML solutions to mitigate food wastage, a critical concern in today’s world.

  • Predicting Wellness

Hosted by Ankit Kumar Verma and TFUG Prayagraj, this competition involves predicting the percentage of body fat in men using multiple regression methods.

  • Offbeats Edition

Ayush Morbar from Offbeats Byte Labs invites participants to build a regression model to predict the age of crabs.

  • Nashik Weather

TFUG Nashik challenges participants to forecast the weather condition in Nashik, India, leveraging machine learning techniques.

  • Predicting Earthquake Damage

Usha Rengaraju presents a task of predicting the level of damage to buildings caused by earthquakes, based on various factors.

  • Forecasting Bangladesh’s Weather

TFUG Bangladesh (Dhaka) aims to predict rainfall, average temperature, and rainy days for a particular day in Bangladesh.

  • CO2 Emissions Prediction Challenge

Md Shahriar Azad Evan and Shuvro Pal from TFUG North Bengal seek to predict CO2 emissions per capita for 2030 using global development indicators.

  • AI & ML Malaysia

Kuan Hoong (AI/ML GDE) challenges participants to predict loan approval status, addressing a crucial aspect of financial inclusion.

  • Sustainable Urban Living

Ashwin Raj and BeyondML task participants with predicting the habitability score of properties, promoting sustainable urban development.

  • Toxic Language (PTBR) Detection

Hosted in Brazilian Portuguese, this challenge by Mikaeri Ohana, Pedro Gengo, and Vinicius F. Caridá (AI/ML GDE) involves classifying toxic tweets.

  • Improving Disaster Response

Yara Armel Desire of TFUG Abidjan invites participants to predict humanitarian aid contributions in response to disasters worldwide.

  • Urban Traffic Density

Kartikey Rawat from TFUG Durg calls for the development of predictive models to estimate traffic density in urban areas.

  • Know Your Customer Opinion

TFUG Surabaya presents a challenge of classifying customer opinions into Likert scale categories.

  • Forecasting India’s Weather

Mohammed Moinuddin and TFUG Hyderabad task participants with predicting temperatures for specific months in India.

  • Classification Champ

Hosted by TFUG Bhopal, this competition involves developing classification models to predict tumour malignancy.

  • AI-Powered Job Description Generator

Akaash Tripathi from TFUG Ghaziabad challenges participants to build a system that automatically generates job descriptions using generative AI and a chatbot interface.

  • Machine Translation French-Wolof

GalsenAI presents a challenge of accurately translating French sentences into Wolof, offering a platform to enhance language translation capabilities.

  • Water Mapping using Satellite Imagery

Taha Bouhsine of ML Nomads tasks participants with water mapping using satellite imagery for dam drought detection.

Google is supporting each community host this round through its Google for Developers program.

Participants are encouraged to search for “ML Olympiad” on Kaggle, follow #MLOlympiad on social media, and get involved in the competitions that most interest them.

With such a diverse array of real-world machine learning challenges, the ML Olympiad represents an excellent opportunity for developers to put their skills to the test and gain valuable experience.

(Image Credit: Google)

See also: Microsoft: China plans to disrupt elections with AI-generated disinformation

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

AIs in India will need government permission before launching https://www.artificialintelligence-news.com/2024/03/04/ai-india-need-government-permission-before-launching/ Mon, 04 Mar 2024 17:03:13 +0000

The post AIs in India will need government permission before launching appeared first on AI News.

In an advisory issued by India’s Ministry of Electronics and Information Technology (MeitY) last Friday, it was declared that any AI technology still in development must acquire explicit government permission before being released to the public.

Developers will also only be able to deploy these technologies after labelling the output they generate as potentially fallible or unreliable.

Furthermore, the document outlines plans for implementing a “consent popup” mechanism to inform users about potential defects or errors produced by AI. It also mandates the labelling of deepfakes with permanent unique metadata or other identifiers to prevent misuse.

In addition to these measures, the advisory orders all intermediaries or platforms to ensure that any AI model product – including large language models (LLM) – does not permit bias, discrimination, or threaten the integrity of the electoral process.

Some industry figures have criticised India’s plans as going too far.

Developers are requested to comply with the advisory within 15 days of its issuance. It has been suggested that after compliance and application for permission to release a product, developers may be required to perform a demo for government officials or undergo stress testing.

Although the advisory is not legally binding at present, it signifies the government’s expectations and hints at the future direction of regulation in the AI sector.

“We are doing it as an advisory today asking you (the AI platforms) to comply with it,” said IT minister Rajeev Chandrasekhar. He added that this stance would eventually be encoded in legislation.

“Generative AI or AI platforms available on the internet will have to take full responsibility for what the platform does, and cannot escape the accountability by saying that their platform is under testing,” continued Chandrasekhar, as reported by local media.

(Photo by Naveed Ahmed on Unsplash)

See also: Elon Musk sues OpenAI over alleged breach of nonprofit agreement

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

UK and Canada sign AI compute agreement https://www.artificialintelligence-news.com/2024/01/31/uk-and-canada-sign-ai-compute-agreement/ Wed, 31 Jan 2024 09:58:26 +0000

The post UK and Canada sign AI compute agreement appeared first on AI News.

The UK and Canada have signed a landmark agreement to collaborate on the computing power needed to advance AI research and development. 

The new Memorandum of Understanding on compute was signed in Ottawa by UK Technology Secretary Michelle Donelan and Canadian Minister for Innovation, Science and Industry François-Philippe Champagne. It cements the two countries’ partnership on AI by committing them to explore ways to give researchers and companies affordable access to the high-powered computing capacity required for cutting-edge AI systems.

Compute power and data are essential ingredients for developing modern AI models and applications. As AI rapidly advances, access to state-of-the-art computing infrastructure is increasingly vital for conducting groundbreaking research and staying globally competitive. The UK-Canada agreement recognises this and aims to foster joint innovation by improving compute access.

Specifically, under the new agreement, the UK and Canada will look at opportunities for collaborating on providing compute power for shared research priorities like biomedicine. They also intend to work together – and with like-minded countries – on sustainable models for sharing compute capabilities. 

The compute agreement builds on a wider UK-Canada science and technology partnership also renewed during Secretary Donelan’s visit. This partnership identifies quantum computing, AI, semiconductors and clean energy as key areas for increased collaboration between British and Canadian researchers. It also focuses on coordinating scientific diplomacy efforts relating to new technologies.

Academics and researchers from both countries have been actively involved in collaborative programmes, with £350 million awarded by UK Research and Innovation between 2020 and 2023. This includes pioneering initiatives like the first industry-led partnership on quantum technologies and a project on Arctic ecosystems in collaboration with Inuit Tapiriit Kanatami.

The latest accords reinforce the two countries’ “unique partnership” across science and innovation, said Secretary Donelan. She emphasised their commitment to harnessing emerging technologies as an “active force for good.”

Minister Champagne echoed this, saying the agreements will have “positive impacts across all fields of research and innovation.” He highlighted opportunities to link leading AI researchers in both countries.

The renewal of UK-Canada science ties comes as Secretary Donelan meets with AI experts and companies during a three-day visit. She held discussions on the future of AI with Yoshua Bengio, a pioneer in the field and recipient of the Turing Award, computing’s highest honour.

With a combined $5 trillion economy, the UK and Canada have committed to collaborating closely on technological innovation for the benefit of both countries and the wider world. The compute accord marks an important step toward realising that vision in the critical field of AI.

(Photo by Scott Graham on Unsplash)

See also: Financial services introducing AI but hindered by data issues

OpenAI releases new models and lowers API pricing https://www.artificialintelligence-news.com/2024/01/26/openai-releases-new-models-lowers-api-pricing/ Fri, 26 Jan 2024 13:25:01 +0000

The post OpenAI releases new models and lowers API pricing appeared first on AI News.

OpenAI has announced several updates that will benefit developers using its AI services, including new embedding models, a lower price for GPT-3.5 Turbo, an updated GPT-4 Turbo preview, and more robust content moderation capabilities.

The San Francisco-based AI lab said its new text-embedding-3-small and text-embedding-3-large models offer upgraded performance over previous generations. For example, text-embedding-3-large achieves average scores of 54.9 percent on the MIRACL benchmark and 64.6 percent on the MTEB benchmark, up from 31.4 percent and 61 percent respectively for the older text-embedding-ada-002 model. 

Additionally, OpenAI revealed that the price per 1,000 tokens for text-embedding-3-small has been cut five-fold compared to text-embedding-ada-002, from $0.0001 to $0.00002. The company said developers can also shorten embeddings to reduce costs without significantly impacting accuracy.
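Shortening an embedding amounts to truncating the vector to its leading dimensions and L2-renormalising it. The sketch below illustrates the idea, and the pricing arithmetic, on a mock vector rather than a real API response; the 8-dimensional list is purely illustrative (real text-embedding-3-small vectors have 1,536 dimensions):

```python
import math

# Illustrative only: a mock 8-dimensional "embedding" standing in for a
# real 1536-dimensional text-embedding-3-small vector.
embedding = [0.12, -0.34, 0.56, 0.07, -0.21, 0.44, -0.05, 0.18]

def shorten(vec, dims):
    """Truncate an embedding to its leading `dims` dimensions and
    L2-renormalise, so cosine similarities stay meaningful."""
    truncated = vec[:dims]
    norm = math.sqrt(sum(x * x for x in truncated))
    return [x / norm for x in truncated]

short = shorten(embedding, 4)
print(round(sum(x * x for x in short), 6))  # unit length again: 1.0

# Pricing arithmetic from the announcement: $0.0001 -> $0.00002
# per 1,000 tokens.
old_price, new_price = 0.0001, 0.00002
print(round(old_price / new_price))  # 5 (a five-fold reduction)
```

With the real API, the same effect is available without client-side code by passing the `dimensions` argument to the embeddings endpoint for the text-embedding-3 models, per OpenAI's documentation at the time.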

Next week, OpenAI plans to release an updated GPT-3.5 Turbo model and cut its pricing by 50 percent for input tokens and 25 percent for output tokens. This will mark the third price reduction for GPT-3.5 Turbo in the past year as OpenAI aims to drive more adoption.

OpenAI has additionally updated its GPT-4 Turbo preview to version gpt-4-0125-preview, noting that over 70 percent of requests have transitioned to the model since its debut. Improvements include more thorough completion of tasks such as code generation.

To support developers building safe AI apps, OpenAI has also rolled out its most advanced content moderation model yet in text-moderation-007. The company said this identifies potentially harmful text more accurately than previous versions.

Finally, developers now have more control over API keys and visibility into usage metrics. OpenAI says developers can assign permissions to keys and view consumption on a per-key level to better track individual products or projects.

OpenAI says that more platform improvements are planned over the coming months to cater for larger development teams.

(Photo by Jonathan Kemper on Unsplash)

See also: OpenAI suspends developer of politician-impersonating chatbot

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.
