developers Archives - AI News

GPT-4o delivers human-like AI interaction with text, audio, and vision integration (14 May 2024)

OpenAI has launched its new flagship model, GPT-4o, which seamlessly integrates text, audio, and visual inputs and outputs, promising to enhance the naturalness of machine interactions.

GPT-4o, where the “o” stands for “omni,” is designed to cater to a broader spectrum of input and output modalities. “It accepts as input any combination of text, audio, and image and generates any combination of text, audio, and image outputs,” OpenAI announced.

GPT-4o can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is comparable to human response times in conversation.

Pioneering capabilities

The introduction of GPT-4o marks a leap from its predecessors by processing all inputs and outputs through a single neural network. This approach enables the model to retain critical information and context that were previously lost in the separate model pipeline used in earlier versions.

Prior to GPT-4o, ‘Voice Mode’ could handle audio interactions with latencies of 2.8 seconds for GPT-3.5 and 5.4 seconds for GPT-4. The previous setup involved three distinct models: one for transcribing audio to text, another for textual responses, and a third for converting text back to audio. This segmentation led to loss of nuances such as tone, multiple speakers, and background noise.

As an integrated solution, GPT-4o boasts notable improvements in vision and audio understanding. It can perform more complex tasks such as harmonising songs, providing real-time translations, and even generating outputs with expressive elements like laughter and singing. Examples of its broad capabilities include preparing for interviews, translating languages on the fly, and generating customer service responses.

Nathaniel Whittemore, Founder and CEO of Superintelligent, commented: “Product announcements are going to inherently be more divisive than technology announcements because it’s harder to tell if a product is going to be truly different until you actually interact with it. And especially when it comes to a different mode of human-computer interaction, there is even more room for diverse beliefs about how useful it’s going to be.

“That said, the fact that there wasn’t a GPT-4.5 or GPT-5 announced is also distracting people from the technological advancement that this is a natively multimodal model. It’s not a text model with a voice or image addition; it is a multimodal token in, multimodal token out. This opens up a huge array of use cases that are going to take some time to filter into the consciousness.”

Performance and safety

GPT-4o matches GPT-4 Turbo performance levels in English text and coding tasks but significantly outperforms it in non-English languages, making it a more inclusive and versatile model. It also sets a new benchmark in reasoning, scoring 88.7% on 0-shot CoT MMLU (general knowledge questions) and 87.2% on 5-shot no-CoT MMLU.

The model also excels in audio and translation benchmarks, surpassing previous state-of-the-art models like Whisper-v3. In multilingual and vision evaluations, it demonstrates superior performance, enhancing OpenAI’s multilingual, audio, and vision capabilities.

OpenAI has built robust safety measures into GPT-4o by design, filtering training data and refining the model’s behaviour through post-training safeguards. The model has been assessed under OpenAI’s Preparedness Framework and complies with the company’s voluntary commitments. Evaluations in areas like cybersecurity, persuasion, and model autonomy indicate that GPT-4o does not exceed a ‘Medium’ risk level in any category.

Further safety assessments involved extensive external red teaming with over 70 experts in various domains, including social psychology, bias, fairness, and misinformation. This comprehensive scrutiny aims to mitigate risks introduced by the new modalities of GPT-4o.

Availability and future integration

Starting today, GPT-4o’s text and image capabilities are available in ChatGPT—including a free tier and extended features for Plus users. A new Voice Mode powered by GPT-4o will enter alpha testing within ChatGPT Plus in the coming weeks.

Developers can access GPT-4o through the API for text and vision tasks, benefiting from its doubled speed, halved price, and enhanced rate limits compared to GPT-4 Turbo.
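
For a sense of what this looks like in practice, here is a minimal sketch of a combined text-and-vision request using the OpenAI Python SDK; the image URL is a placeholder, and audio modalities were not yet exposed through the general API at launch.

```python
# Minimal text + vision call to GPT-4o via the OpenAI Python SDK (v1-style client).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what is shown in this image."},
            # Placeholder URL: point this at a real, publicly reachable image
            {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```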

OpenAI plans to expand GPT-4o’s audio and video functionalities to a select group of trusted partners via the API, with broader rollout expected in the near future. This phased release strategy aims to ensure thorough safety and usability testing before making the full range of capabilities publicly available.

“It’s hugely significant that they’ve made this model available for free to everyone, as well as making the API 50% cheaper. That is a massive increase in accessibility,” explained Whittemore.

OpenAI invites community feedback to continuously refine GPT-4o, emphasising the importance of user input in identifying and closing gaps where GPT-4 Turbo might still outperform.

(Image Credit: OpenAI)

See also: OpenAI takes steps to boost AI-generated content transparency

OpenAI makes GPT-4 Turbo with Vision API generally available (10 April 2024)

OpenAI has announced that its powerful GPT-4 Turbo with Vision model is now generally available through the company’s API, opening up new opportunities for enterprises and developers to integrate advanced language and vision capabilities into their applications.

The launch of GPT-4 Turbo with Vision on the API follows the initial release of GPT-4’s vision and audio upload features last September and the unveiling of the turbocharged GPT-4 Turbo model at OpenAI’s developer conference in November.

GPT-4 Turbo promises significant speed improvements, larger input context windows of up to 128,000 tokens (equivalent to about 300 pages), and increased affordability for developers.

A key enhancement is that API requests can now use the model’s vision recognition and analysis capabilities alongside JSON mode and function calling. This allows developers to have the model emit structured JSON that can automate actions within connected apps, such as sending emails, making purchases, or posting online. However, OpenAI strongly recommends building user confirmation flows before taking actions that affect the real world.
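
The pattern might look like the following sketch, which assumes the OpenAI Python SDK; the send_email tool and image URL are hypothetical examples, not part of OpenAI’s announcement.

```python
# Hedged sketch: GPT-4 Turbo with Vision inspects an image, then proposes a
# structured function call for the app to execute. The "send_email" tool and
# image URL are hypothetical examples.
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "send_email",
        "description": "Email a summary of an analysed image",
        "parameters": {
            "type": "object",
            "properties": {
                "to": {"type": "string"},
                "body": {"type": "string"},
            },
            "required": ["to", "body"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4-turbo",
    tools=tools,
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Summarise this whiteboard and email it to ops@example.com"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/board.jpg"}},
        ],
    }],
)

# The model returns JSON arguments rather than acting on its own; per OpenAI's
# guidance, confirm with the user before actually executing the action.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```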

Several startups are already leveraging GPT-4 Turbo with Vision, including Cognition, whose AI coding agent Devin relies on the model to automatically generate full code.

Healthify, a health and fitness app, uses the model to provide nutritional analysis and recommendations based on photos of meals.

TLDraw, a UK-based startup, employs GPT-4 Turbo with Vision to power its virtual whiteboard and convert user drawings into functional websites.

Despite facing stiff competition from newer models such as Anthropic’s Claude 3 Opus and Google’s Gemini Advanced, the API launch should help solidify OpenAI’s position in the enterprise market as developers await the company’s next large language model.

(Photo by v2osk)

See also: Stability AI unveils 12B parameter Stable LM 2 model and updated 1.6B variant

ML Olympiad returns with over 20 challenges (8 April 2024)

The popular ML Olympiad is back for its third round with over 20 community-hosted machine learning competitions on Kaggle.

The ML Olympiad – organised by groups including ML GDE, TFUG, and other ML communities – aims to provide developers with hands-on opportunities to learn and practice machine learning skills by tackling real-world challenges.

Over the previous two rounds, an impressive 605 teams participated across 32 competitions, generating 105 discussions and 170 notebooks.

This year’s lineup includes challenges spanning areas like healthcare, sustainability, natural language processing (NLP), computer vision, and more. Competitions are hosted by expert groups and developers from around the world.

Here are this year’s challenges:

  • Smoking Detection in Patients

Hosted by Rishiraj Acharya (AI/ML GDE) in collaboration with TFUG Kolkata, this competition tasks participants with predicting smoking status from bio-signal data (a minimal baseline sketch for tabular challenges like this one follows the full list).

  • TurtleVision Challenge

Organised by Anas Lahdhiri under MLAct, this challenge calls for the development of a classification model to differentiate between jellyfish and plastic pollution in ocean imagery.

  • Detect Hallucinations in LLMs

Luca Massaron (AI/ML GDE) presents a unique challenge of identifying hallucinations in answers provided by a Mistral 7B instruct model.

  • ZeroWasteEats

Anushka Raj, alongside TFUG Hajipur, seeks ML solutions to mitigate food wastage, a critical concern in today’s world.

  • Predicting Wellness

Hosted by Ankit Kumar Verma and TFUG Prayagraj, this competition involves predicting the percentage of body fat in men using multiple regression methods.

  • Offbeats Edition

Ayush Morbar from Offbeats Byte Labs invites participants to build a regression model to predict the age of crabs.

  • Nashik Weather

TFUG Nashik challenges participants to forecast the weather condition in Nashik, India, leveraging machine learning techniques.

  • Predicting Earthquake Damage

Usha Rengaraju presents a task of predicting the level of damage to buildings caused by earthquakes, based on various factors.

  • Forecasting Bangladesh’s Weather

TFUG Bangladesh (Dhaka) aims to predict rainfall, average temperature, and rainy days for a particular day in Bangladesh.

  • CO2 Emissions Prediction Challenge

Md Shahriar Azad Evan and Shuvro Pal from TFUG North Bengal seek to predict CO2 emissions per capita for 2030 using global development indicators.

  • AI & ML Malaysia

Kuan Hoong (AI/ML GDE) challenges participants to predict loan approval status, addressing a crucial aspect of financial inclusion.

  • Sustainable Urban Living

Ashwin Raj and BeyondML task participants with predicting the habitability score of properties, promoting sustainable urban development.

  • Toxic Language (PTBR) Detection

Hosted in Brazilian Portuguese, this challenge by Mikaeri Ohana, Pedro Gengo, and Vinicius F. Caridá (AI/ML GDE) involves classifying toxic tweets.

  • Improving Disaster Response

Yara Armel Desire of TFUG Abidjan invites participants to predict humanitarian aid contributions in response to disasters worldwide.

  • Urban Traffic Density

Kartikey Rawat from TFUG Durg calls for the development of predictive models to estimate traffic density in urban areas.

  • Know Your Customer Opinion

TFUG Surabaya presents a challenge of classifying customer opinions into Likert scale categories.

  • Forecasting India’s Weather

Mohammed Moinuddin and TFUG Hyderabad task participants with predicting temperatures for specific months in India.

  • Classification Champ

Hosted by TFUG Bhopal, this competition involves developing classification models to predict tumour malignancy.

  • AI-Powered Job Description Generator

Akaash Tripathi from TFUG Ghaziabad challenges participants to build a system that automatically generates job descriptions using generative AI and a chatbot interface.

  • Machine Translation French-Wolof

GalsenAI presents a challenge of accurately translating French sentences into Wolof, offering a platform to enhance language translation capabilities.

  • Water Mapping using Satellite Imagery

Taha Bouhsine of ML Nomads tasks participants with water mapping using satellite imagery for dam drought detection.
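
For readers who want a starting point, here is a hypothetical baseline for a tabular challenge such as the smoking detection competition; the file names, “id” column, and “smoking” target are placeholders, so check each competition’s data page for the real schema.

```python
# Hypothetical Kaggle-style baseline for a tabular classification challenge.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

X = train.drop(columns=["id", "smoking"])  # bio-signal features
y = train["smoking"]                       # 1 = smoker, 0 = non-smoker

model = GradientBoostingClassifier()
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())

model.fit(X, y)
pd.DataFrame({
    "id": test["id"],
    "smoking": model.predict(test.drop(columns=["id"])),
}).to_csv("submission.csv", index=False)
```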

Google is supporting each community host this round through its Google for Developers program.

Participants are encouraged to search for “ML Olympiad” on Kaggle, follow #MLOlympiad on social media, and get involved in the competitions that most interest them.

With such a diverse array of real-world machine learning challenges, the ML Olympiad represents an excellent opportunity for developers to put their skills to the test and gain valuable experience.

(Image Credit: Google)

See also: Microsoft: China plans to disrupt elections with AI-generated disinformation

Google launches Gemini 1.5 with ‘experimental’ 1M token context (16 February 2024)

Google has unveiled its latest AI model, Gemini 1.5, which features what the company calls an “experimental” one million token context window. 

The new capability allows Gemini 1.5 to process extremely long inputs – up to one million tokens – to understand context and meaning. This dwarfs previous AI systems like Claude 2.1 and GPT-4 Turbo, which max out at 200,000 and 128,000 tokens respectively:

“Gemini 1.5 Pro achieves near-perfect recall on long-context retrieval tasks across modalities, improves the state-of-the-art in long-document QA, long-video QA and long-context ASR, and matches or surpasses Gemini 1.0 Ultra’s state-of-the-art performance across a broad set of benchmarks,” said Google researchers in a technical paper (PDF).

The efficiency of Google’s latest model is attributed to its innovative Mixture-of-Experts (MoE) architecture.

“While a traditional Transformer functions as one large neural network, MoE models are divided into smaller ‘expert’ neural networks,” explained Demis Hassabis, CEO of Google DeepMind.

“Depending on the type of input given, MoE models learn to selectively activate only the most relevant expert pathways in its neural network. This specialisation massively enhances the model’s efficiency.”
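
To make the routing idea concrete, here is a toy top-k Mixture-of-Experts layer in PyTorch; it is a generic sketch of the mechanism Hassabis describes, not Google’s implementation, and the sizes are arbitrary.

```python
# Toy top-k Mixture-of-Experts layer: a learned router sends each token to a
# small subset of expert networks instead of through one monolithic block.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model=64, n_experts=8, k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )
        self.router = nn.Linear(d_model, n_experts)  # learns which experts suit each token
        self.k = k

    def forward(self, x):                              # x: (tokens, d_model)
        weights = F.softmax(self.router(x), dim=-1)    # routing probabilities
        topk_w, topk_i = weights.topk(self.k, dim=-1)  # keep only the top-k experts
        topk_w = topk_w / topk_w.sum(-1, keepdim=True) # renormalise their weights
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = topk_i[:, slot] == e            # tokens routed to expert e
                if mask.any():
                    out[mask] += topk_w[mask, slot, None] * expert(x[mask])
        return out

print(MoELayer()(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
```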

To demonstrate the power of the 1M token context window, Google showed how Gemini 1.5 could ingest the entire 326,914-token Apollo 11 flight transcript and then accurately answer specific questions about it. It also summarised key details from a 684,000-token silent film when prompted.

Google is initially providing developers and enterprises free access to a limited Gemini 1.5 preview with a one million token context window. A 128,000 token general release for the public will come later, along with pricing details.

For now, the one million token capability remains experimental. But if it lives up to its early promise, Gemini 1.5 could set a new standard for AI’s ability to understand complex, real-world text.

Developers interested in testing Gemini 1.5 Pro can sign up in AI Studio. Google says that enterprise customers can reach out to their Vertex AI account team.
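
A first call might look roughly like the sketch below, using the google-generativeai Python SDK; the model identifier and transcript file name are assumptions, as preview naming and access may differ.

```python
# Minimal sketch of trying Gemini 1.5 Pro via the google-generativeai SDK.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key from AI Studio

model = genai.GenerativeModel("gemini-1.5-pro-latest")  # assumed preview name

# The long context window means an entire transcript fits in a single prompt
with open("apollo11_transcript.txt") as f:
    transcript = f.read()

response = model.generate_content(
    [transcript, "When does the crew report that the Eagle has landed?"]
)
print(response.text)
```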

(Image Credit: Google)

See also: Amazon trains 980M parameter LLM with ’emergent abilities’

Developers believe AI will have a positive world impact (13 March 2023)

AI is among the “next” technologies that developers believe will have a positive world impact.

Some artists, developers, writers, and other creators have expressed concern that generative AIs may pose a threat to their livelihoods. However, an increasing number view such AIs as assistive tools that will help creators rather than replace them.

Stack Overflow surveyed its developer community to find out how developers feel about technologies currently making the headlines.

“Our latest pulse survey asked developers to think about nascent trends in technology and tell us how they felt about them,” explained Erin Yepis, Senior Analyst for Market Research and Insights at Stack Overflow.

“With AI-assisted technologies in the news, this survey’s aim was to get a baseline for perceived utility and impact of a range of buzzworthy technologies in order to better understand the overall ecosystem.”

The technologies were ranked on a scale of zero (negative impact) to 10 (positive impact) based on expected world impact.

“Possibly what we are seeing here as far as why developers would not rate AI more negatively than technologies like low code/no code or blockchain but do give it a higher emergent score is that they understand the technology better than a typical journalist or think tank analyst,” reflects Yepis.

“Developers understand the distinction between media buzz around AI replacing humans in well-paying jobs and the possibility of humans in better quality jobs when AI and machine learning technologies mature.”

Machine learning (18%) and AI-assisted technologies (13%) came out on top of the technologies that developers want more hands-on training with. Despite ranking only just above the mean in expected world impact, blockchain (9%) took third place. Cloud computing (8%) and open source (5%) took the fourth and fifth spots, respectively.

The headline-grabbing technologies were also ranked on a scale of zero (experimental) to 10 (proven) based on how ready for primetime developers view each as being.

There are few surprises here, but some may find it interesting to see low-code/no-code viewed as less ready for primetime than AI-assisted technologies. Apple will also be hoping that developers’ views around AR/VR change after the widely expected launch of its headset later this year.

(Photo by Elena Mozhvilo on Unsplash)

OpenAI now allows developers to customise GPT-3 models (15 December 2021)

OpenAI is making it easy for developers to “fine-tune” GPT-3, enabling custom models for their applications.

The company says that existing datasets of “virtually any shape and size” can be used for custom models.

A single command in the OpenAI command-line tool, alongside a user-provided file, is all that it takes to begin training. The custom GPT-3 model will then be available for use in OpenAI’s API immediately.
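
The article refers to the command-line tool; the equivalent flow through that era’s pre-v1 Python SDK looked roughly like the sketch below, with the API key, file name, and base model as placeholders.

```python
# Rough equivalent of the CLI fine-tuning flow using the pre-v1 OpenAI SDK.
import openai

openai.api_key = "sk-..."  # placeholder key

# Training data is JSONL: one {"prompt": ..., "completion": ...} pair per line
upload = openai.File.create(file=open("train.jsonl", "rb"), purpose="fine-tune")

# Start the fine-tune against a base GPT-3 model such as "curie"
job = openai.FineTune.create(training_file=upload.id, model="curie")
print(job.id, job.status)  # the custom model is usable via the API once complete
```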

One customer says that it was able to increase correct outputs from 83 percent to 95 percent through fine-tuning. Another client reduced error rates by 50 percent.

Andreas Stuhlmüller, Co-Founder of Elicit, said:

“Since we started integrating fine-tuning into Elicit, for tasks with 500+ training examples, we’ve found that fine-tuning usually results in better speed and quality at a lower cost than few-shot learning.

This has been essential for making Elicit responsive at the same time as increasing its accuracy at summarising complex research statements.

As far as we can tell, this wouldn’t have been doable without fine-tuning GPT-3.”

Joel Hellermark, CEO of Sana Labs, commented:

“With OpenAI’s customised models, fine-tuned on our data, Sana’s question and content generation went from grammatically correct but general responses to highly accurate semantic outputs which are relevant to the key learnings.

This yielded a 60 percent improvement when compared to non-custom models, enabling fundamentally more personalised and effective experiences for our learners.”

In June, Gartner said that 80 percent of technology products and services will be built by those who are not technology professionals by 2024. OpenAI is enabling custom AI models to be easily created to unlock the full potential of such products and services.

Related: OpenAI removes GPT-3 API waitlist and opens applications for all developers

(Photo by Sigmund on Unsplash)

GTC 2021: Nvidia debuts accelerated computing libraries, partners with Google, IBM, and others to speed up quantum research (9 November 2021)

Nvidia has unveiled 65 new and updated software development kits at GTC 2021, alongside a partnership with industry leaders to speed up quantum research.

The company’s roster of accelerated computing kits now exceeds 150 and supports the almost three million developers in Nvidia’s Developer Program.

Four of the major new SDKs are:

  • ReOpt – Automatically optimises logistical processes using advanced, parallel algorithms. This includes vehicle routes, warehouse selection, and fleet mix. The dynamic rerouting capabilities – shown in an on-stage demo – can reduce travel time, save fuel costs, and minimise idle periods.
  • cuNumeric – Implements the popular NumPy application programming interface and enables scaling to multi-GPU and multi-node systems with zero code changes (see the sketch after this list).
  • cuQuantum – Designed for quantum computing, it enables large quantum circuits to be simulated faster. This enables quantum researchers to simulate areas such as near-term variational quantum algorithms for molecules, error correction algorithms to identify fault tolerance, and accelerate popular quantum simulators from Atos, Google, and IBM.
  • CUDA-X accelerated DGL container – Helps developers and data scientists working on graph neural networks to quickly set up a working environment. The container makes it easy to work in an integrated, GPU-accelerated GNN environment combining DGL and Pytorch.
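
As a sketch of the cuNumeric claim, the snippet below assumes the cunumeric package is installed and the script is launched via the Legate runtime; the only change from standard NumPy code is the import line.

```python
# Sketch of cuNumeric's drop-in NumPy compatibility.
import cunumeric as np  # the only change from "import numpy as np"

a = np.random.rand(2048, 2048)
b = np.random.rand(2048, 2048)
c = a @ b                # partitioned across available GPUs by the Legate runtime
print(float(c.sum()))
```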

Some existing AI-related SDKs that have received notable updates are:

  • DeepStream 6.0 – Introduces a new graph composer that makes computer vision accessible with a visual drag-and-drop interface.
  • Triton 2.15, TensorRT 8.2, and cuDNN 8.4 – Assist with the development of deep neural networks by providing new optimisations for large language models and inference acceleration for gradient-boosted decision trees and random forests.
  • Merlin 0.8 – Boosts recommendation systems with new capabilities for predicting a user’s next action with little or no user data, and support for models larger than GPU memory.

Accelerating quantum research

Nvidia has established a partnership with Google, IBM, and a number of small companies, national labs, and university research groups to accelerate quantum research.

“It takes a village to nurture an emerging technology, so Nvidia is collaborating with Google Quantum AI, IBM, and others to take quantum computing to the next level,” explained the company in a blog post.

The first library from the aforementioned new cuQuantum SDK is Nvidia’s initial contribution to the partnership. The library is called cuStateVec and is an accelerator for the state vector simulation method which tracks the full state of the system in memory and can scale to tens of qubits.

cuStateVec has been integrated into Google Quantum AI’s state vector simulator qsim and can be used through the open-source framework Cirq.
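
A small sketch of that integration path, assuming the qsimcirq package: Cirq builds the circuit and qsim simulates it, with GPU acceleration via cuStateVec depending on how qsim is built.

```python
# Cirq builds a Bell-state circuit; qsimcirq's simulator runs it as a drop-in
# replacement for cirq.Simulator().
import cirq
import qsimcirq

q0, q1 = cirq.LineQubit.range(2)
bell = cirq.Circuit(
    cirq.H(q0),                      # superposition on the first qubit
    cirq.CNOT(q0, q1),               # entangle the pair
    cirq.measure(q0, q1, key="m"),
)

sim = qsimcirq.QSimSimulator()
result = sim.run(bell, repetitions=100)
print(result.histogram(key="m"))     # roughly 50/50 between 0 (|00>) and 3 (|11>)
```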

“Quantum computing promises to solve tough challenges in computing that are beyond the reach of traditional systems,” commented Catherine Vollgraff Heidweiller at Google Quantum AI.

“This high-performance simulation stack will accelerate the work of researchers around the world who are developing algorithms and applications for quantum computers.”

In December, cuStateVec will also be integrated with Qiskit Aer—a high-performance simulator framework for quantum circuits from IBM.

Among the national labs using cuQuantum to accelerate their research are Oak Ridge, Argonne, Lawrence Berkeley National Laboratory, and Pacific Northwest National Laboratory. University research groups include those at Caltech, Oxford, and MIT.

Nvidia is helping developers to get started by creating a ‘DGX quantum appliance’ that puts its simulation software in a container optimised for its DGX A100 systems. The software will be available early next year via the company’s NGC Catalog.

(Image Credit: Nvidia)

Unity devs aren’t too happy their work is being sold for military AI purposes (24 August 2021)

Developers from Unity are calling for more transparency after discovering their AI work is being sold to the military.

Video games have pioneered AI developments since the Nim-playing Nimrod computer debuted in 1951. In the decades since, game developers have worked to improve AIs to provide a more enjoyable experience for a growing number of people around the world.

Just imagine the horror if those developers found out their work was instead being used for real military purposes without their knowledge. That’s exactly what developers behind the popular Unity game engine discovered.

According to a Vice report, three former and current Unity employees confirmed that much of the company’s contract work is to do with AI programming. That’s of little surprise and wouldn’t be of too much concern if it wasn’t conducted under the “GovTech” department with seemingly a high degree of secrecy.

“It should be very clear when people are stepping into the military initiative part of Unity,” one of Vice’s sources said, on condition of anonymity for fear of reprisal.

Vice discovered several deals with the Department of Defense, including two six-figure contracts for “modeling and simulation prototypes” with the US Air Force.

Unity bosses clearly understand that some employees may not be entirely comfortable with knowing their work could be used for war. One memo instructs managers to use the terms “government” or “defense” instead of “military.”

In an internal Slack group, Unity CEO John Riccitiello promised to have a meeting with employees.

“Whether or not I’m working directly for the government team, I’m empowering the products they’re selling,” wrote Riccitiello. “Do you want to use your tools to catch bad guys?”

That question is likely to receive some passionate responses. After all, few of us are going to forget the backlash and subsequent resignation of Googlers following revelations about the company’s since-revoked ‘Project Maven’ contract with the Pentagon.

You can find Vice’s full report here.

(Photo by Levi Meir Clancy on Unsplash)

Experts debate whether GitHub’s latest AI tool violates copyright law (6 July 2021)

GitHub’s impressive new code-assisting AI tool called Copilot is receiving both praise and criticism.

Copilot draws context from the code that a developer is working on and can suggest entire lines or functions. The underlying OpenAI system, Codex, is claimed to be “significantly more capable than GPT-3” at generating code, and it can help even veteran programmers to discover new APIs or ways to solve problems.

Critics claim the system is using copyrighted code that GitHub then plans to charge for.

Julia Reda, a researcher and former MEP, published a blog post arguing that “GitHub Copilot is not infringing your copyright”.

GitHub – and therefore its owner, Microsoft – is using the huge number of repositories it hosts with ‘copyleft’ licenses for its tool. Copyleft licenses allow software or documentation to be modified and redistributed, provided derivative works remain under the same terms.

Reda argues in her post that clamping down on tools such as GitHub’s through tighter copyright laws would harm copyleft and the benefits it offers.

One commenter isn’t entirely convinced:

“Lots of people have demonstrated that it pretty much regurgitates code verbatim from codebases with abandon. Putting GPL code inside a neural network does not remove the license if the output is the same as the input.

A large portion of what Copilot outputs is already full of copyright/license violations, even without extensions.”

Because the code is machine-generated, Reda also claims that it cannot be determined to be ‘derivative work’ that would face the wrath of intellectual property laws.

“Copyright law has only ever applied to intellectual creations – where there is no creator, there is no work,” says Reda. “This means that machine-generated code like that of GitHub Copilot is not a work under copyright law at all, so it is not a derivative work either.”

There is, of course, also a debate over whether the increasing amounts of machine-generated work should be covered under IP laws. We’ll let you decide your own position on the matter.

(Photo by Markus Winkler on Unsplash)

Google launches fully managed cloud ML platform Vertex AI (19 May 2021)

Google Cloud has launched Vertex AI, a fully managed cloud platform that simplifies the deployment and maintenance of machine learning models.

Vertex was announced during this year’s virtual I/O developer conference, somewhat breaking from Google’s tradition of using the keynote to focus on its mobile and web development solutions. Announcing the platform during the keynote shows how important the company believes it to be for a wide range of developers.

Google claims that using Vertex enables models to be trained with up to 80 percent fewer lines of code when compared to competing platforms.

Bradley Shimmin, Chief Analyst for AI Platforms, Analytics, and Data Management at Omdia, said:

“Data science practitioners hoping to put AI to work across the enterprise aren’t looking to wrangle tooling. Rather, they want tooling that can tame the ML lifecycle. Unfortunately, that is no small order.

It takes a supportive infrastructure capable of unifying the user experience, plying AI itself as a supportive guide, and putting data at the very heart of the process — all while encouraging the flexible adoption of diverse technologies.”

Vertex brings together Google Cloud’s AI solutions into a single environment where models can go from experimentation all the way to production.
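
A rough sketch of that unified workflow using the google-cloud-aiplatform Python SDK; the project, bucket, and column names below are placeholders.

```python
# Dataset creation, AutoML training, and deployment in one managed workflow.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Create a managed tabular dataset and train it with AutoML
dataset = aiplatform.TabularDataset.create(
    display_name="sales",
    gcs_source="gs://my-bucket/sales.csv",
)
job = aiplatform.AutoMLTabularTrainingJob(
    display_name="sales-model",
    optimization_prediction_type="regression",
)
model = job.run(dataset=dataset, target_column="revenue")

# One call promotes the trained model to a live, autoscaling endpoint
endpoint = model.deploy(machine_type="n1-standard-4")
print(endpoint.predict(instances=[{"units": "3", "region": "EMEA"}]))
```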

Andrew Moore, VP and GM of Cloud AI and Industry Solutions at Google Cloud, said:

“We had two guiding lights while building Vertex AI: get data scientists and engineers out of the orchestration weeds, and create an industry-wide shift that would make everyone get serious about moving AI out of pilot purgatory and into full-scale production.

We are very proud of what we came up with in this platform, as it enables serious deployments for a new generation of AI that will empower data scientists and engineers to do fulfilling and creative work.”

Vertex provides access to Google’s MLOps toolkit, which the company uses internally for workloads involving computer vision, conversation, and language.

Other MLOps features supported by Vertex include Vizier, which increases the rate of experimentation; Feature Store to help practitioners serve, share, and reuse ML features; and Experiments to accelerate the deployment of models into production with faster model selection.

Some high-profile companies were given early access to Vertex. Among them is ModiFace, a part of L’Oréal that focuses on the use of AR and AI to revolutionise the beauty industry.

Jeff Houghton, COO at ModiFace, said:

“We provide an immersive and personalized experience for people to purchase with confidence, whether it’s a virtual try-on at web checkout or helping to understand what brand product is right for each individual.

With more and more of our users looking for information at home, on their phone, or at any other touchpoint, Vertex AI allowed us to create technology that is incredibly close to actually trying the product in real life.”

ModiFace uses Vertex to train AI models for all of its new services. For example, the company’s skin diagnostic service is trained on thousands of images from L’Oréal’s Research & Innovation arm and is combined with ModiFace’s AI algorithm to create tailor-made skincare routines.

Another firm that is benefiting from Vertex’s capabilities is Essence, a media agency that is part of London-based global advertising and communications giant WPP.

With Vertex AI, Essence’s developers and data analysts are able to regularly update models to keep pace with the rapidly-changing world of human behaviours and channel content.

Those are just two examples of companies whose operations are already being greatly enhanced through Vertex. Now that the floodgates have been opened, we’re sure there’ll be many more stories over the coming years, and we can’t wait to hear about them.

You can learn how to get started with Vertex AI here.

(Photo by John Baker on Unsplash)
