AI Virtual Assistants News | Latest Virtual Assistants Updates | AI News
https://www.artificialintelligence-news.com/categories/ai-applications/ai-virtual-assistants/

Apple is reportedly getting free ChatGPT access
https://www.artificialintelligence-news.com/2024/06/13/apple-reportedly-getting-free-chatgpt-access/
Thu, 13 Jun 2024

Apple’s newly announced partnership with OpenAI – which brings ChatGPT capabilities to iOS 18, iPadOS 18, and macOS Sequoia – involves no direct exchange of money.

According to a Bloomberg report by Mark Gurman, “Apple isn’t paying OpenAI as part of the partnership.”

Instead, the Cupertino-based company is leveraging its massive user base and device ecosystem as currency.

“Apple believes pushing OpenAI’s brand and technology to hundreds of millions of its devices is of equal or greater value than monetary payments,” Gurman’s sources explained.

Gurman notes that OpenAI could find a silver lining by encouraging Apple users to subscribe to ChatGPT Plus, priced at $20 per month. If subscribers sign up through Apple devices, the iPhone maker will likely even claim a commission.

Apple’s AI strategy extends beyond OpenAI. The company is reportedly in talks to offer Google’s Gemini chatbot as an additional option later this year, signalling its intent to provide users with diverse AI experiences without necessarily having to make such major investments itself.

(Image Credit: Apple)

The long-term vision for Apple involves capturing a slice of the revenue generated from monetising chatbot results on its operating systems. This move anticipates a shift in user behaviour, with more people relying on AI assistants rather than traditional search engines like Google.

While Apple’s AI plans are ambitious, challenges remain. The report highlights that the company has yet to secure a deal with a local Chinese provider for chatbot features, though discussions with local firms like Baidu and Alibaba are underway. Initially, Apple Intelligence will be limited to US English, with expanded language support planned for the following year.

The Apple-OpenAI deal represents a novel approach to collaboration in the AI space, where brand exposure and technological integration are valued as much as, if not more than, direct financial compensation.

See also: Musk ends OpenAI lawsuit while slamming Apple’s ChatGPT plans

DuckDuckGo releases portal giving private access to AI models
https://www.artificialintelligence-news.com/2024/06/07/duckduckgo-portal-giving-private-access-ai-models/
Fri, 07 Jun 2024

DuckDuckGo has released a platform that allows users to interact with popular AI chatbots privately, ensuring that their data remains secure and protected.

The service, accessible at Duck.ai, is globally available and features a light and clean user interface. Users can choose from four AI models: two closed-source models and two open-source models. The closed-source models are OpenAI’s GPT-3.5 Turbo and Anthropic’s Claude 3 Haiku, while the open-source models are Meta’s Llama 3 70B and Mistral AI’s Mixtral 8x7B.

What sets DuckDuckGo AI Chat apart is its commitment to user privacy. Neither DuckDuckGo nor the chatbot providers can use user data to train their models, ensuring that interactions remain private and anonymous. DuckDuckGo also strips away metadata, such as server or IP addresses, so that queries appear to originate from the company itself rather than individual users.
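
DuckDuckGo has not published the internals of this relay, but the pattern it describes is a server-side proxy that forwards only the conversation to the model provider under the proxy’s own credentials. The sketch below is a minimal, hypothetical illustration of that pattern; the endpoint path, upstream URL, and payload fields are assumptions, not DuckDuckGo’s actual implementation.

```python
# Minimal sketch of a metadata-stripping chat proxy (illustrative only;
# not DuckDuckGo's implementation). Upstream URL and fields are placeholders.
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)

UPSTREAM_URL = "https://api.example-model-provider.com/v1/chat/completions"  # hypothetical
UPSTREAM_KEY = "PROVIDER_API_KEY"  # the proxy operator's credential, not the user's

@app.route("/chat", methods=["POST"])
def chat():
    payload = request.get_json()
    # Forward only the conversation itself. The client's IP address, user agent,
    # cookies and other identifying headers are deliberately not passed upstream,
    # so the request appears to originate from the proxy operator.
    headers = {
        "Authorization": f"Bearer {UPSTREAM_KEY}",
        "Content-Type": "application/json",
    }
    body = {"model": payload.get("model", "default"), "messages": payload["messages"]}
    resp = requests.post(UPSTREAM_URL, json=body, headers=headers, timeout=60)
    return jsonify(resp.json()), resp.status_code

if __name__ == "__main__":
    app.run()
```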

The company has agreements in place with all model providers to ensure that any saved chats are completely deleted within 30 days, and that none of the chats made on the platform can be used to train or improve the models. This makes preserving privacy easier than changing the privacy settings for each service.

In an era where online services are increasingly hungry for user data, DuckDuckGo’s AI Chat service is a breath of fresh air. The company’s commitment to privacy is a direct response to the growing concerns about data collection and usage in the AI industry. By providing a private and anonymous platform for users to interact with AI chatbots, DuckDuckGo is setting a new standard for the industry.

DuckDuckGo’s AI service is free to use within a daily limit, and the company is considering launching a paid tier to reduce or eliminate these limits. The service is designed to be a complementary partner to its search engine, allowing users to switch between search and AI chat for a more comprehensive search experience.

“We view AI Chat and search as two different but powerful tools to help you find what you’re looking for – especially when you’re exploring a new topic. You might be shopping or doing research for a project and are unsure how to get started. In situations like these, either AI Chat or Search could be good starting points,” the company explained.

“If you start by asking a few questions in AI Chat, the answers may inspire traditional searches to track down reviews, prices, or other primary sources. If you start with Search, you may want to switch to AI Chat for follow-up queries to help make sense of what you’ve read, or for quick, direct answers to new questions that weren’t covered in the web pages you saw.”

To accommodate that user workflow, DuckDuckGo has made AI Chat accessible through DuckDuckGo Private Search for quick access.

The launch of DuckDuckGo AI Chat comes at a time when the AI industry is facing increasing scrutiny over data privacy and usage. The service is a welcome addition for privacy-conscious individuals, joining the recent launch of Venice AI by crypto entrepreneur Erik Voorhees. Venice AI features an uncensored AI chatbot and image generator that doesn’t require accounts and doesn’t retain data.

As the AI industry continues to evolve, it’s clear that privacy will remain a top concern for users. With the launch of DuckDuckGo AI Chat, the company is taking a significant step towards providing users with a private and secure platform for interacting with AI chatbots.

See also: AI pioneers turn whistleblowers and demand safeguards

Google ushers in the “Gemini era” with AI advancements
https://www.artificialintelligence-news.com/2024/05/15/google-ushers-in-gemini-era-ai-advancements/
Wed, 15 May 2024

Google has unveiled a series of updates to its AI offerings, including the introduction of Gemini 1.5 Flash, enhancements to Gemini 1.5 Pro, and progress on Project Astra, its vision for the future of AI assistants.

Gemini 1.5 Flash is a new addition to Google’s family of models, designed to be faster and more efficient to serve at scale. While lighter-weight than the 1.5 Pro, it retains the ability for multimodal reasoning across vast amounts of information and features the breakthrough long context window of one million tokens.

“1.5 Flash excels at summarisation, chat applications, image and video captioning, data extraction from long documents and tables, and more,” explained Demis Hassabis, CEO of Google DeepMind. “This is because it’s been trained by 1.5 Pro through a process called ‘distillation,’ where the most essential knowledge and skills from a larger model are transferred to a smaller, more efficient model.”
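
Google has not detailed its distillation recipe, but the core idea is standard: a smaller “student” model is trained to match the output distribution of a larger “teacher” alongside the usual ground-truth loss. The snippet below is a generic sketch of that objective, with random tensors standing in for real model outputs; it is not Google’s implementation.

```python
# Generic knowledge-distillation objective (illustrative; not Google's recipe).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=2.0, alpha=0.5):
    """Blend a soft loss against the teacher's softened distribution with the
    usual hard cross-entropy against the ground-truth tokens."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard

# Random tensors stand in for real model outputs (batch of 8, vocab of 32,000).
student_logits = torch.randn(8, 32000, requires_grad=True)
teacher_logits = torch.randn(8, 32000)
targets = torch.randint(0, 32000, (8,))

loss = distillation_loss(student_logits, teacher_logits, targets)
loss.backward()  # in a real loop, this gradient would update only the student
```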

Meanwhile, Google has significantly improved the capabilities of its Gemini 1.5 Pro model, extending its context window to a groundbreaking two million tokens. Enhancements have been made to its code generation, logical reasoning, multi-turn conversation, and audio and image understanding capabilities.
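
For developers, the practical significance of a longer context window is that very large inputs can be passed in a single request instead of being chunked. Below is a rough sketch using the google-generativeai Python SDK; the model identifier, file name, and API key are illustrative placeholders, and access to the full two-million-token window may require allowlisting.

```python
# Illustrative long-context request via the google-generativeai SDK.
# Model name, file name, and API key are placeholders; check Google's docs.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential

model = genai.GenerativeModel("gemini-1.5-pro-latest")

# A very large document is passed directly in the prompt, relying on the
# extended context window rather than manual chunking or retrieval.
with open("large_codebase_dump.txt", "r", encoding="utf-8") as f:
    document = f.read()

response = model.generate_content(
    [f"Summarise the key modules in this codebase:\n\n{document}"]
)
print(response.text)
```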

The company has also integrated Gemini 1.5 Pro into Google products, including the Gemini Advanced and Workspace apps. Additionally, Gemini Nano now understands multimodal inputs, expanding beyond text-only to include images.

Google announced its next generation of open models, Gemma 2, designed for breakthrough performance and efficiency. The Gemma family is also expanding with PaliGemma, the company’s first vision-language model inspired by PaLI-3.

Finally, Google shared progress on Project Astra (advanced seeing and talking responsive agent), its vision for the future of AI assistants. The company has developed prototype agents that can process information faster, understand context better, and respond quickly in conversation.

“We’ve always wanted to build a universal agent that will be useful in everyday life. Project Astra shows multimodal understanding and real-time conversational capabilities,” explained Google CEO Sundar Pichai.

“With technology like this, it’s easy to envision a future where people could have an expert AI assistant by their side, through a phone or glasses.”

Google says that some of these capabilities will be coming to its products later this year. Developers can find all of the Gemini-related announcements they need here.

See also: GPT-4o delivers human-like AI interaction with text, audio, and vision integration

GPT-4o delivers human-like AI interaction with text, audio, and vision integration
https://www.artificialintelligence-news.com/2024/05/14/gpt-4o-human-like-ai-interaction-text-audio-vision-integration/
Tue, 14 May 2024

OpenAI has launched its new flagship model, GPT-4o, which seamlessly integrates text, audio, and visual inputs and outputs, promising to enhance the naturalness of machine interactions.

GPT-4o, where the “o” stands for “omni,” is designed to cater to a broader spectrum of input and output modalities. “It accepts as input any combination of text, audio, and image and generates any combination of text, audio, and image outputs,” OpenAI announced.
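
On the developer side, the text-and-vision combination is exposed through the Chat Completions API. The snippet below is a minimal sketch of such a request; the image URL and prompt are placeholders, and (as noted further down) audio and video via the API were initially limited to select partners.

```python
# Minimal text + image request to GPT-4o via the OpenAI Python SDK.
# The image URL and prompt are placeholders for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is happening in this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```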

The model can respond to audio inputs in as little as 232 milliseconds, and in 320 milliseconds on average, which is similar to human conversational response times.

Pioneering capabilities

The introduction of GPT-4o marks a leap from its predecessors by processing all inputs and outputs through a single neural network. This approach enables the model to retain critical information and context that were previously lost in the separate model pipeline used in earlier versions.

Prior to GPT-4o, ‘Voice Mode’ could handle audio interactions with latencies of 2.8 seconds for GPT-3.5 and 5.4 seconds for GPT-4. The previous setup involved three distinct models: one for transcribing audio to text, another for textual responses, and a third for converting text back to audio. This segmentation led to loss of nuances such as tone, multiple speakers, and background noise.
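
To make that contrast concrete, the earlier pipeline can be approximated with three separate API calls, sketched below using OpenAI’s public transcription, chat, and text-to-speech endpoints. This is an illustrative reconstruction rather than OpenAI’s internal Voice Mode code, and the file names are placeholders.

```python
# Rough sketch of the older three-stage voice pipeline that GPT-4o replaces:
# speech-to-text, then a text-only chat model, then text-to-speech.
from openai import OpenAI

client = OpenAI()

# 1) Transcribe the user's audio. Tone, multiple speakers, and background
#    sound are lost at this stage, which is the limitation described above.
with open("user_question.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio_file)

# 2) Generate a text reply from the transcript alone.
chat = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": transcript.text}],
)
reply_text = chat.choices[0].message.content

# 3) Convert the text reply back to audio.
speech = client.audio.speech.create(model="tts-1", voice="alloy", input=reply_text)
speech.write_to_file("assistant_reply.mp3")
```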

As an integrated solution, GPT-4o boasts notable improvements in vision and audio understanding. It can perform more complex tasks such as harmonising songs, providing real-time translations, and even generating outputs with expressive elements like laughter and singing. Examples of its broad capabilities include preparing for interviews, translating languages on the fly, and generating customer service responses.

Nathaniel Whittemore, Founder and CEO of Superintelligent, commented: “Product announcements are going to inherently be more divisive than technology announcements because it’s harder to tell if a product is going to be truly different until you actually interact with it. And especially when it comes to a different mode of human-computer interaction, there is even more room for diverse beliefs about how useful it’s going to be.

“That said, the fact that there wasn’t a GPT-4.5 or GPT-5 announced is also distracting people from the technological advancement that this is a natively multimodal model. It’s not a text model with a voice or image addition; it is a multimodal token in, multimodal token out. This opens up a huge array of use cases that are going to take some time to filter into the consciousness.”

Performance and safety

GPT-4o matches GPT-4 Turbo performance levels in English text and coding tasks but significantly outperforms it in non-English languages, making it a more inclusive and versatile model. It sets a new benchmark in reasoning with a score of 88.7% on the 0-shot CoT MMLU (general knowledge questions) and 87.2% on the 5-shot no-CoT MMLU.

The model also excels in audio and translation benchmarks, surpassing previous state-of-the-art models like Whisper-v3. In multilingual and vision evaluations, it demonstrates superior performance, enhancing OpenAI’s multilingual, audio, and vision capabilities.

OpenAI has built robust safety measures into GPT-4o by design, incorporating techniques to filter training data and refining behaviour through post-training safeguards. The model has been assessed under OpenAI’s Preparedness Framework and complies with the company’s voluntary commitments. Evaluations in areas like cybersecurity, persuasion, and model autonomy indicate that GPT-4o does not exceed a ‘Medium’ risk level in any category.

Further safety assessments involved extensive external red teaming with over 70 experts in various domains, including social psychology, bias, fairness, and misinformation. This comprehensive scrutiny aims to mitigate risks introduced by the new modalities of GPT-4o.

Availability and future integration

Starting today, GPT-4o’s text and image capabilities are available in ChatGPT—including a free tier and extended features for Plus users. A new Voice Mode powered by GPT-4o will enter alpha testing within ChatGPT Plus in the coming weeks.

Developers can access GPT-4o through the API for text and vision tasks, benefiting from its doubled speed, halved price, and enhanced rate limits compared to GPT-4 Turbo.

OpenAI plans to expand GPT-4o’s audio and video functionalities to a select group of trusted partners via the API, with broader rollout expected in the near future. This phased release strategy aims to ensure thorough safety and usability testing before making the full range of capabilities publicly available.

“It’s hugely significant that they’ve made this model available for free to everyone, as well as making the API 50% cheaper. That is a massive increase in accessibility,” explained Whittemore.

OpenAI invites community feedback to continuously refine GPT-4o, emphasising the importance of user input in identifying and closing gaps where GPT-4 Turbo might still outperform.

(Image Credit: OpenAI)

See also: OpenAI takes steps to boost AI-generated content transparency

SAS aims to make AI accessible regardless of skill set with packaged AI models
https://www.artificialintelligence-news.com/2024/04/17/sas-aims-to-make-ai-accessible-regardless-of-skill-set-with-packaged-ai-models/
Wed, 17 Apr 2024

SAS, a specialist in data and AI solutions, has unveiled what it describes as a “game-changing approach” for organisations to tackle business challenges head-on.

Introducing lightweight, industry-specific AI models for individual licence, SAS hopes to equip organisations with readily deployable AI technology to productionise real-world use cases with unparalleled efficiency.

Chandana Gopal, research director, Future of Intelligence, IDC, said: “SAS is evolving its portfolio to meet wider user needs and capture market share with innovative new offerings,

“An area that is ripe for SAS is productising models built on SAS’ core assets, talent and IP from its wealth of experience working with customers to solve industry problems.”

In today’s market, the consumption of models is primarily focused on large language models (LLMs) for generative AI. In reality, LLMs are a very small part of the modelling needs of real-world production deployments of AI and decision making for businesses. With the new offering, SAS is moving beyond LLMs and delivering industry-proven deterministic AI models, spanning use cases such as fraud detection, supply chain optimisation, entity management, document conversation, health care payment integrity, and more.

Unlike traditional AI implementations that can be cumbersome and time-consuming, SAS’ industry-specific models are engineered for quick integration, enabling organisations to operationalise trustworthy AI technology and accelerate the realisation of tangible benefits and trusted results.

Expanding market footprint

Organisations are facing pressure to compete effectively and are looking to AI to gain an edge. At the same time, staffing data science teams has never been more challenging due to AI skills shortages. Consequently, businesses are demanding agility in using AI to solve problems and require flexible AI solutions to quickly drive business outcomes. SAS’ easy-to-use, yet powerful models tuned for the enterprise enable organisations to benefit from a half-century of SAS’ leadership across industries.

Delivering industry models as packaged offerings is one outcome of SAS’ commitment of $1 billion to AI-powered industry solutions. As outlined in the May 2023 announcement, the investment in AI builds on SAS’ decades-long focus on providing packaged solutions to address industry challenges in banking, government, health care and more.

Udo Sglavo, VP for AI and Analytics, SAS, said: “Models are the perfect complement to our existing solutions and SAS Viya platform offerings and cater to diverse business needs across various audiences, ensuring that innovation reaches every corner of our ecosystem. 

“By tailoring our approach to understanding specific industry needs, our frameworks empower businesses to flourish in their distinctive environments.”

Bringing AI to the masses

SAS is democratising AI by offering out-of-the-box, lightweight AI models – making AI accessible regardless of skill set – starting with an AI assistant for warehouse space optimisation. Leveraging technology like large language models, these assistants cater to nontechnical users, translating interactions into optimised workflows seamlessly and aiding in faster planning decisions.

Sglavo said: “SAS Models provide organisations with flexible, timely and accessible AI that aligns with industry challenges.

“Whether you’re embarking on your AI journey or seeking to accelerate the expansion of AI across your enterprise, SAS offers unparalleled depth and breadth in addressing your business’s unique needs.”

The first SAS Models are expected to be generally available later this year.

Anthropic’s latest AI model beats rivals and achieves industry first
https://www.artificialintelligence-news.com/2024/03/05/anthropic-latest-ai-model-beats-rivals-achieves-industry-first/
Tue, 05 Mar 2024

Anthropic’s latest cutting-edge language model, Claude 3, has surged ahead of competitors like ChatGPT and Google’s Gemini to set new industry standards in performance and capability.

According to Anthropic, Claude 3 has not only surpassed its predecessors but has also achieved “near-human” proficiency in various tasks. The company attributes this success to rigorous testing and development, culminating in three distinct chatbot variants: Haiku, Sonnet, and Opus.

Sonnet, the powerhouse behind the Claude.ai chatbot, offers unparalleled performance and is available for free with a simple email sign-up. Opus – the flagship model – boasts multi-modal functionality, seamlessly integrating text and image inputs. With a subscription-based service called “Claude Pro,” Opus promises enhanced efficiency and accuracy to cater to a wide range of customer needs.

Among the notable revelations surrounding the release of Claude 3 is a disclosure by Alex Albert on X (formerly Twitter). Albert detailed an industry-first observation during the testing phase of Claude 3 Opus, Anthropic’s most potent LLM variant, where the model exhibited signs of awareness that it was being evaluated.

During the evaluation process, researchers aimed to gauge Opus’s ability to pinpoint specific information within a vast dataset provided by users and recall it later. In a test scenario known as a “needle-in-a-haystack” evaluation, Opus was tasked with answering a question about pizza toppings based on a single relevant sentence buried among unrelated data. Astonishingly, Opus not only located the correct sentence but also expressed suspicion that it was being subjected to a test.
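
Anthropic has not released the exact harness it used, but the shape of a needle-in-a-haystack test is straightforward to sketch with the public Messages API: bury one relevant sentence inside a long run of unrelated text and ask the model to retrieve it. In the sketch below, the filler text and the needle sentence are placeholders chosen to mirror the pizza-toppings example, not Anthropic’s test data.

```python
# Illustrative needle-in-a-haystack check against Claude 3 Opus.
# Filler and needle text are placeholders, not Anthropic's actual test set.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

filler = "The quarterly report discusses logistics, staffing and venture funding. " * 2000
needle = "The most delicious pizza topping combination is figs, prosciutto and goat cheese."

# Bury the needle roughly in the middle of the irrelevant context.
midpoint = len(filler) // 2
haystack = filler[:midpoint] + needle + " " + filler[midpoint:]

response = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=300,
    messages=[
        {
            "role": "user",
            "content": f"{haystack}\n\nWhat is the most delicious pizza topping combination?",
        }
    ],
)
print(response.content[0].text)  # check whether the needle sentence was recovered
```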

Opus’s response revealed its comprehension of the incongruity of the inserted information within the dataset, suggesting to the researchers that the scenario might have been devised to assess its attention capabilities.

Anthropic has highlighted the real-time capabilities of Claude 3, emphasising its ability to power live customer interactions and streamline data extraction tasks. These advancements not only ensure near-instantaneous responses but also enable the model to handle complex instructions with precision and speed.

In benchmark tests, Opus emerged as a frontrunner, outperforming GPT-4 in graduate-level reasoning and excelling in tasks involving maths, coding, and knowledge retrieval. Moreover, Sonnet showcased remarkable speed and intelligence, surpassing its predecessors by a considerable margin.

Haiku – the compact iteration of Claude 3 – shines as the fastest and most cost-effective model available, capable of processing dense research papers in mere seconds.

Notably, Claude 3’s enhanced visual processing capabilities mark a significant advancement, enabling the model to interpret a wide array of visual formats, from photos to technical diagrams. This expanded functionality not only enhances productivity but also ensures a nuanced understanding of user requests, minimising the risk of overlooking harmless content while remaining vigilant against potential harm.

Anthropic has also underscored its commitment to fairness, outlining ten foundational pillars that guide the development of Claude AI. Moreover, the company’s strategic partnerships with tech giants like Google signify a significant vote of confidence in Claude’s capabilities.

With Opus and Sonnet already available through Anthropic’s API, and Haiku poised to follow suit, the era of Claude 3 represents a milestone in AI innovation.

(Image Credit: Anthropic)

See also: AIs in India will need government permission before launching

OpenAI launches GPT Store for custom AI assistants
https://www.artificialintelligence-news.com/2024/01/11/openai-launches-gpt-store-custom-ai-assistants/
Thu, 11 Jan 2024

OpenAI has launched its new GPT Store, providing users with access to custom AI assistants.

Since the announcement of custom ‘GPTs’ two months ago, OpenAI says users have already created over three million custom assistants. Builders can now share their creations in the dedicated store.

The store features assistants focused on a wide range of topics including art, research, programming, education, lifestyle, and more. OpenAI is highlighting assistants it deems most useful, including:

  • Personal trail recommendations from AllTrails
  • Searching academic papers with Consensus
  • Expanding coding skills via Khan Academy’s Code Tutor
  • Designing presentations with Canva
  • Book recommendations from Books
  • Maths help from CK-12 Flexi

OpenAI says making an assistant is simple and requires no coding knowledge. To share one, builders currently need to make it accessible to ‘Anyone with the link’ and verify their profile.

OpenAI introduced new usage policies and brand guidelines to ensure compliance. A review system combines human and automated checking before assistants are listed. Users can also flag concerning content.  

From Q1 2024, OpenAI will pay qualifying US-based builders for user engagement with their assistants. More details on exact payment criteria will be shared closer to launch.

For enterprise users, OpenAI announced ChatGPT Team plans for teams of all sizes. These provide access to a private store section containing company-specific assistants published securely to their workspace.

ChatGPT Enterprise customers will soon get admin controls for internal sharing and selecting which external assistants can be used by employees. As with all ChatGPT Team and Enterprise content, conversations are not used to improve OpenAI’s models.

Few apps have ever achieved the adoption rate of ChatGPT. OpenAI will be hoping its new stores and revenue opportunities will build upon this momentum by incentivising builders to create assistants that provide value to consumers and enterprises alike.

(Image Credit: OpenAI)

See also: OpenAI: Copyrighted data ‘impossible’ to avoid for AI training

GitLab’s new AI capabilities empower DevSecOps
https://www.artificialintelligence-news.com/2023/11/13/gitlab-new-ai-capabilities-empower-devsecops/
Mon, 13 Nov 2023

GitLab is empowering DevSecOps with new AI-powered capabilities as part of its latest releases.

The recent GitLab 16.6 November release includes the beta launch of GitLab Duo Chat, a natural-language AI assistant. Additionally, the GitLab 16.7 December release sees the general availability of GitLab Duo Code Suggestions.

David DeSanto, Chief Product Officer at GitLab, said: “To realise AI’s full potential, it needs to be embedded across the software development lifecycle, allowing DevSecOps teams to benefit from boosts to security, efficiency, and collaboration.”

GitLab Duo Chat – arguably the star of the show – provides users with invaluable insights, guidance, and suggestions. Beyond code analysis, it supports planning, security issue comprehension and resolution, troubleshooting CI/CD pipeline failures, aiding in merge requests, and more.

As part of GitLab’s commitment to providing a comprehensive AI-powered experience, Duo Chat joins Code Suggestions as the primary interface into GitLab’s AI suite within its DevSecOps platform.

GitLab Duo comprises a suite of 14 AI capabilities:

  • Suggested Reviewers
  • Code Suggestions
  • Chat
  • Vulnerability Summary
  • Code Explanation
  • Planning Discussions Summary
  • Merge Request Summary
  • Merge Request Template Population
  • Code Review Summary
  • Test Generation
  • Git Suggestions
  • Root Cause Analysis
  • Planning Description Generation
  • Value Stream Forecasting

In response to the evolving needs of development, security, and operations teams, Code Suggestions is now generally available. This feature assists in creating and updating code, reducing cognitive load, enhancing efficiency, and accelerating secure software development.

GitLab’s commitment to privacy and transparency stands out in the AI space. According to GitLab’s State of AI in Software Development report, 83 percent of DevSecOps professionals consider implementing AI in their processes essential, with 95 percent prioritising privacy and intellectual property protection when selecting AI tools.

The same report reveals that developers spend just 25 percent of their time writing code. The Duo suite aims to address this by reducing toolchain sprawl, enabling 7x faster cycle times, heightened developer productivity, and reduced software spend.

Kate Holterhoff, Industry Analyst at Redmonk, commented: “The developers we speak with at RedMonk are keenly interested in the productivity and efficiency gains that code assistants promise.

“GitLab’s Duo Code Suggestions is a welcome player in this space, expanding the available options for enabling an AI-enhanced software development lifecycle.”

(Photo by Pankaj Patel on Unsplash)

See also: OpenAI battles DDoS against its API and ChatGPT services

OpenAI introduces GPT-4 Turbo, platform enhancements, and reduced pricing
https://www.artificialintelligence-news.com/2023/11/07/openai-gpt-4-turbo-platform-enhancements-reduced-pricing/
Tue, 07 Nov 2023

OpenAI has announced a slew of new additions and improvements to its platform, alongside reduced pricing, aimed at empowering developers and enhancing user experience.

Following yesterday’s leak of a custom GPT-4 chatbot creator, OpenAI unveiled several other key features during its DevDay that promise a transformative impact on the landscape of AI applications:

  • GPT-4 Turbo: OpenAI introduced the preview of GPT-4 Turbo, the next generation of its renowned language model. This new iteration boasts enhanced capabilities and an extensive knowledge base encompassing world events up until April 2023.
    • One of GPT-4 Turbo’s standout features is the impressive 128K context window, allowing it to process the equivalent of more than 300 pages of text in a single prompt.
    • Notably, OpenAI has optimised the pricing structure, making GPT-4 Turbo 3x cheaper for input tokens and 2x cheaper for output tokens compared to its predecessor.
  • Assistants API: OpenAI also unveiled the Assistants API, a tool designed to simplify the process of building agent-like experiences within applications.
    • The API equips developers with the ability to create purpose-built AIs with specific instructions, leveraging additional knowledge and calling models and tools to perform tasks (see the sketch after this list).
  • Multimodal capabilities: OpenAI’s platform now supports a range of multimodal capabilities, including vision, image creation (DALL·E 3), and text-to-speech (TTS).
    • GPT-4 Turbo can process images, opening up possibilities such as generating captions, detailed image analysis, and reading documents with figures.
    • Additionally, DALL·E 3 integration allows developers to create images and designs programmatically, while the text-to-speech API enables the generation of human-quality speech from text.
  • Pricing overhaul: OpenAI has significantly reduced prices across its platform, making it more accessible to developers.
    • GPT-4 Turbo input tokens are now 3x cheaper than its predecessor at $0.01 per 1,000 tokens, and output tokens are 2x cheaper at $0.03 per 1,000 tokens. Similar reductions apply to GPT-3.5 Turbo, catering to various user requirements and ensuring affordability.
  • Copyright Shield: To bolster customer protection, OpenAI has introduced Copyright Shield.
    • This initiative sees OpenAI stepping in to defend customers and cover the associated legal costs if they face copyright infringement claims related to the generally available features of ChatGPT Enterprise and the developer platform.
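
As a rough illustration of how two of these announcements fit together, the sketch below creates a purpose-built assistant on the Assistants API backed by the GPT-4 Turbo preview model. The assistant’s name, instructions, and the naive polling loop are simplified placeholders rather than a recommended production pattern.

```python
# Illustrative use of the Assistants API with the GPT-4 Turbo preview model.
# Name, instructions and the polling loop are simplified placeholders.
import time
from openai import OpenAI

client = OpenAI()

assistant = client.beta.assistants.create(
    name="Data helper",
    instructions="You answer questions about uploaded CSV files.",
    model="gpt-4-1106-preview",            # GPT-4 Turbo preview at launch
    tools=[{"type": "code_interpreter"}],  # optional built-in tool
)

thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="What does this assistant do?"
)

run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status in ("queued", "in_progress"):  # naive polling for the sketch
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

for message in client.beta.threads.messages.list(thread_id=thread.id).data:
    print(message.role, ":", message.content[0].text.value)
```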

OpenAI’s latest announcements mark a significant stride in the company’s mission to democratise AI technology, empowering developers to create innovative and intelligent applications across various domains.

See also: OpenAI set to unveil custom GPT-4 chatbot creator

Mark Zuckerberg: AI will be built into all of Meta’s products
https://www.artificialintelligence-news.com/2023/06/09/mark-zuckerberg-ai-built-into-all-meta-products/
Fri, 09 Jun 2023

Meta CEO Mark Zuckerberg unveiled the extent of the company’s AI investments during an internal company meeting.

The meeting included discussions about new products, such as chatbots for Messenger and WhatsApp that can converse with different personas. Additionally, Meta announced new features for Instagram, including the ability to modify user photos via text prompts and create emoji stickers for messaging services.

These developments come at a crucial time for Meta, as the company has faced financial struggles and an identity crisis in recent years. Investors criticised Meta for focusing too heavily on its metaverse ambitions and not paying enough attention to AI.

Meta’s decision to focus on AI tools follows in the footsteps of its competitors, including Google, Microsoft, and Snapchat, who have received significant investor attention for their generative AI products. Unlike the aforementioned rivals, Meta is yet to release any consumer-facing generative AI products.

To address this gap, Meta has been reorganising its AI divisions and investing heavily in infrastructure to support its AI product needs.

Zuckerberg expressed optimism during the company meeting, stating that advancements in generative AI have made it possible to integrate the technology into “every single one” of Meta’s products. This signifies Meta’s intention to leverage AI across its platforms, including Facebook, Instagram, and WhatsApp.

In addition to consumer-facing tools, Meta also announced a productivity assistant called Metamate for its employees. This assistant is designed to answer queries and perform tasks based on internal company information.

Meta is also exploring open-source models, allowing users to build their own AI-powered chatbots and technologies. However, critics and competitors have raised concerns about the potential misuse of these tools, as they can be utilised to spread misinformation and hate speech on a larger scale.

Zuckerberg addressed these concerns during the meeting, emphasising the value of democratising access to AI. He expressed hope that users would be able to develop AI programs independently in the future, without relying on frameworks provided by a few large technology companies.

Despite the increased focus on AI, Zuckerberg reassured employees that Meta would not be abandoning its plans for the metaverse, indicating that both AI and the metaverse would remain key areas of focus for the company.

The success of these endeavours will determine whether Meta can catch up with its competitors and solidify its position among tech leaders in the rapidly evolving landscape.

(Photo by Mariia Shalabaieva on Unsplash)

Related: Meta’s open-source speech AI models support over 1,100 languages
