meta Archives - AI News
https://www.artificialintelligence-news.com/tag/meta/
Artificial Intelligence News, Wed, 19 Jun 2024

Meta unveils five AI models for multi-modal processing, music generation, and more
Wed, 19 Jun 2024
https://www.artificialintelligence-news.com/2024/06/19/meta-unveils-ai-models-multi-modal-processing-music-generation-more/

Meta has unveiled five major new AI models and research, including multi-modal systems that can process both text and images, next-gen language models, music generation, AI speech detection, and efforts to improve diversity in AI systems.

The releases come from Meta’s Fundamental AI Research (FAIR) team, which has focused on advancing AI through open research and collaboration for over a decade. As AI innovates rapidly, Meta believes working with the global community is crucial.

“By publicly sharing this research, we hope to inspire iterations and ultimately help advance AI in a responsible way,” said Meta.

Chameleon: Multi-modal text and image processing

Among the releases are key components of Meta’s ‘Chameleon’ models under a research license. Chameleon is a family of multi-modal models that can understand and generate both text and images simultaneously—unlike most large language models which are typically unimodal.

“Just as humans can process the words and images simultaneously, Chameleon can process and deliver both image and text at the same time,” explained Meta. “Chameleon can take any combination of text and images as input and also output any combination of text and images.”
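Meta describes Chameleon as handling text and images in a single model. One way to picture that style of "early fusion" is a single token stream in which text tokens and image tokens are interleaved before being fed to one transformer. The sketch below is purely illustrative (the modality tags and token values are invented for the example, not Chameleon's actual vocabulary):

```python
def interleave(segments):
    """Flatten alternating (modality, tokens) segments into one
    stream, the way an early-fusion model consumes mixed inputs."""
    stream = []
    for modality, tokens in segments:
        stream.extend((modality, t) for t in tokens)
    return stream

# A prompt mixing two text spans around an image:
mixed = interleave([("text", [5, 9]), ("image", [1001, 1002]), ("text", [7])])
```

The model then attends over the whole mixed sequence at once, rather than routing each modality through a separate encoder.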

Potential use cases are virtually limitless, from generating creative captions to prompting new scenes with text and images.

Multi-token prediction for faster language model training

Meta has also released pretrained models for code completion that use ‘multi-token prediction’ under a non-commercial research license. Traditional language-model training predicts only the next word at each step, which is inefficient; multi-token models are trained to predict several future words at once, making training faster.

“While [the one-word] approach is simple and scalable, it’s also inefficient. It requires several orders of magnitude more text than what children need to learn the same degree of language fluency,” said Meta.
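The difference shows up in how training targets are constructed: instead of one next-token target per position, each position gets the next n tokens as targets. This is a simplified sketch of target construction only; the actual models use multiple parallel output heads on a shared trunk, which this toy function does not model:

```python
def multi_token_targets(tokens, n_future=4):
    """Build (context, targets) training pairs where each position
    predicts the next n_future tokens, not just one.
    Standard next-token training is the n_future=1 case."""
    examples = []
    for i in range(len(tokens) - n_future):
        context = tokens[: i + 1]
        targets = tokens[i + 1 : i + 1 + n_future]
        examples.append((context, targets))
    return examples
```

Each training step thus extracts several words' worth of signal from the same context, which is one intuition for why fewer tokens are needed overall.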

JASCO: Enhanced text-to-music model

On the creative side, Meta’s JASCO generates music clips from text while affording more control by accepting additional inputs such as chords and beats.

“While existing text-to-music models like MusicGen rely mainly on text inputs for music generation, our new model, JASCO, is capable of accepting various inputs, such as chords or beat, to improve control over generated music outputs,” explained Meta.

AudioSeal: Detecting AI-generated speech

Meta claims AudioSeal is the first audio watermarking system designed to detect AI-generated speech. It can pinpoint the specific segments generated by AI within larger audio clips up to 485x faster than previous methods.
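The article does not describe AudioSeal's detector internals, but "pinpointing segments" can be pictured as thresholding a per-frame watermark score and merging contiguous flagged frames into segments. The following is an illustrative sketch under that assumption, not AudioSeal's actual algorithm:

```python
def watermarked_segments(frame_scores, threshold=0.5):
    """Merge consecutive frames whose detector score meets the
    threshold into (start, end) index pairs (end exclusive)."""
    segments, start = [], None
    for i, score in enumerate(frame_scores):
        if score >= threshold and start is None:
            start = i  # a flagged run begins
        elif score < threshold and start is not None:
            segments.append((start, i))  # run ends
            start = None
    if start is not None:  # run extends to the end of the clip
        segments.append((start, len(frame_scores)))
    return segments
```

A localised detector of this shape only needs one pass over frame-level scores, which is consistent with the large speed-ups Meta reports over methods that re-decode whole clips.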

“AudioSeal is being released under a commercial license. It’s just one of several lines of responsible research we have shared to help prevent the misuse of generative AI tools,” said Meta.

Improving text-to-image diversity

Another important release aims to improve the diversity of text-to-image models which can often exhibit geographical and cultural biases.

Meta developed automatic indicators to evaluate potential geographical disparities and conducted a large-scale study, gathering more than 65,000 annotations, to understand how people around the world perceive geographic representation.
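Meta's actual indicators are not detailed in the article. As a toy illustration only, one very simple disparity indicator is the ratio between the most- and least-represented regions among a batch of generated images:

```python
def geo_disparity(region_counts):
    """Toy disparity indicator: ratio of the most- to the
    least-represented region (1.0 means perfectly balanced).
    region_counts maps region name -> number of generated images."""
    values = list(region_counts.values())
    return max(values) / max(min(values), 1)
```

Real evaluations would weight by prompt intent and human annotation, but a ratio-style indicator conveys the idea of an automatic disparity score.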

“This enables more diversity and better representation in AI-generated images,” said Meta. The relevant code and annotations have been released to help improve diversity across generative models.

By publicly sharing these groundbreaking models, Meta says it hopes to foster collaboration and drive innovation within the AI community.

(Photo by Dima Solomin)

See also: NVIDIA presents latest advancements in visual AI

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

DuckDuckGo releases portal giving private access to AI models
Fri, 07 Jun 2024
https://www.artificialintelligence-news.com/2024/06/07/duckduckgo-portal-giving-private-access-ai-models/

DuckDuckGo has released a platform that allows users to interact with popular AI chatbots privately, ensuring that their data remains secure and protected.

The service, accessible at Duck.ai, is globally available and features a light and clean user interface. Users can choose from four AI models: two closed-source models and two open-source models. The closed-source models are OpenAI’s GPT-3.5 Turbo and Anthropic’s Claude 3 Haiku, while the open-source models are Meta’s Llama-3 70B and Mistral AI’s Mixtral 8x7b.

What sets DuckDuckGo AI Chat apart is its commitment to user privacy. Neither DuckDuckGo nor the chatbot providers can use user data to train their models, ensuring that interactions remain private and anonymous. DuckDuckGo also strips away metadata, such as server or IP addresses, so that queries appear to originate from the company itself rather than individual users.
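The anonymising-proxy idea can be sketched simply: before forwarding a chat request to the model provider, drop any request metadata that could identify the user, so the provider only sees traffic originating from the proxy. The header list below is illustrative and not DuckDuckGo's implementation:

```python
# Hypothetical list of headers a privacy proxy might strip
IDENTIFYING_HEADERS = {"x-forwarded-for", "user-agent", "cookie", "referer"}

def anonymise_request(headers):
    """Return a copy of the request headers with user-identifying
    fields removed before the query is forwarded upstream."""
    return {
        key: value
        for key, value in headers.items()
        if key.lower() not in IDENTIFYING_HEADERS
    }
```

In a real deployment the proxy would also terminate the client connection itself, so the upstream provider sees the proxy's IP address rather than the user's.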

The company has agreements in place with all model providers to ensure that any saved chats are completely deleted within 30 days, and that none of the chats made on the platform can be used to train or improve the models. This makes preserving privacy easier than changing the privacy settings for each service.

In an era where online services are increasingly hungry for user data, DuckDuckGo’s AI Chat service is a breath of fresh air. The company’s commitment to privacy is a direct response to the growing concerns about data collection and usage in the AI industry. By providing a private and anonymous platform for users to interact with AI chatbots, DuckDuckGo is setting a new standard for the industry.

DuckDuckGo’s AI service is free to use within a daily limit, and the company is considering launching a paid tier to reduce or eliminate these limits. The service is designed to be a complementary partner to its search engine, allowing users to switch between search and AI chat for a more comprehensive search experience.

“We view AI Chat and search as two different but powerful tools to help you find what you’re looking for – especially when you’re exploring a new topic. You might be shopping or doing research for a project and are unsure how to get started. In situations like these, either AI Chat or Search could be good starting points,” the company explained.

“If you start by asking a few questions in AI Chat, the answers may inspire traditional searches to track down reviews, prices, or other primary sources. If you start with Search, you may want to switch to AI Chat for follow-up queries to help make sense of what you’ve read, or for quick, direct answers to new questions that weren’t covered in the web pages you saw.”

To accommodate that user workflow, DuckDuckGo has made AI Chat accessible through DuckDuckGo Private Search for quick access.

The launch of DuckDuckGo AI Chat comes at a time when the AI industry is facing increasing scrutiny over data privacy and usage. The service is a welcome addition for privacy-conscious individuals, joining the recent launch of Venice AI by crypto entrepreneur Erik Voorhees. Venice AI features an uncensored AI chatbot and image generator that doesn’t require accounts and doesn’t retain data.

As the AI industry continues to evolve, it’s clear that privacy will remain a top concern for users. With the launch of DuckDuckGo AI Chat, the company is taking a significant step towards providing users with a private and secure platform for interacting with AI chatbots.

See also: AI pioneers turn whistleblowers and demand safeguards

UAE unveils new AI model to rival big tech giants
Wed, 15 May 2024
https://www.artificialintelligence-news.com/2024/05/15/uae-unveils-new-ai-model-to-rival-big-tech-giants/

The UAE is making big waves by launching a new open-source generative AI model. This step, taken by a government-backed research institute, is turning heads and marking the UAE as a formidable player in the global AI race.

In Abu Dhabi, the Technology Innovation Institute (TII) unveiled the Falcon 2 series. As reported by Reuters, this series includes Falcon 2 11B, a text-based model, and Falcon 2 11B VLM, a vision-to-language model capable of generating text descriptions from images. TII is run by Abu Dhabi’s Advanced Technology Research Council.

As a major oil exporter and a key player in the Middle East, the UAE is investing heavily in AI. This strategy has caught the eye of U.S. officials, leading to tensions over whether to use American or Chinese technology. In a move coordinated with Washington, Emirati AI firm G42 withdrew from Chinese investments and replaced Chinese hardware, securing a US$1.5 billion investment from Microsoft.

Faisal Al Bannai, Secretary General of the Advanced Technology Research Council and an adviser on strategic research and advanced technology, proudly states that the UAE is proving itself as a major player in AI. The release of the Falcon 2 series is part of a broader race among nations and companies to develop proprietary large language models. While some opt to keep their AI code private, the UAE, like Meta with its Llama models, is making its work openly accessible.

Al Bannai is also excited about the upcoming Falcon 3 generation and expresses confidence in the UAE’s ability to compete globally: “We’re very proud that we can still punch way above our weight, really compete with the best players globally.”

Reflecting on his earlier statements this year, Al Bannai emphasised that the UAE’s decisive advantage lies in its ability to make swift strategic decisions.

It’s worth noting that Abu Dhabi’s ruling family controls some of the world’s largest sovereign wealth funds, worth about US$1.5 trillion. These funds, formerly used to diversify the UAE’s oil wealth, are now critical for accelerating growth in AI and other cutting-edge technologies. In fact, the UAE is emerging as a key player in producing advanced computer chips essential for training powerful AI systems. According to the Wall Street Journal, OpenAI CEO Sam Altman met with investors, including Sheik Tahnoun bin Zayed Al Nahyan, who runs Abu Dhabi’s major sovereign wealth fund, to discuss a potential US$7 trillion investment to develop an AI chipmaker to compete with Nvidia.

Furthermore, the UAE’s commitment to generative AI is evident in its recent launch of a ‘Generative AI’ guide. This guide aims to unlock AI’s potential in various fields, including education, healthcare, and media. It provides a detailed overview of generative AI, addressing digital technologies’ challenges and opportunities while emphasising data privacy. The guide is designed to help government agencies and the community leverage AI technologies by demonstrating 100 practical AI use cases for entrepreneurs, students, job seekers, and tech enthusiasts.

This proactive stance showcases the UAE’s commitment to participating in and leading the global AI race, positioning it as a nation to watch in the rapidly evolving tech scene.

Meta raises the bar with open source Llama 3 LLM
Fri, 19 Apr 2024
https://www.artificialintelligence-news.com/2024/04/19/meta-raises-bar-open-source-llama-3-llm/

Meta has introduced Llama 3, the next generation of its state-of-the-art open source large language model (LLM). The tech giant claims Llama 3 establishes new performance benchmarks, surpassing previous industry-leading models like GPT-3.5 in real-world scenarios.

“With Llama 3, we set out to build the best open models that are on par with the best proprietary models available today,” said Meta in a blog post announcing the release.

The initial Llama 3 models being opened up are 8 billion and 70 billion parameter versions. Meta says its teams are still training larger 400 billion+ parameter models which will be released over the coming months, alongside research papers detailing the work.

Llama 3 has been over two years in the making, with significant resources dedicated to assembling high-quality training data, scaling up distributed training, optimising the model architecture, and developing innovative approaches to instruction fine-tuning.

Meta’s 70 billion parameter instruction fine-tuned model outperformed GPT-3.5, Claude, and other LLMs of comparable scale in human evaluations across 12 key usage scenarios like coding, reasoning, and creative writing. The company’s 8 billion parameter pretrained model also sets new benchmarks on popular LLM evaluation tasks.

“We believe these are the best open source models of their class, period,” stated Meta.

The tech giant is releasing the models via an “open by default” approach to further an open ecosystem around AI development. Llama 3 will be available across all major cloud providers, model hosts, hardware manufacturers, and AI platforms.

Victor Botev, CTO and co-founder of Iris.ai, said: “With the global shift towards AI regulation, the launch of Meta’s Llama 3 model is notable. By embracing transparency through open-sourcing, Meta aligns with the growing emphasis on responsible AI practices and ethical development.

”Moreover, this grants the opportunity for wider community education as open models facilitate insights into development and the ability to scrutinise various approaches, with this transparency feeding back into the drafting and enforcement of regulation.”

Accompanying Meta’s latest models is an updated suite of AI safety tools, including the second iterations of Llama Guard for classifying risks and CyberSec Eval for assessing potential misuse. A new component called Code Shield has also been introduced to filter insecure code suggestions at inference time.

“However, it’s important to maintain perspective – a model simply being open-source does not automatically equate to ethical AI,” Botev continued. “Addressing AI’s challenges requires a comprehensive approach to tackling issues like data privacy, algorithmic bias, and societal impacts – all key focuses of emerging AI regulations worldwide.

”While open initiatives like Llama 3 promote scrutiny and collaboration, their true impact hinges on a holistic approach to AI governance compliance and embedding ethics into AI systems’ lifecycles. Meta’s continuing efforts with the Llama model is a step in the right direction, but ethical AI demands sustained commitment from all stakeholders.”

Meta says it has adopted a “system-level approach” to responsible AI development and deployment with Llama 3. While the models have undergone extensive safety testing, the company emphasises that developers should implement their own input/output filtering in line with their application’s requirements.

The company’s end-user product integrating Llama 3 is Meta AI, which Meta claims is now the world’s leading AI assistant thanks to the new models. Users can access Meta AI via Facebook, Instagram, WhatsApp, Messenger and the web for productivity, learning, creativity, and general queries.  

Multimodal versions of Meta AI integrating vision capabilities are on the way, with an early preview coming to Meta’s Ray-Ban smart glasses.

Despite the considerable achievements of Llama 3, some in the AI field have expressed scepticism that Meta’s open approach is truly motivated by “the good of society.”

However, just a day after Mistral AI set a new benchmark for open source models with Mixtral 8x22B, Meta’s release does once again raise the bar for openly-available LLMs.

See also: SAS aims to make AI accessible regardless of skill set with packaged AI models

Meta unveils SeamlessM4T multimodal translation model
Tue, 22 Aug 2023
https://www.artificialintelligence-news.com/2023/08/22/meta-unveils-seamlessm4t-multimodal-translation-model/

Meta researchers have unveiled SeamlessM4T, a pioneering multilingual and multitask model that facilitates seamless translation and transcription across both speech and text. 

The internet, mobile devices, social media, and communication platforms have ushered in an era where access to multilingual content has reached unprecedented levels. SeamlessM4T aims to realise the vision of seamless communication and comprehension across languages.

Boasting an impressive array of capabilities, SeamlessM4T encompasses:

  • Automatic speech recognition for nearly 100 languages
  • Speech-to-text translation supporting nearly 100 input and output languages
  • Speech-to-speech translation for nearly 100 input languages and 35 (including English) output languages
  • Text-to-text translation for almost 100 languages
  • Text-to-speech translation for nearly 100 input languages and 35 (including English) output languages

SeamlessM4T is being made available to researchers and developers under the CC BY-NC 4.0 license, embodying an ethos of open science.

Additionally, the metadata of SeamlessAlign – the largest multimodal translation dataset ever compiled, consisting of 270,000 hours of mined speech and text alignments – has been released. This facilitates independent data mining and further research within the community.

The development of SeamlessM4T addresses a long-standing challenge in the field of multilingual communication. Unlike earlier systems, which were confined by limited language coverage and reliance on separate subsystems, SeamlessM4T presents a unified model capable of comprehensively handling speech-to-speech and speech-to-text translation tasks. 

Meta has built upon previous innovations – such as No Language Left Behind (NLLB) and Universal Speech Translator – to create this unified multilingual model. With its impressive performance on low-resource languages and consistently strong performance on high-resource languages, SeamlessM4T holds the potential to revolutionise cross-language communication.

Underpinning the model’s architecture is the multitask UnitY model, which excels in generating translated text and speech.

UnitY supports various translation tasks, including automatic speech recognition, text-to-text translation, and speech-to-speech translation, all from a single model. To train this versatile model, Meta employed advanced techniques such as text and speech encoders, self-supervised encoders, and sophisticated decoding processes.

The result is a model that outperforms previous leading systems across its supported translation tasks.

To ensure the accuracy and safety of the system, Meta adheres to a responsible AI framework.

Meta says that extensive research on toxicity and bias mitigation has been conducted, resulting in a model that is more aware of and responsive to potential issues. The public release of the SeamlessM4T model encourages collaborative research and development in the AI community.

As the world becomes more connected, SeamlessM4T’s ability to transcend language barriers is a testament to the power of AI-driven innovation. This milestone brings us closer to a future where communication knows no linguistic limitations, enabling a world where people can truly understand each other regardless of language.

A demo of SeamlessM4T can be found here. The code, model, and data can be downloaded on GitHub.

(Image Credit: Meta AI)

See also: Study highlights impact of demographics on AI training

Meta bets on AI chatbots to retain users
Tue, 01 Aug 2023
https://www.artificialintelligence-news.com/2023/08/01/meta-bets-on-ai-chatbots-retain-users/

Meta is planning to release AI chatbots that possess human-like personalities, a move aimed at enhancing user retention efforts.

Insiders familiar with the matter revealed that prototypes of these advanced chatbots have been under development, with the final products capable of engaging in discussions with users on a human level. The diverse range of chatbots will showcase various personalities and are expected to be rolled out as early as next month.

Referred to as “personas” by Meta staff, these chatbots will take on the form of different characters, each embodying a distinct persona. For instance, insiders mentioned that Meta has explored the creation of a chatbot that mimics the speaking style of former US President Abraham Lincoln, as well as another designed to offer travel advice with the laid-back language of a surfer.

While the primary objective of these chatbots will be to offer personalised recommendations and improved search functionality, they are also being positioned as a source of entertainment for users to enjoy. The chatbots are expected to engage users in playful and interactive conversations, a move that could potentially increase user engagement and retention.

However, with such sophisticated AI capabilities, concerns arise about the potential for rule-breaking speech and inaccuracies. In response, sources mentioned that Meta may implement automated checks on the chatbots’ outputs to ensure accuracy and compliance with platform rules.

This strategic development comes at a time when Meta is doubling down on user retention efforts.

During the company’s 2023 second-quarter earnings call on July 26, CEO Mark Zuckerberg highlighted the positive response to the company’s latest product, Threads, which aims to rival X (formerly Twitter).

Zuckerberg expressed satisfaction with the increased number of users returning to Threads daily and confirmed that Meta’s primary focus was on the platform’s user retention.

Meta’s chatbots venture raises concerns about data privacy and security. The company will gain access to a treasure trove of user data that has already led to legal challenges for AI companies such as OpenAI.

Whether these chatbots will revolutionise user experiences and boost Meta’s ailing user retention – or just present new challenges for data privacy – remains to be seen. For now, users and experts alike will be closely monitoring Meta’s next moves.

(Photo by Edge2Edge Media on Unsplash)

See also: Meta launches Llama 2 open-source LLM

Meta launches Llama 2 open-source LLM
Wed, 19 Jul 2023
https://www.artificialintelligence-news.com/2023/07/19/meta-launches-llama-2-open-source-llm/

Meta has introduced Llama 2, an open-source family of AI language models which comes with a license allowing integration into commercial products.

The Llama 2 models range in size from 7 billion to 70 billion parameters, making them a formidable force in the AI landscape.

According to Meta’s claims, these models “outperform open source chat models on most benchmarks we tested.”

The release of Llama 2 marks a turning point in the LLM (large language model) market and has already caught the attention of industry experts and enthusiasts alike.

The new language models offered by Llama 2 come in two variants – pretrained and fine-tuned:

  • The pretrained models are trained on a whopping two trillion tokens and have a context window of 4,096 tokens, enabling them to process vast amounts of content at once.
  • The fine-tuned models, designed for chat applications like ChatGPT, have been trained on “over one million human annotations,” further enhancing their language processing capabilities.
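A 4,096-token context window bounds how much text the model can attend to at once, so longer inputs are typically split into overlapping chunks that each fit the window. This is a generic sketch of that common pattern; the window and overlap values are parameters of the example, not Meta's recommendations:

```python
def chunk_tokens(tokens, window=4096, overlap=256):
    """Split a long token sequence into overlapping chunks, each at
    most `window` tokens, with `overlap` tokens shared between
    neighbours so no sentence is cut without context."""
    step = window - overlap
    return [
        tokens[i : i + window]
        for i in range(0, max(len(tokens) - overlap, 1), step)
    ]
```

Each chunk is then processed independently (or summarised and fed forward), which is how applications work around fixed context limits.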

While Llama 2’s performance may not yet rival OpenAI’s GPT-4, it shows remarkable promise for an open-source model.

The Llama 2 journey started with its predecessor, LLaMA, which Meta released as open source with a non-commercial license in February.

However, someone leaked LLaMA’s weights to torrent sites, leading to a surge in its usage within the AI community. This laid the foundation for a fast-growing underground LLM development scene.

Open-source AI models like Llama 2 come with their share of advantages and concerns.

On the positive side, they encourage transparency in terms of training data, foster economic competition, promote free speech, and democratise access to AI. However, critics point out potential risks, such as misuse in synthetic biology, spam generation, or disinformation.

To address such concerns, Meta released a statement in support of its open innovation approach, emphasising that responsible and open innovation encourages transparency and trust in AI technologies.

Despite the benefits of open-source models, some critics remain sceptical, especially regarding the lack of transparency in the training data used for LLMs. While Meta claims to have made efforts to remove data containing personal information, the specific sources of training data remain undisclosed, raising concerns about privacy and ethical considerations.

With the combination of open-source development and commercial licensing, Llama 2 promises to bring exciting advancements and opportunities to the AI community while simultaneously navigating the challenges of data privacy and responsible usage.

(Photo by Joakim Honkasalo on Unsplash)

See also: Anthropic launches ChatGPT rival Claude 2

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Mark Zuckerberg: AI will be built into all of Meta’s products
https://www.artificialintelligence-news.com/2023/06/09/mark-zuckerberg-ai-built-into-all-meta-products/
Fri, 09 Jun 2023 14:41:18 +0000

Meta CEO Mark Zuckerberg unveiled the extent of the company’s AI investments during an internal company meeting.

The meeting included discussions about new products, such as chatbots for Messenger and WhatsApp that can converse with different personas. Additionally, Meta announced new features for Instagram, including the ability to modify user photos via text prompts and create emoji stickers for messaging services.

These developments come at a crucial time for Meta, as the company has faced financial struggles and an identity crisis in recent years. Investors criticised Meta for focusing too heavily on its metaverse ambitions and not paying enough attention to AI.

Meta’s decision to focus on AI tools follows in the footsteps of its competitors, including Google, Microsoft, and Snapchat, which have received significant investor attention for their generative AI products. Unlike those rivals, Meta has yet to release any consumer-facing generative AI products.

To address this gap, Meta has been reorganising its AI divisions and investing heavily in infrastructure to support its AI product needs.

Zuckerberg expressed optimism during the company meeting, stating that advancements in generative AI have made it possible to integrate the technology into “every single one” of Meta’s products. This signifies Meta’s intention to leverage AI across its platforms, including Facebook, Instagram, and WhatsApp.

In addition to consumer-facing tools, Meta also announced a productivity assistant called Metamate for its employees. This assistant is designed to answer queries and perform tasks based on internal company information.

Meta is also exploring open-source models, allowing users to build their own AI-powered chatbots and technologies. However, critics and competitors have raised concerns about the potential misuse of these tools, as they can be utilised to spread misinformation and hate speech on a larger scale.

Zuckerberg addressed these concerns during the meeting, emphasising the value of democratising access to AI. He expressed hope that users would be able to develop AI programs independently in the future, without relying on frameworks provided by a few large technology companies.

Despite the increased focus on AI, Zuckerberg reassured employees that Meta would not be abandoning its plans for the metaverse, indicating that both AI and the metaverse would remain key areas of focus for the company.

The success of these endeavours will determine whether Meta can catch up with its competitors and solidify its position among tech leaders in the rapidly evolving landscape.

(Photo by Mariia Shalabaieva on Unsplash)

Related: Meta’s open-source speech AI models support over 1,100 languages

Meta’s open-source speech AI models support over 1,100 languages
https://www.artificialintelligence-news.com/2023/05/23/meta-open-source-speech-ai-models-support-over-1100-languages/
Tue, 23 May 2023 12:46:19 +0000

Advancements in machine learning and speech recognition technology have made information more accessible to people, particularly those who rely on voice to access information. However, the lack of labelled data for numerous languages poses a significant challenge in developing high-quality machine-learning models.

In response to this problem, the Meta-led Massively Multilingual Speech (MMS) project has made remarkable strides in expanding language coverage and improving the performance of speech recognition and synthesis models.

By combining self-supervised learning techniques with a diverse dataset of religious readings, the MMS project has expanded coverage from the roughly 100 languages supported by existing speech recognition models to over 1,100 languages.

Breaking down language barriers

To address the scarcity of labelled data for most languages, the MMS project utilised religious texts, such as the Bible, which have been translated into numerous languages.

These translations provided publicly available audio recordings of people reading the texts, enabling the creation of a dataset comprising readings of the New Testament in over 1,100 languages.

By including unlabeled recordings of other religious readings, the project expanded language coverage to recognise over 4,000 languages.

Despite the dataset’s specific domain and predominantly male speakers, the models performed equally well for male and female voices. Meta also says the religious nature of the audio did not introduce bias into the models.

Overcoming challenges through self-supervised learning

Training conventional supervised speech recognition models with just 32 hours of data per language is inadequate.

To overcome this limitation, the MMS project leveraged the benefits of the wav2vec 2.0 self-supervised speech representation learning technique.

By training self-supervised models on approximately 500,000 hours of speech data across 1,400 languages, the project significantly reduced the reliance on labelled data.

The resulting models were then fine-tuned for specific speech tasks, such as multilingual speech recognition and language identification.

Impressive results

Evaluation of the models trained on the MMS data revealed impressive results. In a comparison with OpenAI’s Whisper, the MMS models exhibited half the word error rate while covering 11 times more languages.
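Word error rate, the metric behind this comparison, is the word-level edit distance between a reference transcript and the model’s output, divided by the length of the reference. A minimal sketch:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference length,
    computed with a standard word-level edit-distance table."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / len(ref)
```

So "half the word error rate" means the MMS models made roughly half as many word-level mistakes per reference word as Whisper on the evaluated languages.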

Furthermore, the MMS project successfully built text-to-speech systems for over 1,100 languages. Despite the limitation of having relatively few different speakers for many languages, the speech generated by these systems exhibited high quality.

While the MMS models have shown promising results, it is essential to acknowledge their imperfections. Mistranscriptions or misinterpretations by the speech-to-text model could result in offensive or inaccurate language. The MMS project emphasises collaboration across the AI community to mitigate such risks.

You can read the MMS paper here or find the project on GitHub.

Meta’s protein-folding AI reminds us it’s not just a metaverse firm
https://www.artificialintelligence-news.com/2022/11/02/meta-protein-folding-ai-not-just-metaverse-firm/
Wed, 02 Nov 2022 12:55:35 +0000

Meta has unveiled a new protein-folding AI that could be revolutionary for science and the development of new medicines.

Facebook, as the company was known before changing its name, has always been seen as a leader in AI. The popular open-source framework PyTorch was Facebook’s creation, and earlier this year Meta became a founding member of the PyTorch Foundation, which aims to drive the adoption of AI.

Meta’s pursuit of leadership in the metaverse (it changed its very company name to reflect that ambition) has left many people, including shareholders, concerned that the company will reduce its focus on other important areas.

Brad Gerstner, the founder of Meta shareholder Altimeter Capital, penned a letter in which he urged Meta to reduce its metaverse investments and “solidify the company’s position” as one of the world’s leaders in AI.

“Meta’s investment in AI will lead to exciting and important new products that can be cross-sold to billions of customers. From Grand Teton to Universal Speech Translator to Make-A-Video, we are witnessing a Cambrian moment in AI, and Meta is no doubt well positioned to help invent and monetize that future,” wrote Gerstner.

“Perhaps it was the re-naming of the company to Meta that caused the world to conclude that you were spending 100% of your time on Reality Labs instead of AI or the core business. Whatever the reason, that is certainly the perception.”

Meta’s announcement this week of its protein-folding AI could help to alleviate such concerns.

The company has released the ESM Metagenomic Atlas – which features over 600 million proteins and predictions for almost the entire MGnify90 database – in addition to the model used to create the database and an API that allows researchers to use it for scientific discovery.
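As a sketch of how a researcher might call such a structure-prediction API: POST an amino-acid sequence and receive predicted structure data back. The endpoint URL and response format below are assumptions for illustration only, not the documented API; the input-validation helper is a hypothetical convenience:

```python
from urllib.request import Request, urlopen

VALID_RESIDUES = set("ACDEFGHIKLMNPQRSTVWY")  # the 20 standard amino acids

def is_valid_sequence(seq: str) -> bool:
    """Check that the input uses only standard amino-acid letters."""
    return bool(seq) and set(seq.upper()) <= VALID_RESIDUES

def fold_sequence(seq: str,
                  url: str = "https://api.esmatlas.com/foldSequence/v1/pdb/") -> str:
    """POST a raw sequence to a folding endpoint and return the response
    (e.g. PDB text). The URL is a placeholder assumption; consult the
    project's own documentation for the real endpoint."""
    if not is_valid_sequence(seq):
        raise ValueError("sequence contains non-standard residues")
    req = Request(url, data=seq.upper().encode("ascii"), method="POST")
    with urlopen(req, timeout=120) as resp:
        return resp.read().decode("utf-8")
```

Validating input locally before submitting avoids wasting a round-trip on sequences the service would reject.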

Meta says that it found using a language model of protein sequences accelerated structure prediction by up to 60x.

“ESMFold shows how AI can give us new tools to understand the natural world, much like the microscope, which enabled us to see into the world at an infinitesimal scale and opened up a whole new understanding of life,” explained Meta. 

“Much of AI research has focused on helping computers understand the world in a way similar to how humans do. The language of proteins is one that is beyond human comprehension and has eluded even the most powerful computational tools. AI has the potential to open up this language to our understanding.”

ESM code and models can be found on GitHub here.

(Photo by Kelly Sikkema on Unsplash)
