Ryan Daws, Author at AI News

Meta unveils five AI models for multi-modal processing, music generation, and more

Meta has unveiled five major new AI models and research, including multi-modal systems that can process both text and images, next-gen language models, music generation, AI speech detection, and efforts to improve diversity in AI systems.

The releases come from Meta’s Fundamental AI Research (FAIR) team, which has focused on advancing AI through open research and collaboration for over a decade. As AI innovation accelerates, Meta believes working with the global community is crucial.

“By publicly sharing this research, we hope to inspire iterations and ultimately help advance AI in a responsible way,” said Meta.

Chameleon: Multi-modal text and image processing

Among the releases are key components of Meta’s ‘Chameleon’ models under a research license. Chameleon is a family of multi-modal models that can understand and generate both text and images simultaneously—unlike most large language models which are typically unimodal.

“Just as humans can process the words and images simultaneously, Chameleon can process and deliver both image and text at the same time,” explained Meta. “Chameleon can take any combination of text and images as input and also output any combination of text and images.”

Potential use cases are virtually limitless, from generating creative captions to prompting new scenes with combinations of text and images.
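
Meta’s Chameleon paper describes an early-fusion, token-based design: images are quantised into discrete tokens and interleaved with text tokens in one flat sequence that a single transformer processes. The sketch below illustrates only that interleaving idea; the tokenisers, vocabulary sizes, and offsets are placeholder assumptions, not Meta’s actual components.

```python
def tokenize_text(text: str) -> list[int]:
    # Stand-in for a BPE text tokeniser mapping words to ids below 50,000.
    return [hash(tok) % 50_000 for tok in text.split()]

def tokenize_image(image_bytes: bytes) -> list[int]:
    # Stand-in for a VQ image tokeniser: real systems quantise an image
    # into discrete codebook indices, offset past the text vocabulary.
    return [50_000 + (b % 8_192) for b in image_bytes[:16]]

def build_sequence(segments: list[tuple[str, object]]) -> list[int]:
    """Interleave text and image tokens into one flat sequence so a single
    transformer can attend across both modalities."""
    tokens: list[int] = []
    for modality, payload in segments:
        tokens += tokenize_text(payload) if modality == "text" else tokenize_image(payload)
    return tokens

sequence = build_sequence([
    ("text", "Describe this photo:"),
    ("image", b"\x89PNG fake image bytes"),
])
print(len(sequence), sequence[:6])
```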

Multi-token prediction for faster language model training

Meta has also released pretrained models for code completion that use ‘multi-token prediction’ under a non-commercial research license. Traditional language model training is inefficient because it predicts only the next word; multi-token models instead predict multiple future words simultaneously, which speeds up training.

“While [the one-word] approach is simple and scalable, it’s also inefficient. It requires several orders of magnitude more text than what children need to learn the same degree of language fluency,” said Meta.
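
The paper behind this release implements the idea by attaching several independent output heads to a shared transformer trunk, with head i predicting the token i+1 positions ahead. Below is a minimal PyTorch sketch of that head arrangement; the dimensions are arbitrary and the trunk itself is omitted, so this is illustrative rather than Meta’s released model.

```python
import torch
import torch.nn as nn

class MultiTokenHeads(nn.Module):
    """Sketch of multi-token prediction: a shared trunk feeds n independent
    output heads, where head i predicts the token i+1 positions ahead."""

    def __init__(self, d_model: int = 512, vocab: int = 32_000, n_future: int = 4):
        super().__init__()
        self.heads = nn.ModuleList(nn.Linear(d_model, vocab) for _ in range(n_future))

    def forward(self, hidden: torch.Tensor) -> list[torch.Tensor]:
        # hidden: (batch, seq_len, d_model) from any transformer trunk.
        # Training sums the cross-entropy of head i against the token at
        # position t + i + 1; at inference the extra heads can be dropped
        # or reused for speculative decoding.
        return [head(hidden) for head in self.heads]

heads = MultiTokenHeads()
logits = heads(torch.randn(2, 16, 512))
print(len(logits), logits[0].shape)  # 4 heads, each (2, 16, 32000)
```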

JASCO: Enhanced text-to-music model

On the creative side, Meta’s JASCO allows generating music clips from text while affording more control by accepting inputs like chords and beats.

“While existing text-to-music models like MusicGen rely mainly on text inputs for music generation, our new model, JASCO, is capable of accepting various inputs, such as chords or beat, to improve control over generated music outputs,” explained Meta.

AudioSeal: Detecting AI-generated speech

Meta claims AudioSeal is the first audio watermarking system designed to detect AI-generated speech. It can pinpoint the specific segments generated by AI within larger audio clips up to 485x faster than previous methods.

“AudioSeal is being released under a commercial license. It’s just one of several lines of responsible research we have shared to help prevent the misuse of generative AI tools,” said Meta.
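
Localisation of this kind generally works by scoring audio frame by frame rather than per clip, then merging flagged frames into time segments. The following sketch shows that generic pattern only, not AudioSeal’s actual API; `frame_score` stands in for a trained watermark detector.

```python
import numpy as np

def detect_watermarked_segments(audio: np.ndarray, sample_rate: int,
                                frame_score, threshold: float = 0.5):
    """Generic frame-level watermark localisation: score each 100 ms frame,
    then merge consecutive flagged frames into (start_sec, end_sec) spans.
    `frame_score` is a placeholder for a trained detector model."""
    frame = sample_rate // 10          # 100 ms of samples per frame
    flags = [frame_score(audio[i:i + frame]) > threshold
             for i in range(0, len(audio) - frame + 1, frame)]
    segments, start = [], None
    for idx, flagged in enumerate(flags + [False]):  # sentinel closes a run
        if flagged and start is None:
            start = idx
        elif not flagged and start is not None:
            segments.append((start * 0.1, idx * 0.1))  # frame index -> seconds
            start = None
    return segments

# Dummy example: a 3-second clip whose middle second is "watermarked".
sr = 16_000
clip = np.zeros(3 * sr, dtype=np.float32)
clip[sr:2 * sr] = 1.0
print(detect_watermarked_segments(clip, sr, frame_score=np.mean))  # [(1.0, 2.0)]
```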

Improving text-to-image diversity

Another important release aims to improve the diversity of text-to-image models which can often exhibit geographical and cultural biases.

Meta developed automatic indicators to evaluate potential geographical disparities and conducted a large-scale study, collecting more than 65,000 annotations, to understand how people around the world perceive geographic representation.

“This enables more diversity and better representation in AI-generated images,” said Meta. The relevant code and annotations have been released to help improve diversity across generative models.

By publicly sharing these groundbreaking models, Meta says it hopes to foster collaboration and drive innovation within the AI community.

(Photo by Dima Solomin)

See also: NVIDIA presents latest advancements in visual AI

NVIDIA presents latest advancements in visual AI

NVIDIA researchers are presenting new visual generative AI models and techniques at the Computer Vision and Pattern Recognition (CVPR) conference this week in Seattle. The advancements span areas like custom image generation, 3D scene editing, visual language understanding, and autonomous vehicle perception.

“Artificial intelligence, and generative AI in particular, represents a pivotal technological advancement,” said Jan Kautz, VP of learning and perception research at NVIDIA.

“At CVPR, NVIDIA Research is sharing how we’re pushing the boundaries of what’s possible — from powerful image generation models that could supercharge professional creators to autonomous driving software that could help enable next-generation self-driving cars.”

Among the over 50 NVIDIA research projects being presented, two papers have been selected as finalists for CVPR’s Best Paper Awards – one exploring the training dynamics of diffusion models and another on high-definition maps for self-driving cars.

Additionally, NVIDIA has won the CVPR Autonomous Grand Challenge’s End-to-End Driving at Scale track, outperforming over 450 entries globally. This milestone demonstrates NVIDIA’s pioneering work in using generative AI for comprehensive self-driving vehicle models, also earning an Innovation Award from CVPR.

One of the headlining research projects is JeDi, a new technique that allows creators to rapidly customise diffusion models – the leading approach for text-to-image generation – to depict specific objects or characters using just a few reference images, rather than the time-intensive process of fine-tuning on custom datasets.

Another breakthrough is FoundationPose, a new foundation model that can instantly understand and track the 3D pose of objects in videos without per-object training. It set a new performance record and could unlock new AR and robotics applications.

NVIDIA researchers also introduced NeRFDeformer, a method to edit the 3D scene captured by a Neural Radiance Field (NeRF) using a single 2D snapshot, rather than having to manually reanimate changes or recreate the NeRF entirely. This could streamline 3D scene editing for graphics, robotics, and digital twin applications.

On the visual language front, NVIDIA collaborated with MIT to develop VILA, a new family of vision language models that achieve state-of-the-art performance in understanding images, videos, and text. With enhanced reasoning capabilities, VILA can even comprehend internet memes by combining visual and linguistic understanding.

NVIDIA’s visual AI research spans numerous industries, including over a dozen papers exploring novel approaches for autonomous vehicle perception, mapping, and planning. Sanja Fidler, VP of NVIDIA’s AI Research team, is presenting on the potential of vision language models for self-driving cars.

The breadth of NVIDIA’s CVPR research exemplifies how generative AI could empower creators and accelerate automation in manufacturing and healthcare, while propelling autonomy and robotics forward.

(Photo by v2osk)

See also: NLEPs: Bridging the gap between LLMs and symbolic reasoning

NLEPs: Bridging the gap between LLMs and symbolic reasoning

Researchers have introduced a novel approach called natural language embedded programs (NLEPs) to improve the numerical and symbolic reasoning capabilities of large language models (LLMs). The technique involves prompting LLMs to generate and execute Python programs to solve user queries, then output solutions in natural language.

While LLMs like ChatGPT have demonstrated impressive performance on various tasks, they often struggle with problems requiring numerical or symbolic reasoning.

NLEPs follow a four-step problem-solving template: calling necessary packages, importing natural language representations of required knowledge, implementing a solution-calculating function, and outputting results as natural language with optional data visualisation.
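
As a concrete illustration of that template, here is a hand-written example of the kind of program an NLEP might generate for the query “Which of these numbers are prime?” The query and values are invented for illustration, not taken from the paper.

```python
# Step 1: call necessary packages.
from sympy import isprime

# Step 2: import a natural-language representation of the required
# knowledge -- here, the numbers extracted from the user's query.
QUERY_NUMBERS = [7, 10, 13, 21, 29]

# Step 3: implement a function that calculates the solution.
def solve(numbers: list[int]) -> list[int]:
    return [n for n in numbers if isprime(n)]

# Step 4: output the result as natural language.
primes = solve(QUERY_NUMBERS)
print(f"Of the numbers {QUERY_NUMBERS}, the primes are {primes}.")
```

Because the reasoning lives in an inspectable program, a user can fix a faulty step directly rather than re-prompting the model, which is the transparency benefit described above.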

This approach offers several advantages, including improved accuracy, transparency, and efficiency. Users can investigate generated programs and fix errors directly, avoiding the need to rerun entire models for troubleshooting. Additionally, a single NLEP can be reused for multiple tasks by replacing certain variables.

The researchers found that NLEPs enabled GPT-4 to achieve over 90% accuracy on various symbolic reasoning tasks, outperforming task-specific prompting methods by 30%.

Beyond accuracy improvements, NLEPs could enhance data privacy by running programs locally, eliminating the need to send sensitive user data to external companies for processing. The technique may also boost the performance of smaller language models without costly retraining.

However, NLEPs rely on a model’s program generation capability and may not work as well with smaller models trained on limited datasets. Future research will explore methods to make smaller LLMs generate more effective NLEPs and investigate the impact of prompt variations on reasoning robustness.

The research, supported in part by the Center for Perceptual and Interactive Intelligence of Hong Kong, will be presented at the Annual Conference of the North American Chapter of the Association for Computational Linguistics later this month.

(Photo by Alex Azabache)

See also: Apple is reportedly getting free ChatGPT access

Apple is reportedly getting free ChatGPT access

Apple’s newly-announced partnership with OpenAI – which brings ChatGPT capabilities to iOS 18, iPadOS 18, and macOS Sequoia – involves no direct exchange of money.

According to a Bloomberg report by Mark Gurman, “Apple isn’t paying OpenAI as part of the partnership.”

Instead, the Cupertino-based company is leveraging its massive user base and device ecosystem as currency.

“Apple believes pushing OpenAI’s brand and technology to hundreds of millions of its devices is of equal or greater value than monetary payments,” Gurman’s sources explained.

Gurman notes that OpenAI could find a silver lining by encouraging Apple users to subscribe to ChatGPT Plus, priced at $20 per month. If subscribers sign up through Apple devices, the iPhone maker will likely even claim a commission.

Apple’s AI strategy extends beyond OpenAI. The company is reportedly in talks to offer Google’s Gemini chatbot as an additional option later this year, signalling its intent to provide users with diverse AI experiences without necessarily having to make such major investments itself.

(Image Credit: Apple)

The long-term vision for Apple involves capturing a slice of the revenue generated from monetising chatbot results on its operating systems. This move anticipates a shift in user behaviour, with more people relying on AI assistants rather than traditional search engines like Google.

While Apple’s AI plans are ambitious, challenges remain. The report highlights that the company has yet to secure a deal with a local Chinese provider for chatbot features, though discussions with local firms like Baidu and Alibaba are underway. Initially, Apple Intelligence will be limited to US English, with expanded language support planned for the following year.

The Apple-OpenAI deal represents a novel approach to collaboration in the AI space, where brand exposure and technological integration are valued as much as, if not more than, direct financial compensation.

See also: Musk ends OpenAI lawsuit while slamming Apple’s ChatGPT plans

Musk ends OpenAI lawsuit while slamming Apple’s ChatGPT plans

Elon Musk has dropped his lawsuit against OpenAI, the company he co-founded in 2015. Court filings from the Superior Court of California reveal that Musk called off the legal action on June 11th, just a day before an informal conference was scheduled to discuss the discovery process.

Musk had initially sued OpenAI in March 2024, alleging breach of contracts, unfair business practices, and failure in fiduciary duty. He claimed that his contributions to the company were made “in exchange for and in reliance on promises that those assets were irrevocably dedicated to building AI for public benefit, with only safety as a countervailing concern.”

The lawsuit sought remedies for “breach of contract, promissory estoppel, breach of fiduciary duty, unfair business practices, and accounting,” as well as specific performance, restitution, and damages.

However, Musk’s filings to withdraw the case provided no explanation for abandoning the lawsuit. OpenAI had previously called Musk’s claims “incoherent”, arguing that his inability to produce a contract made his breach claims difficult to prove and noting that documents provided by Musk “contradict his allegations as to the alleged terms of the agreement.”

The withdrawal of the lawsuit comes at a time when Musk is strongly opposing Apple’s plans to integrate ChatGPT into its operating systems.

During Apple’s keynote event announcing Apple Intelligence for iOS 18, iPadOS 18, and macOS Sequoia, Musk threatened to ban Apple devices from his companies, calling the integration “an unacceptable security violation.”

Despite assurances from Apple and OpenAI that user data would only be shared with explicit consent and that interactions would be secure, Musk questioned Apple’s ability to ensure data security, stating, “Apple has no clue what’s actually going on once they hand your data over to OpenAI. They’re selling you down the river.”

Since bringing the lawsuit against OpenAI, Musk has also created his own AI company, xAI, and secured over $6 billion in funding for his plans to advance the Grok chatbot on his social network, X.

While Musk’s reasoning for dropping the OpenAI lawsuit remains unclear, his actions suggest a potential shift in focus towards advancing his own AI endeavours while continuing to vocalise his criticism of OpenAI through social media rather than the courts.

See also: DuckDuckGo releases portal giving private access to AI models

DuckDuckGo releases portal giving private access to AI models

DuckDuckGo has released a platform that allows users to interact with popular AI chatbots privately, ensuring that their data remains secure and protected.

The service, accessible at Duck.ai, is globally available and features a light and clean user interface. Users can choose from four AI models: two closed-source models and two open-source models. The closed-source models are OpenAI’s GPT-3.5 Turbo and Anthropic’s Claude 3 Haiku, while the open-source models are Meta’s Llama-3 70B and Mistral AI’s Mixtral 8x7b.

What sets DuckDuckGo AI Chat apart is its commitment to user privacy. Neither DuckDuckGo nor the chatbot providers can use user data to train their models, ensuring that interactions remain private and anonymous. DuckDuckGo also strips away metadata, such as server or IP addresses, so that queries appear to originate from the company itself rather than individual users.
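
Mechanically, this is an anonymising-proxy pattern: the intermediary terminates the user’s connection, discards identifying metadata, and forwards only the chat payload upstream from its own servers. The Flask sketch below illustrates the pattern in general terms; the route, payload shape, and upstream URL are assumptions for illustration, not DuckDuckGo’s implementation.

```python
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)
UPSTREAM = "https://api.example-provider.com/v1/chat"  # placeholder URL

@app.post("/chat")
def proxy_chat():
    # Forward only the chat payload. The user's IP address, cookies, and
    # other headers never leave this server, so the upstream model provider
    # sees the request as originating from the proxy itself.
    payload = {"messages": request.get_json()["messages"]}
    upstream = requests.post(UPSTREAM, json=payload, timeout=30)
    return jsonify(upstream.json())
```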

The company has agreements in place with all model providers to ensure that any saved chats are completely deleted within 30 days, and that none of the chats made on the platform can be used to train or improve the models. This makes preserving privacy easier than changing the privacy settings for each service.

In an era where online services are increasingly hungry for user data, DuckDuckGo’s AI Chat service is a breath of fresh air. The company’s commitment to privacy is a direct response to the growing concerns about data collection and usage in the AI industry. By providing a private and anonymous platform for users to interact with AI chatbots, DuckDuckGo is setting a new standard for the industry.

DuckDuckGo’s AI service is free to use within a daily limit, and the company is considering launching a paid tier to reduce or eliminate these limits. The service is designed to be a complementary partner to its search engine, allowing users to switch between search and AI chat for a more comprehensive search experience.

“We view AI Chat and search as two different but powerful tools to help you find what you’re looking for – especially when you’re exploring a new topic. You might be shopping or doing research for a project and are unsure how to get started. In situations like these, either AI Chat or Search could be good starting points,” the company explained.

“If you start by asking a few questions in AI Chat, the answers may inspire traditional searches to track down reviews, prices, or other primary sources. If you start with Search, you may want to switch to AI Chat for follow-up queries to help make sense of what you’ve read, or for quick, direct answers to new questions that weren’t covered in the web pages you saw.”

To accommodate that user workflow, DuckDuckGo has made AI Chat accessible through DuckDuckGo Private Search for quick access.

The launch of DuckDuckGo AI Chat comes at a time when the AI industry is facing increasing scrutiny over data privacy and usage. The service is a welcome addition for privacy-conscious individuals, joining the recent launch of Venice AI by crypto entrepreneur Erik Voorhees. Venice AI features an uncensored AI chatbot and image generator that doesn’t require accounts and doesn’t retain data.

As the AI industry continues to evolve, it’s clear that privacy will remain a top concern for users. With the launch of DuckDuckGo AI Chat, the company is taking a significant step towards providing users with a private and secure platform for interacting with AI chatbots.

See also: AI pioneers turn whistleblowers and demand safeguards

AI pioneers turn whistleblowers and demand safeguards

OpenAI is facing a wave of internal strife and external criticism over its practices and the potential risks posed by its technology. 

In May, several high-profile employees departed from the company, including Jan Leike, the former head of OpenAI’s “super alignment” efforts to ensure advanced AI systems remain aligned with human values. Leike’s exit came shortly after OpenAI unveiled its new flagship GPT-4o model, which it touted as “magical” at its Spring Update event.

According to reports, Leike’s departure was driven by constant disagreements over security measures, monitoring practices, and the prioritisation of flashy product releases over safety considerations.

Leike’s exit has opened a Pandora’s box for the AI firm. Former OpenAI board members have come forward with allegations of psychological abuse levelled against CEO Sam Altman and the company’s leadership.

The growing internal turmoil at OpenAI coincides with mounting external concerns about the potential risks posed by generative AI technology like the company’s own language models. Critics have warned about the imminent existential threat of advanced AI surpassing human capabilities, as well as more immediate risks like job displacement and the weaponisation of AI for misinformation and manipulation campaigns.

In response, a group of current and former employees from OpenAI, Anthropic, DeepMind, and other leading AI companies have penned an open letter addressing these risks.

“We are current and former employees at frontier AI companies, and we believe in the potential of AI technology to deliver unprecedented benefits to humanity. We also understand the serious risks posed by these technologies,” the letter states.

“These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction. AI companies themselves have acknowledged these risks, as have governments across the world, and other AI experts.”

The letter, which has been signed by 13 employees and endorsed by AI pioneers Yoshua Bengio and Geoffrey Hinton, outlines four core demands aimed at protecting whistleblowers and fostering greater transparency and accountability around AI development:

  1. That companies will not enforce non-disparagement clauses or retaliate against employees for raising risk-related concerns.
  2. That companies will facilitate a verifiably anonymous process for employees to raise concerns to boards, regulators, and independent experts.
  3. That companies will support a culture of open criticism and allow employees to publicly share risk-related concerns, with appropriate protection of trade secrets.
  4. That companies will not retaliate against employees who share confidential risk-related information after other processes have failed.

“They and others have bought into the ‘move fast and break things’ approach and that is the opposite of what is needed for technology this powerful and this poorly understood,” said Daniel Kokotajlo, a former OpenAI employee who left due to concerns over the company’s values and lack of responsibility.

The demands come amid reports that OpenAI has forced departing employees to sign non-disclosure agreements preventing them from criticising the company or risk losing their vested equity. OpenAI CEO Sam Altman admitted being “embarrassed” by the situation but claimed the company had never actually clawed back anyone’s vested equity.

As the AI revolution charges forward, the internal strife and whistleblower demands at OpenAI underscore the growing pains and unresolved ethical quandaries surrounding the technology.

See also: OpenAI disrupts five covert influence operations

Amazon will use computer vision to spot defects before dispatch

Amazon will harness computer vision and AI to ensure customers receive products in pristine condition and further its sustainability efforts. The initiative – dubbed “Project P.I.” (short for “private investigator”) – operates within Amazon fulfilment centres across North America, where it will scan millions of products daily for defects.

Project P.I. leverages generative AI and computer vision technologies to detect issues such as damaged products or incorrect colours and sizes before they reach customers. The AI model not only identifies defects but also helps uncover the root causes, enabling Amazon to implement preventative measures upstream. This system has proven highly effective in the sites where it has been deployed, accurately identifying product issues among the vast number of items processed each month.

Before any item is dispatched, it passes through an imaging tunnel where Project P.I. evaluates its condition. If a defect is detected, the item is isolated and further investigated to determine if similar products are affected.

Amazon associates review the flagged items and decide whether to resell them at a discount via Amazon’s Second Chance site, donate them, or find alternative uses. This technology aims to act as an extra pair of eyes, enhancing manual inspections at several North American fulfilment centres, with plans for expansion throughout 2024.
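
In effect, the dispatch flow described above is a scoring gate: items whose imaging-tunnel defect score crosses a threshold are held for human review, while everything else ships. A schematic sketch of that gate follows; the score source and threshold are placeholders, not Amazon’s system.

```python
from dataclasses import dataclass

@dataclass
class ScanResult:
    item_id: str
    defect_score: float  # vision-model output, 0.0 (clean) to 1.0 (defective)

DEFECT_THRESHOLD = 0.8  # placeholder operating point

def route_item(scan: ScanResult) -> str:
    """Gate dispatch on the imaging-tunnel score: flagged items are
    isolated for associate review instead of shipping."""
    if scan.defect_score >= DEFECT_THRESHOLD:
        return "isolate_for_review"
    return "dispatch"

print(route_item(ScanResult("ITEM-123", 0.93)))  # isolate_for_review
print(route_item(ScanResult("ITEM-123", 0.12)))  # dispatch
```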

Dharmesh Mehta, Amazon’s VP of Worldwide Selling Partner Services, said: “We want to get the experience right for customers every time they shop in our store.

“By leveraging AI and product imaging within our operations facilities, we are able to efficiently detect potentially damaged products and address more of those issues before they ever reach a customer, which is a win for the customer, our selling partners, and the environment.”

Project P.I. also plays a crucial role in Amazon’s sustainability initiatives. By preventing damaged or defective items from reaching customers, the system helps reduce unwanted returns, wasted packaging, and unnecessary carbon emissions from additional transportation.

Kara Hurst, Amazon’s VP of Worldwide Sustainability, commented: “AI is helping Amazon ensure that we’re not just delighting customers with high-quality items, but we’re extending that customer obsession to our sustainability work by preventing less-than-perfect items from leaving our facilities, and helping us avoid unnecessary carbon emissions due to transportation, packaging, and other steps in the returns process.”

In parallel, Amazon is utilising a generative AI system equipped with a Multi-Modal LLM (MLLM) to investigate the root causes of negative customer experiences.

When defects reported by customers slip through initial checks, this system reviews customer feedback and analyses images from fulfilment centres to understand what went wrong. For example, if a customer receives the wrong size of a product, the system examines the product labels in fulfilment centre images to pinpoint the error.

This technology is also beneficial for Amazon’s selling partners, especially the small and medium-sized businesses that make up over 60% of Amazon’s sales. By making defect data more accessible, Amazon helps these sellers rectify issues quickly and reduce future errors.

(Photo by Andrew Stickelman)

See also: X now permits AI-generated adult content

X now permits AI-generated adult content

Social media network X has updated its rules to formally permit users to share consensually-produced AI-generated NSFW content, provided it is clearly labelled. This change aligns with previous experiments under Elon Musk’s leadership, which involved hosting adult content within specific communities.

“We believe that users should be able to create, distribute, and consume material related to sexual themes as long as it is consensually produced and distributed. Sexual expression, visual or written, can be a legitimate form of artistic expression,” X’s updated ‘adult content’ policy states.

The policy further elaborates: “We believe in the autonomy of adults to engage with and create content that reflects their own beliefs, desires, and experiences, including those related to sexuality. We balance this freedom by restricting exposure to adult content for children or adult users who choose not to see it.”

Users can mark their posts as containing sensitive media, ensuring that such content is restricted from users under 18 or those who haven’t provided their birth dates.

While X’s violent content rules have similar guidelines, the platform maintains a strict stance against excessively gory content and depictions of sexual violence. Explicit threats or content inciting or glorifying violence remain prohibited.

X’s decision to allow graphic content is aimed at enabling users to participate in discussions about current events, including sharing relevant images and videos. 

Although X has never outright banned porn, these new clauses could pave the way for developing services centred around adult content, potentially creating a competitor to services like OnlyFans and enhancing its revenue streams. This would further Musk’s vision of X becoming an “everything app,” similar to China’s WeChat.

A 2022 Reuters report, citing internal company documents, indicated that approximately 13% of posts on the platform contained adult content. This percentage has likely increased, especially with the proliferation of porn bots on X.

See also: Elon Musk’s xAI secures $6B to challenge OpenAI in AI race

OpenAI disrupts five covert influence operations

In the last three months, OpenAI has disrupted five covert influence operations (IO) that attempted to exploit the company’s models for deceptive activities online. As of May 2024, these campaigns have not shown a substantial increase in audience engagement or reach due to OpenAI’s services.

OpenAI claims its commitment to designing AI models with safety in mind has often thwarted the threat actors’ attempts to generate desired content. Additionally, the company says AI tools have enhanced the efficiency of OpenAI’s investigations.

Detailed threat reporting by distribution platforms and the open-source community has significantly contributed to combating IO. OpenAI is sharing these findings to promote information sharing and best practices among the broader community of stakeholders.

Disrupting covert IO

In the past three months, OpenAI disrupted several IO operations using its models for various tasks, such as generating short comments, creating fake social media profiles, conducting open-source research, debugging simple code, and translating texts.

Specific operations disrupted include:

  • Bad Grammar: A previously unreported operation from Russia targeting Ukraine, Moldova, the Baltic States, and the US. This group used OpenAI’s models to debug code for running a Telegram bot and to create political comments in Russian and English, posted on Telegram.
  • Doppelganger: Another Russian operation generating comments in multiple languages on platforms like X and 9GAG, translating and editing articles, generating headlines, and converting news articles into Facebook posts.
  • Spamouflage: A Chinese network using OpenAI’s models for public social media activity research, generating texts in several languages, and debugging code for managing databases and websites.
  • International Union of Virtual Media (IUVM): An Iranian operation generating and translating long-form articles, headlines, and website tags, published on a linked website.
  • Zero Zeno: A commercial company in Israel, with operations generating articles and comments posted across multiple platforms, including Instagram, Facebook, X, and affiliated websites.

The content posted by these operations focused on various issues, including Russia’s invasion of Ukraine, the Gaza conflict, Indian elections, European and US politics, and criticisms of the Chinese government.

Despite these efforts, none of these operations showed a significant increase in audience engagement due to OpenAI’s models. Using Brookings’ Breakout Scale – which assesses the impact of covert IO – none of the five operations scored higher than a 2, indicating activity on multiple platforms but no breakout into authentic communities.

Attacker trends

Investigations into these influence operations revealed several trends:

  • Content generation: Threat actors used OpenAI’s services to generate large volumes of text with fewer language errors than human operators could achieve alone.
  • Mixing old and new: AI was used alongside traditional formats, such as manually written texts or copied memes.
  • Faking engagement: Some networks generated replies to their own posts to create the appearance of engagement, although none managed to attract authentic engagement.
  • Productivity gains: Threat actors used AI to enhance productivity, summarising social media posts and debugging code.

Defensive trends

OpenAI’s investigations benefited from industry sharing and open-source research. Defensive measures include:

  • Defensive design: OpenAI’s safety systems imposed friction on threat actors, often preventing them from generating the desired content.
  • AI-enhanced investigation: AI-powered tools improved the efficiency of detection and analysis, reducing investigation times from weeks or months to days.
  • Distribution matters: IO content, like traditional content, must be distributed effectively to reach an audience. Despite their efforts, none of the disrupted operations managed substantial engagement.
  • Importance of industry sharing: Sharing threat indicators with industry peers increased the impact of OpenAI’s disruptions. The company benefited from years of open-source analysis by the wider research community.
  • The human element: Despite using AI, threat actors were prone to human error, such as publishing refusal messages from OpenAI’s models on their social media and websites.

OpenAI says it remains dedicated to developing safe and responsible AI. This involves designing models with safety in mind and proactively intervening against malicious use.

While admitting that detecting and disrupting multi-platform abuses like covert influence operations is challenging, OpenAI claims it’s committed to mitigating the dangers.

(Photo by Chris Yang)

See also: EU launches office to implement AI Act and foster innovation
