open ai Archives - AI News

OpenAI secures key partnership with Reddit (22 May 2024)

OpenAI has secured a deal to access real-time content from Reddit through the platform’s data API. 

This allows OpenAI to incorporate conversations from Reddit into ChatGPT and other new products, echoing a previous agreement that the platform had with Google, reportedly valued at $60 million.

The partnership enables OpenAI to better sample the datasets on which its models are trained, helping its AI systems become more precise and context-aware. For natural language processing, this means models like ChatGPT can stay continually updated with one of the largest collections of public discourse available, enabling them to respond more effectively.

As part of the collaboration, Reddit will be able to develop and release new AI-powered tools for its users and moderators, built on OpenAI’s language models. This could result in more effective moderation tools and features designed to help users make sense of threads, such as summarising content or assisting users in drafting replies without having to write everything from scratch.
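As a rough illustration of the kind of thread-summarisation feature described above, the sketch below uses the standard OpenAI Python client to condense a thread into a short summary; the model name, prompt, and thread text are placeholders, not details of the actual Reddit integration.

```python
# A minimal sketch of a thread-summarisation helper, assuming the standard
# OpenAI Python client. Model name, prompt, and thread content are illustrative
# placeholders, not details of the Reddit/OpenAI integration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

thread_text = (
    "Post: What's the best way to learn Rust in 2024?\n"
    "Comment 1: Work through the official book, then build a small CLI tool.\n"
    "Comment 2: The Rustlings exercises helped me more than reading alone.\n"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice of model for this example
    messages=[
        {"role": "system", "content": "Summarise the discussion thread in two sentences."},
        {"role": "user", "content": thread_text},
    ],
)

print(response.choices[0].message.content)
```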

The primary goal of these features is to refine language interactions for all users. Moreover, as part of this partnership, OpenAI will serve as an advertising partner, enabling Reddit to offer ads that are more tailored and relevant, utilising OpenAI’s capacity to capture the subtleties of user behaviour.

The reaction of Reddit’s community to this partnership remains uncertain, but its history of vocal opposition to unpopular executive decisions, such as the protests over API pricing, suggests users will be watching closely. Acceptance of the partnership will hinge on OpenAI’s ability to maintain user privacy and adhere to Reddit’s platform norms.

From OpenAI’s viewpoint, partnering with Reddit signifies a crucial strategic development. It positions the company to highlight its prominent AI technology in direct competition with giants such as Google and Microsoft, and crucially, within the integral realm of social media. For Reddit, this collaboration could provide a substantial advantage over less progressive platforms, potentially reshaping its image and attracting more users.

The partnership offers great promise but also raises critical ethical and methodological questions. Integrating real-time user-generated data to advance AI capabilities could lead to privacy violations and potentially restrict users’ freedom of expression. Additionally, this application of AI might clash with existing ethical norms.

Steve Huffman, the CEO of Reddit, supports the integration, stating it will promote more relevant content and improve community engagement, consistent with the vision of a connected internet. However, navigating the implications of this deal is complex, particularly in light of Reddit’s history with data scraping issues and recent copyright disputes.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

AI learns how to play Minecraft by watching videos (29 June 2022)

OpenAI has trained a neural network to play Minecraft using Video PreTraining (VPT) on a massive unlabeled video dataset of human Minecraft play, combined with just a small amount of labeled contractor data.

With a bit of fine-tuning, the AI research and deployment company is confident that its model can learn to craft diamond tools, a task that usually takes proficient humans over 20 minutes (24,000 actions). Its model uses the native human interface of keypresses and mouse movements, making it quite general and representing a step towards general computer-using agents.

A spokesperson for the Microsoft-backed firm said: “The internet contains an enormous amount of publicly available videos that we can learn from. You can watch a person make a gorgeous presentation, a digital artist draw a beautiful sunset, and a Minecraft player build an intricate house. However, these videos only provide a record of what happened but not precisely how it was achieved, i.e. you will not know the exact sequence of mouse movements and keys pressed.

“If we would like to build large-scale foundation models in these domains as we’ve done in language with GPT, this lack of action labels poses a new challenge not present in the language domain, where ‘action labels’ are simply the next words in a sentence.”

In order to utilise the wealth of unlabeled video data available on the internet, OpenAI introduces a novel yet simple semi-supervised imitation learning method: Video PreTraining (VPT). The team begins by gathering a small dataset from contractors, recording not only their video but also the actions they took, which in this case are keypresses and mouse movements. With this data the company can train an inverse dynamics model (IDM), which predicts the action being taken at each step in the video. Importantly, the IDM can use past and future information to guess the action at each step.

The spokesperson added: “This task is much easier and thus requires far less data than the behavioral cloning task of predicting actions given past video frames only, which requires inferring what the person wants to do and how to accomplish it. We can then use the trained IDM to label a much larger dataset of online videos and learn to act via behavioral cloning.”
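As a rough illustration of that recipe, the sketch below implements the three VPT stages with PyTorch on toy tensors standing in for video frames: train an IDM on a small labelled set, pseudo-label a larger unlabelled set, then behaviourally clone a policy on the pseudo-labels. The architectures, dataset sizes, and hyperparameters are placeholders, not OpenAI’s actual implementation.

```python
# Illustrative three-stage VPT pipeline (toy version, not OpenAI's implementation):
#   1. train an inverse dynamics model (IDM) on a small labelled contractor set,
#   2. use the IDM to pseudo-label a much larger unlabelled video set,
#   3. behaviourally clone a policy on the pseudo-labelled data.
import torch
import torch.nn as nn

FRAME_DIM, N_ACTIONS, CONTEXT = 64, 10, 5  # toy sizes; real frames are images

class IDM(nn.Module):
    """Predicts the action at a frame using past AND future frames in the window."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FRAME_DIM * CONTEXT, 128), nn.ReLU(), nn.Linear(128, N_ACTIONS)
        )

    def forward(self, frames):  # frames: (batch, CONTEXT, FRAME_DIM)
        return self.net(frames.flatten(1))

class Policy(nn.Module):
    """Behavioural-cloning policy; in the real setup it conditions on past frames only."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FRAME_DIM * CONTEXT, 128), nn.ReLU(), nn.Linear(128, N_ACTIONS)
        )

    def forward(self, frames):
        return self.net(frames.flatten(1))

loss_fn = nn.CrossEntropyLoss()

# 1) Train the IDM on the small labelled contractor dataset (random stand-ins here).
idm = IDM()
opt = torch.optim.Adam(idm.parameters(), lr=1e-3)
labelled_frames = torch.randn(256, CONTEXT, FRAME_DIM)   # stand-in for video windows
labelled_actions = torch.randint(0, N_ACTIONS, (256,))   # recorded keypresses/mouse
for _ in range(20):
    opt.zero_grad()
    loss_fn(idm(labelled_frames), labelled_actions).backward()
    opt.step()

# 2) Pseudo-label a much larger unlabelled video dataset with the trained IDM.
unlabelled_frames = torch.randn(4096, CONTEXT, FRAME_DIM)
with torch.no_grad():
    pseudo_actions = idm(unlabelled_frames).argmax(dim=-1)

# 3) Behavioural cloning: train the policy to imitate the IDM's pseudo-labels.
policy = Policy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for _ in range(20):
    opt.zero_grad()
    loss_fn(policy(unlabelled_frames), pseudo_actions).backward()
    opt.step()
```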

VPT paves the path toward allowing agents to learn to act by watching the vast numbers of videos on the internet, according to OpenAI.

The spokesperson said: “Compared to generative video modeling or contrastive methods that would only yield representational priors, VPT offers the exciting possibility of directly learning large scale behavioral priors in more domains than just language. While we only experiment in Minecraft, the game is very open-ended and the native human interface (mouse and keyboard) is very generic, so we believe our results bode well for other similar domains, e.g. computer usage.”

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

OpenAI withholds its latest research fearing societal impact (15 February 2019)

OpenAI has decided not to publish its latest research, fearing potential misuse and the negative societal impact that would follow.

The institute, backed by the likes of Elon Musk and Peter Thiel, developed an AI which can produce convincing ‘fake news’ articles.

Articles produced by the AI writer can be on any subject and require only a brief prompt before the system gets to work unsupervised.

The AI was trained on data scraped from roughly eight million webpages, restricted to pages shared on Reddit with a ‘karma’ of three or more. That check means the content resonated with at least some users, although for what reason cannot be known.
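As a simple illustration of that filtering rule, the sketch below keeps only links whose Reddit submission reached a karma of three or more; the data structure is hypothetical, since OpenAI’s actual curation pipeline is not public.

```python
# Hypothetical illustration of the karma filter described above; OpenAI's actual
# data pipeline is not public, so the structure here is a stand-in.
submissions = [
    {"url": "https://example.com/article-a", "karma": 5},
    {"url": "https://example.com/article-b", "karma": 1},
    {"url": "https://example.com/article-c", "karma": 3},
]

# Keep only pages whose Reddit submission reached a karma of three or more.
kept_urls = [s["url"] for s in submissions if s["karma"] >= 3]
print(kept_urls)  # ['https://example.com/article-a', 'https://example.com/article-c']
```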

Often, the resulting text – generated word-by-word – is coherent but fabricated. That even includes ‘quotes’ used in the article.
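To make the word-by-word generation concrete, the sketch below samples a continuation token by token using the small GPT-2 checkpoint that OpenAI later released publicly; it assumes the Hugging Face transformers library and is not the withheld full-size model.

```python
# A minimal sketch of word-by-word (token-by-token) generation, using the small
# GPT-2 checkpoint that was later released publicly, not the withheld model.
# Assumes the Hugging Face `transformers` library is installed.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Scientists have discovered a herd of unicorns living in the Andes."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")

# Each new token is sampled from the model's predicted distribution, conditioned on
# everything generated so far: coherent-sounding, but entirely fabricated.
output = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```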

OpenAI provided a sample of the generated text alongside its announcement.

Most technologies can be exploited for harmful purposes, but that doesn’t mean advancements should be halted. Computers have enriched our lives but stringent laws and regulations have been needed to limit their more sinister side.

Here are some ways OpenAI sees advancements like its own benefiting society:

  • AI writing assistants
  • More capable dialogue agents
  • Unsupervised translation between languages
  • Better speech recognition systems

In contrast, here are some examples of negative implications:

  • Generate misleading news articles
  • Impersonate others online
  • Automate the production of abusive or faked content to post on social media
  • Automate the production of spam/phishing content

For some advancements, we don’t thoroughly understand their impact until they’ve been developed. On producing his famous equation, Einstein didn’t expect it to one day be used to construct nuclear weapons.

Hiroshima will remain among the worst man-made disasters in history, and we can hope it continues to serve as a warning against the use of nuclear weapons. There is rightfully a taboo around things designed to cause bloodshed, but societal damage can also be devastating.

We’re already living in an age of bots and disinformation campaigns. Some are used by foreign nations to influence policy and sow disorder, while others are created to spread fear and drive agendas.

Because these campaigns are not designed to kill, there’s more disassociation from their impact. In the past year alone, we’ve seen children being split from their families at borders and refugees ‘waterboarded’ at school by fellow students due to deceitful anti-immigration campaigns.

Currently, there’s at least a moderate amount of accountability with such campaigns. Somewhere along the line, a person has produced the article being read and can be held accountable for consequences if misinformation has been published.

AIs like the one created by OpenAI make it a lot more difficult to hold someone accountable. Articles can be mass-published around the web to change public opinion on a topic, and that has terrifying implications.

The idea of fabricated articles, combined with DeepFake images and videos, should be enough to send a chill down anyone’s spine.

OpenAI has accepted its own responsibility and made the right decision not to make its latest research public at this time. Hopefully, other players follow OpenAI’s lead in considering implications.

Interested in hearing industry leaders discuss subjects like this and their use cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo.
