OpenAI withholds its latest research fearing societal impact



OpenAI has decided not to publish its latest research, fearing potential misuse and the negative societal impact that could follow.

The research lab, backed by the likes of Elon Musk and Peter Thiel, has developed an AI that can produce convincing ‘fake news’ articles.

The AI writer can produce articles on any subject and requires only a brief prompt before it gets to work unsupervised.

The AI was trained on text from around eight million webpages, limited to pages shared on Reddit in posts with a ‘karma’ score of three or more. That filter means each page resonated with at least some users, although for what reason cannot be known.
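As a rough illustration of that filtering rule, here is a minimal sketch in Python. It is not OpenAI’s actual data pipeline; the `submissions` list and its field names are invented purely for the example.

```python
# Hypothetical sketch of the karma filter described above; the `submissions`
# list and its fields are invented for illustration, not OpenAI's real pipeline.
MIN_KARMA = 3

submissions = [
    {"url": "https://example.com/longform-piece", "karma": 5},
    {"url": "https://example.com/low-effort-post", "karma": 1},
]

# Keep only links whose Reddit submission earned at least three karma,
# a rough proxy for content that resonated with some readers.
curated_urls = [s["url"] for s in submissions if s["karma"] >= MIN_KARMA]
print(curated_urls)
```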

Often, the resulting text, generated word by word, is coherent but fabricated. That even extends to the ‘quotes’ used in an article.
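To show what word-by-word (strictly, token-by-token) generation looks like in practice, here is a minimal sketch using the smaller version of GPT-2 that OpenAI did release publicly, available through Hugging Face’s transformers library. This is not the withheld full-size model, and the prompt is purely illustrative.

```python
# Minimal sketch of autoregressive, word-by-word (token-by-token) generation
# using the publicly released small GPT-2, not OpenAI's withheld full model.
# Requires: pip install transformers torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Scientists announced today that"  # illustrative prompt only
inputs = tokenizer(prompt, return_tensors="pt")

# The model repeatedly predicts the next token from everything written so far.
output_ids = model.generate(
    inputs["input_ids"],
    max_length=60,
    do_sample=True,   # sample rather than always taking the most likely token
    top_k=40,         # restrict sampling to the 40 most likely next tokens
)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```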

OpenAI shared a sample of the AI’s output alongside its announcement to demonstrate how convincing the generated text can be.

Most technologies can be exploited for harmful purposes, but that doesn’t mean advancements should be halted. Computers have enriched our lives, but stringent laws and regulations have been needed to limit their more sinister side.

Here are some ways OpenAI sees advancements like its own benefiting society:

  • AI writing assistants
  • More capable dialogue agents
  • Unsupervised translation between languages
  • Better speech recognition systems

In contrast, here are some examples of negative implications:

  • Generate misleading news articles
  • Impersonate others online
  • Automate the production of abusive or faked content to post on social media
  • Automate the production of spam/phishing content

With some advancements, we don’t thoroughly understand their impact until they’ve been developed. When he produced his famous equation, Einstein didn’t expect it would one day be used to help build nuclear weapons.

Hiroshima will remain among the worst man-made disasters in history, and we can hope it continues to serve as a warning against the use of nuclear weapons. There is rightfully a taboo around things designed to cause bloodshed, but societal damage can also be devastating.

We’re already living in an age of bots and disinformation campaigns. Some are used by foreign nations to influence policy and sow disorder, while others are created to spread fear and drive agendas.

Because these campaigns are not designed to kill, it’s easier to dissociate from their impact. In the past year alone, we’ve seen children separated from their families at borders and refugees ‘waterboarded’ at school by fellow students as a result of deceitful anti-immigration campaigns.

Currently, there’s at least a moderate amount of accountability with such campaigns. Somewhere along the line, a person produced the article being read and can be held responsible for the consequences if misinformation has been published.

AIs like the one created by OpenAI make it a lot more difficult to hold anyone accountable. Articles can be mass-published around the web to shift public opinion on a topic, and that has terrifying implications.

The prospect of fabricated articles, combined with deepfake images and videos, should be enough to send a chill down anyone’s spine.

OpenAI has accepted its responsibility and made the right decision not to make its latest research public at this time. Hopefully, other players will follow OpenAI’s lead in considering the implications of their work.

