OpenAI’s latest neural network creates images from written descriptions

OpenAI has debuted its latest jaw-dropping innovation, an image-generating neural network called DALL·E.

DALL·E is a 12-billion parameter version of GPT-3 which is trained to generate images from text descriptions.
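The article doesn't go into architectural detail, but the way OpenAI describes DALL·E is essentially GPT-3's next-token prediction applied to a single stream of caption tokens followed by discrete image tokens: once trained, sampling the image portion of the stream conditioned on a caption yields a picture. The toy PyTorch sketch below is only meant to make that idea concrete; the class and function names, vocabulary sizes, and model dimensions are hypothetical stand-ins rather than OpenAI's code, and a real system would also need a separate encoder/decoder to map between pixels and discrete image tokens.

```python
# Illustrative sketch only: a tiny GPT-style model over [caption tokens | image tokens].
# All names and sizes here are hypothetical, not OpenAI's implementation.
import torch
import torch.nn as nn

TEXT_VOCAB = 1000   # hypothetical caption vocabulary size
IMAGE_VOCAB = 512   # hypothetical discrete image-token vocabulary size
TEXT_LEN = 32       # caption tokens per example
IMAGE_LEN = 64      # image tokens per example (e.g. an 8x8 grid)
D_MODEL = 128


class ToyTextToImage(nn.Module):
    """GPT-style decoder over a single stream of caption tokens then image tokens."""

    def __init__(self):
        super().__init__()
        vocab = TEXT_VOCAB + IMAGE_VOCAB              # shared text + image vocabulary
        self.embed = nn.Embedding(vocab, D_MODEL)
        self.pos = nn.Embedding(TEXT_LEN + IMAGE_LEN, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_MODEL, vocab)

    def forward(self, tokens):
        seq_len = tokens.size(1)
        positions = torch.arange(seq_len, device=tokens.device)
        x = self.embed(tokens) + self.pos(positions)
        # Causal mask so each position only attends to earlier tokens, as in GPT.
        mask = torch.triu(
            torch.full((seq_len, seq_len), float("-inf"), device=tokens.device),
            diagonal=1,
        )
        return self.head(self.blocks(x, mask=mask))


@torch.no_grad()
def generate_image_tokens(model, caption_tokens):
    """Sample IMAGE_LEN image tokens autoregressively, conditioned on a caption."""
    tokens = caption_tokens
    for _ in range(IMAGE_LEN):
        logits = model(tokens)[:, -1, :]
        logits[:, :TEXT_VOCAB] = float("-inf")   # only image tokens may be sampled here
        next_token = torch.multinomial(logits.softmax(dim=-1), num_samples=1)
        tokens = torch.cat([tokens, next_token], dim=1)
    return tokens[:, -IMAGE_LEN:]                # these would be decoded back into pixels


model = ToyTextToImage()
caption = torch.randint(0, TEXT_VOCAB, (1, TEXT_LEN))  # stand-in for a tokenised caption
print(generate_image_tokens(model, caption).shape)     # torch.Size([1, 64])
```

An untrained toy like this will, of course, produce noise; the point is simply that "a version of GPT-3 trained to generate images from text descriptions" can be read literally as next-token prediction over a mixed text-and-image vocabulary, scaled up to 12 billion parameters.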

“We find that DALL·E is able to create plausible images for a great variety of sentences that explore the compositional structure of language,” OpenAI explains.

Generated images range from drawings and depictions of objects to manipulated real-world photos, and OpenAI has provided examples of each.

Just as OpenAI’s GPT-3 text generator caused alarm over its potential to help create fake news for the kinds of disinformation campaigns recently seen around COVID-19, 5G, and attempts to influence various democratic processes, similar concerns will be raised about the company’s latest innovation.

People are increasingly aware of fake news and know not to believe everything they read, especially when it comes from unknown sources without a good track record. As humans, however, we are still inclined to believe what we can see with our own eyes. Fake news paired with fake supporting imagery is a rather convincing combination.

Much like it argued with GPT-3, OpenAI essentially says that putting the technology out there as responsibly as possible helps to raise awareness and drives research into how the implications can be tackled before such neural networks are inevitably created and used by malicious parties.

“We recognise that work involving generative models has the potential for significant, broad societal impacts,” OpenAI said.

“In the future, we plan to analyse how models like DALL·E relate to societal issues like economic impact on certain work processes and professions, the potential for bias in the model outputs, and the longer-term ethical challenges implied by this technology.”

Technological advancements will almost always be used for damaging purposes—but often the benefits outweigh the risks. I’d wager you could write pages about the good and bad sides of the internet, but overall it’s a pretty fantastic thing.

When it comes down to it: If the “good guys” don’t build it, you can be sure the bad ones will.

(Image Credit: Justin Jay Wang/OpenAI)
