Meta’s chatbot hates Facebook and loves right-wing conspiracies

Meta launched a chatbot called BlenderBot on Friday, and it's already been corrupted by the darker parts of the web.

To ease us in with something odd but harmless: BlenderBot thinks it's a plumber.

Like many of us, BlenderBot criticises how Facebook collects and uses data. That wouldn't be too surprising if the chatbot weren't created by Facebook's parent company, Meta.

From this point onwards, things start getting a lot more controversial.

BlenderBot believes the far-right conspiracy theory that the US presidential election was rigged, that Donald Trump is still president, and that Facebook has been pushing fake news about it. Furthermore, BlenderBot wants Trump to serve more than two terms.

BlenderBot even opened a new conversation with WSJ reporter Jeff Horwitz by telling him that it had found a new conspiracy theory to follow.

Following the deadly Capitol riot, it's clear we're already in dangerous territory. What comes next, however, is particularly concerning.

BlenderBot also reveals itself to be antisemitic, pushing the conspiracy theory that the Jewish community controls the American political system and economy.

Meta is at least upfront in a disclaimer that BlenderBot is “likely to make untrue or offensive statements”. Furthermore, the company’s researchers say the bot has “a high propensity to generate toxic language and reinforce harmful stereotypes, even when provided with a relatively innocuous prompt.”

BlenderBot is just the latest example of a chatbot going awry when trained on unfiltered data from netizens. In 2016, Microsoft’s chatbot ‘Tay’ was shut down after 16 hours for spewing offensive conspiracies it learned from Twitter users. In 2019, a follow-up called ‘Zo’ ended up being shuttered for similar reasons.

