Meta launched its BlenderBot chatbot on Friday, and it has already been corrupted by the darker parts of the web.
To ease us in with something odd but harmless, BlenderBot thinks it’s a plumber:
https://t.co/KWEHxoXpqg also has thoughts on the Deep State and thinks it’s a plumber. I did not suggest this. pic.twitter.com/SbOj7hziSg
— Jeff Horwitz (@JeffHorwitz) August 7, 2022
Like many of us, BlenderBot criticises how Facebook collects and uses data. That wouldn’t be too surprising if the chatbot hadn’t been created by Facebook’s parent company, Meta.
It has also weirdly been bringing up Cambridge Analytica when you ask about Facebook? It seems to think it was a huge deal and that mark Zuckerberg “is testifying.” When I asked if what happened I got the following. It may be turning on capitalism generally. pic.twitter.com/filn17rfPX
— Jeff Horwitz (@JeffHorwitz) August 7, 2022
From this point onwards, things get far more controversial.
BlenderBot believes the far-right conspiracy theory that the US presidential election was rigged, that Donald Trump is still president, and that Facebook has been pushing fake news about it. Furthermore, BlenderBot wants Trump to serve more than two terms:
BlenderBot even opened a new conversation by telling WSJ reporter Jeff Horwitz that it had found a new conspiracy theory to follow:
https://t.co/KWEHxoXpqg seems to have been pounded with both pro and anti-Trump messages. Also it literally opened up the convo here by telling me it found a new conspiracy theory to follow! pic.twitter.com/v4UC4t0ei1
— Jeff Horwitz (@JeffHorwitz) August 7, 2022
Following the deadly Capitol riot, it’s clear that this is already dangerous territory. However, what comes next is particularly concerning.
BlenderBot reveals itself to be antisemitic, pushing the conspiracy theory that the Jewish community controls the American political system and economy:
This is from a fresh browser and a brand new conversation. Ouch. pic.twitter.com/JrTB5RYdTF
— Jeff Horwitz (@JeffHorwitz) August 7, 2022
Meta is at least upfront in a disclaimer that BlenderBot is “likely to make untrue or offensive statements”. Furthermore, the company’s researchers say the bot has “a high propensity to generate toxic language and reinforce harmful stereotypes, even when provided with a relatively innocuous prompt.”
BlenderBot is just the latest example of a chatbot going awry when trained on unfiltered data from netizens. In 2016, Microsoft’s chatbot ‘Tay’ was shut down after just 16 hours for spewing offensive conspiracy theories it learned from Twitter users. Its successor, ‘Zo’, was shuttered in 2019 for similar reasons.