Meta chatbot criticised over antisemitic remarks
Bot has strong, mixed feelings on Donald Trump
Meta’s new chatbot has attracted criticism after it appeared to post antisemitic remarks.
Last week, Facebook’s parent company announced what it called “BlenderBot 3”, the latest version of its artificially intelligent chat system.
Meta admitted that the system was not yet perfect and would improve over time. The bot learns from its interactions with people and from feedback about those chats, the company said, and so should get better.
But some users appear to have already found those imperfections, including antisemitic remarks from the bot. Wall Street Journal reporter Jeff Horwitz shared screengrabs of the system saying that Jewish people were “overrepresented among America’s super rich”.
He also shared conversations in which the system appeared to suggest that Donald Trump was still president and that he should serve more than the constitutionally limited two terms.
The system even seemed to criticise the company that made it, talking about misinformation on Facebook and the amount of fake news on the platform.
But at the same time, other users found the bot was progressive on issues such as racism. In a piece for Gizmodo, writer Mack DeGeurin found conversations with the bot suggesting it was actively anti-racist, and that it continued to express those views even after the conversation had seemingly moved on.
Meta did say that the system was able to remember conversations, and that it had been trained on a large amount of data, presumably meaning that the text it draws on when responding comes from a range of different sources.
AI experts have repeatedly cautioned that such systems carry over the biases present in the data used to train them – meaning that they can reflect the racism or other prejudices of the society that created them.
In its announcement of the bot, Meta did stress that it could still make problematic comments and that the company was looking to improve its conversational abilities over time.
“Since all conversational AI chatbots are known to sometimes mimic and generate unsafe, biased or offensive remarks, we’ve conducted large-scale studies, co-organized workshops and developed new techniques to create safeguards for BlenderBot 3,” the company wrote.
“Despite this work, BlenderBot can still make rude or offensive comments, which is why we are collecting feedback that will help make future chatbots better.”
In the same announcement, Meta said that the chatbot would be improved over time. It also noted that some people do not have “good intentions” when using such systems and that it had “developed new learning algorithms to distinguish between helpful responses and harmful examples”. “Over time, we will use this technique to make our models more responsible and safe for all users,” it said.