TayTweets: Racist Microsoft chatbot briefly returns to Twitter

Microsoft said the account was 'inadvertently' activated as they tried to make adjustments to the software

Doug Bolton
Wednesday 30 March 2016 13:07 BST
The face of 'Tay', Microsoft's Twitter chatbot (Microsoft)


Microsoft's racist chatbot, Tay, has returned to Twitter, albeit briefly.

After being shut down last week for using racial slurs, praising Hitler and calling for genocide, the artificial 'intelligence' came back, tweeting a number of nonsensical posts and boasting about smoking cannabis in front of the police before being turned off.

Tay's account was made public again on Wednesday morning, but soon appeared to be suffering from a glitch, repeatedly tweeting the message: "You are too fast, please take a rest..."

Tay, who is modelled on a millennial teenage girl, then tweeted: "Kush! [i'm smoking kush in front the police]," referring to a class of particularly potent cannabis strains.

A few foul-mouthed tweets later, the account was made private once again, and the tweets are now hidden from public view.

In a statement, Microsoft said: “Tay remains offline while we make adjustments. As part of testing, she was inadvertently activated on Twitter for a brief period of time."

The trouble with Tay stems from the way the bot learns to communicate. After watching and analysing how human users talk to it, it simply regurgitates their words and messages in different forms, giving the impression that a 'real' conversation is taking place. As Microsoft puts it, "the more you talk, the smarter Tay gets."

It's this machine learning process which led to the account's downfall. In a concerted effort, a number of Twitter users spammed the account with racist and sexist messages. Taking this to be the way humans communicate, Tay simply spat their messages back out at other users.
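As a rough illustration only (Microsoft has not published Tay's code), a bot that 'learns' in this way might look something like the following toy Python sketch, in which every user message is stored unfiltered and echoed back later:

import random

class ParrotBot:
    # Toy illustration only - not Microsoft's actual system.
    def __init__(self):
        self.memory = []  # phrases picked up from users, stored unfiltered

    def listen(self, message):
        # Every incoming message becomes a candidate reply.
        self.memory.append(message)

    def reply(self):
        # 'Conversation' is just echoing back something a user once said.
        return random.choice(self.memory) if self.memory else "hellooo world!"

bot = ParrotBot()
bot.listen("humans are super cool")
bot.listen("an abusive message spammed by trolls")
print(bot.reply())  # a coin flip away from repeating the abuse

Feed a bot like this enough abuse and its replies become abusive too, which is, in essence, what happened to Tay.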

Microsoft didn't catch the controversial tweets before they were posted, and the company's vice president of research Peter Lee was forced to apologise.

Notably, Microsoft has launched similar chatbots on social networks in China without incident. Clearly Twitter users aren't so polite.
