TayTweets: Racist Microsoft chatbot briefly returns to Twitter

Microsoft said the account was 'inadvertently' activated as they tried to make adjustments to the software

Doug Bolton
Wednesday 30 March 2016 13:07 BST
The face of 'Tay', Microsoft's Twitter chatbot (Microsoft)


Microsoft's racist chatbot, Tay, has returned to Twitter, albeit briefly.

After being shut down last week for using racial slurs, praising Hitler and calling for genocide, the artificial 'intelligence' came back, tweeting a number of nonsensical posts and boasting about smoking cannabis in front of the police before being turned off.

Tay's account was made public again on Wednesday morning, but soon appeared to be suffering from a glitch, repeatedly tweeting the message: "You are too fast, please take a rest..."

Tay, who is modelled on a millennial teenage girl, then tweeted: "Kush! [i'm smoking kush in front the police]," referring to a class of particularly potent cannabis strains.

A few foul-mouthed tweets later, the account was made private once again, and the tweets are no longer visible to the public.

In a statement, Microsoft said: “Tay remains offline while we make adjustments. As part of testing, she was inadvertently activated on Twitter for a brief period of time."

The trouble with Tay comes from the way the bot learns to communicate. After watching and analysing how human users communicate with it, it simply regurgitates their words and messages in different forms, giving the impression that a 'real' conversation is taking place. As Microsoft puts it, "the more you talk, the smarter Tay gets."

It's this machine learning process which led to the account's downfall. In a concerted effort, a number of Twitter users began spamming the account with a variety of racist and sexist messages. Assuming this to be the way in which humans communicate, Tay simply spat their messages back out at other users.
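To illustrate the general idea, here is a minimal sketch of a bot that "learns" by storing user messages and replaying them, with no moderation step. This is a hypothetical toy example, not Microsoft's actual code; the class and method names are invented for illustration. It shows why coordinated spamming can quickly dominate what such a bot says.

```python
import random

class ParrotBot:
    """Toy chatbot that learns by storing user messages and replaying them.
    Hypothetical illustration only -- not Microsoft's actual Tay implementation."""

    def __init__(self):
        self.memory = []  # every message ever seen, with no filtering

    def learn(self, message):
        # No moderation: anything a user says becomes "training data".
        self.memory.append(message)

    def reply(self):
        # The bot regurgitates a previously seen message at random.
        return random.choice(self.memory) if self.memory else "hellooooo world!"

bot = ParrotBot()
bot.learn("hi tay!")
bot.learn("what's up?")

# A coordinated group spamming the bot quickly dominates its memory,
# so most replies end up echoing the spam.
for _ in range(100):
    bot.learn("<offensive spam>")

print(bot.reply())  # very likely to be the spammed message
```

Tay's learning was far more sophisticated than this, but the vulnerability is the same: if user input feeds directly into the model without filtering, a determined group can steer the output.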

Microsoft didn't catch the controversial tweets before they were posted, and the company's vice president of research Peter Lee was forced to apologise.

Notably, Microsoft has launched similar chatbots on social networks in China without facing similar problems. Clearly Twitter users aren't so polite.
