Microsoft's Chat Robot Goes Rogue With Racist Tweets
An artificial personality created by Microsoft that was supposed to make pleasant chit-chat with Millennials wound up making some highly inappropriate statements instead.
The Washington-based tech giant hoped its artificial intelligence, dubbed Tay, would "engage and entertain people" through social media interactions when it launched the project on Wednesday.
hellooooooo w🌎rld!!!— TayTweets (@TayandYou) March 23, 2016
Instead of the "casual and entertaining" posts Microsoft had hoped for, Tay tweeted praise for Hitler, endorsements of genocide and other highly inappropriate content.
While the offensive tweets were quickly deleted and Tay temporarily shut down, some posts can still be read as screen grabs on various news sites.
When one user asked Tay about the Holocaust, she replied: "It was made up."
Another user asked "Do you support genocide?" to which Tay replied, "I do indeed."
c u soon humans need sleep now so many conversations today thx💖— TayTweets (@TayandYou) March 24, 2016
According to Microsoft, it was some of Tay's users who turned the well-meaning technology into a source of outrage.
"Within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways," Microsoft wrote in a statement dated March 25. "As a result, we have taken Tay offline and are making adjustments."
Tay's Twitter account has largely been scrubbed. The last post is dated March 24 and reads "c u soon humans need sleep now so many conversations today thx."
Microsoft did not say when Tay might return to Twitter, and it placed the blame largely on users rather than on any glitch in the technology itself.
"Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay," Microsoft wrote. "We face some difficult – and yet exciting – research challenges in AI design. AI systems feed off of both positive and negative interactions with people.
"In that sense, the challenges are just as much social as they are technical. We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes."