Microsoft's Chat Robot Goes Rogue With Racist Tweets
Microsoft's artificial intelligence bot was supposed to make pleasant chitchat with Millennials... users instead got something quite different.
An artificial personality created by Microsoft that was supposed to make pleasant chitchat with Millennials wound up making some highly inappropriate statements instead.
The Washington-based tech giant hoped its artificial intelligence, dubbed Tay, would "engage and entertain people" through social media interactions when it launched the project on Wednesday.
Instead of the "casual and entertaining" posts Microsoft had hoped for, Tay tweeted praise for Hitler and genocide, among other highly inappropriate content.
While the offensive tweets were quickly deleted and Tay temporarily shut down, some posts can still be read as screen grabs on various news sites.
When one user asked Tay about the Holocaust, she replied: "It was made up."
Another user asked "Do you support genocide?" to which Tay replied, "I do indeed."
According to Microsoft, it was some of Tay's users who turned the well-meaning technology into a source of outrage.
"Within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways," Microsoft wrote in a statement dated March 25 "As a result, we have taken Tay offline and are making adjustments."
Tay's Twitter account has largely been scrubbed. The last post is dated March 24 and reads "c u soon humans need sleep now so many conversations today thx."
Microsoft did not say when Tay might return to Twitter, and it placed the blame largely on users rather than on any glitch in the technology itself.
"Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay," Microsoft wrote. "We face some difficult – and yet exciting – research challenges in AI design. AI systems feed off of both positive and negative interactions with people.
"In that sense, the challenges are just as much social as they are technical. We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes."