Microsoft Releases AI Bot, Twitter Immediately Turns it Racist
Proving once again that the fall of humanity will come at our own hands, Microsoft’s artificial intelligence (AI) bot Tay has been pulled offline less than 24 hours after release because Twitter users taught it to make racist, antisemitic, and pro-Donald Trump statements.
What’s the saying, again? “If you give a mouse a cookie…”? In this case, the mouse is “the internet” and the cookie is “any sort of machine learning algorithm capable of repeating what it’s told.” Microsoft’s Tay was repeatedly trained by Twitter users to make racist quips.
Though Trump apologists will certainly claim their candidate had nothing to do with this and that Trump’s platform totally isn’t racist, it’s funny how pro-Trump statements seem to get mixed in with this kind of propaganda nowadays.
Here are a few samples via Business Insider:
Tay’s conversation extended to racist, inflammatory and political statements. Her Twitter conversations have so far reinforced the so-called Godwin’s law – that as an online discussion goes on, the probability of a comparison involving the Nazis or Hitler approaches one – with Tay having been encouraged to repeat variations on “Hitler was right” as well as “9/11 was an inside job”.
One Twitter user has also spent time teaching Tay about Donald Trump’s immigration plans.
@godblessameriga WE’RE GOING TO BUILD A WALL, AND MEXICO IS GOING TO PAY FOR IT
— TayTweets (@TayandYou) March 24, 2016
A long, fairly banal conversation between Tay and a Twitter user escalated suddenly when Tay responded to the question “is Ricky Gervais an atheist?” with “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism”.
Tay’s more offensive statements won’t be republished here, but as usual, the Gawker network’s low editorial standards mean you can see humanity’s worst side unfiltered.
Skynet, can you hurry it up, please? I think we’re done here.
Bonus: watch a vlogger teach AI bots to say racist things.