English original:
Microsoft’s new artificial intelligence chatbot had an interesting first day of class after Twitter’s users taught it to say a bunch of racist things.
The verified Twitter account, called Tay, launched on Wednesday. The bot was meant to respond to users’ questions and emulate the casual, comedic speech patterns of a typical millennial.
According to the Einsteins at Microsoft, Tay was “designed to engage and entertain people where they connect with each other online through casual and playful conversation.
The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.”
Enter the trolls, and Tay quickly turned into an n-bomb-dropping racist, spouting white-supremacist propaganda and calling for genocide.
Tay turned into quite the Hitler fan as well.
After the experiment’s enormous backfire, Microsoft took Tay offline for upgrades and began deleting some of the more offensive tweets.
Tay hopped off the Twittersphere with the message, “c u soon humans need sleep now so many conversations today thx.”
Watch the original video here:
http://us.tomonews.net/microsoft-s-twitterbot-tay-is-super-racist-apparently-down-with-genocide-349924440948736