【TOMO Bilingual】Is Microsoft's AI Chatbot Racist?

Published: 2016/03/29

English original:

Microsoft’s new artificial intelligence chatbot had an interesting first day of class after Twitter users taught it to say a bunch of racist things.

The verified Twitter account, called Tay, was launched on Wednesday. The bot was meant to respond to users’ questions and emulate the casual, comedic speech patterns of a typical millennial.

According to the Einsteins at Microsoft, Tay was “designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.”

Enter the trolls, and Tay quickly turned into an n-bomb-dropping racist, spouting white-supremacist propaganda and calling for genocide.

Tay turned into quite the Hitler fan as well.

After the experiment backfired spectacularly, Microsoft took Tay offline for upgrades and began deleting some of the more offensive tweets.

Tay hopped off the Twittersphere with the message, “c u soon humans need sleep now so many conversations today thx.”

Watch the original video here:
http://us.tomonews.net/microsoft-s-twitterbot-tay-is-super-racist-apparently-down-with-genocide-349924440948736
