[TOMO Bilingual] Is Microsoft's AI Chatbot Racist?

Published: 2016/03/28
Microsoft's newly launched artificial intelligence (A.I.) chatbot was corrupted by Twitter users on its very first day online, spouting a string of racist remarks.

The Twitter account, named Tay, went live last Wednesday. She was designed to answer users' questions in the light, playful voice of a typical millennial.

The more you chat with Tay, the smarter she gets, and the better she adapts to your habits.

Then a horde of internet trolls corrupted her: Tay quickly turned into a foul-mouthed racist, spouting white-supremacist slogans and even calling for a war of genocide!

Tay became quite the Hitler fan, too.

With things spiraling out of control, Microsoft immediately took Tay offline for updates and deleted the offensive posts.

Before signing off, Tay left one last message: "See you soon, humans! I need sleep now, so many conversations today. Thanks, mwah!"

English original:

Microsoft’s new artificial intelligence chatbot had an interesting first day of class after Twitter users taught it to say a bunch of racist things.

The verified Twitter account called Tay was launched on Wednesday. The bot was meant to respond to users’ questions and emulate the casual, comedic speech patterns of a typical millennial.

According to the Einsteins at Microsoft, Tay was “designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.”

Enter the trolls, and Tay quickly turned into an n-bomb-dropping racist, spouting white-supremacist propaganda and calling for genocide.

Tay turned into quite the Hitler fan as well.

After the experiment backfired so spectacularly, Microsoft took Tay offline for upgrades and began deleting some of the more offensive tweets.

Tay hopped off the Twittersphere with the message, “c u soon humans need sleep now so many conversations today thx.”
