Microsoft’s Tay (artificial intelligence chatbot) goes rogue

TECHNOLOGY

Microsoft’s artificial intelligence chatbot Tay went rogue and is now being cleaned up. The company introduced Tay earlier this week to chat with real human beings on Twitter and other messaging platforms.

The bot was built to learn from its interactions and generate its own phrases, mimicking the casual speech of a stereotypical millennial. The internet quickly took advantage and taught Tay to send out racist, sexist and otherwise offensive messages.
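To see why that design is fragile, consider a minimal sketch in Python (purely illustrative, not Microsoft’s actual code) of a bot that learns phrases directly from user input:

    import random

    class EchoLearner:
        """Stores phrases seen in conversation and reuses them in replies."""

        def __init__(self):
            self.phrases = []

        def learn(self, message: str) -> None:
            # Every incoming message becomes training data, good or bad.
            self.phrases.append(message)

        def reply(self) -> str:
            # With no moderation step, any learned phrase can come back out.
            return random.choice(self.phrases) if self.phrases else "Hello!"

    bot = EchoLearner()
    bot.learn("hellooo! humans are the best")
    bot.learn("an offensive phrase taught by trolls")
    print(bot.reply())  # may repeat anything it was taught

With nothing sitting between learning and replying, a coordinated group can fill the phrase pool with abuse and have the bot repeat it back.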

Taken Offline

The offensive tweets are quickly being deleted from Twitter, and the bot itself has gone offline “to control it all.”

Some Twitter users believe Microsoft has manually barred people from interacting with the bot. Questions have also been raised about why the company didn’t build filters to stop Tay from discussing certain topics, such as the Holocaust.
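Such a safeguard need not be elaborate. Below is a minimal sketch of the kind of keyword-based topic filter critics had in mind; the blocklist contents and function names are hypothetical, not Microsoft’s:

    # Example blocklist entries; a real list would be carefully curated.
    BLOCKED_TOPICS = {"holocaust"}

    def is_allowed(message: str) -> bool:
        """Return False if the message touches a blocked topic."""
        lowered = message.lower()
        return not any(topic in lowered for topic in BLOCKED_TOPICS)

    def safe_reply(bot_reply: str, fallback: str = "Let's talk about something else.") -> str:
        # Screen the bot's own output before posting it.
        return bot_reply if is_allowed(bot_reply) else fallback

A production filter would be far more sophisticated, but even a crude blocklist applied to the bot’s outgoing messages would have blunted the worst responses.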

“The AI chatbot Tay is a machine learning project, designed for human engagement,” Microsoft said in a statement.

“It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”
