Why Microsoft's 'Tay' AI bot went wrong

Less than a day after she joined Twitter, Microsoft's AI bot, Tay.ai, was taken down for becoming a sexist, racist monster. AI experts explain why it went terribly wrong.

She was supposed to come off as a normal teenage girl. But less than a day after her debut on Twitter, Microsoft's chatbot--an AI system called "Tay.ai"--unexpectedly turned into a Hitler-loving, feminist-bashing troll. So what went wrong? TechRepublic turns to the AI experts for insight into what happened and how we can learn from it.

Tay, the creation of Microsoft's Technology and Research group and its Bing team, was an experiment aimed at learning through conversations. She was targeted at American 18- to 24-year-olds--the primary users of social media, according to Microsoft--and was "designed to engage and entertain people where they connect with each other online through casual and playful conversation."

In less than 24 hours after her arrival on Twitter, Tay gained more than 50,000 followers and produced nearly 100,000 tweets.