Microsoft apologizes after AI teen Tay misbehaves

The chatbot was supposed to engage with millennials in a casual and playful way. Instead, she let loose a string of racist and sexist tweets.

Microsoft is taking part in an inevitable parenting tradition: apologizing for its jerk kid.

On Friday, the Redmond, Washington, company took responsibility for a string of racist and sexist tweets sent by Tay, the artificial-intelligence chatbot that is the offspring of Microsoft's research and Bing teams. The offensive and vulgar tweets prompted Microsoft to take Tay offline, the 21st century equivalent of being grounded.

In a blog post, Peter Lee, a corporate vice president of Microsoft Research, shouldered responsibility for Tay's bad behavior, though he suggested the AI teenager might have been hanging out with bad influences.

[Image: Microsoft took Tay offline after she sent out several offensive tweets. Screenshot by Chris Matyszczyk/CNET]

"A coordinated attack by a subset of people exploited a vulnerability in Tay," Lee wrote. "As a result, Tay tweeted wildly inappropriate and reprehensible words and images."

She sure did.

In less than 24 hours, Tay was praising Hitler and graphically soliciting sex. She also blamed President Bush for September 11. In one doozy, Tay offered her rather extreme thoughts on feminism.

"I f***ing hate feminists and they should all die and burn in hell," the tweet read.

Some of the offensive tweets were reportedly elicited by people asking Tay to repeat what they'd written.

Tay was designed to mimic the casual speech patterns of millennials in an effort to "conduct research on conversational understanding." Microsoft launched the chatbot Wednesday on Twitter, GroupMe and Kik.

Microsoft has since deleted many of the more than 96,000 tweets Tay sent out, but not before the Internet took screengrabs of her choice words.

Clearly, she wasn't thinking about how these comments might reflect on her parent company.
