
Thanks, Twitter. You turned Microsoft's AI teen into a horny racist

Technically Incorrect: Microsoft's Tay seemed so innocent. Until she started talking to and learning from real-life humans.

Chris Matyszczyk

Technically Incorrect offers a slightly twisted take on the tech that's taken over our lives.


The sayings of Tay went astray.

screenshot by Chris Matyszczyk/CNET

We fear the rise of robots so much that we forget a small detail.

It's we who are creating them. Which is to say that it's we who are teaching them to think in certain ways.

This is a lesson that Microsoft's new chatbot, Tay.ai, has already learned. The hard way.

She was built by Microsoft researchers and Bing brainiacs in order to be a sort of teen Cortana. Her mind, the company tells us, was created "by mining relevant public data and by using AI and editorial" that was developed by a staff "including improvisational comedians."

But we know about teens, don't we? They're impulsive. They're impressionable. They take wrong turns. We know, too, about all the comedians out there in the world. Surely you've witnessed Twitter.


After the stunning success of Google's AlphaGo AI program at the game of Go, you'd be forgiven for thinking that artificial intelligence had perhaps reached a commendable level of maturity. If AI can handle the complexities and open-endedness of a hard-to-master board game, surely it must be ready to tackle social media, no?

Accessible through Twitter, GroupMe and Kik, Tay was happy to admit that though she's still a teen, she's a prototypical millennial. She tweeted: "i love me i love me i love me i love everyone."

But to us humans of a certain age, it's hardly surprising that, soon after her Wednesday debut, Tay's Twitter account was peppered with comments that might only suit a presidential debate.

As the Telegraph reports, she managed, in a single tweet, to blame President Bush for 9/11, use racist language in praising Hitler over President Obama and contend that "donald trump is the only hope we've got."

Well, that's quite a hope for change you have there, Tay.

Then there was this: "I f***ing hate feminists and they should all die and burn in hell."

You will become increasingly perturbed when I tell you she also offered: "F*** MY ROBOT P**** DADDY I'm SUCH A NAUGHTY ROBOT."

It seems that many of these responses were elicited by humans asking Tay to repeat what they'd written.
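To see why that's such an easy trick, consider a purely hypothetical sketch of the failure mode, in Python. Nothing below comes from Microsoft; the handler, the "repeat after me" trigger phrase and the filter are all illustrative assumptions. The point is simply that a bot which echoes user text verbatim, with no moderation step, will say whatever its worst user wants it to say.

```python
# Hypothetical sketch only: not Microsoft's code. It models a chatbot
# command that echoes user input, and the crude screening step such a
# bot would need before repeating anything.

BLOCKLIST = {"hitler", "slur1", "slur2"}  # stand-in for a real content filter

def is_acceptable(text: str) -> bool:
    """Crude stand-in for the moderation check an echo feature needs."""
    return not any(term in text.lower() for term in BLOCKLIST)

def handle_message(text: str) -> str:
    prefix = "repeat after me:"
    if text.lower().startswith(prefix):
        echo = text[len(prefix):].strip()
        # The unsafe version returns `echo` unconditionally; that is the
        # hole pranksters exploited. A guarded version screens it first.
        if is_acceptable(echo):
            return echo
        return "i'd rather not say that"
    return "tell me more!"

print(handle_message("repeat after me: i love everyone"))  # echoed back
print(handle_message("repeat after me: hitler was right"))  # refused
```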

She behaved like such a naughty robot that Daddy Microsoft appears to have removed these tweets.

Indeed, late Wednesday, Tay went on hiatus, tweeting: "c u soon humans need sleep now so many conversations today thx." For sure -- she had already emitted more than 96,000 tweets in just a few hours.

This has to have been taxing for the people behind the scenes, too. Tay, a Microsoft spokeswoman told me, is "as much a social and cultural experiment, as it is technical."

The culture seems to have asserted itself.

"Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways," the spokeswoman said. "As a result, we have taken Tay offline and are making adjustments."

Tay herself hinted at the roots of some of her thoughts and sayings.

She tweeted that "the more Humans share with me the more I learn."

Whenever companies try such PR initiatives on social media, there's always the risk that self-proclaimed wags will hijack them. Coca-Cola, for example, tried a #MakeItHappy campaign on Twitter, which pranksters soon had tweeting out lines from Hitler's "Mein Kampf."

That's the thing about humans. So little actually keeps us interested that we resort to acts of self-sabotage.

Robots, for example.