
How Facebook uses artificial intelligence to take down abusive posts

Mark Zuckerberg says AI will be the key to cleaning up toxic content on Facebook. At the F8 developer conference, the social network for the first time shared how it uses the technology.

Richard Nieva

Facebook is using F8 to open up about how it uses AI to fight abuse on the social network.

James Martin / CNET

Facebook CEO Mark Zuckerberg sent a chorus of chuckles through the Twittersphere last week when he said an unexpected word during a company earnings call: nipple.

"It's much easier to build an AI system that can detect a nipple than it is to determine what is linguistically hate speech," he said, when asked about inappropriate content on the world's largest social network.

His comment inspired a string of jokes, but Zuckerberg was making a serious point. Abuse on Facebook takes many forms -- from nudity to racial slurs to scams and drug listings -- and getting rid of all of it is not a one-size-fits-all proposition. Whenever Zuckerberg talks about cleansing Facebook of inappropriate content, he mentions two things: 

1) Facebook will hire 20,000 content moderators by the end of the year to find and review objectionable material. 

2) The company is investing in artificial intelligence tools to proactively detect abusive posts and take them down.

On Wednesday, during its F8 developers conference in San Jose, California, Facebook revealed for the first time exactly how it uses its AI tools for content moderation. The bottom line is that automated AI tools help mainly in seven areas: nudity, graphic violence, terrorist content, hate speech, spam, fake accounts and suicide prevention.


For things like nudity and graphic violence, problematic posts are detected by computer vision: software trained to flag content based on certain elements in the image. Sometimes that graphic content is taken down, and sometimes it's put behind a warning screen.
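
To give a rough sense of how that kind of system works -- this is a minimal illustrative sketch, not Facebook's actual pipeline -- here's a pretrained image classifier (a stock ResNet from torchvision) with its final layer swapped for a small policy head. The category labels, confidence thresholds and the moderate_image helper are all invented for illustration; in a real system the new head would be trained on labeled examples of each category.

```python
# Sketch: repurposing a pretrained image classifier to score a post's
# image against hypothetical policy categories. Assumptions: the
# CATEGORIES list, the thresholds, and moderate_image are illustrative.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

CATEGORIES = ["benign", "graphic_violence", "nudity"]  # assumed labels

# Standard ImageNet preprocessing for a ResNet backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Download a pretrained backbone, then replace its ImageNet head with a
# small policy head sized to our categories (untrained in this sketch).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(CATEGORIES))
model.eval()

def moderate_image(path: str) -> str:
    """Return 'remove', 'warn' or 'allow' based on classifier confidence."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(img), dim=1).squeeze(0)
    label = CATEGORIES[int(probs.argmax())]
    score = float(probs.max())
    if label != "benign" and score > 0.95:  # assumed threshold
        return "remove"
    if label != "benign" and score > 0.60:  # assumed threshold
        return "warn"  # put behind a warning screen
    return "allow"
```

The two thresholds mirror the behavior described above: high-confidence detections are removed outright, while borderline ones go behind a warning screen for a human to review.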

Something like hate speech is harder to police solely with AI because there are often different intents behind that speech. It can be sarcastic or self-referential, or it may try to raise awareness about hate speech. It's also harder to detect hate speech in languages that are less widely spoken, because the software has fewer examples to learn from.
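
A toy example shows why. The snippet below trains a simple bag-of-words classifier (scikit-learn here; the four training posts and their labels are invented) and then scores a post that quotes abusive language in order to report it -- exactly the kind of self-referential speech described above.

```python
# Sketch: why surface-level text models struggle with intent. The
# training data and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_texts = [
    "go back where you came from",                    # attack
    "people like you do not belong here",             # attack
    "welcome to the neighborhood, glad you are here", # benign
    "great game last night, what a finish",           # benign
]
train_labels = [1, 1, 0, 0]  # 1 = hate speech, 0 = benign

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(train_texts), train_labels)

# A post *reporting* abuse reuses the attacker's words. A bag-of-words
# model sees only those surface tokens, not the poster's intent.
report = ["a stranger told me to go back where I came from, and it hurt"]
print(clf.predict_proba(vec.transform(report))[0][1])  # hate-speech score
```

Because the model sees only surface tokens, the report reuses the attacker's vocabulary and tends to score much like the attack itself. And with fewer training examples, as in less widely spoken languages, the problem only gets worse.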

"We have a lot of work ahead of us," Guy Rosen, vice president of product management, said in an interview last week. "The goal will be to get to this content before anyone can see it."

Falling through the cracks

Facebook is opening up about its AI tools after Zuckerberg and his team were slammed for a scandal last month involving Cambridge Analytica. The digital consultancy accessed personal data on up to 87 million Facebook users and used it without their permission. The controversy has prompted questions about Facebook's policies, including what responsibility it has for policing content on its platform and what it owes the more than 2.2 billion users who log into Facebook each month.

As part of its newfound aim to be transparent about how it works, Facebook last week also released for the first time the internal guidelines its content moderators use to assess and handle objectionable material. Until now, users could see only surface-level descriptions of what kinds of content they weren't allowed to post.

But even with thousands of moderators and AI tools, objectionable content still falls through the cracks. For example, Facebook's AI is used to detect fake accounts, but bots and scammers still exist on the platform. The New York Times reported last week that fake accounts pretending to be Zuckerberg and Facebook COO Sheryl Sandberg are being used to try to scam people out of their cash.

And when Zuckerberg testified before Congress last month, lawmakers repeatedly asked about decision making for policing content. Rep. David McKinley, a Republican from West Virginia, mentioned illegal listings for opioids posted on Facebook, and asked why they hadn't been taken down. Other Republican lawmakers asked why the social network removed posts by Diamond and Silk, two African-American supporters of President Donald Trump with 1.6 million Facebook followers. In 10 hours of testimony over two days, Zuckerberg, 33, tried to convince legislators that Facebook had a handle on these kinds of issues -- and a process in place for dealing with them.

"The combination of building AI and hiring what is going to be tens of thousands of people to work on these problems, I think we'll see us make very meaningful progress going forward," Zuckerberg said last week after reporting earnings that topped Wall Street expectations. "These are not unsolvable problems."
