
After news feed scandals, Facebook reveals how it moderates content

AI is helping clean up Facebook's feeds.

Jason Parker, Senior Editor / Reviews - Software

For years, Facebook has relied on users to report offensive and threatening content. Now it's implementing a new playbook, as well as releasing the findings of its internal audits twice a year.

On Tuesday morning, Facebook released its Community Standards Enforcement Preliminary Report, providing a look at the social network's methods for tracking content that violates its standards and how it responds to those violations. The report comes in the face of increasing criticism about how Facebook controls the content it shows to users, though as the company was careful to note, its new methods are evolving and aren't set in stone.

The report comes a few weeks after Facebook unveiled internal guidelines about what is and isn't allowed on the social network. Last week, Alex Schultz, the company's vice president of growth, and Guy Rosen, vice president of product management, walked reporters through exactly how the company measures violations and how it intends to deal with them. 

The response to extreme content on Facebook is particularly important given that the mammoth social network has come under intense scrutiny amid reports of governments and private organizations using the platform for disinformation campaigns and propaganda. Most recently, the scandal involving digital consultancy Cambridge Analytica, which allegedly improperly accessed the data of up to 87 million Facebook users, put the company's content moderation into the spotlight.

Facebook CEO Mark Zuckerberg addressed the transparency report directly in a post to his Facebook page Tuesday.

"AI still needs to get better before we can use it to effectively remove more linguistically nuanced issues like hate speech in different languages, but we're working on it," Zuckerberg wrote.

Violations, by the numbers

To distinguish the many shades of offensive content, Facebook separates them into categories: graphic violence, adult nudity/sexual activity, terrorist propaganda, hate speech, spam and fake accounts. While the company still asks people to report offensive content, it has increasingly used artificial intelligence to weed out offensive posts before anyone sees them.

But how many content violations actually happen on Facebook? Schultz and Rosen provided some insight, though they had data only from the fourth quarter of 2017 and the first quarter of 2018. The company estimates that between 0.22 percent and 0.27 percent of content violated Facebook's standards for graphic violence in the first quarter of 2018. This was an increase from estimates of between 0.16 percent and 0.19 percent in the fourth quarter of last year.

For a sense of scale, between 22 and 27 of every 10,000 pieces of content contained graphic violence in the first quarter of 2018, up from between 16 and 19 in the previous quarter. The executives speculated that some of the increase could have been caused by a ramp-up in the war in Syria in January.
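The percentage estimates and the per-10,000 figures are the same numbers expressed at different scales. A minimal sketch of that conversion, using only the ranges cited above (the variable names are illustrative, not Facebook's):

```python
# Illustrative arithmetic only: converting the reported prevalence
# percentages into a "per 10,000 pieces of content" figure by scaling.

prevalence_ranges = {
    "Q4 2017": (0.0016, 0.0019),  # 0.16% - 0.19%
    "Q1 2018": (0.0022, 0.0027),  # 0.22% - 0.27%
}

for quarter, (low, high) in prevalence_ranges.items():
    per_10k_low = low * 10_000
    per_10k_high = high * 10_000
    print(f"{quarter}: roughly {per_10k_low:.0f} to {per_10k_high:.0f} "
          f"of every 10,000 pieces of content contained graphic violence")
```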

Facebook says AI has played an increasing role in flagging this content. A little more than 85 percent of the 3.4 million posts containing graphic violence that Facebook acted on in the first quarter were flagged by AI before any user reported them. The remaining problem content was reported by human users.
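To put that share in rough absolute terms, here is a quick back-of-the-envelope sketch applying the roughly 85 percent proactive-detection rate to the 3.4 million posts cited above. The split is an approximation for illustration, not a figure from the report:

```python
# Rough split of acted-on graphic-violence posts, assuming the
# "a little more than 85 percent" proactive rate Facebook cited for Q1 2018.

total_posts_acted_on = 3_400_000
proactive_rate = 0.85  # approximate share flagged by AI before a user report

flagged_by_ai = total_posts_acted_on * proactive_rate
reported_by_users = total_posts_acted_on - flagged_by_ai

print(f"Flagged by AI before a user report: ~{flagged_by_ai:,.0f}")
print(f"Reported first by users:            ~{reported_by_users:,.0f}")
```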

"We use a combination of technology, reviews by our teams and reports from our community to identify content that might violate our standards," the report says. "While not always perfect, this combination helps us find and flag potentially violating content at scale before many people see or report it."

In a related post on Tuesday, Rosen said the social network disabled about 583 million fake accounts during the first three months of this year, the majority of them within minutes of registration.

A work in progress

The report and the methods it details are Facebook's first step toward sharing how the company plans to safeguard the news feed in the future. But, as Schultz made clear, none of this is complete.

"All of this is under development," he said. "These are the metrics we use internally and as such we're going to update them every time we can make them better."

Facebook said it released the report to start a dialog about harmful content on the platform, and how it enforces community standards to combat that content. To that end, the company is scheduling summits around the globe to discuss this topic, starting Tuesday in Paris. 

Other summits are planned for May 16 in Oxford and May 17 in Berlin. Summits are expected later in the year in India, Singapore and the US.

Updated, 9:36 a.m. PT: This story has been updated to include Zuckerberg's Facebook post.
