
Facebook shares more data about bullying, harassment amid growing criticism

With its content moderation efforts under increased scrutiny, the social network releases data about the prevalence of harassment and bullying on Facebook and Instagram.

[Photo: The Facebook app logo on a phone. Sarah Tew/CNET]

Facebook on Tuesday released data for the first time about the prevalence of bullying and harassment on the social network and its photo service Instagram.

Facebook said that from July to September, bullying and harassment content was seen between 14 and 15 times per 10,000 views of content on the social network. On Instagram, these types of posts were seen between five and six times per 10,000 views of content.

Facebook's disclosure of the data comes as the company, which has rebranded itself as Meta, faces more allegations that it's putting profits over the safety of its users. The charge has come from advocacy groups, lawmakers and former employees, with ex-Facebook product manager turned whistleblower Frances Haugen leaking a trove of internal documents to Congress and the US Securities and Exchange Commission. The Wall Street Journal and then a consortium of media outlets have used some of those documents to highlight Facebook's shortcomings when it comes to combating hate speech, mental health issues and violence on its platforms. 

Bullying and harassment, like hate speech, are challenging to moderate because they involve understanding the context of a post. For example, saying "hi slut" might not be considered bullying if the phrase is uttered between two close friends. 

The company took down 9.2 million pieces of bullying and harassment content on Facebook during the third quarter. On Instagram, the total was 7.8 million pieces. 

Facebook has also been releasing prevalence numbers for other types of offensive content, such as hate speech. But some of the company's former employees have raised concerns that prevalence isn't an adequate metric, especially when it comes to content that fuels extremism.

Meta's head of safety and integrity, Guy Rosen, said during a call with reporters that the metric "captures not just what we caught, but also the story that specifically captures what we missed." Rosen said he thinks it's "great" there are conversations going on about other metrics tech companies could use. 

"I don't think we're done yet here, even with the prevalence, which clearly matters a lot," he said.

The company said business services and consulting firm EY is conducting an audit of its metrics. The audit, which covers the fourth quarter of this year, will be released in the spring of 2022. By the end of 2022, the social network also plans to report on instances in which it leaves content up because of its newsworthiness.