Millions of Facebook, Instagram posts removed for violating rules

The social network releases new data on content it yanked between April and September.

Queenie Wong Former Senior Writer

Facebook is battling nasty content on its various platforms.

Angela Lang/CNET

Facebook said Wednesday it had removed millions of posts for violating rules against hate speech, sexual activity and other offensive content between April and September, a move that comes as the social network struggles to determine how freewheeling it will allow the site to become. For the first time, Facebook also released data about content taken down from Instagram, the photo app it owns. 

During the second and third quarters, Facebook removed 58 million posts for adult nudity and sexual activity, 5.7 million posts for harassment and bullying, and 11.4 million posts for hate speech, according to its biannual community standards enforcement report. The company took down 3.2 billion fake accounts in those six months, up from the 1.5 billion fake accounts Facebook pulled down during the same period last year.

Guy Rosen, Facebook's vice president of integrity, said in a blog post that the company improved how it detects hate speech so posts are removed before people even see them. That includes identifying images and text the company has already pulled down for violating its policies. Moderating hate speech can be challenging because one user might post a video of a racist attack to condemn the act, while another might post it to glorify the violence. 

The takedown data highlights how the world's largest social network handles the billions of posts that flow through its site and Instagram service. Those actions come as Facebook CEO Mark Zuckerberg pushes for free expression amid calls to change a policy that allows politicians to lie in ads. 

"While we err on the side of free expression, we do have community standards to define what's acceptable on our platform and what isn't. We generally draw the line at anything that can lead to real harm like terrorist content or child exploitation," Zuckerberg said during a press call. He noted that the content taken down is a small fraction of all the posts on Facebook and Instagram. 

When asked about the company's political ads policy, Zuckerberg said it is looking "at how it might make sense to refine it in the future." In the wake of criticism of the policy, some Facebook employees have suggested restricting targeting for political ads and requiring a stronger visual design so users know an ad is political, among other measures. 

The company highlighted progress made in identifying and removing child nudity and sexual exploitation. Between July and September, Facebook removed 11.6 million pieces of such content, up from nearly 7 million in the previous quarter. On Instagram, more than 753,000 posts about child nudity and sexual exploitation were taken down in the third quarter. 

Facebook attributed the rise in these takedowns during the third quarter to improvements in detecting and removing content, including how the company stores digital fingerprints, called "hashes," of pieces of content that run afoul of its rules against child nudity and sexual exploitation. It also fixed a bug that affected the hashing of videos.

The company reported new data about suicide and self-injury content and terrorist propaganda.

Between April and September, Facebook pulled down 4.5 million posts for depicting suicide and self-injury. On Instagram, it took down 1.7 million such posts for violating its policies. 

Facebook also included more details about how much content it removed in the wake of the Christchurch terrorist attack. 

In March, a gunman who killed 50 people at two mosques in Christchurch, New Zealand, used Facebook to livestream the attacks. From March 15 to Sept. 30, Facebook removed about 4.5 million posts related to the attack. The company said it identified about 97% of those posts before users reported them.

As a growing number of Facebook users share more content in private messaging and post photos and videos that vanish in 24 hours, content moderation could become more challenging for the company in the future. Rosen said that the company approaches ephemeral content in the same way it does other posts on the social network, allowing users to report a post and using systems that can detect this content proactively. Facebook has also faced concerns that its plans to encrypt Instagram and Messenger would make it harder for law enforcement to crack down on child exploitation. 

Zuckerberg said that the company can look at patterns of behavior rather than just the content itself.

"In general, what we believe you need to do while building out encryption is do extra work to improve on safety and identify bad actors," he said.

Originally published Nov. 13, 10:13 a.m. PT
Update, 12:44 p.m. PT: Adds remarks from press call. 
Update, 1:52 p.m. PT: Adds data about fake accounts.