Facebook Took Action Against 1.4 Billion Pieces of Spam Content in a 3-Month Span

The social media giant says it saw an uptick in spam attacks in August.

Queenie Wong, Senior Writer, CNET News

Facebook's parent company Meta releases a quarterly report about content moderation.

Sarah Tew/CNET

Facebook said Tuesday the amount of spam content the social network took action against grew from 734.2 million items in the second quarter to 1.4 billion in the third quarter.

Facebook's parent company, Meta, attributed the large uptick to a jump in spam attacks in August. Meta released the data as part of a quarterly report on how the company enforces its community standards, the rules that outline what content is and isn't allowed on Facebook.

Meta doesn't specify in the report which spam attacks occurred in August, and a company spokesperson didn't immediately have responses to questions about the report. The social network says it doesn't allow spam on its platform and defines it broadly as "content that is designed to deceive, or that attempts to mislead users, to increase viewership."

In late August, Facebook users complained about seeing "spam" comments made to celebrity pages flood their own social media feeds. Meta said the issue was due to a "configuration change" on the social network and resolved the problem. It's unclear if that incident was included in the spam data.

Meta also said it's trying to make fewer mistakes when enforcing its rules against hate speech, bullying and harassment, and incitement of violence. The company said it improved its AI technology to better recognize when words that may seem offensive are being used as a joke between friends.

Meta published five reports on Tuesday, covering topics including influence operations it pulled down, content that's widely viewed on Facebook, and its work with an oversight board tasked with reviewing its toughest content moderation decisions. One of the reports also showed that Meta is receiving more government requests for user data worldwide: such requests rose 10.5%, from 214,777 to 237,414, in the first six months of 2022. The US submitted the most requests, followed by India, Germany, Brazil, France and the UK. Meta says it will provide data to comply with local laws but also weighs other factors, such as privacy and freedom of expression.