Facebook takes down more than 3 billion fake accounts

The social network estimates about 5% of monthly active accounts are fake.

By Queenie Wong and Shelby Brown

[Illustration: A transparency group is offering 15 suggestions for Facebook's content moderation policies. Graphic by Pixabay/Illustration by CNET]

Facebook pulled down more than 3 billion fake accounts from October to March, according to a report released Thursday by the social network.

That's a record number of fake-account takedowns by the world's largest social network, illustrating the challenges Facebook faces as it tries to police hate speech, nudity and other offensive content that flows through its site. 

The company estimates about 5% of its monthly active users are bogus. About 2.38 billion people worldwide log in to Facebook every month.

"For fake accounts, the amount of accounts we took action on increased due to automated attacks by bad actors who attempt to create large volumes of accounts at one time," Guy Rosen, Facebook's vice president for integrity, said in a blog post.

Rosen said during a conference call that a large number of the fake accounts were created by spammers trying to evade the social network's detection. 

In the six months prior to October, Facebook pulled down about 1.5 billion fake accounts. The company said it remains "confident" that most of the activity and people on Facebook are real. In a separate blog post, it said it removes fake accounts before users are exposed to them, and that it caught most of them quickly enough that they never became "active" or counted toward Facebook's overall user numbers.

Though fake accounts might be abusive, the category also includes "user-misclassified accounts," such as when someone sets up a personal profile for a pet instead of a Facebook Page. A Page is similar to a profile but is used for businesses, public figures, organizations and pets, among other things.

For the first time, the company released data about how much of the removed content users appealed and how much was restored as a result. The report also included new figures on the volume of posts the company took action on for attempting to sell products that aren't allowed on the platform, such as drugs and firearms.

The company took action on 1.4 million pieces of content that tried to sell guns and 1.5 million pieces of content that tried to sell drugs. 

Facebook's report comes as it tries to set up a board that'll decide what content gets removed or stays up on the social network after a user appeals. The social network has rules against hate speech, nudity, violence and other offensive content. But some conservatives allege that Facebook censors their voices, which the company has repeatedly denied.

Meanwhile, Facebook could face a record fine of up to $5 billion from the Federal Trade Commission, the US agency that's investigating the social network's alleged privacy mishaps. Lawmakers and even some of the company's own co-founders are asking US regulators to break up Facebook. CEO Mark Zuckerberg opposes the idea, but he's said he's open to regulation including around content moderation. 

At the same time, Facebook is doubling down on messaging, groups and ephemeral content as users share more privately. That shift could make it harder for the company to detect harmful content as it tries to balance safety with privacy, Zuckerberg said during the conference call.

"It's not clear on a lot of these fronts that we're going to be able to do as good of a job on identifying harmful content as we can today," Zuckerberg said. 

On Thursday, the Facebook Data Transparency Advisory Group, or DTAG, an independent group of experts established last year, also released its review of how Facebook enforces and reports on its community standards. Overall, the group found that Facebook's system for enforcing those standards and its review process -- a combination of automated and human review -- is well designed.

The group still made 15 recommendations, which Facebook said fall into three categories. DTAG asked for more metrics showing how well the social network enforces its policies, including how accurate that enforcement is and how often people disagree with Facebook's decisions. It also said Facebook should better explain its existing metrics -- which types of violations are most common, how much content is removed and more. Finally, the group wants Facebook to make it easier for users to stay up to date on policy changes and to "have a greater voice" in deciding what content isn't OK on the site.

"It's important that we aren't grading our own homework here," Zuckerberg said. 

Originally published May 23 at 9:19 a.m. PT
