
Facebook steps up fight against fake news in groups and messaging

The social network outlines new ways it'll police "problematic" content that's being shared in Facebook's private spaces.

Queenie Wong, Former Senior Writer
Facebook like logo seen on an Android mobile phone. (Getty Images)

Facebook has said its users are posting more in the social network's private spaces, including groups and messaging, a shift that could make it tougher for the tech giant to police offensive content.

On Wednesday, the company said it's taking new steps to stop misinformation, scams and other "problematic" content from going viral on the platform, with some of the changes applying to private Facebook groups, which let users post content only group members can see. The company has been criticized in the past for not doing enough to stop misinformation about vaccines and other topics from spreading in groups.

The new steps provide a glimpse into how the world's largest social network, which has more than 2 billion users worldwide, is moderating content as users pivot to sharing more information privately.


Facebook's vice president of integrity, Guy Rosen, speaks at a press conference on Wednesday at the tech company's headquarters in Menlo Park, California.

Queenie Wong/CNET

"Ultimately, the balance between protecting people's privacy and protecting public safety is something that societies have been grappling with for centuries probably, and we're certainly grappling with it," Guy Rosen, Facebook's vice president of integrity, said during a press conference at the company's Menlo Park, California, headquarters.

The company said that in the coming weeks it'll start looking at how administrators and moderators of Facebook groups decide what content to keep up. That'll help Facebook determine whether a group is violating the social network's rules. The company is also releasing a Group Quality feature so group administrators can see what content was removed and flagged, including fake news. Facebook groups that repeatedly share misinformation will show up lower on the social network's News Feed.


Facebook has community standards that prohibit hate speech, nudity, violence and other offensive content. Misinformation and clickbait, though, don't always violate Facebook's rules, unless there's a risk of offline violence or the content is trying to discourage or prevent people from voting.

Antivaccine content, for example, can fall in a "gray area" because it's challenging to link content to something that happens offline, said Tessa Lyons, Facebook's head of News Feed integrity. 

Facebook said it's been using technology, human reviewers and user reports to flag and remove content in groups that violates its rules, even if the groups aren't public. That's allowed Facebook to proactively detect offensive content even before someone reports it to the company, Rosen said.

The company said it'll also soon let people remove their posts and comments from a group even if they're no longer a member.

This week, Facebook is also adding a verified badge for high-profile people in its messaging app, helping users tell whether an account is a scammer impersonating someone else. Earlier this year, as part of an effort to combat misinformation, the company released a tool that lets users know if a message has been forwarded.

The social network unveiled a variety of other steps it's taking to combat fake news, following criticism that its efforts aren't working well enough. Facebook said it's working with journalists, fact-checking experts, researchers and other groups to find new ways to fight misinformation more quickly. The Associated Press, which reportedly stopped fact-checking for the company in February, is returning to fact-check videos and Spanish-language content in the US.

Facebook acknowledged it still has more work to do as user behavior on the site changes.

Users are sharing photos and videos that vanish after 24 hours via a feature called Stories, which makes policing that content challenging.

"Now there's a clock ticking and that's actually a huge amount of pressure," said Alex Deve, Facebook's product management director who works on Stories.

Users can also use text, stickers or drawings to change a photo or video in a way that violates Facebook's rules. Someone could also string together images of a product, a price tag and an email address to sell an item the social network doesn't allow, such as guns or drugs. Individually, each image might be fine, but put together they would violate Facebook's rules.

"We actually don't have all the answers," Deve said. "There's a lot of things here we are learning."

Originally published April 10, 10 a.m. PT.
Update, 12:17 p.m.: Adds remarks from Facebook's press conference.
Update, 2:20 p.m.: Adds more background about Facebook Stories.