Facebook: We take 'false news' in your timeline seriously

Google, LinkedIn, Airbnb and Microsoft join Facebook at a conference on abuse of online services to explain how they use algorithms to fight fraud.

Laura Hautala Former Senior Writer

Facebook says it isn't in the business of deciding whether every single fact posted on your timeline is correct, but it does care about coordinated efforts to spread misinformation.

"We're concerned when falsehood becomes an industry," Michael McNally, engineering director at Facebook, told an audience of about 500 at the Fighting Abuse @Scale conference on Wednesday.

The topic of the conference -- detecting and blocking fraudulent accounts -- couldn't be more timely. McNally's remarks come as Facebook faces intense scrutiny over the spread of misinformation and divisive content on the giant social network, including an alleged Russian campaign around the time of the 2016 US presidential election.

It's a conference on how to stop the very kind of behavior that has Facebook -- and other tech giants including Google and Twitter -- in hot water right now. 

Machine learning to fight fraud and fake news

Speakers from Google, Airbnb, Microsoft and LinkedIn also presented at the San Francisco event, though Facebook's speakers were the most focused on the idea of fake news and misinformation.

Other presenters at the conference spoke to the wide range of havoc that fraudsters, scammers, spammers and other criminals can create if left unchecked. It turns out, many of the tools currently being used to detect misinformation campaigns are adapted from one of the original online battles -- the fight against spam.

Depending on how it's used, this technology is itself controversial. Known as machine learning, a form of artificial intelligence, it's software that lets computers detect patterns and act on them. It has the power to sift through millions of Facebook posts faster than a group of humans ever could.

But these algorithms can also act on biases built into them, intentionally or not, by their human creators. That's a powerful fear at a time when conservative Facebook users worry the site will use concerns over fake news as cover to take down what are actually dissenting viewpoints. Amid these concerns, Facebook released information Tuesday that details how it decides when to take down user posts.

Keeping it real in your Facebook timeline

Facebook uses a combination of human reviewers and pattern-detecting software to zero in on viral, untrue stories. You might know these as fake news, but Facebook prefers the term "false news," McNally said. 

The company's goal is to suss out coordinated efforts to spread fake news stories around major events that have the potential to cause harm, he said. As an example, McNally pointed to stories, spread around the internet after a gunman killed 17 people at Marjory Stoneman Douglas High School in Florida in February, claiming that the school's students advocating for gun control were in fact paid actors.

To find stories that appear to be spreading from a coordinated campaign, Facebook starts with machine learning to scan activity on the social network and automatically detect patterns.

But that's not enough to identify a story as false. Next, Facebook selects stories that its algorithm flags and sends them to fact checkers at partner organizations. These include Factcheck.org, Snopes and PolitiFact in the US.

Once deemed false, a story can usually still remain on Facebook, with certain additions. First, Facebook might flag a link before someone posts it, letting users know it's been debunked. Facebook also offers additional stories underneath the original post in an attempt to provide context and multiple viewpoints. Finally, Facebook might make the story appear smaller in your timeline to send a subtle message to users.

It turns out, "reducing the size reduces the weight you give a certain claim," McNally said.
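The process McNally describes can be summarized as a pipeline: an algorithm flags suspicious stories, human fact checkers rule on them, and confirmed falsehoods are labeled and down-ranked rather than deleted. Here is a minimal illustrative sketch of that flow in Python -- not Facebook's actual code; the function names, the `virality_score` field and the threshold are invented for illustration:

```python
# Illustrative sketch of the moderation flow described above (not
# Facebook's real system): machine-learned flagging, then human
# fact-checking, then labeling and down-ranking instead of removal.

def flag_suspicious(story, threshold=0.8):
    """Stand-in for the machine-learning step: flag stories whose
    spread pattern looks like a coordinated campaign."""
    return story["virality_score"] >= threshold

def fact_check(story, verdicts):
    """Stand-in for partner fact checkers (e.g. Factcheck.org,
    Snopes, PolitiFact); verdicts maps URLs to rulings."""
    return verdicts.get(story["url"], "unrated")

def moderate(story, verdicts):
    """Return the actions taken on a story. Note that outright
    removal is absent, matching the article's description."""
    if not flag_suspicious(story):
        return []                      # never reaches fact checkers
    if fact_check(story, verdicts) == "false":
        return ["label_debunked",
                "attach_related_articles",
                "downrank_in_feed"]
    return []                          # flagged but not ruled false

verdicts = {"example.com/hoax": "false"}
story = {"url": "example.com/hoax", "virality_score": 0.95}
print(moderate(story, verdicts))
# -> ['label_debunked', 'attach_related_articles', 'downrank_in_feed']
```

The key design point the article highlights is the division of labor: the algorithm only nominates candidates, and a human verdict is required before any label or down-ranking is applied.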
