How Facebook uses artificial intelligence to fight terrorism

The social network, dogged by criticism, explains its efforts to combat extremist content. The company says the fight will take both software and humans.

Richard Nieva Former senior reporter

Facebook is using AI to fight terrorism on its site.

Justin Tallis/Getty Images

Facebook says there are a few things that can help stamp out terrorist content on its social network. Among the solutions: artificial intelligence.

The company has drawn criticism for not doing enough to keep those kinds of extremist posts from spreading across the site. So on Thursday it offered a "behind-the-scenes" look at its efforts.

"Our stance is simple: There's no place on Facebook for terrorism," Monika Bickert, director of global policy management, and Brian Fishman, counterterrorism policy manager, wrote in a blog post. "We remove terrorists and posts that support terrorism whenever we become aware of them. When we receive reports of potential terrorism posts, we review those reports urgently and with scrutiny."

Facebook said its use of AI in counterterrorism is still relatively new. But one method the social network uses is image matching. If someone uploads a photo or video of known terrorist content -- for example, a propaganda video from ISIS -- Facebook's software can match it against images and videos the company has already removed.
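Facebook hasn't published the details of its matching system, but image matching of this kind generally works by fingerprinting: computing a compact hash of each upload and comparing it against a bank of hashes from previously removed content. The sketch below is a hypothetical illustration using a toy "average hash" over an 8x8 grayscale grid; production systems use far more robust perceptual hashes.

```python
# Hypothetical sketch of hash-based image matching -- the general technique
# behind the kind of system Facebook describes, not its actual implementation.
# A real system would hash full images with a robust perceptual hash; this toy
# "average hash" operates on a flat list of 64 grayscale values (an 8x8 grid).

def average_hash(pixels):
    """Turn 64 grayscale values (0-255) into a 64-bit hash: each bit is 1
    if that pixel is brighter than the grid's average brightness."""
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming_distance(h1, h2):
    """Count differing bits; a small distance means a near-duplicate image."""
    return bin(h1 ^ h2).count("1")

# Hash of a previously removed image, and of a slightly altered re-upload.
known_hash = average_hash([10] * 32 + [200] * 32)
upload_hash = average_hash([12] * 32 + [198] * 32)

# Small brightness tweaks don't change which pixels beat the average,
# so the re-upload still matches and can be flagged.
assert hamming_distance(known_hash, upload_hash) <= 5
```

The appeal of this approach is that the platform never has to "understand" the image: once content is removed and hashed, any near-identical re-upload can be caught by a cheap bitwise comparison.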

The company is also experimenting with ways to analyze language on the site that might be used to advocate terrorism, and it's looking to extend these methods to the other apps it owns, including WhatsApp and Instagram.

"We believe technology, and Facebook, can be part of the solution," Bickert and Fishman wrote.

Facebook has been grappling with challenging questions about its scale and influence, as nearly 2 billion people use the social network every month. In the last few months, the company has received blowback over fake news circulating on its platform. Facebook Live, its livestreaming video service, has also been used to broadcast murder and violence.

The point is: People can use Facebook to post almost anything, and the site has to be able to police it.

But when it comes to fighting terrorism, Facebook is quick to point out that technology isn't the complete answer -- it also depends on human moderation. Bickert and Fishman said the counterterrorism team has 150 members, including former prosecutors, former law enforcement agents and engineers.

Facebook is also partnering with other agencies and organizations. The company said it regularly works with law enforcement and governments. And in December, the social network teamed up with other tech companies, including Twitter, Microsoft and Google-owned YouTube, to create an industry database that records the digital fingerprints of terrorist content.

"We want Facebook to be a hostile place for terrorists," Bickert and Fishman wrote. "The challenge for online communities is the same as it is for real world communities – to get better at spotting the early signals before it's too late."

Correction, 12:30 p.m. PT: The last name of Facebook's director of global policy management was misspelled. It's Bickert. 
