Facebook says it's been trying to do a better job of finding and pulling down terrorist content, and on Thursday the world's largest social network said it's seen signs of success.
Facebook said that in the third quarter it pulled down 3 million posts related to terrorism, down from the 9.4 million posts it removed in the second quarter. The median time terrorist content remained on the platform after users reported it also fell, from 43 hours in the first quarter to 18 hours in the third, the company said.
Social networks are under pressure to remove terrorist content before violence spills into the real world. As they increase their efforts, though, bad actors are constantly changing strategy to evade detection, the companies say. Some terrorists try to create new accounts or break up their messages, Facebook said.
"We can reduce the presence of terrorism on mainstream social platforms, but eliminating it completely requires addressing the people and organizations that generate this material in the real world," Monika Bickert, Facebook's global head of policy management, and Brian Fishman, the company's head of counterterrorism policy, wrote in a blog post.
Facebook relies on machine learning to flag terrorist content for its reviewers to prioritize. When the system determines with "high confidence" that a post contains support for terrorism, the company removes it automatically. Facebook has also been expanding some of these tools to more languages.
In the third quarter, Facebook said, it removed about 99 percent of content related to ISIS and al-Qaeda before any user reported it.