Facebook's security chief is warning critics that the fake news problem is more complicated than many are aware.
Alex Stamos, who's spearheading the company's probe into Russian-linked ads placed on its service during the 2016 election campaign, on Saturday defended Facebook's use of algorithms, which determine what users see in their news feeds and filter out hate speech and threats of violence.
"I am seeing a ton of coverage of our recent issues driven by stereotypes of our employees and attacks against fantasy, strawman tech [companies]," he said in a Twitter thread. "Nobody of substance at the big companies thinks of algorithms as neutral.
"Nobody is not aware of the risks," he wrote.
Facebook has drawn flak for its role in perpetuating hoaxes and its influence on the presidential election, and has in recent months worked to combat the rise of fake news. In response, the company said in August it would offer related articles on trending topics that include fact-checks and other perspectives.
The company also said it would use "updated machine learning" to detect more potential hoaxes and send them to third-party fact checkers.
The abundance of fake news on the internet in the lead-up to President Donald Trump's election victory last year has become a hot-button issue, entangling tech giants like Facebook and Google. Critics have alleged that fake news shared on the social networks helped Trump win.
Stamos' team said in August it had identified about 500 "inauthentic accounts" that bought $100,000 worth of ads that targeted highly politicized social issues such as immigration, guns and LGBT rights. Facebook has sent records of the ads to government investigators looking into Russia's alleged meddling in the 2016 US presidential election.
Facebook didn't immediately respond to a request for comment.