Is Facebook censoring conservatives or is moderating just too hard?

The company says it makes moderation errors. Others, particularly conservatives, see censorship.

Last year, Prager University took to Twitter to complain about Facebook. The conservative organization's grievance? Facebook had blocked two of its videos after flagging them as hate speech.

One of the blocked videos argued that men should be more masculine, rather than less. Another video stated it wasn't Islamophobic to argue that the Muslim world is currently "dominated by bad ideas and beliefs."

This story is part of [REDACTED], CNET's look at internet censorship around the world.

Facebook quickly apologized, tweeting that the blocks were mistakes. The social network, which defines hate speech as a "direct attack" based on religion, gender or other protected characteristics, said it would look into what happened.

That didn't satisfy PragerU or some of its more than 3 million Facebook followers, who accused the company of intentionally censoring conservative speech.

"They didn't do anything until there was a public outcry," said Craig Strazzeri, chief marketing officer of PragerU, adding that the social network has a history of censoring conservative speech. 

Facebook has repeatedly denied that it suppresses conservative voices.

The dust-up between PragerU and Facebook underscores one of the biggest challenges for social media companies as they try to enforce consistent rules about what content is allowed on their platforms. Content moderation errors, whether innocent or intentional, fuel an ongoing belief that social networks like Facebook, Twitter and Google-owned YouTube censor speech.

Conservatives are not the only ones to accuse Facebook of censorship. Some LGBTQ users and some black users have made the same claim, but conservatives are the most consistently vocal.

The allegation of anti-right bias at Facebook goes back to at least 2016, when former contractors who worked at the company told Gizmodo they'd been instructed to suppress news from conservative sources. Facebook denied the allegations.

One of the videos flagged by Facebook as hate speech argued that the Muslim world is "dominated by bad ideas and beliefs."

YouTube/PragerU

Conservatives cite Silicon Valley's largely liberal workforce, as well as episodes such as the barring of Milo Yiannopoulos and YouTube's demonetization of various right-of-center channels, as evidence of bias.

Tech companies have said in congressional hearings that suppressing content based on viewpoint goes against their missions. A Twitter representative told Congress this year that the company found "no statistically significant difference" between the reach of tweets by Democrats and tweets by Republicans. Mark Zuckerberg, Facebook's CEO, has held a quiet series of dinners with aggrieved conservatives to hear their complaints about perceived bias.

What is called censorship by some, as in the case of PragerU, has been labeled a mistake by tech companies themselves.

Facebook, which has more than 2.4 billion users worldwide, says human reviewers make the wrong call in more than 1 in 10 cases. The estimate is based on a sample of posts taken down by mistake and posts that were left up but should have been pulled down.

It's unclear how many posts this equates to, but content reviewers look at more than 2 million posts a day, Facebook said. Twitter and Google declined to disclose their error rates.
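For rough scale, and assuming those figures are representative: an error rate of more than 1 in 10 applied to 2 million reviewed posts a day would work out to upward of 200,000 wrong calls daily, a number Facebook has not confirmed.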

Allegations of conservative censorship partially stem from a lack of trust in specific companies, says Jillian York, director for international freedom of expression at the Electronic Frontier Foundation. Facebook has been particularly beleaguered by scandals in recent years, ranging from content moderation spats to the infamous Cambridge Analytica case.

When Facebook CEO Mark Zuckerberg appeared before the Senate in 2018, he was grilled on political bias by Republican Sen. Ted Cruz of Texas. 

Pool/Getty

But even at the best of times, when intentions are both clean and clear, bias can't be ruled out, York said. 

"Most of this content moderation is still done by humans, and humans are notorious for having their own values and biases," York said.

Tech companies routinely release data about the types of content they remove from their platforms. Content moderation, though, is still an opaque process. Advocacy groups have been pushing social media companies to share more information about how they apply their policies. 

Content moderation is a "black box" that even experts are still trying to wrap their heads around, said Liz Woolery, deputy director of the free expression project at the Center for Democracy and Technology. "If we can get a better look inside that black box, we can begin to better understand content moderation at large."

How mistakes happen

Social networks might mistakenly pull down or keep up content for a host of reasons. Human reviewers sometimes have trouble interpreting a company's rules. A machine might mistakenly flag a post because of a keyword or a user's behavior.

PragerU's Strazzeri said Facebook told him that a worker on the company's content moderation team removed both videos after labeling them as hate speech.

"The fact that they admitted that one employee was responsible for both of them -- it doesn't sound like a mistake. It sounds like a deliberate action," Strazzeri said. 

Facebook confirmed the mistake was due to human error but declined to provide details about how it happened. The PragerU incident is just one of several high-profile errors by social networks. 

In June, Twitter apologized for suspending accounts critical of the Chinese government ahead of the 30th anniversary of the violent crackdown on pro-democracy demonstrations known as the Tiananmen Square massacre. The suspensions, which prompted concerns that the Chinese government was further suppressing free speech, were actually mistakes in a system designed to catch spammers and fake accounts, Twitter said. 

Facebook COO Sheryl Sandberg and Twitter CEO Jack Dorsey testified before Congress in September 2018 about their companies' content moderation practices.

Drew Angerer/Getty

Other mistakes have made headlines too. Dan Scavino, the White House social media director, was temporarily blocked in March from replying to comments on his personal Facebook page because the social network mistook him for a bot. Three months later, videos about Adolf Hitler uploaded by British history teachers were accidentally flagged by YouTube for hate speech, according to The Guardian.

In its battle with Silicon Valley, PragerU may find a powerful ally in President Donald Trump. In May, Trump launched a short-lived website asking people to share information with the government if they believed their social media accounts had been suspended, banned or reported because of political bias.

With the 2020 election cycle heating up, allegations of bias are likely to rise. Zuckerberg attempted to preempt them in a speech at Georgetown University in mid-October.

"I'm here today because I believe we must continue to stand for free expression," he said. 

Facebook and Twitter are often on high alert around events like elections and important commemoration days. For that reason, content moderation mistakes can come at the most inopportune times for bloggers and creators.

A month before the first phase of India's general election in April, Dhruv Rathee, an Indian YouTuber who posts political videos, got a notice from Facebook that he was banned for 30 days because one of his posts violated the site's community standards. 

Rathee's blocked post contained underlined passages from an Encyclopaedia Britannica biography of Adolf Hitler. "These are paragraphs from Adolf Hitler. Read the lines I underlined in red color," the post reads. Rathee was drawing a comparison between the German dictator and incumbent Indian Prime Minister Narendra Modi, though the post doesn't mention Modi by name.

He was on the fence about whether it was a mistake made by a machine or if a Facebook worker was trying to ban him from the social network ahead of the election. 

The notice he received from Facebook didn't mention which rule he'd violated, Rathee told CNET. There was a button to contest the decision but no way to email or call a Facebook employee for help.

So, like PragerU, he tweeted about the ban. Within a day, he received a note from Facebook acknowledging it had made a mistake and would unblock his account.

"I think it only happened because of the publicity I got from my tweet," said Rathee, who has roughly 355,000 followers on Twitter. "Someone who doesn't have that large following is helpless."

Appealing a decision

Social media users, whether or not they're high profile, say they have trouble appealing what they perceive as content moderation errors. Users have complained about automated responses or links that don't work, further fueling speculation of bias and censorship. Not everyone who has tried to appeal a decision has been successful. 

Eileen Morentz, a resident of Oakland, California, who uses Twitter to talk politics, responded earlier this year to tweets about the topic of unwanted touching. At some point in the conversation, Morentz said she tweeted that the user's viewpoint about the topic was similar to men calling women who weren't interested in sleeping with them "frigid bitches."

That's when she got a notice from Twitter saying she could delete the tweet and have her account unlocked, or appeal the decision. She chose the latter, arguing to the company that she was drawing an analogy, not calling another user names.

She never heard back, so she ended up abandoning her account and creating a new one.  

Whether something stays up or is taken down can come down to a moderator's interpretation of a single word or phrase. That judgment can be harder than it sounds because of cultural context. Slurs, for example, are often reclaimed by the communities they target. In 2017, Facebook came under fire from members of the LGBTQ community after their accounts were mistakenly suspended for using the word "dyke," according to Wired.

At a recent Georgetown University speech, CEO Mark Zuckerberg said Facebook is on the side of free speech. 

Andrew Caballero-Reynolds/Getty

It's partially for reasons like this that Facebook is creating an independent oversight board. The board will act as a kind of Supreme Court for content decisions and will be able to overrule Zuckerberg himself.

Strazzeri, the PragerU executive, said Facebook hasn't flagged the organization's videos since the incident last year. But the nonprofit has raised censorship concerns about other social networks. 

PragerU has sued Google-owned YouTube twice over allegations of conservative censorship. A California judge dismissed one of the lawsuits in 2018; the other is ongoing. PragerU also said Twitter banned it from advertising.

The organization's troubles with Facebook aren't over, Strazzeri said. Facebook users have told PragerU that they've liked a post only to return and find it unliked, or discovered they'd unfollowed PragerU's page when they hadn't done so themselves. A Facebook spokesperson said the company would look into these issues if PragerU provided more information.

It's unclear whether the reported changes are real, intentional or just more mistakes made by Facebook.