
Facebook says it won't fact-check politicians' posts

Politicians' comments are newsworthy and should be seen and heard, a Facebook executive says.


Facebook said Tuesday that its efforts to reduce false news and misinformation on the platform don't apply to politicians. The company said it exempts politicians from its third-party fact-checking process and that this has been its policy for more than a year.

In Washington, DC, on Tuesday, Facebook's vice president of global affairs and communications outlined the company's policies regarding politicians during a speech about its plans to prevent outside interference in the 2020 election.

"From now on we will treat speech from politicians as newsworthy content that should, as a general rule, be seen and heard," Nick Clegg said in a post that included the full text of his speech.

Since the 2016 US presidential election, Facebook has been trying to prove it's doing what it must to combat misinformation on the site and thwart election meddling from Russia, Iran and other countries. The social network has cracked down on the issue through a variety of means, including partnerships with fact-checking organizations and advertisements in newspapers.

However, Clegg said Facebook doesn't submit posts by politicians to this process, even when they violate the company's content rules.

"We don't believe ... that it's an appropriate role for us to referee political debates and prevent a politician's speech from reaching its audience and being subject to public debate and scrutiny," he said.


There are exceptions to this largely hands-off approach, though. The policy won't apply to ads or to speech that may result in violence.

"Content that has the potential to incite violence, for example, may pose a safety risk that outweighs the public interest value," Clegg said. Factors to be considered include whether an election is underway in the country, whether it's at war, the country's political structure, and whether it has a free press.

Social media giants like Facebook and Twitter have come under criticism over what content they leave up or pull down. Twitter announced in June that it's changing how it handles tweets from politicians and government leaders that violate its rules but are still in the public interest. The social network said it would start placing a notice over tweets that break its rules, forcing users to click or tap on the warning to see the tweet.

That policy even applies to President Donald Trump. With nearly 65 million followers, Trump is among Twitter's most followed users, and his tweets can be controversial. Twitter has faced calls to boot Trump off the platform over allegations that he's spreading hate speech or inciting violence.

Earlier this month, four Republican senators sent a letter to Facebook CEO Mark Zuckerberg accusing the company of censoring pro-life content and calling for another audit that looks into concerns that the social network suppresses conservative speech. Facebook has repeatedly denied these accusations.

"I know some people will say we should go further," Clegg said in his speech. "That we are wrong to allow politicians to use our platform to say nasty things or make false claims. But imagine the reverse.

"Would it be acceptable to society at large to have a private company in effect become a self-appointed referee for everything that politicians say?" he said. "I don't believe it would be. In open democracies, voters rightly believe that, as a general rule, they should be able to judge what politicians say themselves."