
Ex-Facebook security chief: Twitter labeling Trump tweets is 'smart move'

Facebook's approach to political speech contrasts with how Twitter handles the same content.

Queenie Wong Former Senior Writer

Former Facebook Chief Security Officer Alex Stamos (right) spoke at the Collision from Home conference on Wednesday. He was interviewed by Nicholas Carlson, global editor-in-chief at Insider.

Screenshot by Queenie Wong/CNET

Former Facebook Chief Security Officer Alex Stamos praised Twitter on Wednesday for how it's handled President Donald Trump's controversial tweets, calling the social network's approach a "smart move."

"Allowing something to exist without deleting them, and then taking away the amplification, is actually the smart move," Stamos said at the Collision from Home conference.

Stamos' remarks come as critics knock Facebook for its mostly hands-off approach to posts from politicians, including Trump. While Facebook has left most of Trump's posts untouched, its rival Twitter has started labeling the president's tweets. 

Twitter recently placed warning notices over two Trump tweets, in the first instance for violating its rules against "glorifying violence" and, in the second, for including a "threat of harm against an identifiable group." The notices obscuring those tweets say the posts violated the site's rules but were left up because of public interest -- users can read them by clicking a View button. Twitter also reduced the tweets' spread by taking away the ability to like, reply to or share them. Users can still retweet either post with a comment, but it will appear still hidden behind the notice.

Trump also posted tweets that contained false claims about mail-in ballots, and Twitter added a fact-checking link to them. And last week, the company labeled a misleading video Trump shared that included a fake CNN ticker. The notice said the tweet included "manipulated media." Twitter later removed the video because of a copyright complaint, as Facebook also did.

Stamos, director of Stanford University's Internet Observatory, said Facebook should follow Twitter's example in labeling problematic posts, though both sites have work to do.

"Facebook's going to have to follow ... Twitter a little bit more here," he said. "Twitter, I think, could do more too, but in both cases I think they're just gonna have to be honest and transparent about this, because the issue that's happening is they're not saying how they're making these decisions."

Pointing to the viral Plandemic video, which includes various conspiracy theories about the coronavirus pandemic, Stamos said pulling down content can fuel its spread. Facebook and Google-owned YouTube removed the video because it suggested that wearing a mask can make you sick, which the companies considered harmful misinformation about COVID-19. Twitter, which said the video didn't violate the site's rules, nevertheless tried to limit its spread by blocking hashtags for the clip and marking links to the content as unsafe. Despite those efforts, copies of the video kept popping up on the platforms. 

"People who wanted to find it still found it, but when they found it they found it in the context of, this is the forbidden fruit I was not allowed to have," Stamos said. "That in a lot of ways made it much more powerful."

Reducing the spread of disinformation and labeling it, he said, is a "good balance" and the "best way to fight disinformation."

Facebook does partner with third-party fact-checkers and places a warning notice over posts that contain misinformation. But it doesn't send posts from politicians to fact-checkers. The company says users should be able to see what politicians say and that political speech is already heavily scrutinized.

Despite concerns that a recent Trump post could incite violence against protesters, Facebook determined the president's controversial remark that "when the looting starts, the shooting starts" didn't violate its rules. Facebook CEO Mark Zuckerberg said in late May that Trump's post included a reference to the National Guard so the company read it as a warning about state action, which is allowed under its policy about the incitement of violence. 

"Unlike Twitter, we do not have a policy of putting a warning in front of posts that may incite violence, because we believe that if a post incites violence, it should be removed regardless of whether it is newsworthy, even if it comes from a politician," Zuckerberg said.

After Facebook employees staged a rare protest against the company, Zuckerberg said the social network was reviewing its policies, including its rules on the use of state force.

The company is considering whether to label posts from politicians that contain misinformation, but Zuckerberg also warned that doing so risks leading Facebook "to editorialize on content" it doesn't like. 

For some time, Trump, without evidence, has accused social networks of censoring conservative speech. In late May, Trump signed an executive order that would curtail the federal legal protections internet platforms get regarding content posted by their users. The Center for Democracy and Technology filed a lawsuit against the order, alleging it violates the First Amendment and is retaliatory against Twitter.