Twitter Expands Fact-Checking Project Birdwatch in the US
Questions remain about how well crowdsourced fact checking works.
Queenie Wong, Former Senior Writer
Twitter said Thursday it's expanding a pilot project that allows users to add context to misleading tweets.
As part of a project known as Birdwatch, users can add "notes" to tweets that contain false information. Twitter said a "small (and randomized)" group of users in the US will see these notes on tweets and be able to rate whether they're helpful. About 10,000 people have contributed to the pilot, which launched more than a year ago, the company said.
The expansion of the project is an example of how social networks are experimenting with ways to combat the spread of misinformation, a long-standing problem that's become a bigger concern after Russia's invasion of Ukraine. From fake profiles to misleading videos, social networks have struggled to stop a flood of lies before they go viral.
While the idea of crowdsourcing isn't new, it's not entirely clear how well Twitter's Birdwatch program has been working. Poynter, which analyzed more than 2,600 notes from Birdwatch and reviewed 8,200 ratings, said in February 2021 it found that fewer than half of Birdwatch users cited a source and that many of the notes included partisan rhetoric. Twitter said Wednesday that the "vast majority" of notes that appear on tweets cite sources. In November, Twitter started allowing users to add notes without providing their names. Twitter said the anonymity is meant to help protect people from harassment and could potentially reduce polarization. The move, though, also makes it tougher to vet a source.
Twitter's Birdwatch program is still small and most users don't see these notes. The Washington Post reported this week that Birdwatch contributors were flagging about 43 tweets per day in 2022 before Russia's invasion of Ukraine. Birdwatch also isn't available in Russia or Ukraine.
Keith Coleman, vice president of product at Twitter, said during a virtual press conference Wednesday that there are a lot of challenges that come with building Birdwatch.
"As we're expanding, we want to know that the product is really working and is really helpful," he said.
The company surveyed Twitter users in the US and found that they were 20% to 40% less likely to agree with the substance of a potentially misleading tweet after reading a note about it. Coleman didn't say how many people were surveyed or when this study was conducted. CNET asked Twitter for a copy of the survey, but the company declined to share it publicly. Twitter has also worked with The Associated Press and Reuters to rate the accuracy of the notes. Most were accurate, Coleman said.
Twitter wants people who have different points of view to contribute to Birdwatch, Coleman said. The company doesn't look at a Birdwatch user's political affiliation, gender or location, but instead at how they've rated notes in the past.
"If you have a note that's been rated helpful by two people who historically have always disagreed with each other, it's probably a good sign that that note is actually helpful to people from different points of view and could be worth showing," he said.
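The idea Coleman describes can be illustrated with a toy sketch. This is not Twitter's actual algorithm, and every name and data point below is hypothetical; it only demonstrates the general "bridging" intuition of scoring a note higher when raters who historically disagree both find it helpful.

```python
# Toy sketch of bridging-based note scoring (hypothetical, not Twitter's code):
# a note counts as more broadly helpful when raters who usually disagree
# both rate it helpful.
from itertools import combinations

def disagreement(history_a, history_b):
    """Fraction of commonly rated notes on which two raters disagreed."""
    shared = set(history_a) & set(history_b)
    if not shared:
        return 0.0
    diffs = sum(history_a[n] != history_b[n] for n in shared)
    return diffs / len(shared)

def bridging_score(helpful_raters, histories):
    """Average historical disagreement among raters who marked a note helpful.

    A higher score means the note was endorsed by raters with divergent
    track records — a rough proxy for cross-viewpoint helpfulness.
    """
    pairs = list(combinations(helpful_raters, 2))
    if not pairs:
        return 0.0
    return sum(disagreement(histories[a], histories[b]) for a, b in pairs) / len(pairs)

# Hypothetical rating histories: note_id -> rating
histories = {
    "alice": {"n1": "helpful", "n2": "not helpful", "n3": "helpful"},
    "bob":   {"n1": "not helpful", "n2": "helpful", "n3": "not helpful"},
    "carol": {"n1": "helpful", "n2": "not helpful", "n3": "helpful"},
}

# A note endorsed by two raters who always disagreed (alice, bob) scores
# higher than one endorsed by two raters who always agreed (alice, carol).
print(bridging_score(["alice", "bob"], histories))    # prints 1.0
print(bridging_score(["alice", "carol"], histories))  # prints 0.0
```

In this sketch, alice and bob disagree on every past note, so a note they both rate helpful gets the maximum score, while agreement between like-minded raters (alice and carol) contributes nothing.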
CNET wasn't invited to the press conference but viewed a recording of the call after learning about the event.
Twitter users are able to see what tweets are being fact checked on Birdwatch. Some recent examples of notes rated helpful include one that debunked a claim that US Vice President Kamala Harris told a kindergarten class that "Russia decided to invade a smaller country called Ukraine so basically that's wrong." The March 2 note cites an article from Media Matters, a left-leaning nonprofit and watchdog group, that points out that Harris was asked to explain Russia's invasion of Ukraine in "layman's terms" on the podcast The Morning Hustle.
Another note rated as helpful appeared below a video with 12 million views that describes a woman at an ice cream stand as a "real-life hero" because she appears to have prevented a girl from getting kidnapped. "This video is most likely scripted or setup. Similar video exists with same people," the note from March 1 said, citing Reddit. The note and replies to the tweet didn't appear to stop people from sharing it. As of Wednesday, the questionable video had been retweeted more than 84,000 times and quote tweeted more than 11,600 times. The quote tweets indicate that users still believe the video is legitimate.
"Our focus right at this phase is note quality — that notes are helpful to people, inform understanding, and accurate. As we scale and it becomes visible to more people, we believe it has the potential to impact virality, and this is something we'll be measuring," Coleman said in an e-mail after the press call.
Twitter users currently aren't alerted if a tweet has a note, but Coleman said that as the pilot expands it would "make sense to look at extensions like this, to help people evaluate what they're reading and sharing."
Allegations of bias in fact-checking have been an issue that social networks have grappled with as they try to combat more misinformation. Last week, Russia said it's partly restricting access to Facebook after the social network refused to stop fact-checking and labeling content posted on its platform by four Russian state-owned media organizations. Russia's telecommunications regulator, Roskomnadzor, alleges Facebook violated "fundamental human rights" by restricting the country's state-controlled media. Twitter said Saturday in a tweet that it's also being restricted in Russia.
Facebook pays third-party fact-checkers such as PolitiFact, Reuters and The Associated Press to flag misinformation on its site. The company started a project in 2019 so "community reviewers," who are contractors, can help fact-checkers spot misinformation faster. Content flagged as false is shown lower in Feed, filtered out of the Instagram page that curates content, and featured less prominently in Stories, where users post content that vanishes after 24 hours. If you try to share a fact-checked post, Facebook shows you a notice that says there's false information in the post.
Outside of Birdwatch, Twitter has taken other steps to help curb the spread of false claims, including adding more labels to misinformation, Russian state-media links and automated accounts.
While there are people who share false claims intentionally, others might share misinformation because they simply don't realize it's false.
Yoel Roth, head of site integrity at Twitter, said Russia's invasion of Ukraine has made it more apparent how important Twitter's work in combating misinformation is.
"We're acutely aware of the role that we play in society and the value that public real-time conversation has," he said.