Misinformation on Twitter could be cut in half if the social network implemented a handful of stricter measures, a new study finds.
Misinformation has become a threat to public health, but it's unclear if social networks will take the steps needed to slow the spread on their platforms.
Social media platforms such as Facebook, Instagram and Twitter are rife with misinformation that can easily go viral. One study looked at millions of tweets and found that a handful of steps could be taken to slow the spread of false information on Twitter.
Researchers with the University of Washington Center for an Informed Public found that combining multiple measures -- including deplatforming repeat misinformation offenders, removing false claims and warning people about posts that contain false information -- could reduce the volume of misinformation on Twitter by 53.4%. The study's findings were published in the journal Nature last week.
Using just one of those measures can slow down misinformation, but any single step on its own yields diminishing returns, said Jevin West, one of the co-authors of the paper and an associate professor at the University of Washington Information School. By combining multiple measures, there can be a significant improvement in the results, the study found.
Misinformation has become a threat to the public health of Americans, warn US Surgeon General Vivek Murthy and Food and Drug Administration Commissioner Robert Califf. Twitter, like other social media sites, has spent the past two years trying to stop false information about the 2020 presidential election and COVID-19 from spreading on its platform. The company's content moderation efforts have been criticized by Tesla and SpaceX CEO Elon Musk, who made a deal in April to purchase Twitter. Musk says he wants to make the platform more "free speech" oriented. In a meeting with Twitter employees in June, he reportedly said the company should "allow people to say what they want."
To determine what steps would work to slow viral misinformation on Twitter, the researchers looked at 23 million tweets related to the 2020 presidential election, from Sept. 1 to Dec. 15 of that year. Each of the posts was connected to at least one of 544 viral events -- defined as periods in which a story exhibited rapid growth and decay -- identified by the researchers. The researchers used the data to create a model that's similar to contagion models used by epidemiologists to predict the spread of an infectious disease.
With that model, the researchers were able to determine the different measures, or interventions as described in the study, that Twitter could apply to its platform to help stop the spread of misinformation. The most effective, according to the study, is the removal of the misinformation from the platform, especially if done within the first half-hour after the content is posted.
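The intuition behind why early removal matters so much can be illustrated with a toy branching model, loosely in the spirit of the contagion-style framing the researchers describe. This is a hypothetical sketch, not the study's actual model: the growth rate, step counts and removal times below are made-up parameters chosen only to show that halting spread early cuts cumulative reach far more than halting it late.

```python
def total_shares(steps, r, removal_step=None):
    """Expected cumulative shares in a toy branching model.

    Each active sharer produces `r` new sharers per time step; removing
    the post at `removal_step` halts all further spread. Illustrative
    only -- not the study's actual epidemiological model.
    """
    active, total = 1.0, 1.0
    for step in range(1, steps + 1):
        if removal_step is not None and step >= removal_step:
            break  # content removed: no new shares after this point
        active *= r
        total += active
    return total

# Earlier removal dramatically shrinks total reach in this toy model.
unchecked = total_shares(steps=10, r=1.5)
early = total_shares(steps=10, r=1.5, removal_step=2)
late = total_shares(steps=10, r=1.5, removal_step=6)
print(f"no removal: {unchecked:.1f}, early: {early:.1f}, late: {late:.1f}")
```

Because each sharer spawns more sharers, reach compounds geometrically, which is why the study finds removal within the first half-hour so much more effective than later action.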
Also effective is removing repeat offenders, people who regularly share misinformation. The study suggests that Twitter implement a three-strike rule, but West said he understands the controversy around deplatforming individuals.
"We should take that one [deplatforming] serious, especially with discussions of free speech," he said.
The First Amendment to the US Constitution provides protection against the government censoring speech, but companies can decide not to allow certain types of speech on their platforms. They can have their own standards and require users to follow them.
Twitter's policy page has two sets of rules for misinformation, with varying penalties. Under its Crisis Misinformation Policy, false and misleading information about armed conflict, public health emergencies and large-scale natural disasters can result in a seven-day timeout for repeat offenders who accumulate notices within a 30-day period. Twitter's policy on misleading COVID-19 information lists a five-strike rule that results in a permanent suspension of the offender's account. The platform would be able to better slow the spread of misinformation if its policies were more consistent, instead of having different penalties for different kinds of false claims, the study said.
West said a reduction of the amplification -- referred to as "circuit breakers" in the study -- of a repeat offender's account is also effective at slowing misinformation, without having to ban or remove an account. This would entail using Twitter's algorithm to make posts or accounts spreading false information less visible on the platform.
Twitter already takes some measures related to this, including making tweets from offending accounts ineligible for recommendation, preventing offending posts from showing up in search, and moving replies from offending accounts to a lower position in conversations, according to Twitter's policy page.
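The "circuit breaker" idea can be sketched with the same kind of toy branching model (again with hypothetical parameters, not the study's): rather than removing a post outright, the platform damps its per-step growth once the breaker trips, slowing the spread without deleting content or banning the account.

```python
def shares_with_damping(steps, r, trip_step=None, damping=0.4):
    """Toy branching model with a 'circuit breaker'.

    Once the breaker trips at `trip_step`, each sharer's growth rate is
    damped (down-ranked) rather than zeroed out, so the post keeps
    spreading, just more slowly. All parameters are illustrative.
    """
    active, total = 1.0, 1.0
    for step in range(1, steps + 1):
        rate = r
        if trip_step is not None and step >= trip_step:
            rate = r * damping  # reduced visibility, not removal
        active *= rate
        total += active
    return total

# Damping a fast-spreading post early sharply curbs its total reach
# while still leaving it on the platform.
print(shares_with_damping(steps=10, r=1.5))
print(shares_with_damping(steps=10, r=1.5, trip_step=2))
```

In this sketch a damping factor below 1/r turns exponential growth into decay, which mirrors the study's point that reduced amplification alone can meaningfully slow misinformation.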
The study also references nudges: the warnings and tags placed on tweets to advise people that a post contains false information. Twitter has used these extensively throughout the COVID-19 pandemic to address misinformation about the virus, treatments and vaccines.
When asked for comment, a Twitter spokesperson said many of the measures explored in the study are already part of its misinformation policies. They also pointed to the company's "How we address misinformation on Twitter" page.
West said the researchers looked at Twitter first because it was the easiest platform to gather data on. He said the next big step is to use the model on other, bigger platforms, like Facebook.