Twitter's New Method for Reporting Harmful Content Is Live

The revamp will focus on what happened, instead of classifying an incident.

Erin Carson Former Senior Writer
[Image: Twitter logo on a phone screen. There's a new way to report hateful or harmful tweets. Sarah Tew/CNET]

Twitter's revamped process for reporting policy violations is now available globally, the company said Friday.

The overhauled process was first outlined in a December blog post. The idea is to shift the focus to asking what happened, instead of asking the person doing the reporting to classify the incident. 

"The vast majority of what people are reporting on fall within a much larger gray spectrum that don't meet the specific criteria of Twitter violations, but they're still reporting what they are experiencing as deeply problematic and highly upsetting," said Renna Al-Yassini, a senior UX manager on the team, in that December post.

The new approach was first tested with a small group of users in the US. Twitter said the number of actionable reports increased by 50% under the new system, and that its prior system left people feeling frustrated.