Twitter is finally doing something about trolls -- using 'behavioral signals'

Jerks still won't get banned, but their tweets won't get top billing either.

Sean Hollister, Senior Editor / Reviews

Twitter has a troll problem. Nasty comments can easily take over online conversations. But today, the company says it's doing something about it -- something that has the potential to change how your Twitter feed looks.

It actually sounds pretty simple at first: according to a Twitter blog post Tuesday, the company will organize conversations differently based on "behavioral signals" designed to root out trolls in "communal areas" of the social network.

If Twitter's algorithms and human reviewers see the same person signing up for multiple accounts simultaneously, repeatedly tweeting at accounts that don't follow them, or "behavior that might indicate a coordinated attack" -- among other things -- then Twitter says it'll make tweets from those accounts less visible. You'll have to click the "Show more replies" button to see them.
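In rough terms, this works like a filter applied before replies are displayed: accounts that trip certain behavioral signals get sorted to the bottom rather than removed. Here's a minimal sketch of that idea in Python; the signal names, thresholds, and function names are illustrative assumptions, not Twitter's actual implementation.

```python
# Hypothetical sketch of signal-based down-ranking. All signal names and
# thresholds below are illustrative assumptions, not Twitter's real system.
from dataclasses import dataclass, field

@dataclass
class Account:
    handle: str
    signals: dict = field(default_factory=dict)  # observed behavioral signals

def is_low_visibility(account: Account) -> bool:
    """Flag an account whose behavior matches signals like those Twitter describes."""
    s = account.signals
    return (
        s.get("simultaneous_signups", 0) > 1        # same person, multiple accounts
        or s.get("unsolicited_mentions", 0) > 10    # repeatedly tweeting at non-followers
        or s.get("coordinated_attack", False)       # signs of a coordinated attack
    )

def rank_replies(replies: list[Account]) -> tuple[list[Account], list[Account]]:
    """Split replies: normal ones show immediately; flagged ones sit
    behind the 'Show more replies' button (hidden, not deleted)."""
    visible = [r for r in replies if not is_low_visibility(r)]
    hidden = [r for r in replies if is_low_visibility(r)]
    return visible, hidden
```

The key design point matches Twitter's description: flagged tweets are demoted, not removed, so they stay available to anyone who clicks through.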

"Because this content doesn't violate our policies, it will remain on Twitter, and will be available if you click on "Show more replies" or choose to see everything in your search setting. The result is that people contributing to the healthy conversation will be more visible in conversations and search," writes Twitter.

We don't know what Twitter will look like after this change, but the company says it's seen positive results in early testing, "resulting in a 4 percent drop in abuse reports from search and 8 percent fewer abuse reports from conversations."