YouTube is giving its creators new weapons to battle the trolls.
Comments on YouTube have always been an important way for its stars to build a fan base. The direct connection viewers feel through comments and social media has been crucial to the rise of the "YouTuber" celebrity. But the freedom of comments on YouTube, as well as on services like Twitter, has sometimes allowed abusive language to fester.
On Thursday, YouTube introduced a test feature that automatically identifies potentially offensive comments and holds them for review. Creators can opt in to the new beta tool, which uses an algorithm to flag comments and then lets the creator decide whether to approve, hide or report them.
It's a bit like YouTube's existing tool to blacklist certain words or phrases in comments, but on steroids. YouTube's algorithms identify comments that share characteristics with other comments that have been removed by creators in the past, or that contain other signals to indicate they are likely to include inappropriate language.
In theory, it means a YouTuber wouldn't need to brainstorm a blacklist of all the words that hurt the most. The software can take on that role.
"We recognize that the algorithms will not always be accurate: the beta feature may hold some comments you deem fine for approval, or may not catch comments you'd like to hold and remove," YouTube said in a blog post. "When you review comments, the system will take that feedback into account and get better at identifying the types of comments to hold for review."
The company is also rolling out tools like pinned comments, which hold particular comments at the top of the feed; creator hearts, which let the channel's owner single out particular comments for special love; and creator usernames, which add color to the text of the creator's username so their comments pop out more in the feed. Verified creators will still have a verification checkmark appear beside their name.
YouTube has provided creators with other moderation tools in the past. Earlier this year, it launched a feature that lets a channel delegate a moderator, and in 2013 it introduced the ability to blacklist words and phrases that hold a comment for review before it publishes.
The latest changes are part of YouTube's commitment, announced earlier this year, to come out with more creator-friendly tools. One of those changes, however, garnered attention for striking a sour note. In September, YouTube changed how it notifies creators when a clip's advertising is removed because of objectionable content. The change sparked a social-media frenzy, with creators confused about whether YouTube had altered its standards for which clips are allowed to make money and which are barred from it.
Some services have been widening users' ability to safeguard their interactions from abuse. So far this year, Reddit added a blocking tool, Microsoft introduced a website to report hate speech on its services, and Instagram set up a system that automatically blocks comments with objectionable words.
Twitter, however, has been the poster child for the problems that can arise when abuse goes unchecked. In addition to high-profile celebrity departures, last month Disney reportedly walked away from a potential bid to buy Twitter partly because of the platform's reputation for bullying among its users.
UPDATED November 4 at 11:13 a.m. PT: Adds context about other platforms' responses to abuse on their services.