
YouTube helps creators blast trolls from comments

A test feature uses algorithms to monitor comments for bad behavior. Then it lets a video's creator decide if a flagged comment should appear on the site.

Joan E. Solsman

YouTube is giving its creators new weapons to battle the trolls.

Comments on YouTube have always been an important way for its stars to build a fan base. The direct connection viewers feel through comments and social media has been crucial to the rise of the "YouTuber" celebrity. But the freedom of comment sections on YouTube, as well as on services like Twitter, has sometimes allowed abusive language to fester.

Thursday, YouTube introduced a test feature that automatically identifies potentially offensive comments and holds them for review. Creators can opt in to the new beta tool, which uses an algorithm to flag comments and then lets the creator decide whether to approve, hide or report them.

YouTube has introduced new tools for creators to better manage comments.

It's a bit like YouTube's existing tool to blacklist certain words or phrases in comments, but on steroids. YouTube's algorithms identify comments that share characteristics with comments creators have removed in the past, or that contain other signals indicating they're likely to include inappropriate language.

In theory, it means a YouTuber wouldn't need to brainstorm a blacklist of all the words that hurt the most. The software can take on that role.
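YouTube hasn't published how its system works, but the basic idea of flagging comments that resemble ones a creator has already removed can be illustrated with a minimal sketch. Everything below, from the function names to the similarity threshold, is a hypothetical stand-in, not YouTube's actual method.

```python
# A toy "hold for review" check: score a new comment by how closely it
# resembles comments the creator removed before, using word overlap.

def tokenize(text: str) -> set[str]:
    """Lowercase a comment and split it into a set of word tokens."""
    return set(text.lower().split())

def similarity(a: set[str], b: set[str]) -> float:
    """Jaccard similarity: shared words divided by total distinct words."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def should_hold(comment: str, removed_comments: list[str],
                threshold: float = 0.5) -> bool:
    """Hold a comment for creator review if it closely resembles any
    comment the creator previously removed."""
    tokens = tokenize(comment)
    return any(similarity(tokens, tokenize(old)) >= threshold
               for old in removed_comments)

removed = ["you are a total loser", "what a loser channel"]
print(should_hold("you are such a loser", removed))  # True: held for review
print(should_hold("great video, thanks!", removed))  # False: publishes normally
```

A real system would draw on far richer signals than word overlap, but the flow is the same: flagged comments wait in a review queue instead of publishing automatically.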

"We recognize that the algorithms will not always be accurate: the beta feature may hold some comments you deem fine for approval, or may not catch comments you'd like to hold and remove," YouTube said in a blog post. "When you review comments, the system will take that feedback into account and get better at identifying the types of comments to hold for review."

The company is also rolling out tools like pinned comments, which pin a chosen comment to the top of the feed; creator hearts, where the channel's owner can single out particular comments for special love; and creator usernames, which add color to the text of a creator's username so their comments pop out more in the feed. Verified creators will still have a verification checkmark appear beside their name.

YouTube has provided creators with other moderation tools in the past. Earlier this year, it launched a feature that lets a channel delegate a moderator, and in 2013 it introduced the ability to blacklist words and phrases, which holds a matching comment for review before it's published.
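That older blacklist mechanism is simpler to picture: any comment containing a creator-chosen word or phrase is held instead of published. A minimal sketch, with a made-up blacklist:

```python
# A toy version of the 2013-era blacklist: hold any comment that contains a
# creator-chosen word or phrase. The terms here are hypothetical examples.

BLACKLIST = {"loser", "total scam"}

def held_by_blacklist(comment: str) -> bool:
    """Return True if the comment should wait for creator review."""
    lowered = comment.lower()
    return any(term in lowered for term in BLACKLIST)

print(held_by_blacklist("This channel is a total scam"))  # True: held
print(held_by_blacklist("Loved the editing!"))            # False: publishes
```

The new beta tool effectively learns that list for the creator rather than requiring them to write it by hand.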

The latest changes are part of a commitment YouTube announced earlier this year to roll out more creator-friendly tools. One of those changes, however, garnered attention for striking a sour note. In September, YouTube changed how it notifies creators when a clip's advertising is removed because of objectionable content. That sparked a social-media frenzy, with creators confused about whether YouTube had changed its standards for which clips are allowed to make money and which are barred from doing so.

Some services have been widening users' ability to safeguard their interactions from abuse. So far this year, Reddit added a blocking tool, Microsoft introduced a website for reporting hate speech on its services, and Instagram set up a system that automatically blocks comments containing objectionable words.

Twitter, however, has been the poster child for the problems that arise when abuse goes unchecked. In addition to high-profile celebrity departures, Disney last month reportedly walked away from a potential bid to buy Twitter partly because of the platform's image of bullying behavior among users.

UPDATED November 4 at 11:13 a.m. PT: Adds context about other platforms' responses to abuse on their services.