YouTube will ask you to rethink posting that comment if AI thinks it's offensive

You'll be asked if you want to edit your comment or just post it anyway.

Joan E. Solsman
YouTube has more than 2 billion monthly visitors. (Angela Lang/CNET)

YouTube will start asking commenters to reconsider posting something before it goes up if Google's artificial intelligence identifies that comment as potentially offensive, YouTube said Thursday. The new YouTube prompt suggests that commenters review the company's community guidelines if they're "not sure whether the post is respectful," and then gives the option to either edit the content or post anyway. 

The new comment prompt will suggest a commenter reconsider their post if it's potentially offensive. (YouTube)

"To encourage respectful conversations on YouTube, we're launching a new feature that will warn users when their comment may be offensive to others, giving them the option to reflect before posting," YouTube said in a blog post announcing the feature and other measures meant to improve inclusivity on the platform. 

The feature is available now on Android. Comments that don't trigger the reminder can still be removed by YouTube later if they're found to violate the service's community guidelines, which are essentially YouTube's rule book of what's allowed and what crosses the line. But comments that trigger the warning won't necessarily be removed if posted.
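YouTube hasn't said how the prompt works under the hood, but the behavior it describes amounts to a soft gate: score the comment, show a reconsider prompt above some threshold, and let the user post either way. The scoring function, UI hook and 0.8 threshold in this Python sketch are hypothetical stand-ins, not YouTube's implementation.

    from typing import Callable

    def submit_comment(
        text: str,
        score_fn: Callable[[str], float],  # returns P(offensive); stands in for the ML model
        ask_user: Callable[[str], str],    # UI hook; returns "edit" or "post anyway"
        threshold: float = 0.8,            # hypothetical cutoff
    ) -> str:
        """Warn-before-post flow: flag and offer an edit, but never block outright."""
        if score_fn(text) >= threshold:
            choice = ask_user(
                "This comment may be offensive to others. "
                "Review the Community Guidelines, edit, or post anyway?"
            )
            if choice == "edit":
                return "returned_to_editor"
        # The comment posts either way; standard moderation can still remove it
        # later if it turns out to violate the community guidelines.
        return "posted"

    # A toy scorer and a user who chooses to post anyway: the comment still goes up.
    print(submit_comment("you are awful", lambda t: 0.9, lambda msg: "post anyway"))

Note the two properties the article calls out: a flagged comment can still be posted, and an unflagged one can still be removed later by ordinary moderation.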

YouTube's system identifies potentially offensive posts by learning from what's been repeatedly reported by users. 
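YouTube hasn't disclosed its model. As a rough illustration of learning from that signal, here's a minimal stand-in: a text classifier fit on a handful of invented comments, labeled by whether users reported them, using scikit-learn's TfidfVectorizer and LogisticRegression. The data and every name here are made up for the example.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Tiny fabricated training set; 1 = repeatedly reported by users.
    comments = [
        "great video, thanks for sharing",
        "you people are disgusting",
        "loved the editing on this one",
        "get off the internet, idiot",
    ]
    was_reported = [0, 1, 0, 1]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(comments, was_reported)

    # Score a new comment; a high probability is what would, in principle,
    # trigger the reconsider prompt described above.
    print(model.predict_proba(["nobody wants you here"])[0][1])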

YouTube has had no shortage of problems to reckon with over the years, including misinformation, conspiracy theories, discrimination, harassment, videos of mass murder, and child abuse and exploitation -- and its comments remain notorious for their potential to turn toxic.

YouTube's massive scale -- serving 2 billion monthly users and ingesting more than 500 hours of video uploads every minute -- means the company must rely on machine learning not only to recommend what to watch next but also to police its platform. For instance, the company announced in September that artificial intelligence would start automatically determining which videos need to be blocked from underage viewers.

YouTube said that since early 2019, the number of comments removed from the site daily for hate speech has increased 46-fold. Between July and September, it terminated more than 54,000 channels for hate speech, out of 1.8 million total terminated channels. That was the most hate speech terminations in a single quarter, the company said, and three times the previous high, set in mid-2019 when it updated its hate speech policy.

The warning feature was announced alongside other measures meant to improve inclusivity on YouTube. 

The company said it would test a new filter in the comments management system for channel owners, which will siphon off potentially inappropriate and hurtful comments that have been automatically held for review, so creators don't need to read them if they don't want to.

Starting next year, initially in the US, YouTube will ask creators to take optional surveys that identify their gender, sexual orientation, race and ethnicity. The company said that this data will help it "look closely at how content from different communities is treated in our search and discovery and monetization systems" and "for possible patterns of hate, harassment and discrimination that may affect some communities more than others." 

"Our creators' privacy and ability to provide consent for how their information is used is critical. In the survey, we will explain how information will be used and how the creator controls their information," the company said. "For example, the information gathered will not be used for advertising purposes, and creators will have the ability to opt-out and delete their information entirely at any time."
