YouTube will ask you to rethink posting that comment if AI thinks it's offensive
You'll be asked if you want to edit your comment or just post it anyway.
YouTube will start asking commenters to reconsider before posting a comment that Google's artificial intelligence flags as potentially offensive, YouTube said Thursday. The new prompt suggests that commenters review the company's community guidelines if they're "not sure whether the post is respectful," then gives them the option to either edit the comment or post it anyway.
"To encourage respectful conversations on YouTube, we're launching a new feature that will warn users when their comment may be offensive to others, giving them the option to reflect before posting," YouTube said in a blog post announcing the feature and other measures meant to improve inclusivity on the platform.
The feature is rolling out now on Android. Comments that don't trigger the reminder can still be removed by YouTube later if they're found to violate the service's community guidelines, which are essentially YouTube's rule book for what's allowed and what crosses the line. Conversely, comments that trigger the warning won't necessarily be removed if posted.
YouTube's system identifies potentially offensive posts by learning from what's been repeatedly reported by users.
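YouTube hasn't published details of its model, but the general idea it describes, learning which comments look like ones users have repeatedly reported, can be illustrated with a toy sketch. Everything below is invented for illustration: the function names, the smoothing, and the threshold are assumptions, not YouTube's actual system.

```python
from collections import Counter

# Hypothetical sketch: score a draft comment against tokens that appear
# disproportionately in comments users have repeatedly reported.
# All data, names, and thresholds here are invented for illustration.

def train_token_scores(reported, unreported):
    """Return a per-token 'likely offensive' score from labeled comments."""
    rep = Counter(tok for c in reported for tok in c.lower().split())
    ok = Counter(tok for c in unreported for tok in c.lower().split())
    scores = {}
    for tok in set(rep) | set(ok):
        # Laplace-smoothed share of this token's occurrences that were
        # in reported comments (0.5 means "no signal either way").
        scores[tok] = (rep[tok] + 1) / (rep[tok] + ok[tok] + 2)
    return scores

def should_warn(comment, scores, threshold=0.7):
    """Show the 'keep comments respectful' prompt above the threshold."""
    toks = comment.lower().split()
    if not toks:
        return False
    avg = sum(scores.get(t, 0.5) for t in toks) / len(toks)
    return avg > threshold

# Toy training data standing in for user report logs.
scores = train_token_scores(
    reported=["you are an idiot", "idiot troll"],
    unreported=["great video thanks", "nice work"],
)
```

A real system would use a large learned language model rather than token counts, but the warn-before-posting flow is the same: score the draft, and only interrupt the user when the score crosses a threshold.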
YouTube has had no shortage of problems to reckon with over the years, including misinformation, conspiracy theories, discrimination, harassment, videos of mass murder, and child abuse and exploitation -- and its comments remain notorious for their potential to turn toxic.
YouTube's massive scale -- serving 2 billion monthly users and ingesting more than 500 hours of video uploads every minute -- means the company must rely on machine learning not only to recommend what else to watch but also to police its platform. For instance, the company announced in September that artificial intelligence would start automatically determining which videos need to be blocked from underage viewers.
YouTube said that since early 2019, the number of comments removed from the site daily for hate speech has increased 46-fold. Between July and September, it terminated more than 54,000 channels for hate speech, out of 1.8 million total terminated channels. The company said that was its most hate-speech terminations in a single quarter, three times more than the previous high in mid-2019, when it updated its hate speech policy.
The warning feature was announced alongside other measures meant to improve inclusivity on YouTube.
The company said it would test a new filter in the comments management system for channel owners, which will siphon out potentially inappropriate and hurtful comments that have been automatically held for review, so creators don't need to read them if they don't want to.
Starting next year, initially in the US, YouTube will ask creators to take optional surveys identifying their gender, sexual orientation, race and ethnicity. The company said this data will help it "look closely at how content from different communities is treated in our search and discovery and monetization systems" and check "for possible patterns of hate, harassment and discrimination that may affect some communities more than others."
"Our creators' privacy and ability to provide consent for how their information is used is critical. In the survey, we will explain how information will be used and how the creator controls their information," the company said. "For example, the information gathered will not be used for advertising purposes, and creators will have the ability to opt-out and delete their information entirely at any time."