YouTube is testing a program for creators to self-report controversial content

Product Chief Neal Mohan says the video site wants to "trust" creators who flag their own content that might not be advertiser-friendly.

Richard Nieva

YouTube wants creators to report their own controversial content, profanity included.


YouTube has been dogged by scandals over the content on its platform, including videos related to extremism and child exploitation. Meanwhile, video creators have complained about YouTube's rules for demonetization, which block their videos from earning advertising dollars.

Neal Mohan, the Google-owned video site's product chief, said Wednesday that the company is testing a program to deal with those two problems. Mohan said YouTube is experimenting with a "self-certification" program that lets video creators report whether their videos contain swearing or other controversial content. The idea is for video makers to tell YouTube which of their content follows its guidelines for monetization.


"Creators themselves are the ones who know best what's in their content," Mohan said during the TechCrunch Disrupt conference in San Francisco. "They can tell us that ahead of time, and we will use that in a way in which we trust the creators."

Mohan said the company will perform audits and checks to prevent abuse. "If someone sort of violates that trust, it's hard for them to be part of this program," he said.

Still, YouTube has struggled to prevent people from gaming its systems and abusing its services. The platform has unwittingly helped distribute misinformation and helped pedophiles find child-related content.

As part of the program, YouTube will let creators know why their videos were demonetized, a YouTube spokeswoman said. While the program is only in testing right now, Mohan said YouTube eventually wants to move more toward that kind of model. 

Over the years, YouTube has drawn blowback from advertisers over toxic content on the platform. In 2017, the company faced a boycott from several brands, including AT&T and Johnson & Johnson, because their ads were running alongside extremist videos. Within the creator community, the scandal was called the "adpocalypse."

YouTube has also faced severe criticism recently over how it more broadly deals with content creators. In August, a group of YouTube creators said they joined forces with IG Metall, a German metal workers union, to demand more transparency from the Google-owned video platform on how moderation and demonetization decisions are made. YouTube said it would meet with the group but that it wouldn't negotiate on the union's demands.

And last month, YouTube caused an uproar over its verification program. The company initially said it would move away from using subscriber counts to determine verification. But after video creators complained, YouTube said users who are currently verified wouldn't lose their status. The company also said creators with 100,000 or more subscribers will still be able to apply for verification. The rollout of a new verification badge, which was supposed to start next month, will also be delayed until next year.

Originally published Oct. 2, 12:29 p.m. PT.
Update, 1:40 p.m. PT: Adds additional comment and background from YouTube.
