YouTube AI to automatically block videos that violate age restrictions

Creators can appeal the age-restriction blocks they believe are wrong.

Joan E. Solsman Former Senior Reporter

[Image: YouTube is the world's biggest source of online video, with more than 2 billion visitors monthly. Angela Lang/CNET]

YouTube will use machine learning to automatically apply age restrictions to videos, the Google-owned video site said Tuesday. The move widens its use of artificial intelligence to automatically block some videos from viewers who either aren't signed in to a YouTube account or are signed in as under the age of 18.

Creators who believe their videos were blocked unfairly can appeal. YouTube said these automated age restrictions and some tweaks to what it categorizes as inappropriate for people under 18 will all "roll out over the coming months."

Currently, a human team at YouTube applies age restrictions when it reviews a video that isn't appropriate for younger viewers. "Going forward, we will build on our approach of using machine learning to detect content for review, by developing and adapting our technology to help us automatically apply age restrictions," YouTube said.

YouTube, with more than 2 billion monthly users, is the world's biggest online video source -- so big, in fact, that it's the world's top source for kids' videos too. Content for kids is one of the site's most-watched categories, but YouTube has come under fire for a range of scandals involving children. It was slapped with a record $170 million US penalty over the data YouTube collects on kids without parents' consent. YouTube has also faced scandals involving videos of child abuse and exploitation, as well as nightmarish content in its YouTube Kids app, pitched as a kid-safe zone.

Hundreds of hours of video are uploaded to YouTube every minute, making comprehensive human review impossible. So YouTube executives have touted machine learning as a crucial tool to supplement human moderation. But content decisions made by automated algorithms can be prone to mistakes. These errors can occur when value judgments and context are important to making the correct call; other times, a new kind of problem arises that the software hasn't been trained to address.

Footage of a mass shooting at a New Zealand mosque last year, for example, was initially able to spread on YouTube partly because machine learning had trouble detecting it automatically.