
Mozilla is sharing YouTube horror stories to prod Google for more transparency

Researchers should have more access to YouTube's algorithms to figure out how the video site makes its recommendations, Mozilla says.

[Photo: YouTube logo on a smartphone screen. How you get from one YouTube video to the next is a tangle that researchers would like to figure out. Angela Lang/CNET]

Mozilla is publishing anecdotes of YouTube viewing gone awry -- anonymous stories from people who say they innocently searched for one thing but eventually ended up in a dark rabbit hole of videos. It's a campaign aimed at pressuring Google's massive video site to make itself more accessible to independent researchers trying to study its algorithms. 

"The big problem is we have no idea what is happening on YouTube," said Guillaume Chaslot, who is a fellow at Mozilla, a nonprofit best known for its unit that makes and operates the Firefox web browser. 

Chaslot is an ex-Google engineer who has investigated YouTube's recommendations from the outside after he left the company in 2013. (YouTube says he was fired for performance issues.) "We can see that there are problems, but we have no idea if the problem is from people being people or from algorithms," he said. 

YouTube is the world's biggest online video source, with 2 billion monthly users, and 70% of the time people spend watching videos there is the result of one of its recommendation engines. But YouTube has come under fire for how its recommendation technology can inflame problems on the service, such as pedophilia, extremist content or misinformation. 

In a statement, YouTube said it has been working on improving recommendations since the beginning of the year and aims to elevate authoritative material.

"While we welcome more research on this front, we have not seen the videos, screenshots or data in question and can't properly review Mozilla's claims," a YouTube spokesman said. "Generally, we've designed our systems to help ensure that content from more authoritative sources is surfaced prominently in recommendations. We've also introduced over 30 changes to recommendations since the beginning of the year, resulting in a 50% drop in watchtime of borderline content and harmful misinformation coming from recommendations in the U.S."


Ashley Boyd, Mozilla's vice president of advocacy, said that YouTube's claim about the 50% drop in time spent watching borderline or harmful misinformation is exactly the kind of data YouTube should allow independent researchers to verify.

"That's the old era of 'Trust us, we've got this,'" she said. "Show us your work." 

Borderline content is how YouTube describes videos that aren't glaring misinformation but are still questionable. YouTube originally announced its 50% reduction statistic in a June blog post about its progress tackling hate on the platform.

"What we're trying to do is provide a window into the consumer concern about this," Boyd said. "There's a question about whether people care about this problem because they haven't left the platform. We don't think that's a good measure of whether people care."

Mozilla is publishing 28 stories it's terming #YouTubeRegrets; they include, for example, an anecdote from someone who said a search for German folk songs ended up returning neo-Nazi clips, and a testimonial from a mother who said her 10-year-old daughter searched for tap-dancing videos and ended up watching extreme contortionist clips that affected her body image. CNET hasn't independently verified any of the stories Mozilla published.

Originally published Oct. 14 at 9 p.m. PT. 
Update, Oct. 15: Adds YouTube statement and more details from Mozilla.
Correction, Oct. 16: Chaslot was previously a Google engineer.