
New Zealand mass shooting shows tech companies can't control viral tragedies

Facebook and YouTube are struggling to keep videos of a terror attack designed to go viral off their platforms.

Alfred Ng Senior Reporter / CNET News
Queenie Wong Former Senior Writer

A police officer secures the area in front of the Masjid al Noor mosque Friday after a shooting incident in Christchurch.

Tessa Burrows/Getty Images

For every video of the mass shooting in New Zealand that YouTube and Facebook block, another two or three seem to replace it.

On Friday, a gunman in Christchurch attacked Muslims praying at a mosque and livestreamed the shooting on Facebook. The social network removed the video and deleted the shooter's account. But that didn't stop the clip from spreading across the internet. The shooter referenced PewDiePie, a popular if controversial YouTube star, and Fortnite, the hit video game, helping the footage circulate wider and deeper on the web.

The roughly 17-minute video was downloaded from Facebook and then re-uploaded to YouTube multiple times, with new posts often appearing within minutes of one another. YouTube is encouraging users to flag any copies of the clip and said it has removed thousands of videos related to the shooting in the last 24 hours.

"Shocking, violent and graphic content has no place on our platforms, and we are employing our technology and human resources to quickly review and remove any and all such violative content on YouTube," a YouTube spokesperson said in a statement. "As with any major tragedy, we will work cooperatively with the authorities." 


Re-uploads of the clip have been plaguing YouTube's moderators, who are struggling to remove the videos.

Alfred Ng / CNET

The video-streaming giant uses automated systems, such as Content ID, that detect when copyrighted material like songs and movie clips is uploaded to its platform, so copyright owners can have it taken down.
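Content ID and systems like it work by fingerprinting known files and comparing new uploads against those fingerprints. The sketch below illustrates only the general idea, using a simple average hash over a single video frame; it is not YouTube's actual algorithm, and the file names and matching threshold are hypothetical.

```python
# A minimal sketch of perceptual "fingerprint" matching, the general idea
# behind systems like Content ID. This is NOT YouTube's actual algorithm;
# production systems fingerprint audio and video far more robustly.
# File names below are hypothetical. Requires Pillow (pip install Pillow).
from PIL import Image


def average_hash(path: str, size: int = 8) -> int:
    """Shrink a frame to size x size grayscale and compare each pixel
    to the mean brightness, yielding a compact 64-bit fingerprint."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for i, p in enumerate(pixels):
        if p > mean:
            bits |= 1 << i
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count the differing bits between two fingerprints."""
    return bin(a ^ b).count("1")


# Compare a frame from a new upload against a frame from a flagged video.
# A small distance means the frames look similar even if the video was
# re-encoded or slightly altered, which an exact byte hash would miss.
known_bad = average_hash("flagged_frame.png")  # hypothetical file
candidate = average_hash("upload_frame.png")   # hypothetical file
if hamming_distance(known_bad, candidate) <= 10:  # illustrative threshold
    print("Frame matches a flagged video; route for removal or review.")
```

Because the comparison is a bit-distance rather than an exact match, lightly altered copies can still land within the threshold, which is what lets such systems catch many re-uploads automatically.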

Google, which owns YouTube, didn't specify what tools it was using to help control the spread of the New Zealand video, saying only that it was using smart-detection technology to remove the clips.

The search for the violent videos underscores the difficulty social media companies have in detecting and removing hateful videos and comments. In what's become a sad practice, videos of tragedies bounce around the web as tech giants try to purge them. Critics have pointed out that the New Zealand shooter was able to livestream his rampage for more than a quarter of an hour before Facebook shut it down.

"This is flatly unacceptable," Farhana Khera, the director of Muslim Advocates, said in a statement. "Tech companies must take all steps possible to prevent something like this from happening again."

Authorities in New Zealand reported that 49 people were killed and at least 20 wounded at two mosques. Three people have been arrested in connection with the attacks, and one suspect has been charged with murder.

With more than 2 billion monthly active users on Facebook and nearly 2 billion monthly logged-in users on YouTube, these social media platforms have an enormous reach. 

Facebook said it's continuing to search for any instances of the video on the social network, using community reports, human moderators and automated tools. The social network has set up a system designed to automatically detect and remove clips that are visually similar to the original video. It's also scanning for similar audio.

"New Zealand Police alerted us to a video on Facebook shortly after the livestream commenced and we quickly removed both the shooter's Facebook and Instagram accounts and the video," Mia Garlick, a Facebook New Zealand spokeswoman, said in a statement. "We're also removing any praise or support for the crime and the shooter or shooters as soon as we're aware."

In addition, Facebook intends to take down any content that claims the shooting didn't occur or that survivors are crisis actors.

Facebook's efforts haven't stopped clips or links to the Facebook Live video from making their way to other social media sites, including Twitter, where they attracted thousands of views. Twitter, which prohibits users from glorifying violence on the site, uses a mix of technology and human reviewers to find the videos but also encourages users to report the content.  

Reddit also banned communities, including the r/watchpeopledie subreddit, after users shared a link to the shooter's live video last night.

"We are very clear in our site terms of service that posting content that incites or glorifies violence will get users and communities banned from Reddit," a Reddit spokesperson said. "Subreddits that fail to adhere to those site-wide rules will be banned."

People also reported seeing the video shared in groups on WhatsApp, the Facebook-owned messaging app.

Tech giants, including Facebook and Google, have automated systems that have worked in the past to remove extremist videos.

In 2016, The Guardian reported that Facebook and Google used algorithms similar to Content ID to automatically remove videos linked to ISIS. The technology checks uploads against videos that have already been flagged as violations and blocks matches without requiring a human to review them.
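The simplest version of that match-and-block step is an exact file-hash lookup at upload time, though a single re-encode changes the hash, which is why production systems rely on perceptual fingerprints like the sketch above. A minimal illustration, with hypothetical file names and hash values:

```python
# A minimal sketch of an upload-time blocklist check: block videos already
# flagged as violations without human review. Exact hashes only catch
# byte-identical copies; any re-encode produces a new hash, so real systems
# pair this with perceptual fingerprinting. Names here are hypothetical.
import hashlib


def file_sha256(path: str) -> str:
    """Hash the file in chunks so large videos don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


# Hashes of videos previously flagged as violations (hypothetical values).
BLOCKLIST = {
    "9f2c0d5e...",  # truncated placeholder, not a real hash
}


def screen_upload(path: str) -> bool:
    """Return True if the upload matches a flagged video and should be blocked."""
    return file_sha256(path) in BLOCKLIST


if screen_upload("incoming_upload.mp4"):  # hypothetical file
    print("Exact match against a flagged video; block before publishing.")
```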

Facebook uses similar tools for blocking revenge porn on its website, the company revealed in 2017.

The gunman in New Zealand promoted his livestream and a manifesto on his Facebook account and on 8chan, a fringe message board, looking to use the internet to make his mass murder go viral.

In his manifesto and livestream, the gunman referenced pop culture topics like PewDiePie, Fortnite and the video game Spyro the Dragon in an attempt to draw more attention to his mass shooting. At one point in the livestream, the shooter says, "Remember, lads, subscribe to PewDiePie."

The reference prompted the YouTuber, whose real name is Felix Kjellberg, to tweet that he was "sickened" by the shooting.

As clips of the shooting continue to resurface, experts worry the video will inspire the next mass shooter.

"This is one of the dark sides of social media, and something that's almost impossible for the companies to do anything about. They're not going to be able to block this material in real time," said Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights. "It's a real conundrum about the dangers that social media can facilitate."

Tom Watson, the deputy leader of the UK's Labour Party, also called out tech platforms for struggling to stop the video's spread. In a statement, Watson said he'd be writing to social media companies to ask why they failed to remove the clips.

In a tweet, Watson said YouTube should have suspended all new uploads until it could prevent the New Zealand mass shooting video from spreading.

"The failure to deal with this swiftly and decisively represents an utter abdication of responsibility by social media companies," Watson said. "This has happened too many times. Failing to take these videos down immediately and prevent others being uploaded is a failure of decency." 

Originally published March 15, 8:24 a.m. PT
Updates, 9:26 a.m.: Adds comment from Muslim Advocates, background; 1:05 p.m.: Includes comment from Reddit and information about Twitter and WhatsApp; 1:28 p.m.: Adds more background, PewDiePie's response. Correction, March 15 at 4:13 p.m.: Corrects Tom Watson's affiliation; March 16: Adds details about efforts to remove the video and related content.