
Facebook, YouTube said to be automatically blocking extremist videos

Web giants are quietly using digital fingerprints to identify and remove Islamic State videos and other similar material, Reuters reports.

Steven Musil Night Editor / News

Facebook is among a group of companies reportedly using automated technology to identify and remove terrorist propaganda. (Image: CNET)

Facebook and YouTube are among a group of popular websites that have quietly begun using automation to remove extremist content, Reuters reported Saturday.

Originally developed to identify and remove copyright-protected material, the technology matches uploads against unique hashes, or digital fingerprints, of Islamic State videos and other similar material, two sources familiar with the process told the news agency. The approach can prevent reposts of content already deemed unacceptable, but it cannot identify new extremist content.
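The companies have not described how their systems work, so the following is only a minimal sketch of the general idea of hash-based blocking: keep fingerprints of files already removed and reject exact re-uploads. It assumes a plain SHA-256 digest as the fingerprint, which only catches byte-for-byte reposts; the systems Reuters describes reportedly rely on more robust digital fingerprints that can survive re-encoding. The class and function names here are purely illustrative.

```python
# Illustrative sketch only: exact-match blocking of previously removed files.
# Assumption: SHA-256 of the raw bytes serves as the "digital fingerprint."
# Real systems reportedly use fingerprints that tolerate re-encoding; this does not.

import hashlib


def fingerprint(data: bytes) -> str:
    """Return a hex digest used as the file's fingerprint."""
    return hashlib.sha256(data).hexdigest()


class UploadScreener:
    """Blocks re-uploads of content already deemed unacceptable.

    As the article notes, this kind of matching only stops reposts of
    known material; it cannot flag new extremist content.
    """

    def __init__(self) -> None:
        self._blocked_hashes: set[str] = set()

    def block(self, data: bytes) -> None:
        """Record the fingerprint of a file that moderators have removed."""
        self._blocked_hashes.add(fingerprint(data))

    def is_blocked(self, data: bytes) -> bool:
        """Check an incoming upload against the blocklist."""
        return fingerprint(data) in self._blocked_hashes


if __name__ == "__main__":
    screener = UploadScreener()
    removed_video = b"...bytes of a video already removed by moderators..."
    screener.block(removed_video)

    print(screener.is_blocked(removed_video))            # True: exact repost is caught
    print(screener.is_blocked(b"...a brand-new video..."))  # False: new content passes through
```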

Representatives for Facebook and Google did not immediately respond to requests for comment.

The automated process is a step toward eradicating violent propaganda from the web in the face of increasingly common terrorist attacks around the world. In December, President Barack Obama asked the web's social-media giants to help prevent terrorist attacks by monitoring hateful content and removing extremist speech and terrorist activity that appear on their networks.

Facebook, Twitter, Microsoft and YouTube in May agreed to a new European Union code of conduct that takes aim at illegal hate speech and terrorist propaganda posted online. Under the new rules, they have committed to reviewing the majority of notifications about social media posts that may contain hate speech within 24 hours of receipt, and to removing those posts if necessary.