Facebook and YouTube are among a group of popular web sites that have quietly begun using automation to remove extremist content from their sites, Reuters reported Saturday.
Originally developed to identify and remove copyright-protected material, the technology matches unique hashes, or digital fingerprints, against known Islamic State videos and similar material, two sources familiar with the process told the news agency. Such technology can block reposts of content already deemed unacceptable, but it cannot identify new extremist content.
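The matching approach described above can be illustrated with a minimal sketch. This is not the companies' actual system (which likely uses robust perceptual fingerprints that survive re-encoding); it uses a plain SHA-256 digest as a simplified stand-in to show why hash matching catches exact reposts of known material but not novel content:

```python
import hashlib

# Hypothetical blocklist of digests for files already reviewed and removed.
# Real deployments use perceptual/content fingerprints rather than exact
# cryptographic hashes, but the matching principle is the same.
BLOCKED_HASHES: set[str] = set()

def fingerprint(data: bytes) -> str:
    """Compute a digest that serves as the file's fingerprint."""
    return hashlib.sha256(data).hexdigest()

def is_known_removed(upload: bytes) -> bool:
    """True only if this exact file was previously flagged and removed."""
    return fingerprint(upload) in BLOCKED_HASHES

# A previously flagged video is added to the blocklist...
flagged = b"bytes of a previously removed video"
BLOCKED_HASHES.add(fingerprint(flagged))

print(is_known_removed(flagged))              # an exact repost is caught
print(is_known_removed(b"brand-new video"))   # new content goes undetected
```

The second check failing is exactly the limitation the sources described: the system prevents re-uploads of content already judged unacceptable, but flagging new extremist material still requires human review or other signals.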
Representatives for Facebook and Google did not immediately respond to requests for comment.
The automated process is a step toward eradicating violent propaganda from the web in the face of increasingly common terrorist attacks around the world. In December, President Barack Obama asked the web's social-media giants to help prevent terrorist attacks by monitoring for hateful content and removing extremist speech and signs of terrorist activity that appear on their networks.
Facebook, Twitter, Microsoft and YouTube in May agreed to a new European Union code of conduct that takes aim at illegal hate speech and terrorist propaganda posted online. Under the new rules, they have committed to reviewing within 24 hours of receipt the majority of notifications about a social media post that may contain hate speech. They've also agreed to remove the post if necessary.