"Using our apps to harm children is abhorrent and unacceptable," Antigone Davis, who oversees Facebook's global safety efforts, said in a blog post Tuesday.
The move comes as the social network faces more pressure to combat this problem amid its plans to enable default encryption for messages on Facebook Messenger and Facebook-owned photo service Instagram. The end-to-end encryption would mean that except for the sender and recipient, messages couldn't be viewed by anyone, including Facebook and law enforcement officials. Child safety advocates have raised concerns that Facebook's encryption plans could make it harder to crack down on child predators.
The first tool Facebook is testing is a pop-up notice that appears if users search for a term that's associated with child sexual abuse. The notice will ask users if they want to continue, and it includes a link to offender diversion organizations. The notice also says that child sexual abuse is illegal and that viewing these images can lead to consequences including imprisonment.
Last year, Facebook said it analyzed the child sexual abuse content reported to the National Center for Missing and Exploited Children. The company found that more than 90% of the content was the same or similar to previously reported content. Copies of six videos made up more than half the child exploitative content reported in October and November 2020.
"The fact that only a few pieces of content were responsible for many reports suggests that a greater understanding of intent could help us prevent this revictimization," Davis wrote in the blog post. The company also conducted another analysis, which showed that users were sharing these images for other reasons outside of harming the child, including "outrage or in poor humor."
The second tool Facebook said it's testing is an alert that'll inform users if they try to share these harmful images. The safety alert tells users that if they share this type of content again, their account may get disabled. The company said it's using this tool to help identify "behavioral signals" of users who might be at a greater risk of sharing this harmful content. This'll help the company "educate them on why it is harmful and encourage them not to share it" publicly or privately, Davis said.
Facebook also updated its child safety policies and reporting tools. The social media giant said it'll pull down Facebook profiles, Pages, groups and Instagram accounts "that are dedicated to sharing otherwise innocent images of children with captions, hashtags or comments containing inappropriate signs of affection or commentary about the children depicted in the image." Facebook users who report content will also see an option to let the social network know that the photo or video "involves a child," allowing the company to prioritize it for review.
Online child sexual abuse images have increased, according to a January report by Business Insider. From July to September, Facebook detected at least 13 million of these harmful images on the main social network and Instagram.