Facebook Parent Meta Impacted Palestinians' Human Rights, Report Says

Queenie Wong
Meta CEO Mark Zuckerberg (James Martin/CNET)

What's happening

Meta released a report that shows how the social media giant impacted human rights in the Israeli-Palestinian conflict in May 2021.

Why it matters

Content moderation in languages other than English has been an ongoing challenge for social media companies. Meta is making changes in response to the findings.

Facebook's parent company, Meta, made content moderation mistakes that impacted the human rights of Palestinians during an outbreak of violence in the Gaza Strip in May 2021, a report released Thursday shows.

Meta commissioned consulting firm Business for Social Responsibility to review how the company's policies and actions affected Palestinians and Israelis. The review followed a recommendation from Meta's oversight board, which examines some of the social media company's toughest content moderation decisions.

The report found that Meta's actions removed or reduced the ability of Palestinians to enjoy their human rights "to freedom of expression, freedom of assembly, political participation, and non-discrimination." It also underscores the ongoing challenge the company faces in moderating content in languages other than English. Meta owns the world's largest social network, Facebook, as well as the photo-and-video service Instagram and the messaging app WhatsApp.

BSR said in the report that it spoke to affected stakeholders and that many shared "their view that Meta appears to be another powerful entity repressing their voice." 

The findings outline several content moderation errors Meta made amid the Israeli-Palestinian conflict last year. Social media content in Arabic "had greater over-enforcement," resulting in the company mistakenly removing posts from Palestinians. BSR also found that the "proactive detection rates of potentially violating Arabic content were significantly higher than proactive detection rates of potentially violating Hebrew content."

Hebrew content experienced "greater under-enforcement" because Meta didn't have what's known as a "classifier" for "hostile speech" in that language. A classifier helps the company's artificial intelligence systems automatically identify posts that likely violate its rules. Meta had also lost Hebrew-speaking employees and had outsourced content moderation.
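To give a rough sense of what a classifier does, here's a minimal sketch in Python of a text classifier that scores posts for policy violations. The training examples, labels and threshold are invented for illustration; Meta's actual systems rely on large multilingual models trained on vastly more data.

# Minimal illustrative sketch, not Meta's system: a toy "hostile speech"
# classifier that learns from labeled examples and scores new posts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy training data: 1 = likely policy violation, 0 = benign.
posts = [
    "I hope you have a great day",
    "let's meet at the park later",
    "people like you deserve to be hurt",
    "we should attack them on sight",
    "thanks for sharing this recipe",
    "they are vermin and must be driven out",
]
labels = [0, 0, 1, 1, 0, 1]

# Turn text into word-frequency features, then fit a simple classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Score a new post: the probability it violates the (toy) policy.
# Posts above a chosen threshold would be flagged for review or removal.
prob = model.predict_proba(["you people should be attacked"])[0][1]
print(f"violation probability: {prob:.2f}")

Without such a model for a given language, violating posts in that language are caught mainly through user reports and human review, which is the gap the report describes for Hebrew.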

Meta also wrongly pulled down content that didn't violate its rules. These errors had a more severe human rights impact "given a context where rights such as freedom of expression, freedom of association, and safety were of heightened significance, especially for activists and journalists," the report stated.

The report pointed out other major content moderation mistakes on Meta's platforms. For example, Instagram briefly banned #AlAqsa, a hashtag used to reference the Al-Aqsa Mosque in Jerusalem's Old City. Users posted hate speech and incitement to violence against Palestinians, Arab Israelis, Jewish Israelis and Jewish communities outside the region. Palestinian journalists also reported that their WhatsApp accounts were blocked.

BSR didn't find intentional bias at the company or among Meta employees, though it did find "various instances of unintentional bias where Meta policy and practice, combined with broader external dynamics, does lead to different human rights impacts on Palestinian and Arabic speaking users."

Meta said it's making changes to address the problems outlined in the report, which The Intercept obtained before its publication. The company said, for example, that it will continue to develop and deploy machine learning classifiers in Hebrew.

"We believe this will significantly improve our capacity to handle situations like this, where we see major spikes in violating content," said Meta's director of human rights, Miranda Sissons, in a blog post.