Facebook is not doing enough against anti-Palestinian hate

Opinion: The Facebook Oversight Board's decision to investigate whether there is bias against Palestinians is welcome, yet it barely begins to tackle an endemic problem.
11 Oct, 2021
A Palestinian demonstrator holds a banner during a protest against Facebook's blocking of Palestinians' accounts, in front of the Office of the UN Special Coordinator for the Middle East Peace Process (UNSCO) in Gaza City, Gaza, on 29 September 2016. [Getty]

Facebook's Oversight Board has recommended an independent review into whether the platform's content moderation policy is biased against Palestinians in favour of Israel.

The decision comes after years of complaints from Palestinians and their supporters that Facebook is not only deliberately censoring them and violating their freedom of speech, but also failing to address anti-Arab and anti-Palestinian hate speech, particularly during times of violent unrest. The situation is replicated across all of Facebook's brands, including Instagram and WhatsApp. In at least two cases, Palestinian citizens of Israel were killed by Israeli lynch mobs who used Facebook-owned WhatsApp to organize the attacks. While the Oversight Board's decision to hold Facebook to account is welcome, it is not enough to tackle the endemic problems of violations of freedom of speech and government interference on the platform.

How did we get here?

The Oversight Board's decision was prompted by an incident on 10 May 2021, when an Egyptian Facebook user shared a news story from the verified Al Jazeera Arabic page with his 15,000 followers, referencing commentary from the spokesperson of the al-Qassam Brigades, the military wing of the Palestinian political party Hamas. The post was removed, then later restored after the user appealed directly to Facebook's independent Oversight Board.

"The Arab Center for the Advancement of Social Media (7amleh) documented in the period of May 6-19, 2021 there were 1,090,000 comments relating to Palestine on social media, and 183,000 instances of hate speech against Palestinians"

The lack of transparency surrounding Facebook's content moderation policy became clear during the Oversight Board's investigation, when Facebook was unable to explain how two human moderators had judged the Egyptian user's Al Jazeera post to violate community standards; the company does not require moderators to record the reasoning behind their decisions at any stage of the moderation process.

However, this particular instance of over-moderation against Palestinians is just the tip of the iceberg. In May this year, as tensions escalated, Palestinians faced mounting on-the-ground human rights violations, including ethnic cleansing in East Jerusalem, intercommunal violence in Palestinian-majority cities in Israel, the Israeli army's storming of Al-Aqsa Mosque, and an 11-day assault on Gaza. Palestinians took to social media to document the events in real time and share the unfiltered reality of their lives with the rest of the world.


Yet Palestinian users soon found that their content was being removed or restricted. The Arab Center for the Advancement of Social Media (7amleh) documented 1,090,000 comments relating to Palestine on social media and 183,000 instances of hate speech against Palestinians between 6 and 19 May 2021. During the same period, Instagram removed or restricted at least 250 pieces of content related to Palestine and the #SaveSheikhJarrah campaign, while Facebook removed at least 179. These numbers represent only the removals self-reported by users; the true figures are likely much higher.

At the Palestine Digital Activism Forum in March 2021, Facebook discussed its human rights policy during a session, stressing its commitment "to be a place for equality, safety, dignity, and free speech." However, as events unfolded in May this year, it soon became clear that Facebook's practice of restricting Palestinian voices and their documentation of human rights abuses was still firmly in place. Palestinians are not the only marginalised community to experience such restrictions: activists in Colombia and Kashmir have reported similar issues.

Official explanations

When faced with accusations of bias, Facebook often dismisses its removal of Palestinian posts as a mistake made by human moderators. For example, when Facebook-owned Instagram removed posts about Israeli police storming Al-Aqsa Mosque, a Facebook spokesperson claimed that human moderators had mistaken the words "Al-Aqsa Mosque" for "Al-Aqsa Martyrs' Brigades", an armed Palestinian group, illustrating the platform's lack of cultural understanding of the MENA region.

In other cases, over-moderation is blamed on vague technical issues, such as a fault in the algorithm. Indeed, Instagram promised to tweak its algorithm in response to accusations of widespread censorship from activists in Palestine and even its own employees. However, these explanations focus only on the technical causes of content removals and do not address the high frequency and diverse types of censorship documented by rights advocates.

It is also likely that the Israeli Ministry of Justice's Cyber Unit is behind many of these content removals. Since 2015, the controversial unit has submitted tens of thousands of removal requests to Facebook, according to the Office of the State Attorney's annual reports. In 2016, Facebook began holding regular meetings with senior Israeli government officials, leading the Justice Minister at the time, Ayelet Shaked, to claim that Facebook complied with 95% of the government's removal requests. Between 6 and 19 May, as unrest spread across the region, 7amleh contested content removals across Facebook and Instagram, and 20% of the contested content was restored.

Upon investigation, it was clear that much of the removed content did not violate community standards and in fact documented human rights violations against Palestinians on the ground. This raises a number of red flags, including the lack of transparency about why certain content - even documentation of rights abuses - is removed, and why Facebook allows governments to interfere with user content to such a far-reaching extent.

Moving forward

While the Oversight Board's recommendation of an independent investigation into bias in relation to Palestine and Israel is a step in the right direction, glaring questions remain, not only about how content is reviewed and moderated, but also about how Facebook plans to safeguard freedom of expression and guard against government interference. This case is not an isolated incident, and it underscores the need for a thorough and transparent public audit of Facebook's policies and actions regarding Palestine.

Earlier this year, 7amleh, along with 16 organizations and over 50 artists, journalists, and human rights defenders around the world, launched the #StopSilencingPalestine campaign to demand full transparency and an end to Facebook's censorship of Palestinian voices. The coalition supports the Oversight Board's recommendations and once again calls on Facebook to commit to the following:

  1. Public audit: An independent, public audit of content moderation policies with regard to Palestine, and a commitment to co-design policies and tools that address issues of over-moderation discovered during the audit. Furthermore, rules should be based on existing human rights frameworks and must be applied consistently across jurisdictions.
     
  2. Government request transparency: Complete transparency on requests - both legal and voluntary - submitted by the Israeli government and its Cyber Unit, including the number of requests, the type of content enforcement, and data regarding compliance with such requests. Users should also have the opportunity to appeal content decisions.
     
  3. Automation transparency: Transparency about where automation and machine learning algorithms are used to moderate content related to Palestine, including error rates and the classifiers used.
     
  4. Dangerous organizations: Transparency regarding any content guidelines or rules related to the classification and moderation of terrorism and extremism. Companies should, at minimum, publish any internal lists of groups classified as "terrorist" or "extremist." Users cannot adhere to rules that are not made explicit.
     
  5. Commitment to co-design: Commitment to a co-design process with civil society to improve upon policies and processes involving Palestinian content.


Nadim Nashif is a digital rights activist and communications specialist volunteering with 7amleh - The Arab Center for the Advancement of Social Media.

Opinions expressed in this article remain those of the author and do not necessarily represent those of The New Arab, its editorial board or staff.