Palestinians confront Israeli forces while demonstrating on the 20th anniversary of the Hebron massacre. (Source: Wikimedia)

Social Media Algorithms Censor, Exacerbate Conflict between Israel, Palestinians

In the most recent exchange of fire between Palestinians and Israelis, much of the fallout played out on social media, and it set off plenty of fireworks inside the social media companies themselves. This latest eruption in the Middle East was nothing like the Arab Spring of a few years ago. Social media has become the pipeline for information and instruction for groups as different as the Boogaloo Boys, the Oath Keepers and Palestinian civilians. Political movements of all kinds rely heavily on Facebook, Instagram and other platforms.

A piece from Seattletimes.com shines a light on what happened and what went wrong. Written by Gerrit De Vynck and Elisabeth Dwoskin, the article shows how much trouble is still brewing in the halls of these tech companies. And that's without even going into the major transparency bills the EU has passed over tech's major objections.

Just days after violent conflict erupted in Israel and the Palestinian territories, both Facebook and Twitter admitted to major missteps: The companies had wrongly blocked or restricted millions of mostly pro-Palestinian posts and accounts related to the crisis.

Activists around the world said the companies failed a critical test of whether their services would enable the world to watch an important global event. The companies blamed the errors on glitches in AI software.

In Twitter’s case, the company said its service mistakenly identified the rapid-fire tweeting during the confrontations as spam, resulting in hundreds of accounts being temporarily locked and the tweets not showing up in searches. Facebook-owned Instagram gave several explanations for its problems, including a software bug that temporarily blocked video-sharing and hate speech detection software that misidentified a key hashtag as associated with a terrorist group.
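To see why a spam heuristic can sweep up eyewitnesses, consider a toy rate-based filter. This is a hypothetical sketch, not Twitter's actual system; the window size and threshold are made-up parameters. An account live-tweeting a fast-moving event can look identical to a bot burst:

```python
# Toy sketch of a rate-based spam heuristic (hypothetical, not any
# platform's real system): accounts posting faster than a threshold get
# flagged, so someone live-tweeting a fast-moving event can trip the
# same rule as an automated bot.

from collections import deque

WINDOW_SECONDS = 60       # hypothetical sliding window
MAX_POSTS_PER_WINDOW = 5  # hypothetical burst threshold

def is_flagged_as_spam(timestamps: list[float]) -> bool:
    """Flag an account if any 60-second window holds too many posts."""
    window: deque[float] = deque()
    for t in sorted(timestamps):
        window.append(t)
        # Drop posts that fell out of the sliding window.
        while window and t - window[0] > WINDOW_SECONDS:
            window.popleft()
        if len(window) > MAX_POSTS_PER_WINDOW:
            return True
    return False

bot_burst = [0, 1, 2, 3, 4, 5]           # six posts in six seconds
live_tweeting = [0, 10, 20, 30, 40, 50]  # eyewitness updates, once every 10s

print(is_flagged_as_spam(bot_burst))      # True
print(is_flagged_as_spam(live_tweeting))  # True: same rule, wrong verdict
```

The heuristic sees only posting rate, not content or context, which is how legitimate accounts covering a breaking event get locked alongside actual spam.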

The accounts were reinstated, but the damage was already done, both inside the companies and out on the streets. The underlying problem is that AI algorithms are not capable of distinguishing informative political speech from inflammatory speech.
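A crude way to see that limitation is a keyword-based filter. This is a deliberately simplified sketch with an invented blocklist, not any company's actual moderation system, but it shows the core failure mode: the filter matches words, not intent, so reporting about violence and incitement to violence look the same.

```python
# Toy illustration of why keyword-based moderation fails: a naive filter
# (not any platform's real system) flags posts by keyword alone, so it
# cannot distinguish reporting on violence from incitement to violence.

BLOCKLIST = {"attack", "bomb", "riot"}  # hypothetical flagged terms

def naive_filter(post: str) -> bool:
    """Return True if the post would be blocked by the keyword filter."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & BLOCKLIST)

news_report = "Journalists report an attack on a residential block."
incitement = "Go attack them tonight!"

# Both posts trip the same rule; the filter sees keywords, not intent.
print(naive_filter(news_report))  # True: legitimate reporting blocked
print(naive_filter(incitement))   # True: actual incitement blocked
```

Real systems use statistical models rather than bare blocklists, but the same gap between surface features and meaning is what produced the misfires described above.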

The Palestinian situation erupted into a full-blown public relations and internal crisis for Facebook. CEO Mark Zuckerberg recently dispatched the company’s top policy executive, Nick Clegg, to meet with Israeli and Palestinian leadership, according to the company. Meanwhile, Palestinians launched a campaign to knock down Facebook’s ranking in app stores by leaving one-star reviews. The incident was designated “severity 1” — the company’s term for a sitewide emergency, according to internal documents reviewed by The Washington Post and first reported by NBC. The documents noted that Facebook executives reached out to Apple, Google and Microsoft to request that the reviews be deleted.

The story compares how other social movements, such as Black Lives Matter, have also butted heads with social media tech companies and law enforcement. Jillian York, a director at the Electronic Frontier Foundation, an advocacy group that opposes government surveillance, has researched tech company practices in the Middle East. She said she doesn’t believe that content moderation — human or algorithmic — can work at scale.

“Ultimately, what we’re seeing here is existing offline repression and inequality being replicated online, and Palestinians are left out of the policy conversation,” York said.

Facebook spokeswoman Dani Lever said the company’s “policies are designed to give everyone a voice while keeping them safe on our apps, and we apply these policies equally.” She added that Facebook has a dedicated team of Arabic and Hebrew speakers closely monitoring the situation on the ground, but declined to say whether any were Palestinian. In an Instagram post May 7, Facebook also gave an account of what it said led to the glitch.

Twitter spokeswoman Katie Rosborough said the enforcement actions were “more severe than intended under our policies” and that the company had reinstated the accounts where appropriate.

“Defending and respecting the voices of the people who use our service is one of our core values at Twitter,” she said.

It would be a mistake to assume these incidents, oceans away, don't affect you. The world is no longer oceans apart digitally, even if it remains so politically. The policies these companies set will eventually cross your path as well.

read more at seattletimes.com