If you’ve been following the social-media campaign recently unleashed by the Israeli army on a multitude of platforms — from Twitter and Facebook to Instagram and Tumblr — as part of its attack on Hamas guerrillas in the Gaza Strip, you know that we are seeing the birth of a whole new way of experiencing a war: in real time, with live reports from the combatants themselves. Some might argue that more information about such events is a good thing, but the campaign also highlights just how much of our perception of such a conflict comes to us through proprietary platforms like Twitter, Facebook and YouTube. What duties or responsibilities do they have (if any) to monitor or regulate that information?
One of the most obvious examples of this occurred very early in the attack, when the Israel Defense Forces’ official Twitter account posted a tweet warning Hamas leaders not to “show their faces above ground” because the army was about to launch missiles into their area of the Gaza Strip. This arguably qualifies as a direct and specific threat of violence, which is against Twitter’s terms of service — but so far the tweet remains, and the IDF account has not been sanctioned (there were some reports that it had been suspended, but those appeared to involve an unrelated account). In fact, the IDF account is marked as officially “verified” by Twitter.
“We recommend that no Hamas operatives, whether low level or senior leaders, show their faces above ground in the days ahead.” — IDF (@IDFSpokesperson), November 14, 2012

When does Twitter decide to block content?
So far, Twitter hasn’t responded to a request for comment on how it is handling the Israeli conflict and the fact that it is playing out live on the network — complete with photos of rocket attacks, burned-out buildings and even dead bodies (I’ll update this post if and when Twitter responds). The company has often spoken of its responsibility as the “free-speech wing of the free-speech party,” but for the most part that has involved promoting the rights of individual users in the Arab Spring and Occupy Wall Street protests, not the interests of governments and armies.
Arguably, Israel would be well within its rights to ask Twitter to remove or censor tweets by Hamas, which is defined by the Israeli government as a terrorist organization. If Twitter has selectively censored tweets by Nazi sympathizers after a request from the German government — using the new powers it introduced earlier this year — then how would it justify not giving Israel the same ability to block Hamas tweets, or filter them based on certain geographical limits?
And it’s not just Twitter, of course: the Israeli army has been uploading videos of rocket attacks to YouTube as the campaign has been unfolding, and some are fairly graphic — including one showing a missile strike that blew up a car carrying the head of the Hamas military wing. That video was removed by YouTube on Thursday morning, and at first it appeared the site had decided the clip breached its terms of service, but the company later said it had removed the video by mistake, and it was reinstated.
Threats of violence and shocking images are also something that Facebook has been known to remove, but for now at least the network says it won’t be removing content posted by the Israel Defense Forces — which includes an app that curates photos from Instagram, many of which the army said were taken on the ground during its attack on the Gaza Strip.

Our new information gatekeepers are inscrutable
But according to Mike Isaac of All Things Digital, the Facebook spokesperson he heard from didn’t say why the content from the Israeli Defense Forces was being left up, or under what circumstances it might be taken down — leaving the question open of what Facebook would see as offensive content in the context of a war. And that reinforces the same problem that has arisen before with Facebook and other similar social networks as a platform for speech: namely, they are effectively a series of black boxes when it comes to decision-making around what gets removed.
When YouTube removed the initial IDF video, it wasn’t clear whether that was an editorial decision or one made in error by an algorithm. When Facebook deletes accounts belonging to Arab women who are fighting for their rights, it isn’t surprising that some see this as censorship, even when it might just be an errant algorithm. And while Google and Twitter both publish lists of the takedown requests they receive from government officials, the reality is that they remove or filter out plenty of content and never mention it. And when Google selectively blocks a video like “Innocence of Muslims,” there is no court of appeal that will hear arguments about that decision.
So while it’s a great thing to have all these sources of information — assuming that you believe more information is better, even if it is coming from the communications branch of the army — it is almost all being hosted by proprietary services (although the IDF also has an active blog where it has been posting live updates and even infographics). And while they have all expressed their commitment to free speech in some form or another, they have absolutely no obligation to uphold that, or to tell users when information has been removed, or why.
We may have disrupted our old information gatekeepers — newspapers, television, even governments — but in many ways we have just exchanged them for shiny new ones. And they are just as inscrutable, if not more so.