
Hosni Mubarak – My Part In His Downfall – 25th January 2021

Last updated on January 26, 2021


Today is the 10th anniversary of the start of massive protests in Egypt that led to the removal of their autocratic leader, Hosni Mubarak.

These protests came hot on the heels of demonstrations in Tunisia that had forced their President Zine El Abidine Ben Ali from power on January 14th 2011.

The people who had political agency during those times were the many thousands of men and women who took to the streets to demand change.

These political activists were able to use online tools, including those offered by Facebook, the company I worked for at the time. But it is important not to credit the tools with causing or sustaining political change – that is something that people do.

If you are interested in how I see the relationship between social media and these events more broadly, you may wish to listen to a conversation I recorded with Nicklas Lundblad last week.

For this post, I want to explore one specific aspect of what happened – the area where I had a walk-on role in 2011 – which is how decisions are made within platforms about taking down or leaving up content promoting political protests.

This is not just a matter of historical interest but is relevant for decisions that confront platforms right now, in 2021, about content related to protests around the world, notably in Russia and the US.

As I write, I have no doubt that there are discussions taking place within groups of staff at all the major social media platforms about how to handle requests from Russian prosecutors to remove content calling for protests in support of Alexei Navalny.

I am confident of this because I have seen many similar requests in the past where Russian prosecutors declare Navalny, and other opposition activists, to be in breach of local laws on extremism and say that their events are unauthorized and must not be publicised.

In the US by contrast, it is very unlikely that the courts or government agencies would order platforms to remove calls to protest, as this would breach the famous First Amendment, but there are calls on companies not to give a platform to people promoting the kind of violent protests we recently saw in Washington DC.

[NB If I am citing three specific protest movements – in Egypt in 2011, and today in the US and Russia – this is NOT to claim any equivalence between these but because I want to explore how decisions made have to reflect the important differences between them.]

Formulaic Rules Rule

There are strong incentives for platforms to try to create what we might call ‘formulaic’ rules.

These are rules where you can classify types of content or behaviour and always take the same pre-defined action whenever you see them.

Some examples will help illustrate what I mean by this :-

If content includes <racial slur word from list> 
and is not [<self-referential> 
or <satirical>] 
then <remove content> 
and <warn user>.
If image includes <symbol of terrorist organisation from list> 
and is not [<condemnatory> 
or <a news story>] 
then <remove content> 
and <ban user>.
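To make the idea concrete, here is a minimal sketch in Python of how the first of these formulaic rules might be encoded. The word list, the flags and the returned actions are hypothetical illustrations for this post, not criteria or functions from any real moderation system.

# A minimal sketch (assumption) of a formulaic rule: a listed term, minus the
# stated exceptions, maps to a fixed set of actions.
SLUR_LIST = {"example_slur"}  # placeholder list, not a real one

def review_text(text: str, self_referential: bool, satirical: bool) -> list[str]:
    """Return the pre-defined actions for a piece of text, or an empty list."""
    contains_listed_term = any(word in SLUR_LIST for word in text.lower().split())
    if contains_listed_term and not (self_referential or satirical):
        return ["remove content", "warn user"]
    return []

# Whoever (or whatever) runs the review gets the same answer for the same inputs.
print(review_text("some example_slur here", self_referential=False, satirical=False))
# -> ['remove content', 'warn user']

The point is not the particular predicates but that the decision is fully determined by the inputs, which is what gives the consistency discussed below.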

This kind of rule-making helps with consistency as it makes it more likely that whoever is doing a review (whether a human or machine) will take the same action for the same content.

It can also help reduce the scope for individual reviewer biases to creep in if everyone is working to lists of prohibited terms and organisations rather than being asked to decide for themselves what is acceptable.

[NB There is of course still scope for systemic bias when people upstream of the individual reviewers prepare the detailed criteria that will be given to human teams and coded into machine algorithms.]

This formulaic approach is especially valuable for platforms when they are trying to enforce their rules effectively at scale, as is the case for many major internet services.

It can also be beneficial to users in helping them to feel they are being treated fairly, but this depends on there being openness and clarity about the criteria being applied to a particular decision.

Returning to our theme of political protests, we can look at candidates for a formula that could be applied to decisions about content promoting political demonstrations.

Option 1 – Politics Rules

A simple rule would be never to touch political protest content :-

If content relates to <political protest> 
then <reject complaint> 
and <do not remove or restrict content>.

This approach would be most consistent with the political culture of the US and would certainly provide platforms with a rationale for refusing to act on protests against authoritarian regimes.

But the fact that platforms have their own rules around who can use their services, and how they can use them, means that they are likely to want to impose some limitations for consistency.

For example, if a platform has decided an organisation is dangerous and banned them from having any kind of presence, then it would not make sense to allow people to organise protests in the name of, or in support of, that organisation.

Platforms usually do not allow people to use their services to incite violence, and we might expect these to be blanket prohibitions rather than saying ‘no incitement to violence unless in a political cause’.

Platforms may also find that people organising political protests break their general rules, for example by using a false identity or by spamming other users of the service to promote the event.

Option 2 – Politics Rules with Platform Rules Override

So the first refinement we need to make to any general permission for political protest content is to make it clear that the platform’s own rules may sometimes impose limitations :-

If content relates to <political protest> 
then <do not remove or restrict content> 
unless <group behind protest is prohibited> 
or it <incites violence>
or it <breaches other platform rules>.

At this stage, we have introduced three elements of platform judgement into the formula.

Platforms decide who should, or should not, be included in their list of prohibited organisations and these decisions can be highly contentious if they lead to the exclusion of political factions that have some popular support.

Facebook’s decision to ban some nationalist organisations who organise events in Poland, for example, has led to the Polish government proposing a law that is aimed squarely at forcing the platform to lift these bans.

Decisions about banning organisations may have been made ahead of considering any particular protest but sometimes they are made in real-time, ie the question about whether to ban a group is triggered by content related to a particular protest.

Decisions about whether a specific call to protest violates a platform’s rules on inciting violence are necessarily context-specific and will often have to be made in very short periods of time between a protest being promoted and its due date.

The specific nature of political protests means this judgement can be highly complex when compared with the explicit ‘hey, let’s go beat some people up’ calls to violence that are the more usual target of such a rule.

Protesters may be non-violent but highly aggressive in the language they use to express their anger about an issue, and violent groups and individuals may take part in a protest whether or not the organisers have encouraged this.

In some contexts, we might predict that there will be a violent reaction from other groups, either opposing civilian factions or state forces, to a protest – there will be violence, but the original protesters will be the victims of it rather than the instigators.

Where a society is divided along ethnic or religious lines, there may be a familiar pattern of protests triggering counter-protests that seem inevitably to lead to inter-communal violence – the original protesters may not be explicitly calling for violence but historic evidence suggests it will happen if their protest goes ahead.

So, we see that within this element of the formula – a determination about incitement to violence – there is scope for another nested formula considering factors like the proportion of protest supporters who use violent language and the local societal context.
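As a rough illustration, the nested formula might weigh exactly the factors just mentioned. This is a hedged sketch with invented inputs and an arbitrary threshold, not how any platform actually scores incitement.

# A sketch (assumption) of a nested incitement-to-violence check using the
# factors named above: explicit calls, the share of violent language, and local context.
def incites_violence(explicit_call: bool,
                     violent_comments: int,
                     total_comments: int,
                     history_of_communal_violence: bool) -> bool:
    if explicit_call:  # the 'hey, let's go beat some people up' case
        return True
    violent_share = violent_comments / max(total_comments, 1)
    # The same proportion of violent language reads differently in different societies.
    return violent_share > 0.25 and history_of_communal_violence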

Finally, platforms may need to exercise their judgement about whether breaches of other more technical rules are serious enough to justify removing political protest content.

For example, they may get complaints from people that a protest organiser is ‘spamming’ them and have to decide whether to remove that person, along with the protest content they have created, or treat unsolicited communications related to protest differently from other spam.

One way for platforms to avoid having to make judgements about sensitive political issues like which protests should be allowed would be for them to defer to public bodies like the courts on these questions.

So the next option to consider is how this might work as a formula.

Option 3 – Politics Rules with Court Order Override

Our third option reads quite simply :-

If content relates to <political protest> 
then <do not remove or restrict content> 
unless <ordered to do so by a court or regulator>.

In this model, decisions about which organisations should be banned, or whether a protest should not go ahead because of fears about violence, are made by the courts rather than by the platforms.

Laws governing political protests may look at other factors including threats to public order (a lower bar than incitement to violence), impact on traffic and businesses, and bureaucratic requirements like seeking a permit or providing notice.

The risks of following this approach are obvious in the context of authoritarian regimes where protest banning orders may be issued based on no higher principle than protecting the powerful.

A platform that has committed to following human rights principles, eg by signing up to the Global Network Initiative, will find that this is incompatible with complying with orders to limit political protest from some regimes, and so may want to apply a human rights override to the court order override.

Questions around legal compliance are not limited to authoritarian regimes; they can also matter in more open, human-rights-respecting countries when it comes to deciding at what point a protest can definitively be considered illegal.

In France, for example, you may be required to give notice and secure a permit for a protest to be considered legal.

If someone is calling people to protest without having secured such a permit, is the call itself illegal or does the illegality only begin when people arrive at the protest without the correct documentation?

In other cases, such as in Spain during the 15-M protests of 2011, lower courts may have declared a protest illegal while that decision was still being challenged by the protest organisers in higher courts.

Should a platform act on the first notice it receives or wait until all avenues of appeal have been exhausted before removing the protest content?

These questions may seem like legal pedantry, but the answers will have a profound impact on people’s right to express themselves and organise politically. 

So, even if it is following a “remove where illegal” approach, a platform has to make a judgement as to whether it will restrict content at the first whiff of illegality or protect the content for as long as there is any legal doubt still in play.

Considering these additional factors, we might revise the legal compliance formula to read:-

If content relates to <political protest> 
then <do not remove or restrict content> 
unless <ordered to do so by a court or regulator> 
and <the order comes from a human rights compliant body> 
and <the illegality is definitive>.

We have now introduced two different elements of platform judgement – on whether the issuing body for an order is “human rights compliant” and on whether the question of illegality has been settled.

Sometimes these judgements will have been made upstream as a platform regularly receives the same kinds of orders from a particular country and has already taken a view on whether the authorities there do comply with human rights standards and on the legal status of their orders.

In other cases, decisions will have to be made in real time as platforms try to understand the nature of a novel type of court order from a familiar country, or any type of order from an unfamiliar country.

Putting It All Together

In practice, platforms are likely to want to consider both their own standards and legal orders, which would mean implementing this formula :-

If content relates to <political protest> 
then <do not remove or restrict content> 
unless [<group behind protest is prohibited> 
or it <incites violence> 
or it <breaches other platform rules>]
or [<ordered to do so by a court or regulator> 
and <the order comes from a human rights compliant body> 
and <the illegality is definitive>].
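Expressed as code, the combined formula might look something like the sketch below. Every field is a stand-in for one of the platform judgements discussed in this post; the names and structure are my own illustration, under the assumption that each judgement has already been reduced to a yes/no answer.

from dataclasses import dataclass
from typing import Optional

# A sketch (assumption) of the combined formula; all fields are hypothetical stand-ins.
@dataclass
class ProtestContent:
    group_is_prohibited: bool
    incites_violence: bool
    breaches_other_platform_rules: bool

@dataclass
class LegalOrder:
    from_human_rights_compliant_body: bool
    illegality_is_definitive: bool

def should_remove(content: ProtestContent, order: Optional[LegalOrder] = None) -> bool:
    # Platform rules override
    if (content.group_is_prohibited
            or content.incites_violence
            or content.breaches_other_platform_rules):
        return True
    # Court order override, subject to the two further judgements
    if order is not None:
        return (order.from_human_rights_compliant_body
                and order.illegality_is_definitive)
    # Default: do not remove or restrict political protest content
    return False

Each boolean hides one of the judgement calls described above, which is precisely the point: the formula is only as good as the people and processes deciding how to set those flags.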

This formula depends on some big judgement calls that staff at the platforms have to make, and this is the key point that I wanted to draw out in this post.

Unless you support my first option – that there should never be interference with political protest content – then someone has to decide whether or not to remove particular calls to protest.

Matters are complicated by the fact that in many cases there will be very little time in which to make that decision and it may have to be based on imperfect information.

None of this is an ideal situation from a theoretical point of view, but it is a practical reality for many people who work for the platforms.

Many Hands Make Good Judgement

There are a number of factors that help people to make the best possible decisions in these circumstances.

The first thing you are looking for, and in my view the most important, is good knowledge about the local conditions in which a protest is happening.

This means advice from people who have local language expertise, follow the local media, understand politics, and are able to analyse situations without allowing their own views to intrude excessively.

If we go all the way back to those demonstrations during the Arab Spring, we were fortunate at Facebook to have Arabic-speaking staff who fitted this description and could explain what was happening on the ground.

Next up, you need good legal advice to help you understand which laws are relevant to the decisions you are making and how these work in practice so you can assess how to treat any court orders related to the content.

As a complement to the legal advice, you want information about the state of the rule of law in the relevant country, and how far it is regarded as operating within, or in breach of, global human rights standards.

Finally, you need experts in your own platform’s policies who can help you to understand any areas where the protest content may be in breach of the rules and the range of possible mitigation measures you might deploy.

Armed with all of this, you have a hope of making reasonable decisions and I would suggest that the people who work at major platforms, as a rule, have the skills and commitment to do this as well as anyone could.

There is of course scope to improve any decision-making process, and part of this is to be honest and rigorous in evaluating previous decisions.

Post Hoc Scrutiny

The decision-making process I have described involves many areas where platforms will be exercising their own judgement, and while I argue they are best placed to do that, it is also right to hold them accountable for the choices they made in specific cases.

The decision by Facebook to refer its suspension of Donald Trump’s account to its External Oversight Board will provide interesting insights into how that decision, which was linked to a violent protest event, was made.

Consideration of other decisions by platforms to leave protest content up in defiance of orders from affected governments to take it down may be even more sensitive, but worth doing for just that reason.

Through a process of scrutinising specific decisions we can develop a body of ‘case law’ for platform decisions on protest content over time that will help everyone to understand how the different factors come into play, in a much more comprehensive way than the sketch I have drawn in this post.

It is this analysis that will ultimately support or disprove my assertion that there are ‘good people making hard decisions as best they can’ inside the platforms, and provide a benchmark for comparing this status quo with other potential models.

Afterword – The Post Title

If you are British and of a certain age, you may recognise the title of this post as a tribute to Spike Milligan’s account of his wartime service, “Adolf Hitler: My Part in His Downfall”.

Spike had a way with words and his title nicely captures a profound truth – that millions of people can contribute to major struggles while also being relatively unimportant bit part players.

His account of the mostly mundane life of a British soldier during the Second World War reminded me of the stories my late father used to tell of his war service in various barracks and garrisons.

My father delighted in recounting as his moment of greatest heroism the time that he let some prisoners go shopping for souvenirs in Trieste in defiance of orders.

The prisoners were Cossacks and they wisely never came back from their shopping trip, as my father knew they wouldn’t, because they were due to be handed over to Stalin’s troops as part of the Yalta Conference agreement.

This is a digression to another time, but is a reminder that we often have to exercise our own judgement based on the information we have available, knowing that to avoid making any decision would itself be a choice.
