
All the President’s Tweets – 27th May 2020

There is a lot of discussion about Twitter’s treatment of content posted by President Trump this week.

Twitter decided not to take any action in relation to claims by the President implying that the death of a Congressional intern many years ago was more than a tragic accident.

They did decide to apply a ‘label’ to other tweets where the President made claims that voting by mail (“postal voting” in British English) leads to increased fraud.

Each of these decisions raises interesting questions for tech regulation. I will look at the first one here, with a post on the second to follow.

Defamation

In many countries, there would be a legal remedy to force the removal of content that is intended to harm someone’s reputation by suggesting they committed a serious crime.

The target of the attack would be able to complain to the courts that the content is defamatory and obtain an order for it to be removed.

Platforms would then need to decide whether to comply or accept significant legal risk by defying an order and becoming partly liable for the defamation themselves.

In many cases, platforms will restrict access to content (at least in the territories where the court has jurisdiction) on receipt of an order finding that something is defamatory.

There is an interesting court case involving an Austrian politician and a Facebook user that shows how this can work outside the US.

In this case, the Austrian court found that it was defamatory for the Facebook user to have called the politician a ‘corrupt oaf’ and ordered wide-ranging content removal.

I personally think this sets far too low a bar for treating claims as defamatory, but I would not have the same reservations if the politician had been falsely accused of murder.

In the current Twitter case, it is unlikely the principal target would be able to obtain such a court order because of the particularly high bar public figures in the US must clear to win defamation cases.

They might also be concerned that conducting such a defamation case in open court could do more damage to their reputation even if they eventually won, ie that this would play into the hands of the accuser.

This is sometimes called the ‘Streisand effect’ after a court case involving the singer Barbra Streisand.

There are principled and practical reasons for platforms generally to follow relevant defamation law standards in these cases.

Societies hold varying views about when a claim against someone is so damaging that it should be unlawful, and they codify these views in their defamation and other speech laws.

There is an argument that it is more principled for platforms to follow those local standards by acting on court orders (where there is good rule of law) than privileging a standard they have devised and seek to apply universally.

As a practical matter, deciding on whether something is true or false may require obtaining and assessing massive amounts of information from both sides, as happens in any significant court defamation case.

This is not something platforms are equipped to do, so there is a risk of their decisions being low quality because they rely on partial (in both senses) information.

Common Decency

In the case of the President’s tweets, the target was a public figure but the most heartfelt complaint about them has come from the widower of the deceased intern.

This raises the question of whether other factors should come into play that we might call ‘common decency’.

There have been other cases where concerns about the damage being caused by attacks of this kind are top of mind.

Claims that victims of the Sandy Hook school shooting in the US were ‘crisis actors’ have rightly received a lot of attention in this context.

The first place we might look for decency is in the speaker, and in many cases we do see people withdrawing and apologising for comments on social media that cause hurt to others. 

It is important not to underestimate the power of social persuasion on most people as it can be much faster and more effective than either platform or legal remedies.

Seeing the speaker admit their mistake is more powerful than the sense that the speaker holds to their original position but has been ‘censored’.

But where the speaker is unwilling to show common decency, and may even revel in the pain they are causing to others, then this route is rapidly exhausted.

The question we might then ask is whether platforms should apply some kind of ‘common decency’ standard where a speaker will not do this themselves.

While they are not necessarily expressed using this language, some of Facebook’s Community Standards do have this effect in practice.

The incitement to violence section prohibits content that appears supportive of mass murderers.

There is a harm justification for this as expressions of support for acts of mass violence could lead others to follow the same path.

But it also has a significant ‘common decency’ effect in removing content that would be very distressing for families of murder victims.

The rules against bullying and harassment are intended to prevent individuals from coming to harm and are less restrictive when it comes to public figures.

These rules may similarly have a common decency effect in protecting people who are being attacked in conspiracy theories.

For example, the bullying nature of content directed against the Sandy Hook survivors formed part of the case for Facebook’s removal of arch conspiracy theorist Alex Jones.

Lessons

Each platform has its own standards and ‘case law’ for applying them, so approaches taken by one platform will not necessarily become universal.

We should also recognise that the impact of a standard may feel different when applied to services where there is a tradition of more ‘robust’ debate.

But we have some pointers to where platforms might go if there is an appetite to do more to protect those who may suffer damage because of someone’s speech.

They might more actively direct people to defamation law as the ‘right’ way for them to seek remedies against content that they feel is false and harmful.

This would be quite a shift away from the current position where platforms which have grown up with US speech norms are sceptical of defamation laws and inclined to discourage rather than encourage their use.

If we look at copyright claims, we can see a trajectory from platforms being hostile and resistant to takedown requests to them accepting the inevitable and building systems to catch and remove violating content.

Defamatory content is more complex, as it often cannot be ‘digitally fingerprinted’ in the way that music and video can be, but these are degrees of difficulty, not absolute bars to technical solutions.
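As a loose illustration (a sketch of my own, not a description of any platform’s actual matching system), the toy Python below shows why exact fingerprinting catches byte-identical re-uploads of a media file but fails the moment a defamatory claim is reworded, and why fuzzier text matching gives only a partial, threshold-dependent signal. The example claims and the shingle size k are invented for illustration.

# A toy sketch, not any platform's real matching system: it contrasts
# exact fingerprinting (catches byte-identical re-uploads) with fuzzy
# text similarity (needed once a claim can be reworded).
import hashlib

def fingerprint(data: bytes) -> str:
    # Exact fingerprint: only byte-identical content will ever match.
    return hashlib.sha256(data).hexdigest()

def shingle_similarity(a: str, b: str, k: int = 3) -> float:
    # Jaccard overlap of k-word shingles: a crude fuzzy text match.
    def shingles(text: str) -> set:
        words = text.lower().split()
        return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# A byte-identical re-upload of a known media file matches exactly.
clip = b"...encoded audio bytes of a known infringing clip..."
print(fingerprint(clip) == fingerprint(clip))  # True

# A reworded defamatory claim defeats exact matching entirely...
original = "X was responsible for the intern's death"
reworded = "the intern's death was the responsibility of X"
print(fingerprint(original.encode()) == fingerprint(reworded.encode()))  # False

# ...and fuzzy matching yields only a partial, threshold-dependent score.
print(round(shingle_similarity(original, reworded), 2))  # 0.1

Real media-matching systems use perceptual hashes that survive re-encoding rather than the exact hash shown here, but the underlying point stands: text can be endlessly paraphrased, so any automated defamation matcher would need fuzzy similarity scoring plus human review rather than a simple lookup.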

There are huge speech implications if societies decide to move in this direction, but it is better to consider these dispassionately now than to assume that the world is not going to change.

The introduction of more platform standards that have a ‘common decency’ element is driven less by legal rights than by consumer preference.

As platforms become more widely used, they tend to adopt more rules, not fewer, in response to incidents that are widely seen to be wrong.

It is hard for any business to be on the wrong side of consumer sentiment for long periods of time.

There is always the risk that ‘hard cases make bad law’ and that a platform responds to what it believes is consumer sentiment only to find that its response causes more problems than it solves.

But we can step back from specific incidents and look for overall patterns and a direction of travel over time in sentiment towards content that causes deep pain and offence.

The US President’s tweets this week have put these questions of decency firmly on the agenda.

Once we have disentangled them from the politics, I would expect this to push consumer sentiment towards more decency (and consequent restrictions).

We saw this with the Sandy Hook conspiracies, which put Alex Jones on the wrong side of public sentiment, making it easier for platforms to restrict him.

But this is an untested hypothesis and I hope to see research that helps us to understand how consumer sentiment is actually moving.

These shifts in sentiment will drive both the legislative response in countries considering regulation and any updates to platform policies.
