A Misinformation Regulator Part 2 – 9th July 2020

I shared some thoughts about how a UK misinformation regulator might work last week and received some useful feedback from people who think a lot about these things.

This has prompted me to take a step back and explain in this post why I proposed this kind of body by looking at it in the context of some of the alternatives.

As I do this, I will refer to our interests as both ‘citizens’ and as ‘consumers’ – a model I have borrowed from the UK communications regulator Ofcom as I believe it nicely describes the different but complementary interests we have in this space.

It shall be the principal duty of OFCOM, in carrying out their functions—

(a) to further the interests of citizens in relation to communications matters; and

(b) to further the interests of consumers in relevant markets, where appropriate by promoting competition.

Communications Act 2003, Section 3.1

To root this in reality, I want to consider the scenario we expect to encounter at some point in the next year where there is (hopefully) a Covid-19 vaccine available and some people arguing online against taking the vaccine.

On Vaccination

To be upfront around my own position, I am about as ‘pro-vaxx’ as you could get.

I regard vaccines that have been developed through sound modern scientific processes and certified by reputable agencies as safe, and if the advice is for us all to use them then I will wholeheartedly support that position.

There is a role for healthy scepticism so that we don’t get sloppy, and there have been past examples of certified products that later turned out to be dangerous.

But there is no rational case for thinking that the vaccination system is fundamentally broken or corrupt, or that the entire medical and political establishment is conspiring to harm rather than protect people.

Having set out my personal views, I also believe that others may take a different stance for all sorts of reasons, and that it is unhelpful to think of this in terms of ‘smart, right’ people on one side and ‘dumb, wrong’ people on the other.

I have written more about how there can be mixed motives for pushing conspiracy theories in a previous post on the 5G Covid-19 conspiracy.

We need always to keep our eyes firmly on the prize, which in this case is having people take up vaccinations to a sufficient level for there to be actual ‘herd immunity’.

The harm here is not in having an anti-vaxx opinion per se, but in the reduced take-up of vaccinations that we believe may occur because of those opinions.

We still do not fully understand the relationship between people seeing misinformation online and their subsequent actions, but the research base is starting to be built.

If the evidence shows that measures like adding fact check labels to anti-vaxx content will make people more likely to get vaccinated then there is a strong public interest case for using these tools.

But if the evidence suggests that visible labels actually make doubters less likely to get vaccinated, then I would happily forego the satisfaction of correcting people in order to achieve the greater goal.

Like the process of developing a vaccine itself, developing a ‘vaccination’ against anti-vaxx campaigns will require trials of a range of different treatments, and these will each need to be evaluated for effectiveness.
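
To make that concrete, the evaluation could be as simple as a randomised comparison between people shown a fact-check label and a control group, with the difference in reported vaccination intent tested for significance. Below is a minimal, purely illustrative sketch in Python; the trial design and all the numbers are hypothetical.

```python
# Purely illustrative sketch of evaluating one 'treatment' (fact-check
# labels) against a control group. All numbers are hypothetical; a real
# trial would need proper design, pre-registration and ethical review.
from math import sqrt

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """z statistic for the difference between two sample proportions."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    pool = (hits_a + hits_b) / (n_a + n_b)
    se = sqrt(pool * (1 - pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical result: 412 of 1,000 people shown labelled content report
# intent to vaccinate, versus 380 of 1,000 in the unlabelled control group.
z = two_proportion_z(412, 1000, 380, 1000)
print(f"z = {z:.2f}")  # |z| > 1.96 would suggest an effect at the 5% level
```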

Enough of my anti-anti-vaxx partisanship, now back to government…

UK Government Options

In the forthcoming Covid-19 vaccination battles, we can reasonably assume that the UK government’s target will be for take-up of the vaccine to be as high as possible.

We can predict that the government will be challenged by constituents and the media about anti-vaxx activity online.

There are broadly two approaches they could take in response that require no changes to legislation or intervention by a regulator.

Hands Off. The government does not take any position on speech that is not illegal. If there is legal speech that is of concern, then this is a matter between the platforms, their customers and civil society.
Lean On. The government takes a view that anti-vaxx activity online is harmful and strongly denounces platforms in speeches and the media for not ‘doing more’ to tackle it. The government will summon platform executives to lean on them for some kind of action.

And there are two approaches they could take if they were minded to update the law and/or empower some kind of regulatory body.

Takedown Order. The government believes some anti-vaxx speech is so harmful to public health that it has made this explicitly illegal. An authority will issue takedown orders for this content and platforms must comply with these.
Advisory Notice. The government has empowered a body to analyse misinformation which will look at anti-vaxx content. This body will refer illegal content to relevant authorities and issue public advice on how platforms should treat anti-vaxx content.

Status Quo Scenarios

Based on our experience of other situations where there has been significant public concern about online content, it seems very unlikely that the UK government would take the Hands Off approach.

As citizens, we expect our government to take a view on matters that could have a significant impact on public health, so there would be genuine popular pressure for them to engage in some way.

In the absence of regulation, the most common approach we have seen to date is the Lean On scenario, where Ministers announce that they are summoning tech companies to ‘read them the riot act’ about the contentious content of the day.

These summonses are part theatre, as they allow the government to appear to be doing something, and part substance, as targeted companies often feel compelled to turn up at the meeting ready to ‘give’ something.

While some information about what has been demanded and agreed may make it into the public domain, this is a largely opaque process.

This process is also open to the government linking content questions with other regulatory tools in their private conversations with platforms.

It is instinctive for a politician who is angry about something that they believe is genuinely harmful to their constituents to use all the weapons at their disposal.

And this is not one-sided, as platforms may also bring in other issues like jobs and investment when defending themselves in private conversations.

I am not claiming that this is a deep conspiracy, but rather an inevitable consequence of settling content questions in negotiations rather than in some kind of transparent regulatory framework.

Any changes in approach are framed as voluntary action on the part of companies, and there is generally no written document from government ordering specific actions that can be debated or challenged. 

We cannot test whether any resulting restrictions are compatible with the UK’s freedom of expression obligations under the European Convention on Human Rights, as company actions are out of scope and there has been no explicit government order.

As a citizen, I may be pleased that my government has engaged, but frustrated that I cannot hold them accountable if I disagree with the particular outcome.

As a consumer, I may likewise be pleased that my service provider has acted, but unsure about whether this is voluntary or under coercion, and unclear about how closely they are aligned with my government’s position.

Returning to our near-future scenario, we can predict with a high level of confidence that, with the legal/regulatory status quo, the UK government will lean on platforms to act against anti-vaxx speech when this becomes a high profile issue for a Covid-19 vaccine.

This will consist of aggressive public statements by Ministers and the summoning of platforms for private meetings.

In those meetings, Ministers will deploy a range of threats to try to bring platforms into line, but they are unlikely to publish any detailed specification of exactly what they want done.

The platforms will make what they think are reasonable concessions to the government based on their existing policy approach and their assessment of the negative consequences of being seen to be at odds with the government on a public health issue.

This model can work, and indeed has worked on many occasions in the past, but it often lacks transparency and it can make it difficult to hold both platforms and government to account.

We may want to return to this option, after considering and rejecting the regulated scenarios, and explore how it might be improved and made into the long-term method for addressing misinformation.

An interesting example here is the EU’s Code of Conduct on Countering Illegal Hate Speech Online, which is in essence a product of the European Commission leaning on platforms to do something.

The Code has its critics, including concerns about it being non-statutory, but it has the benefit of being more detailed and transparent than most exercises of this kind, and it includes mechanisms for platform accountability.

The Code also presents itself as being limited to ‘illegal’ hate speech so is not on the face of it dealing with legal-but-harmful content like misinformation [though this is debatable given uncertainties around what precisely constitutes illegal hate speech in different EU countries].

New Regulation Scenarios

If we are concerned about transparency and accountability, then the usual way to address this would be to capture the relationship between the government and platforms in some kind of regulation.

There are two models here that I have called the Takedown Order and the Advisory Notice models.

I am using Takedown Order as a shorthand for a variety of mechanisms where a platform could be made to remove or restrict access to content.

You can get a good sense of the volumes and types of restrictions platforms are making today because they feel legally compelled to do so by looking at their transparency reports.

Facebook provides an interesting narrative description of the orders it has received:

Jul – Dec 2019 Update

We restricted access in the United Kingdom to content on the basis of valid court orders; requests from the National Offender Management Service pursuant to the Offender Management Act 2007; requests from the Environment Agency pursuant to the Environment Protection Act 1990; requests from the Advertising Standards Authority; requests reported by the Labour Party and the Community Security Trust (a trusted flagger under the European Commission Code of Conduct on Countering Illegal Hate Speech Online) related to alleged locally illegal hate speech; requests from police services related to ongoing criminal matters; and requests related to the unauthorized sale and promotion of regulated goods and services.

We also restricted access to 21 items in response to private reports of defamation.

Facebook Transparency Report, UK Content Restrictions

Even without a change in the law, some anti-vaxx content might be restricted following a takedown order, eg where it uses someone’s copyrighted material or is defamatory of an individual.

But we can also anticipate that much anti-vaxx content would not breach any existing laws, eg where it is expressing an opinion or misrepresenting scientific data.

The obvious question then is why government, if it has concerns, does not plug this gap by creating a mechanism in law for takedown notices to be issued for anti-vaxx content.

The government would have to demonstrate that any new legal restrictions are necessary and proportionate to deal with the public health risk in line with their obligations under human rights law.

The great strength of this approach is that the lines of accountability are crystal clear – your government has ordered your speech to be restricted, and you can take this up with your government if you disagree with what is being restricted.

There would be some challenges in terms of legislative timetables.

It is time-consuming to make new primary legislation – this is a strength as it means that laws are properly considered, but also a limitation when it comes to rapidly changing situations.

A temptation would be for primary legislation to give Ministers generic powers to order action against misinformation, with much of the detail left to secondary legislation that can be pushed through more quickly.

As a citizen, I may be comfortable with my government ordering takedowns in some contexts, eg to prevent a public health crisis, but much less comfortable if this tool were used in other areas, eg misinformation used to attack the government.

This is a perennial problem in considering whether to support giving government new powers – is there a material risk they could be abused by some future government (even if you trust the present one)?

In the UK, we have the European Convention on Human Rights as a tool that should constrain any government tempted to overly restrict speech, but the political will to respect this mechanism seems weaker now than previously.

So, a Takedown Order model is best for accountability of the government and there are, at least today, important checks and balances to control how government could use such a power.

But any legally ordered takedowns would not be happening in a vacuum and we need also to consider the consumer dynamics that would be in play.

In terms of actual impact on content, government ordered takedowns might turn out to be a relatively unimportant sideshow compared with actions taken by platforms at their own initiative in response to consumer concerns.

This might still be the correct route for government to take where it legislates for any new restrictions it thinks are appropriate, and then adopts a Hands Off position in respect of platforms engaging with their consumers.

But it may feel unsatisfactory to us as citizens and consumers if there are two unconnected and very different processes being run by governments and platforms to address what is the same harm.

And this brings me to the Advisory Notice model that I described in some detail in my last post.

A key motivation for developing this model was wanting to bring together all the disparate regimes that could apply to particular types of alleged misinformation.

As a citizen, I would be able to understand from the notice what my government thinks needs to be done, and, as a consumer, I would be able to judge a service according to how it responded to that notice.

In many cases, this may show alignment between the views of both government and platforms about what needs to happen, and this is the ideal place to be from the citizen-consumer perspective.

I prefer mechanisms that tend to unite rather than divide both sets of interests as it gives me no satisfaction to see my government and the services I use at odds with each other over important public safety issues.

What I have not yet worked through sufficiently are some of the freedom of expression questions thrown up by the model I described.

In the Takedown Order model, there would be a formal process for testing any proposed restrictions against human rights standards when the legislation is drafted, and there are procedures to challenge government’s use of their powers.

In the Advisory Notice model, if there is a statutory basis to the body that does this work, then this would also need to comply with the UK government’s human rights obligations.

The mandate given to the body would have to be carefully drafted to make sure that the correct standards are being applied when it considers its response to claims.

The fundamental question is the extent to which such a body could issue advice about legal-but-harmful content and still be considered as acting within the UK’s human rights obligations.

If it can only advise on content that has already been made illegal then it would still have some value in directing traffic and clarifying the UK legal position on types of content to the public, but it would be a more timid beast than the one I described initially.

A Hybrid?

I have presented four models as though they were either-or choices, but a government could adopt a mixture of all of them.

It is unlikely any government would have the time or energy to make all harmful content illegal (even if this would pass the human rights test), but they might introduce some new legislation, especially in the health misinformation space.

The Online Harms Bill signals an intent to shift much of the responsibility for negotiations with platforms to a regulator.

But the extent to which government will be able to do this in practice depends on the question of how far the regulator’s scope is limited to illegal content.

These are interesting questions for legislators whenever this gets to Parliament, and I appreciate having a chance to work through them with experts before we get to that stage.
