Government interest in gaining access to encrypted communications has been discussed over many years in multiple countries and is back on the agenda in the context of the UK’s Online Safety Bill.
This Bill gives a regulator, OFCOM, broad powers to order internet services to take steps to combat a range of harms including, but not limited to, child sexual abuse material and content deemed to encourage terrorism.
There is no clause that says explicitly that the regulator can order a service that currently offers end-to-end encrypted communications to stop doing so.
But nor does it say that the regulator cannot make such an order if it sees this as necessary to meet the Bill’s objectives, and some of those objectives appear hard to achieve in relation to encrypted content.
We are instead presented with a case study in the use of intentional ambiguity.
We will soon debate an amendment that seeks to resolve this ambiguity by making it clear that the regulator cannot issue orders that would remove or weaken end-to-end encryption.
[NB Finding specific amendments can be a nightmare given the way they are collated and stuck in PDF files but they should all be somewhere on the ‘Amendment Paper’ tab of the Parliamentary website page for the Bill].
I plan to take part in the debate and argue for this amendment but, jumping straight to the punchline, I am confident the Government will reject it and is extremely unlikely ever to agree to make such a change to the Bill.
But I also think it extremely unlikely that the UK internet regulator will, at least for the foreseeable future, issue an order requiring services to decrypt end-to-end communications, even if it has powers that could allow it to do so.
What the UK Government is seeking is not to force the issue by actually issuing orders that would result in the withdrawal of services, but rather to have in its pocket the legal power to issue such an order if it chose to do so.
In this post, I hope to share some insights into the thinking behind this approach and show how it forms part of a pattern of engagement between governments and technology companies.
Assumptions of Goodness
In exploring this I will be assuming good intent on the part of both the internet services and the UK government and its agents.
Tech company leaders generally want their services to be used for positive purposes and will have feelings of sadness, disappointment, anger and shame when they see them being used to cause harm to others.
Employees of security services are usually motivated by wanting to protect people from harm and are mindful of the need to exercise the great powers they have been given in a responsible way solely in the public interest.
There will of course be dishonourable exceptions to each of these descriptions, but in my experience they are closer to the norm than the familiar caricatures of tech people as uncaring about harms and of security people as oblivious to human rights.
What this means in practice is that there is often far more agreement between UK authorities and tech companies about what needs to be done to identify and deal with people who carry out seriously harmful acts against others than is often apparent.
We can get some insight into the level of day-to-day cooperation from the Transparency Reports produced by many major providers, which show that, for example, Meta received nearly 2,000 requests for data a month from the UK in the first half of 2022 and provided information in response to over 80% of these.
As well as these formal requests, there are discussions that take place continually between a range of agencies in the UK and all of the major tech companies, often in a highly cooperative spirit where both sides agree on the need to do more to tackle a particular harm.
But in the context of these generally good relations, end-to-end encryption has become a point of intense friction and to understand this we need to consider the goals of each party and why this is one area where agreement has proved elusive.
Security Service Interests
There are three classes of benefit for the security services in being able to see the contents of communications between people.
Intelligence – knowing who the bad guys are, what they are planning, and with whom they are working is key to being able to prevent harmful acts from taking place, and to dismantling criminal organisations and networks.
Evidence – the content of communications is often essential to securing a criminal conviction and this will especially be true where the offence itself relates to the act of communication, eg a plan to carry out a terrorist act, or the circulation of grossly illegal images.
Deterrence – this is a more contentious “benefit” in a free society, but there is a line of argument that some people will refrain from illegal acts if they fear their communications could be accessed, and conversely that there will be extra communications-related crimes if people feel entirely safe from detection.
Security Service Toolkit
There is a wide range of technical and legal tools that UK security services can already use to see the content of encrypted communications.
The most obvious is to obtain an unlocked device where the communications of interest are available in plaintext (the messages were encrypted while in transit, but local apps have to decrypt content to display it to you).
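A minimal sketch of that distinction in code, using the PyNaCl library, may help. Real messengers use far richer protocols (such as the Signal double ratchet), and the keys and message here are invented for illustration, but the shape is the same: the network only ever carries ciphertext, while the recipient’s app must hold plaintext in order to display it.

```python
# Illustrative sketch of end-to-end encryption with PyNaCl; real
# messengers use richer protocols such as the double ratchet.
from nacl.public import PrivateKey, Box

sender_key = PrivateKey.generate()      # lives on the sender's device
recipient_key = PrivateKey.generate()   # lives on the recipient's device

# The sender encrypts for the recipient's public key.
sending_box = Box(sender_key, recipient_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# In transit, and to the service provider, only this is visible.
print(ciphertext.hex()[:32] + "...")

# The recipient's app must decrypt locally to display the message, so
# anyone holding the unlocked device can read it.
receiving_box = Box(recipient_key, sender_key.public_key)
print(receiving_box.decrypt(ciphertext).decode())  # meet at noon
```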
They can obtain an unlocked device from the target of their investigations or from someone associated with them who was looped into communications.
If they pick someone up who is unwilling to unlock their devices, the UK has long had legal powers to threaten them with a lengthy prison sentence if they refuse to cooperate (certain conditions being met).
For high value targets, they can seek a warrant to place spyware on a person’s device that effectively makes all of their communications using any service available to those carrying out the surveillance.
An example of this method is the Pegasus software, which has been written about extensively after disclosures that it had been used for a range of purposes that appear to violate human rights.
We should assume that the sophisticated UK security services have access to similar software tools and that they will feel comfortable deploying them where this is deemed to be necessary and proportionate in terms of the UK’s human rights framework.
They might also use a more brute-force variant of the spyware technique: directed surveillance, such as hidden video cameras, to capture someone’s access codes, followed by gaining physical access to their devices.
Where there is a network of targets using a particular service then it may be justifiable to seek to compromise the entire network, collecting all the data that passes over it, as happened with a service called EncroChat in 2020.
These methods are all useful when the targets are already known and can be taken into custody or made subject to technical or physical surveillance, but they leave some goals unmet.
Security Service Wishlist
Where a new person of interest appears and a threat seems imminent, the existing methods for obtaining their communications may feel inadequate, especially where there is no physical access to the target and their devices.
The Holy Grail in this scenario is for the security services to be able to see all their communications in real-time just as they might listen to their calls with a phone ‘tap’ or read their letters through mail interception.
The fact that internet service providers cannot offer this access for end-to-end encrypted services, even where they agree on the risk and are cooperative, is a major frustration for security services, one they express both privately and publicly.
They may accept that the nature of the technology is such that they cannot expect the same access as they have for other channels, but they will rarely admit this, not least because they want to keep the pressure on service providers to obtain as much other help as they can.
This other help might be in the form of providing communications data (who messaged whom, when and from where) or of deploying systems to detect patterns of concerning behaviour and provide identifiers for these users.
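For concreteness, here is a hedged sketch of what such a communications-data record might look like. The field names are hypothetical rather than any provider’s actual schema; the point is that none of the fields contain message content.

```python
# Hypothetical shape of a communications-data ("metadata") record;
# field names are invented for illustration, not a real provider schema.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class MessageEvent:
    sender_id: str       # account identifier, not message content
    recipient_id: str    # who messaged whom
    timestamp: datetime  # when
    client_ip: str       # from where (coarse location can be derived)

event = MessageEvent(
    sender_id="user-4821",
    recipient_id="user-9377",
    timestamp=datetime(2023, 5, 2, 14, 7, tzinfo=timezone.utc),
    client_ip="203.0.113.50",  # documentation-range address
)
print(event)
```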
Security services are in the market for as much information as they can get, and if the threat of a decryption order encourages a hesitant company to offer other useful data in order to avoid the order being carried out, then they will see that threat as a useful tool.
There is also a lot of discussion of ‘client-side scanning’, which generally means using tools on the user’s device that check audio-visual content against a database of known bad content to see if there are matches.
Where there is a match, various consequences could follow, ranging from simply blocking the content from being sent through to sending an automated report to a law enforcement agency.
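A minimal sketch of the matching step is below, assuming a database of known-bad hashes. I use plain SHA-256 for simplicity; real deployments use perceptual hashes (PhotoDNA being the best-known example) that survive resizing and re-encoding, and the database entry below is invented for illustration.

```python
# Sketch of hash-based client-side scanning. SHA-256 stands in for the
# perceptual hashing real systems use; the database entry is the hash
# of the test string below, not of any real content.
import hashlib

KNOWN_BAD_HASHES = {
    # sha256(b"test"), used here purely as a stand-in entry
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def scan_before_send(attachment: bytes) -> bool:
    """Return True if the attachment matches the database, in which case
    the client could block it or file an automated report before the
    content is ever encrypted and sent."""
    digest = hashlib.sha256(attachment).hexdigest()
    return digest in KNOWN_BAD_HASHES

print(scan_before_send(b"test"))           # True: matches the database
print(scan_before_send(b"holiday photo"))  # False: no match
```

The key design point is that the check happens before encryption, which is why it can technically coexist with end-to-end encryption while still undermining the privacy promise in practice.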
An order to implement this kind of scanning might be issued to the operating system provider rather than directly to the messaging service so it could be argued that this is not strictly requiring any decryption.
But it is hard to see how a messaging service could continue to describe itself as private and secure if it knew content was being scanned by a third party and did nothing to prevent this from happening.
This conflict between describing a device or service as ‘secure’ and client-side scanning came to the fore when Apple proposed, then abandoned, putting software on its phones that would scan content being uploaded to iCloud.
The Deterrence Conundrum
A potentially unhelpful side effect of the lobbying by the UK Government and its allies for access to encrypted communications is that this may undermine the very deterrent effect that they would want to see.
Whenever they say that end-to-end encryption means bad guys are safe from surveillance and detection, and name particular services, they are fuelling the idea that bad content can be sent with impunity over these platforms.
If you subscribe to the belief that bad behaviour reduces with fear of detection then there would be benefits to talking up rather than talking down the risks of using encrypted services.
It would be accurate and legitimate to point out to people that:
1) others can report their messages to platforms, which involves sending the content in unencrypted form for review (sketched below);
2) the people they communicate with have copies they could turn over at any time; and
3) there are a number of ways their devices could be accessed if they become a suspect.
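The first of these is worth illustrating. In an end-to-end encrypted app, the recipient’s device already holds the plaintext, so a report function can forward a decrypted copy to the platform without any change to the encryption itself. This sketch is hypothetical; the field names are invented and real reporting flows differ by service.

```python
# Hypothetical user-report flow from an E2E messaging app: the
# reporter's device already holds the plaintext, so it can submit a
# decrypted copy for review without weakening the encryption.
import json

def build_abuse_report(decrypted_message: str, sender_id: str,
                       reporter_id: str) -> str:
    """Package a locally decrypted message for the platform's
    moderation queue; no keys or other messages are exposed."""
    return json.dumps({
        "reported_user": sender_id,
        "reported_by": reporter_id,
        "plaintext_content": decrypted_message,  # decrypted on device
    })

print(build_abuse_report("offending message", "user-4821", "user-9377"))
```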
If the goal is reducing harm then the UK Government should consider whether shifting their rhetoric in the direction of ‘don’t kid yourself you have impunity when using an e2e service, there are lots of ways we can still get you (in a legal and human-rights compliant way)’ might be better than their current line.
Why Are Services Holding Out?
You may still be wondering why, if tech company people are as public-spirited as I suggest, they won’t just offer the kind of access that security services would find useful, especially as holding out is causing them pain.
Counter-arguments often follow a ‘slippery slope’ approach – “yes, we accept the UK authorities are likely not to abuse access, but if you concede here then other less savoury entities could demand the same access.”
I have an aversion to slippery slope arguments and think there is a stronger principled debate to be had about how we expect companies to behave in respect of consumers of their services.
The whole direction of data protection and consumer law over recent years has been towards requiring businesses to be explicit and detailed about the nature of the products they are offering with the threat of huge penalties if they do not deliver on their commitments.
There has also been a strong emphasis in multiple pieces of legislation on enhancing privacy, empowering users, and preventing access to data without people’s consent.
The law encourages companies to offer secure services and to be absolutely scrupulous in living up to commitments they have made about security.
A shift towards end-to-end encrypted services, not just for communications but for websites and other applications, is a natural response to this policy environment and one that a significant part of government actually desires.
The recent debate in US political circles about TikTok is an example: it steers that company precisely towards storing more data in such a way that it can be accessed only by the user, and not by the company, to ensure it cannot be surreptitiously provided to a third party.
It may be politically expedient to celebrate limits on companies’ access to data in the name of the fight against ‘surveillance capitalism’ while at the same time slamming companies for not increasing their access to data in the name of ‘online safety’, but we have to recognise the contradiction.
As policy makers, this is a situation where ‘cakeism’ is not the answer and we have to make a choice – either a) to allow companies to offer genuinely secure end-to-end encrypted communications to people in the UK, or b) to make it clear that the only messaging services on offer will allow some third parties to access your messages without your consent.
I believe that the UK Government is still closer to the first position, as it is nervous about losing public support if it went fully down the second path and services started to withdraw from the UK market, but with two provisos.
First, if another mainstream country did manage to force service providers to compromise their encryption and they did not withdraw from those markets, then the UK would be encouraged to follow suit.
Second, the UK Government may feel more comfortable issuing orders that would effectively prevent a currently unencrypted service from making the transition to become end-to-end encrypted if it feels this would not lead to withdrawal from the market.
We should also note that there is a strong ‘game of chicken’ dynamic in all of this that could lead to someone being run over if there are miscalculations on either the government or the industry side.
It’s going to be a lively debate next week and thereafter.