
3 – island life

I flagged jurisdiction as one of the significant factors in regulating tech. Or to switch to simpler Anglo-Saxon language, it’s about whose rules apply where. This post will flesh that out by walking us through the experience of networked services as they develop internationally.

The arrival of the Internet was seen by many of those close to its creation as an opportunity to create a new space with new rules that did not depend on any existing legal framework.  

The initial technology of the Internet was largely built by more prosaic folks from the East coast of the US but was adopted by those with more revolutionary aspirations on the West coast.   

The essence of this revolutionary idealism is captured in John Perry Barlow’s “A Declaration of the Independence of Cyberspace”  published in 1996. The opening sentence is a call for governments to back off :-

Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.

John Perry Barlow

In this model, genuinely new territory appears from nothing and has no rulers.  (In other contexts, the concept of “Terra Nullius” is a contentious one as it was used by colonial powers to assert claims over land that was certainly not unoccupied. This attitude is best summed up by Eddie Izzard’s sketch “Do you have a flag?”).

But on odd occasions land can appear that really is nobody’s.  We saw this in 1963 when a volcano off the coast of Iceland pushed up enough material to create a new island, which was named Surtsey after Surtr, the fire giant of Norse mythology.

Surtsey eruption courtesy of National Oceanic and Atmospheric Administration

Surtsey’s legal status had to be settled and it was attached to the nearest national territory – Iceland. 

If we consider properties on the internet as new territory that emerges from the ocean in the spaces between countries then it can appear that nobody’s rules apply when a space first emerges.  But each space on the Internet is managed by a person or group of people. And those managers are located in a physical space and have a legal identity.  

Spaces on the internet are not in fact Terra Nullius but fall within the existing jurisdiction of whoever is controlling that piece of cyberspace.  For most of the large tech properties this is uncontroversial as they are happy to be established so they can benefit from the rule of law.  Some people will try to stay outside of any legal framework by using technologies to obscure their location but the technology does not actually place them out of jurisdiction, it merely makes enforcement harder.

Interestingly, Iceland has tried harder than most countries to retain the spirit of a free cyberspace, passing the Icelandic Modern Media Initiative (IMMI) resolution in 2010.  This initiative would still recognise a legal jurisdiction – that of Iceland – but seeks to make that a ‘good’ jurisdiction for people interested in transparency and free expression to choose.

Many of the big tech companies that are the focus of attention today are effectively islands off the West coast of the US and are clearly within US legal jurisdiction.  There are also very large islands sitting off the coast of China whose primary legal framework is Chinese. Current arguments about Huawei reflect precisely this understanding that the company has to respect Chinese law though there are different views about what that means in practice. This is equally true for software-based services. 

From the perspective of a tech company, the ideal legal situation would be to have to comply only with the laws of their ‘home’ jurisdiction.  This would produce the simplest compliance regime and mean you are dealing with a familiar legal framework for the leadership of the entity.

In practice, this is what most tech entities do – they will work on compliance in their home regime as their first concern and will maintain this position for as long as possible.  If they produce hardware then they are likely to have to worry about compliance with other regimes as soon as they start shipping products to other countries. If they run a software-based service then they may acquire users in other countries well before they have any expertise in other legal regimes.

In some ways, it is remarkable how far online services have been able to grow without having to consider other regimes.  This in part reflects a ‘Pax Americana’ where the legal norms in the US were deemed sufficiently close to those in other regions to not require urgent enforcement of local laws.  The standout exception to this is China which made it clear that services not prepared to subject themselves to Chinese law could not operate there. 

If we continue to think of online services as islands moored off the coast of California then when people in other countries use those services they are effectively spending time within the jurisdiction of US law (or Chinese or French or UK law etc depending on where the service has its HQ).

If the service is small or the activities people are carrying out on it are innocuous then this situation may be acceptable to third countries.  Governments can permit their people to visit these islands without any reason to intervene.  

So what changes that and stimulates government interest?  It is really the perception that there is harm in some form.  This harm could be physical, with terrorism and child abuse often triggering major concern, or economic, such as a perception that revenue is being unfairly extracted, or social, where there may be very different views on what constitutes legitimate political activity and what is illegal subversion of the state.

The perception of the extent of harm is the key determinant for how strongly motivated governments are to act. Once a government has become concerned, they can take several actions.

The most drastic is to prevent people from visiting an island altogether.  This expresses itself in firewalls and other measures to restrict access.

Three countries do this on a wholesale basis – China, North Korea and Iran.   Others restrict access at the margins, eg UK courts may order blocks on file-sharing sites.  A common feature of legislative proposals in many countries is that they include some kind of blocking power.  The question then is how ‘bad’ a site has to be to justify such an extreme measure.

If a government deems an island to be bad, but not so bad that it has to prevent access altogether, then it will approach the managers of the island with conditions. These could relate to any of the Data, Content or Commerce layers.

The EU General Data Protection Regulation (GDPR) is an interesting example of this conditionality.  It explicitly requires entities outside EU jurisdiction to treat the data of EU-based ‘visitors’ in accordance with EU law, not whatever local law would otherwise apply.

The so-called ‘right to be forgotten’ is an example of how conditions are applied at the Content layer.  This effectively orders the managers of the island to hide certain content from any visitors who arrive from the EU.  The Content still exists on the island and EU law does not necessarily prevent it from being shown to visitors from anywhere else according to a Sept 2019 judgement from the European Court of Justice.

Where laws require different rules to be applied to people from different jurisdictions, there are broadly two ways a space can do this :-

They can establish different landing points for visitors from different countries and vary their experience according to how they arrived.  This is what happens when a service directs people to different domains depending on their country of origin, for example national domains ending .de, .fr and .it for visitors from Germany, France and Italy. They may then take more or less robust steps to direct visitors from those countries to their respective landing pages rather than to a generic global one.

If they have a single global landing point, then another method is to ‘tag’ each visitor with a label according to where the service thinks is their home country.  This is done using ‘geo-location’ technologies, principally seeing which internet service provider (generally a national entity) has been allocated the IP address being used by the visitor.
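As a rough illustration of this tagging step, here is a minimal sketch in Python. The IP ranges and country assignments below are invented for the example – a real service would consult a commercial geo-IP database rather than a hand-built table.

```python
import ipaddress

# Toy lookup table mapping IP ranges to country codes.  These ranges are
# documentation/example ranges and the assignments are illustrative only;
# a real service would use a regularly updated geo-IP database.
GEO_TABLE = [
    (ipaddress.ip_network("203.0.113.0/24"), "DE"),
    (ipaddress.ip_network("198.51.100.0/24"), "FR"),
    (ipaddress.ip_network("192.0.2.0/24"), "TR"),
]

def tag_visitor(ip_string, default="US"):
    """Label a visitor with a country code based on their source IP address."""
    ip = ipaddress.ip_address(ip_string)
    for network, country in GEO_TABLE:
        if ip in network:
            return country
    # Unknown ranges fall back to a default jurisdiction.
    return default

print(tag_visitor("203.0.113.42"))  # -> DE
print(tag_visitor("8.8.8.8"))       # -> US (not in the table)
```

The service can then branch on the returned label – for example applying GDPR handling to visitors tagged with an EU country code.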

Of course, these geolocation methods are not foolproof.  There are many free and commercial services that exist precisely to allow people to appear to be coming from a location other than their actual physical location.

If you expect perfect enforcement, ie that nobody from your country will ever see an item of Content, then the only certain ways to do this are either to have the Content removed globally or to block access to the hosting service. In the case of blocking a service this again may be subject to the same kinds of workarounds, using Virtual Private Networks and similar technologies, that apply for individual content items. 

The assumption of many in the technology community is that location-masking technology is being used to enhance the rights of the user, eg by avoiding content restrictions their country imposes.  An angle that may become more significant over time is what happens when enhanced rights are bypassed. For example, if someone in the EU consistently uses a VPN that makes them appear to be in the US, should a service provider apply GDPR treatment to their data or apply US law?

Encryption is a significant factor affecting the balance between governments and service providers in applying the law.

Not so long ago, most websites worked over unencrypted HTTP connections.  Sensitive services like banking used the encrypted protocol HTTPS but it is only recently that general purpose services like search and social networking have started to use encrypted connections by default.

This means that third parties are not able to ‘snoop’ on traffic between a user and a website which has important privacy benefits.  But it also means that governments cannot themselves block traffic to specific locations within a larger website.

In the pre-HTTPS world the Turkish government, for example, could have ordered local ISPs to block the URL of a specific Wikipedia page.  This would have left Wikipedia generally accessible but restricted access to that specific page.

In the HTTPS world local ISPs know that people are visiting Wikipedia but have no way of knowing which specific pages they are accessing and no technical means to restrict access at the page level.
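The difference can be sketched in a few lines of Python. This is a simplification – in practice an observer infers the hostname from DNS lookups and the TLS SNI field rather than from the URL itself – but it captures why page-level blocking stops working once a site moves to HTTPS.

```python
from urllib.parse import urlsplit

def visible_to_isp(url, https):
    """Return the part of a request a network observer can see.

    Over plain HTTP the full URL (host and path) travels in the clear.
    Over HTTPS the observer typically sees only the hostname (via DNS
    and the TLS SNI field); the path and query string are encrypted.
    """
    parts = urlsplit(url)
    if https:
        return parts.hostname
    return parts.hostname + parts.path

# A government ordering page-level blocks needs the path to be visible:
print(visible_to_isp("http://en.wikipedia.org/wiki/Some_Article", https=False))
# en.wikipedia.org/wiki/Some_Article -- page-level blocking is possible
print(visible_to_isp("https://en.wikipedia.org/wiki/Some_Article", https=True))
# en.wikipedia.org -- only whole-site blocking is possible
```

Hence the all-or-nothing dynamic described above: with HTTPS, an ISP-level block can target en.wikipedia.org as a whole but not one article within it.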

In some ways, the old world was simpler as services could effectively tell governments to do their own dirty work of blocking access to content. Of course, the price of this was that governments could not only block access to content but see who was viewing that content.  The shift to HTTPS has been a huge improvement from a user privacy point of view and there is no reason to think people will want to go backwards.

In the new world, governments must ask the managers of the service to block the content and if they refuse then both sides go into a stand-off over full site blocking.

In terms of the island analogy, pre-HTTPS governments could themselves put blinkers on their citizens so they would not see certain parts of the island, post-HTTPS governments need to ask the island manager to block off certain parts of the island from view by their citizens.

[NB sharp-eyed readers will note that this blog uses old-fashioned HTTP. There is a bit of work for me to do to move to HTTPS with my hosting provider but I felt OK launching without encryption given this is public content with no expectation that people will share anything sensitive here].

Content restrictions are a common concern of governments, but there are equally interests in Data for both privacy and security reasons.

On the security side, the interest of governments is usually in knowing what their people were doing when on the island. This may be for intelligence purposes to prevent crime in their real homes or because there are suspicions that their activity on the island is itself a breach of local criminal law. 

The move towards encrypted connections has again made it less likely that a government can get the data it wants by capturing it from a local ISP.  They may seek to gain access to the devices being used by their targets using their own technology or buying services in from specialist companies who offer this (another part of the tech sector that we should consider for new regulation!). 

Or they may ask the service provider directly for the information they want. A whole host of jurisdictional questions come into play when such a request is made.

There are likely to be laws that govern data disclosure in the primary jurisdiction of the service provider. These may prohibit or place conditions on releasing data to third parties including foreign governments.

The service provider may have equipment or people in the requesting country who face formal or informal legal risk if the request is not met. The request may relate to persons who are in yet another jurisdiction that has its own rules over when and where personal data can be disclosed.

We are only just at the starting point of understanding how all these obligations should be married together and of developing a common understanding of when data should and should not be disclosed. And given the wide variation in legal standards between countries this may be something that can never be fully resolved.

And our friend Speed of technological development again comes into play, as the increasing use of privacy-enhancing technologies complicates the picture. We may end up agreeing a legal regime that permits cross-border service provider disclosure at a time when there is little or no intelligible information left for them to disclose.

When governments decide that something is sufficiently concerning – whether related to Data, Content, or Commerce – that they are prepared to act unilaterally then several factors will determine how service providers react :-

1 – the gap between the new demand and the service provider’s normal modus operandi. Requirements to tweak user agreements for legal compliance, or to take down content that is similar to content the service already restricts may not cause significant issues for service providers. At the other end of the spectrum, orders to restrict mainstream (opposition) political content or provide personal data on people who share such content will raise major human rights concerns.

2 – the physical footprint of the entity providing the service. This will vary greatly across the sector from telecoms companies that need major facilities and staffing in each country where they operate through to operators of software-based services that do not need any equipment or people outside the country where the service has been developed. A physical presence changes everything and regulation will itself be a major factor in the decisions entities make about where to establish themselves.

3 – the corporate status of the entity providing the service. A major company with a significant business at stake is generally going to be wary of overtly defying the law. We can argue about creative interpretations of the law and the extent to which public companies try to find work-arounds but straight out refusal to comply with court orders, at least in major countries, is unusual. Smaller operations and ones that are not trying to develop a commercial play may be more relaxed about simply staying out of markets where there are negative court judgements.

4 – the willingness of the legislating country to sanction the service. The ultimate sanction that countries have is to order their telecoms networks to block access to a service. This may be done secretly and unilaterally, eg in a national firewall like China has, or using a legal process where blocking is a penalty for failing to comply with a court order, eg Turkey’s blocking of Wikipedia after Wikipedia’s refusal to remove a piece of content.

As a general rule, once a service becomes sufficiently influential to be of concern to governments, it will have developed a footprint and corporate status that mean it has a bias towards compliance.

So we see that the major internet companies are willing to restrict access to items of content in response to legal process in many countries. Many service providers will also respond to requests for data even though the response may often be just to re-direct authorities to relevant international judicial processes.

It is common now for companies to publish regular transparency reports that shed light on the extent of these content restrictions and data requests, eg for Facebook, Twitter and Google.

Example 3 – Content Restrictions in Turkey

Turkish law grants the courts and the national telecoms regulator broad powers to order restrictions on internet content, notably under law 5651. There has been increasing criticism about the rule of law in Turkey over recent years, but it is a democracy and signatory to the European Convention on Human Rights.

The authorities in Turkey frequently send content restriction requests to global service providers. These may range from the personal concerns of an individual, eg someone has created a fake account using their photo, through to highly political concerns of the government, eg leaked content about the activity of security forces in Syria.

You can see that at least some of these requests result in content restrictions in the transparency reports of the large user-generated content platforms – Google, Facebook, and Twitter.

The Wikimedia Foundation’s transparency report shows that it had not agreed to any government requests to restrict content as of June 2019. We know that this includes rejecting requests from the Turkish government in 2017. Wikipedia was then blocked from 2017 until a court ordered the block to be lifted at the end of 2019, and access was restored from January 2020.

The Turkish authorities have blocked or throttled access to a range of internet services over the years. A revision to law 5651 in 2014 made this easier by creating a body of internet access providers who were legally required to enforce any such blocks.

Service providers therefore know that the threat of sanctions is very real in Turkey. Where their judgements may vary will reflect a complex equation factoring in the extent to which any particular request contradicts their own mission alongside considerations of their physical presence and corporate responsibilities in the country.

Compliance with the request to Wikipedia would have gone entirely against their core mission. They were being asked to edit content that was demonstrably accurate but uncomfortable for the Turkish government. They have volunteer contributors and editors in Turkey, as they do in every country, who they would have wanted to protect but they do not have a ‘corporate’ presence with associated commercial interests there.

Wikipedia’s users in Turkey certainly suffered from access to the service being more difficult, and perhaps riskier (use of a VPN to circumvent blocks might be monitored by governments), during the period of the block, but this was felt to be a lesser evil than compromising their core mission. And their stance in resisting the request has been vindicated by an eventual Constitutional Court judgement that may benefit all service providers by making it harder for the Turkish authorities to defend full service blocks in future.

Following my island analogy, the Turkish Constitutional Court effectively ruled that it was excessive for the government to prevent all Turkish people from going to Wikipedia island simply because the island’s managers refused to hide one part of the island from Turkish visitors.

I will dig into how commercial providers of user-generated content services manage content restrictions in later sections. There are different dynamics at play when a service sees itself as a platform for third parties to publish their content as opposed to one that publishes its own content (albeit that this may have been crowd-sourced, as in the case of Wikipedia). An area of concern will be how commercial providers weigh each factor, and in particular whether these are primarily commercial decisions. We should certainly not disregard commercial considerations – they are real – but these will generally be part of a broader equation.

Summary :- I use an analogy comparing online spaces with islands to structure a discussion of jurisdictional questions. I describe the key considerations for both entities running internet spaces and governments when thinking about jurisdiction.
