UK Government Proposes Digital Harms Legislation to Regulate Online Content
Two of the most powerful ministers in the UK government, the Secretary of State for Digital, Culture, Media & Sport (DCMS) and the Secretary of State for the Home Department, have published a document, titled Online Harms White Paper, designed to control content on the internet. Technically, this is a consultation document open to public comment until July 1, 2019; but it clearly forms the basis of intended legislation: “Following the publication of the Government Response to the consultation, we will bring forward legislation when parliamentary time allows.”
The document (PDF) describes a new internet regulatory regime designed to prevent online abuse of children, vulnerable people, and even democracy (fake news). It takes its lead from the EU’s GDPR, and even proposes an independent regulator akin to the GDPR regulator, the Information Commissioner’s Office (ICO). If this becomes law, it will have ramifications for the entire global internet.
The basic principle is to make websites responsible for the legality of user-generated content, and in doing so to force the companies concerned to better ‘police’ what appears on their websites. In theory, this will prevent hate speech, bullying, terrorist recruitment, self-harm advice and anything else the government classifies as contributing to online harm.
The UK government’s intention has been well received. “There are many caveats here about what creating a new regulatory body will actually entail,” Nathan Wenzler, senior director of cybersecurity at Moss Adams told SecurityWeek, “and how far the government can or should go to require companies to deal with harmful content in a more consistent and thorough manner. But, if even a portion of what is recommended here is done, we could see a significant reduction in child exploitation, terrorist activity, cyberstalking and harassment, black market activities and much more.”
It is also seen as the spark that might ignite a more global approach to tackling online harms. “I see the UK’s actions to combat and regulate online harms to be just as impactful and trendsetting as GDPR was for privacy,” said David Ginsburg, VP of marketing at Cavirin. “It should really serve as a foundation and a framework for action within the EU and even in the US. The positive outcome will be that the internet properties, in having to comply with the proposed UK regulations, will by necessity have to raise their standards globally, much like their response to GDPR.”
Matt Walmsley, EMEA director at Vectra, notes that artificial intelligence is both a cause of and a potential cure for some of the problems. “We’ve already seen with the Cambridge Analytica scandal that the speed and scale at which data is now processed and acted upon can have significant societal impact. [Many] of these new capabilities are underpinned with artificial intelligence.” But he adds, “using AI to help spot low provenance ‘Fake News’ is a likely technical response we can expect from content platform providers.”
One of the big concerns about this proposed legislation is that it could easily deteriorate into a form of censorship. It would be for the regulator, ultimately responsible to parliament, to decide what should or should not be allowed. The UK has no legal right to freedom of expression similar to the First Amendment. It is a signatory to the European Convention on Human Rights, whose Article 10 gives everyone the right to freedom of expression; but that right is immediately qualified in the next paragraph: “The exercise of these freedoms, since it carries with it duties and responsibilities, may be subject to such formalities, conditions, restrictions or penalties as are prescribed by law and are necessary in a democratic society…”
Jim Killock and Amy Shepherd, both of the Open Rights Group, wrote, “If it’s drawn narrowly so that it only bites when there is clear evidence of real, tangible harm and a reason to intervene, nothing much will change. However, if it’s drawn widely, sweeping up too much content, it will start to act as a justification for widespread internet censorship.”
As ambitious as this proposal is, there are those who think it doesn’t go far enough, even from within Parliament itself. Lord Gilbert of Panteg, chairman of the Lords Communications Committee, commented in a statement, “I welcome the Government’s White Paper… [but]… The need for further regulation of the digital world goes beyond online harms, however. A comprehensive new approach to regulation is needed to address the diverse range of challenges that the internet presents, such as misuse of personal data and the concentration of digital markets.”
His committee published its own proposal, in March 2019, ‘Regulating in a digital world’. It comprises ten wide-ranging principles for a new approach to regulating the internet, covering areas such as accountability, transparency, and respect for human rights and equality.
The bottom line for any legislation is, however, its enforcement. Without effective enforcement, a law might as well not exist. The authors of this document clearly have GDPR in mind (it is mentioned several times) for both local and international enforcement — so we can expect any ensuing legislation to come with fines set at a similar level. However, the document notes that “under GDPR, the extent of compliance by companies based outside the EEA is still relatively untested.”
“Legislation is often a blunt tool, and could only be part of a solution, but it’s the most obvious one for a government to wield,” comments Vectra’s Walmsley. The danger is that a blunt tool is only able to strike large targets — such as Facebook, Twitter and LinkedIn — that have a UK presence. This is politically acceptable, and appears to address the problem; but is likely to miss the myriad small and private websites located around the world that spread hate, terrorism, racism, child pornography and more.
The UK has no practical jurisdiction over foreign websites, which may be located in countries with strong freedom-of-expression protections, or in rogue nations with zero incentive to cooperate. The only enforcement possible against such websites would be to block them at ISP level. This is already legally possible, but ineffective. Consider The Pirate Bay. Technically, it is blocked within the UK; yet at the time of writing, it is possible to access https://thepiratebay.org/ from the UK.
One approach taken by the site is to keep changing its domain, so that ISPs must continually update what they block. But even when it is blocked, it can still be accessed via a VPN, or via a constantly changing range of foreign proxy sites that are not blocked.
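The weakness described above can be sketched in a few lines. The blocklist and domain names below are purely illustrative, but they show the structural problem: ISP-level blocking matches against a fixed list of known domains, so a mirror or proxy on a fresh domain sails straight through.

```python
# Hypothetical sketch of ISP-level domain blocking (illustrative only).
# A real ISP filter is far more complex, but the matching logic is similar:
# block a listed domain and any of its subdomains, and nothing else.

BLOCKLIST = {"thepiratebay.org"}  # domains an ISP has been ordered to block

def is_blocked(hostname: str) -> bool:
    """Return True if hostname is a listed domain or a subdomain of one."""
    host = hostname.lower().rstrip(".")
    return any(host == d or host.endswith("." + d) for d in BLOCKLIST)

print(is_blocked("thepiratebay.org"))      # True  - the listed domain is blocked
print(is_blocked("www.thepiratebay.org"))  # True  - subdomains are blocked too
print(is_blocked("proxy.example.net"))     # False - an unlisted mirror slips through
```

Because the filter can only match domains it already knows about, every new proxy domain (or a VPN, which hides the destination from the ISP entirely) defeats it until the blocklist catches up.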
The UK government had planned to introduce a ‘porn pass’ scheme at the beginning of April (now thought to be delayed until the end of April). The idea is simple. ISPs are to block access to porn sites unless the user has legally acquired a porn pass.
VPN expert Ray Walsh from BestVPN.com blogged in March pointing out that tech-savvy kids will just use a VPN to bypass the blocks. “What’s more, because young people are often more tech-literate — it seems likely that they will be among the first to bypass UK porn blocks altogether. This means that the very teenagers the porn block is meant to protect from age-inappropriate content will probably be the first to work out how to access it.”
Exactly the same principle will apply to any site blocked at the ISP level. If someone wants to access an extreme right-wing hate site, he or she will still do so via a VPN. The only solution to this would be to ban VPNs, just as China does (and even there it doesn’t fully work). The irony in the Online Harms White Paper is that it may end up controlling only those large tech giants that are genuinely, although perhaps not yet very successfully, attempting to conform to government wishes. The smaller and nastier hate and terrorism sites may remain largely unaffected.