The UK government has now released its long-awaited Online Harms White Paper, detailing some potential changes to the law regulating intermediaries in an effort to curb damaging material found on the Internet. To say that the white paper has been controversial would be an understatement. While there are specific problems with the document, I will concentrate on what I see as some of its wider failings from an Internet regulatory perspective.
The overarching tone of the report buys into the narrative that the Internet is an overwhelmingly bad place, inhabited by terrorists, paedophiles, Satanists, and trolls. The public, and particularly children, are just one click away from a torrent of filth, and it is up to the government to keep them safe. As the miscreants responsible for these harms hide behind a veil of anonymity, platforms are to blame for the existence of all types of objectionable content, and they must be made directly liable for what happens online.
The first problem evident in the White Paper is in the very definition of online harms. The report takes a scatter-shot approach and lists everything that could be considered harmful, from serious content to vaguer and milder postings. So we go from harms such as child abuse, terrorism, and slavery, to things like excessive screen time, disinformation, trolling, and sexting. While the paper is clear that there are degrees of harm in each of the listed categories, the proposed solutions seem indiscriminate, with practically no granularity in how they are to be applied.
And what are these solutions? The main one is as follows:
“The government will establish a new statutory duty of care to make companies take more responsibility for the safety of their users and tackle harm caused by content or activity on their services. Compliance with this duty of care will be overseen and enforced by an independent regulator.”
This creates a duty of care on platforms so that they “prevent other persons coming to harm as a direct consequence of activity on their services.” The regulator will draw up a series of codes of practice detailing actions to be taken by the platforms. The paper includes a series of other measures, such as enhanced transparency and accountability, and the provision of improved complaint and flagging mechanisms.
Another feature of the proposed regulatory framework is that it covers most technologies and platforms. There is a clear understanding that if you regulate one area, such as social media, harms will spread to other platforms, so the White Paper extends its reach into services such as hosting. Interestingly, the White Paper will not cover private communications, as this could infringe the privacy of the parties involved. This could simply encourage most harm to move to private platforms such as WhatsApp.
Another troubling section is one detailing technological solutions. “Companies should invest in the development of safety technologies to reduce the burden on users to stay safe online.” In other words, Nerd Harder.
It would be easy to say that something must be done, and that these proposals are a good starting point. Who could disagree with keeping children safe online, right? The problem is in the detail. The inclusion of practices such as cyberstalking, cyberbullying, and trolling makes things extremely vague, particularly when these practices are not defined properly, or even defined at all. There is a danger of conflating all online harms, and while the paper clearly makes some actions such as terrorism and child abuse a priority, there is no detail on how exactly platforms are expected to respond differently to something like disinformation or harassment. This is a fundamental problem with the White Paper: the fact that we will have a strong regulator in charge of overseeing diametrically opposed subjects like revenge porn and the Flat Earth makes me fear eventual overreach, even if we are promised the use of a proportionality principle and risk assessments. At the very least, the paper should quite simply do away with anything dealing with minor annoyances, and stop calling them harms. Drop cyberbullying, trolling, cyberstalking, harassment, and similar ill-defined harms, and concentrate on the big and serious issues such as terrorism and slavery.
There are also a few underlying assumptions in the paper that make me think it doesn’t understand the Internet. The main problem with the paper in its present form is that it seems to think of the Internet as a centralised system where the platforms reign supreme, and that removing content from the platforms will severely curb its ill effects. This works only in the increasingly centralised system that we have, where we have been allowing the big tech giants to accumulate more and more power. But the Internet is built on resilience, and the continuing existence of services such as the Pirate Bay, after years of blocking and adverse legal decisions, attests to the fact that if there is one thing the Internet is good at, it is content dissemination. If you squeeze the platforms, the harms will migrate elsewhere.
Don’t get me wrong, the Internet can indeed be a horrible place, inhabited by opinionated idiots outraged by the latest film featuring a woman. Tragedies such as the Christchurch video serve as a reminder that people are being radicalised online, and that this has an effect in the real world. But we should also realise that nothing in the White Paper would have prevented Christchurch; the radicalisation does not take place on the platforms subject to regulation. Leaving out the Dark Web, private messaging, and decentralised services just means that we will have gated communities surrounded by dystopian anarchies.
But perhaps the worst part of the White Paper is that it completely shifts all the blame onto platforms, while the truth is staring us in the face. We are the ones who propagate these harms. We are the ones sharing the Christchurch video, spreading disinformation, posting abusive memes, contributing abuse, and encouraging a toxic online environment. Any visit to the comment section of a random YouTube video will show you the reality of the problem at hand: we are the online harms. The change must start with us.