There is an overwhelming narrative at the heart of the current push for Internet regulation, and it is that the Internet is like the Wild West, an unregulated, anarchic cesspit filled with filth, terrorism, abuse, and Nazis. At every corner teenagers are presented with self-harm images prompting them to commit suicide. Facebook is a right-wing propaganda machine tricking the old into voting for Brexit/Bolsonaro/Trump. YouTube is filled with extremist videos. Social media helps to spread misinformation on subjects that range from flat earth to anti-vaccination. Children are constantly cyberbullied and at the mercy of grooming gangs. WhatsApp is just a giant fake news delivery machine. Women are constantly abused whenever they interact online in any shape or form.
All of the above online problems are real, even if the prevalence and reach of some is sometimes blown out of proportion, and there is no doubt that these harms require action. But most of the growing number of solutions that have been proposed seem to miss the realities of online spaces and interaction. I have been surprised by an obsession with online platforms such as Facebook, Apple and Google, and by how little attempt there is to think rationally about where online harms are really taking place and what shape effective regulation should take.
The first problem is one of classification of harms, but that is a subject for another post. For now, let’s concentrate on the problem that is often cited at the top of most regulatory efforts: terrorism. While terrorism is clearly an offline problem, it does have a sizeable online element in its causes and in how it spreads. It is undeniable that extremists can use the Internet to spread their message, and to radicalise and recruit new followers. To a lesser extent, online tools can be used to coordinate attacks and, on one occasion, to broadcast an atrocity in real time.
For many years, social media, tech giants, platforms, and even encryption have been blamed after each terrorist attack. Technology makes for an easy scapegoat in complex religious, social and political situations, so we are often presented with calls for platforms to monitor private communications or to create backdoors. But even before any regulation took place, terrorists adapted because they suspected surveillance, and attacks such as the Bataclan in Paris were coordinated entirely offline, using nothing more sophisticated than burner phones and text messages. No calls to ban mobile phones ensued.
But the number of attacks had a lasting effect on the visible online presence of jihadists. It was evident that social media was filled with propagandist accounts, and intermediaries eventually acted to remove extremist content from their platforms, with Twitter boasting that it had removed over 350k accounts in 2017. But removing a social media account does not solve the problem: extremists simply congregate in different venues and technologies, some of them darker and more difficult to monitor. Private encrypted channels such as Telegram have become the most popular choice.
As the ISIS threat waned slightly, the focus shifted to right-wing terrorism. This is a movement that arose almost entirely online and was under-researched until the number of attacks started to mount. From Breivik to Christchurch, there is a clear thread of online radicalisation fostering lone wolf terrorist attacks that are clearly politically motivated and intended to sow dissent and chaos.
Christchurch shocked everyone, particularly because of the livestreaming element. There were immediate calls to do something, with Jacinda Ardern stating that there is “no right to livestream murder”. These proposals continue the trend of placing all of the responsibility on platforms and service providers, but they also continue to miss the nature of the online element when it comes to terrorism. Online radicalisation does not take place entirely on platforms like Facebook or Twitter; it is part of wider trends, of various channels of communication and content that range from private chats to open tweets.
Evidence of this can be found in the Easter Sunday attacks in Sri Lanka. On April 4, Indian intelligence sent specific warnings to the Sri Lankan government that there would be bomb attacks against several targets; the origin of the warnings appears to have been an informant who had trained with the bombers. The attack proceeded entirely offline.
So the challenge of regulating terrorist content online becomes evident: the platform element is just a small part of the problem. While some radicalisation may take place there, nowadays it is most likely to happen in darker corners of the Internet: private chats, forums, and the like. The visible and often shockingly spectacular element, such as the Christchurch livestream, is just a minor part of the problem. The issue is made more complex by the fact that what starts life as everyday racism or misogyny can end up in a terrorist attack.
So right now all the anger is directed at the easy targets, the large platforms. Regulate hate online, and terrorist attacks will stop, or at least diminish. Jack, ban the Nazis already.
But the evidence does not support that strategy: if you regulate hatred out of one platform, it migrates, it moves, it festers in the dark. A ban on Milo and Infowars had no effect on Christchurch, just as eliminating 350k ISIS Twitter accounts didn’t stop Easter Sunday. The solution is not online.
Moreover, regulating hate speech online at the platform level is a messy business. Banning more right-wing accounts seems to have little effect when the hatred is spouted by the most powerful man in the world. It is good to ban blatantly anti-Semitic speech, until you start seeing it repeated at the highest levels of political parties on the right and the left. The problem doesn’t stop with a few Facebook and Twitter accounts; it starts with Tommy Robinson standing for the European Parliament, and with Nigel Farage and that infamous poster.
The problem is not online.
For sure, social media amplifies disinformation, and some people are swayed by misleading propaganda on Facebook, but this is just part of the problem. The fast spread of disinformation evidenced by the rise of the anti-vaxx movement is enhanced by social media, but what this uncovers are serious problems with critical thinking and education. People are not given the tools to identify misinformation, be it in a paper pamphlet, on the side of a bus, or in an Instagram meme.
Once again, I have to state that I do not advocate doing nothing, but it is frustrating to see all of this regulatory energy directed at the wrong target. The assumption is that making platforms more liable will solve the problems, but there is no evidence that this is the case at all.
The Internet is resilient; just ask The Pirate Bay.