Not too long ago, the only meaning of the word ‘virus’ in the context of IT and the internet was a malicious piece of code that wreaked havoc on your computer. Fast forward a couple of years, and whilst ‘virus’ still means the same thing, it has been overshadowed by ‘viral’ – the mystical holy grail of online marketing. Every online marketer and content creator dreams of creating something that ‘goes viral’, spreading like wildfire through the web: a snowballing meme gathering millions upon millions of views.
Perhaps, however, there is a dark side to this phenomenon. Perhaps the system – the environment that allows for this kind of virality – can be utilised for harm as well as good. Perhaps the system is agnostic to the nature of the content that becomes viral, and as such exercises no judgement and has no safety measures.
A quick example: the recent Jackie Chan death hoax, which quickly gathered steam throughout the internet during the past week. Starting as a fake CNN article, the page racked up 1000 visits per minute and became a trending topic on Twitter. One can debate the entertainment value of something like this against its detrimental effects, but fundamentally it is a lie, and a potentially damaging and highly upsetting lie at that. There are numerous instances of this kind of death hoax, with ‘victims’ including Nelson Mandela and Jaden Smith.
Death hoaxes, however misleading and damaging they might be, are not the only cause for concern. Viral content can be anything, or can be a ‘carrier’ for anything – for example a malicious script that attempts to extract personal information from, or inflict damage on, a user’s computer. Each example of the dark side of virality has its own risks and negative effects, and each undermines the value of the services on which it relies.
The reason that these ‘dark memes’ work so well is, unfortunately, a feature of the system in which they exist and propagate. The web that we know today is akin to a tightly packed assembly hall in high school, except that it includes almost everyone you have ever known, and everyone is simultaneously sitting right next to each other at all times. We all know how head-lice, chicken pox and the common flu spread in an environment like that. Nor is virtual proximity the only issue: the kind of behaviour that is encouraged is similar to break time after the assembly, with everyone frantically talking to each other, pushing ‘cool’ information around and trying to keep up with the latest gossip.
Essentially, services such as Facebook and Twitter, as well as the harder-to-track but highly influential email and IM services, create this hotbed of virality. They enable and promote the quick and easy sharing of web content, regardless of what that content is. Most of the time this allows marketing and other kinds of ‘fun’ messages to rack up large numbers of views in a short space of time, but every now and then it is a lie, or something worse, that goes viral. The vast majority of web content never goes viral, but when something hits the tipping point, the speed with which it grows and the virtual distances it covers seem too great for warning messages and corrections to keep pace.
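The tipping-point dynamic described above can be sketched as a toy branching-process simulation. All the numbers here (contacts per viewer, share probabilities) are illustrative assumptions, not measurements of any real network; the point is only that a small change in the average number of re-shares per viewer separates content that fizzles out from content that snowballs.

```python
import random

def simulate_spread(share_prob, contacts, seed_viewers=10, steps=20, cap=1_000_000):
    """Toy branching-process model of content spread.

    Each current viewer exposes the content to `contacts` people, each of
    whom views and re-shares it with probability `share_prob`. The
    effective reproduction number is R = share_prob * contacts: below 1
    the content tends to fizzle out; above 1 it tends to snowball.
    """
    random.seed(42)  # fixed seed so the illustration is repeatable
    current = seed_viewers
    total = seed_viewers
    for _ in range(steps):
        # Count how many of this generation's exposures turn into new viewers.
        current = sum(
            1 for _ in range(current * contacts) if random.random() < share_prob
        )
        total += current
        if current == 0 or total > cap:
            break
    return total

# Sub-critical: R = 0.04 * 20 = 0.8 — versus super-critical: R = 0.07 * 20 = 1.4
fizzled = simulate_spread(share_prob=0.04, contacts=20)
viral = simulate_spread(share_prob=0.07, contacts=20)
print(fizzled, viral)
```

Running this shows the asymmetry the article describes: the sub-critical run reaches only a handful of extra viewers, while the super-critical run grows explosively until it hits the cap, which is why corrections launched after the fact struggle to catch up.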
Thus far nothing appears to have caused seriously significant or lasting damage, but perhaps it is only a matter of time. Is there a solution to this problem? Unfortunately, the system is self-regulating. That would not be a problem if self-regulation were sufficient, but perhaps we have the financiers of Wall Street and the US government to thank for the assumption that self-regulating systems are good and natural. If the system remains self-regulating, then we should assume that the parts of the system (us people) have neither the capability nor the inclination to stop malicious viral outbreaks.
This implies that full safety would require some form of regulatory body, but practically speaking this seems impossible. Firstly, there is the issue of consent: even if this regulatory body only had access to public information, many people would be averse to the idea of an organisation monitoring all online (social) activity. Secondly, there is the issue of choosing this body: who is in charge? What are their interests? Are all nations represented? Thirdly, and perhaps most fundamentally, even if such an organisation existed and identified a malicious piece of web content at an early stage in its journey to viral success, how would the warning message be spread? Just as in the self-regulating system, the warning message relies on the same channels and the same components of the system (again, us people) to spread quickly enough to be effective.
And so it seems that it comes down to us as individuals to be responsible and cautious when sharing web content. Of course a death hoax is arguably quite benign, but no one would feel good after finding out that they have unwittingly infected all of their Facebook friends with some kind of virus. This problem is not going to go away, and there are always going to be stupid people, so take care of yourself and don’t drink and click.