Social media are under increasing scrutiny over their role leading up to and following several recent violent events. In California, a gunman opened fire in a synagogue in Poway. In New Zealand, two mosques in Christchurch were hit by consecutive terrorist attacks in March. Prior to both the California and New Zealand attacks, the shooters had announced their plans and posted manifestos on the anonymous online message board 8chan. A year earlier, in late April 2018, a man killed ten people by driving a van down a busy street in Toronto; the attacker had posted his plans online and was linked to an online (incel) community. Over Easter, Islamists in Sri Lanka called for violence on Facebook before carrying out attacks that killed more than 250 people. Social media were temporarily banned in the country after the suicide bombings to prevent the spread of misinformation, out of concern that networks such as Facebook, Instagram, WhatsApp and Viber would fuel further hate speech and incite (retaliatory) violence.

It is doubtful how effective social media bans or regulation are in reducing real-world violence. Some argue that bans would do more harm than good, leading only to more repression, deepening isolation and preventing people from organizing themselves in non-violent local groups. Leaving the platforms themselves in charge of the fight against misinformation has also proven ineffective. In January, YouTube announced that it would “begin reducing recommendations of borderline content,” including videos “making blatantly false claims about historic events like 9/11.” The recent Notre Dame fire showed that these efforts have not yet translated into better defenses against conspiracy theories: as the cathedral burned, an algorithmic error at YouTube placed background information about the 9/11 attacks beneath news videos of the fire. This suggests that relying on algorithms to fix the conspiracy problem and limit misinformation will not suffice.

As we wrote earlier, societies are forced to seriously consider social media regulation. As violent content clearly contributes to real-world violence, the question remains how to distinguish genuinely alarming messages from jokes or false warnings amid the immense volume and frequency of online speech. Almost two billion people click on YouTube videos every month, and a video can be made in fifteen seconds. As The Atlantic writes, we are starting to comprehend that everything is potentially suspect: every post, every meme, every link, every quote. The internet has become a breeding ground for intolerant content that, when it goes viral, might lead to real violence.

Possible implications:

  • Last year, we already wrote about the risk of an infocalypse, a term coined by the technologist Aviv Ovadya to describe the scenario of a crisis of misinformation. The rise of A.I. is likely to produce a growing stream of fake news and conspiracy theories, and it will become increasingly difficult to verify the authenticity of news sources. An arms race will commence between fake news generators and fake news detection systems.
  • Further violent attacks linked to online content will prompt more governments to try to regulate social media. However, such regulation may also be perceived as a crackdown on the freedom of online speech. Governments that fail to genuinely fight real-world violence linked to social media content will lose legitimacy. Furthermore, online violence is often rooted in real-world struggles, such as economic precariousness, isolation, loneliness and the polarization of society, issues that the state has to address for the battle against online violence to be effective.

RISK MARKED ON THE RISK RADAR AS NUMBER 3: infocalypse

The Risk Radar is a monthly research report in which we monitor and qualify the world’s biggest risks to watch. Our updates are based on the estimated likelihood and impact of these risks. This report provides an additional ‘risk reflection’ from a political, social, economic and technological perspective.