
The Paris Call: the end of internet moderation by commercial parties?

What happened?

On November 12, at the Internet Governance Forum (IGF) at UNESCO, President Macron launched the Paris Call for Trust and Security in Cyberspace. It is a call for governments and private sector actors to work together in fighting online threats such as discrimination, abuse and cyberattacks. The call is supported by 51 countries (almost the entire EU) and more than 370 companies and organizations (e.g. Kaspersky, Facebook, Google, Microsoft). China, Russia and the U.S. did not join. A first initiative is a collaboration between the French government and Facebook: for six months, French investigators will monitor Facebook’s policies and tools for blocking posts and photos that attack people on the basis of their race, ethnicity, religion, sexuality or gender.

What does this mean?

Macron argued that collaboration with governments is needed because globally dominant, privately owned digital platforms give rise to practices that undermine democracy. Moreover, a rather alarming documentary on the so-called cleaners of social media, which premiered earlier this year, paints a less neutral picture of the role that private parties such as Facebook and Google play in moderating the internet. Not only do they neglect their content moderators (who have to sift through 25,000 – often shocking – photos and videos a day), but their confidential policies for censoring content that contains violence, abuse, discrimination and the like are also presented as biased. A video showing, for example, an illegal airstrike that flattens a hospital might be censored not because it is too gruesome, but because showing the attack does not serve the interests of the commercial party that owns the platform.

What’s next?

If governments gain access to the principles and enforcement practices behind, for example, the moderation of social media platforms, the smokescreen surrounding how companies decide what can stay up and what must be deleted might be lifted. Indeed, to many it seems rather contradictory that democracies in which the digital space has become a common good leave the moderation of its content solely to commercial companies that keep their policies confidential. Depending on how far government intervention eventually goes, this could drastically change the way private parties organize and manage their platforms. Moderating content along the lines of their commercial interests might become a thing of the past. If this leads to discontent, it could stimulate the rise of a decentralized internet.