Fake Facebook profiles, automated Twitter bots and hacked accounts are already an everyday reality for us. The negative effects of fake news are now widely acknowledged, the Russian meddling in the U.S. presidential election being the most notorious case. It showed us that even a small team of Russian trolls could influence and steer the opinions of people on the other side of the globe. This would have been impossible without media organizations’ hunt for clicks and viewership. In the attention economy, big tech platforms are calibrated to reward information that is often misleading or polarizing, prioritizing clicks over quality. Now that we are aware of the force and consequences of fake news, our eyes have been opened to a new risk.

 

According to technologist Aviv Ovadya, who warned of the fake news crisis before the 2016 U.S. presidential election, we are now heading for a broader crisis of misinformation. This time, it is about the growing difficulty of separating reality from manipulated content. The driving force is not necessarily new misinformation techniques, but far more sophisticated ones. It is now easier than ever to build false perceptions, thanks to easy-to-use, seamless technological tools for manipulating perception and falsifying reality. When we switch on our television and see convincingly real images of a bombing near our house, there is not much reason for us not to believe them. On the other hand, if we are aware of how much content on television is manipulated, we might not even try to figure out what just happened: no source will be reliable anymore, because anything could be manipulated. 2018 will show the first signs of an era in which it becomes impossible to separate manipulated content from real content. AI technology, for example, will make it possible to create believable imitations of a person. Imagine a conversation with your mother that turns out to be a conversation with an AI trained on her social media behavior. This will eventually lead to what Ovadya calls the ‘infocalypse’. The problem is that the technologies that enhance or distort what is real are evolving faster than our ability to understand, control or mitigate them.

 

Illustrative developments include the open-source software that makes it easy to produce convincing face-swap porn. Similarly, it is now possible to manipulate videos by combining recorded footage with real-time face tracking. We are familiar with Snapchat’s basic face-swap option, where two faces in a picture are swapped, but swapping faces in a video is far more advanced and disturbing. Moreover, it is now possible to make a realistic, lip-synced video, as this video of a synthesized Obama shows us. Another development is ‘automated laser phishing’, a tactic in which AI crafts personalized, believable messages from social media traces and other publicly available data. Advances in AI and deep learning are pushing the boundaries of content production further, increasingly without human checks or input. Imagine realistic pictures whose origin we cannot trace: are they real, or created by algorithms? ‘Generative adversarial networks’ or GANs, for instance, pit two neural networks against each other, one generating content and the other judging whether it looks real, so that the system learns to create convincing content with hardly any human input.
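To make the GAN idea more concrete, here is a minimal training sketch. It is purely illustrative: the article names no framework or code, so the choice of Python with PyTorch, the network sizes and the `train_step` helper are all assumptions rather than anyone’s actual system.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # hypothetical sizes, for illustration only

# Generator: turns random noise into a fake data sample.
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)

# Discriminator: estimates the probability that a sample is real.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def train_step(real_batch):
    """One adversarial update; real_batch has shape (batch_size, data_dim)."""
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1. Train the discriminator to tell real samples from generated ones.
    fake_batch = generator(torch.randn(batch_size, latent_dim)).detach()
    d_loss = loss_fn(discriminator(real_batch), real_labels) + \
             loss_fn(discriminator(fake_batch), fake_labels)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. Train the generator to fool the discriminator into seeing fakes as real.
    noise = torch.randn(batch_size, latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Example use with random stand-in "real" data:
d_loss, g_loss = train_step(torch.randn(32, data_dim))
```

The adversarial setup is what makes this worrying: the generator improves precisely by learning to fool the discriminator, so such a system keeps producing more convincing fakes without a human authoring each one.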

 

The risks are manifold and as yet largely unknown. For one, there are the geopolitical risks. Influencing the masses by leading them to believe that something deeply affecting has happened, for example by faking a speech in which a leader declares war, can have severe consequences, especially in democracies. The question will be how to successfully convince audiences that genuine messages and news are real. Another consequence is what Ovadya calls ‘reality apathy’: an ongoing wave of misleading information or a series of hoaxes might lead people to give up and stop informing themselves altogether, which is equally harmful to the functioning of democracy.

 

RISK MARKED ON THE RISK RADAR AS A NEW RISK, NUMBER 3: Infocalypse