The rapid deployment of AI in many domains is making us vulnerable in many ways. As the scope of AI applications widens, so does the range of failures. A well-known example is the Uber autonomous vehicle that killed a pedestrian at the beginning of last year, but more and more examples of AI making ‘wrong’ decisions are popping up. The AI Now Report 2018, which called 2018 a “dramatic year in AI”, identifies three risks.
First, AI is amplifying widespread surveillance. The rise of unregulated facial recognition systems creates a risk for citizens, since the technology is hard to ‘escape’ or ‘opt out’ of, while it is easy for third parties to link other personal data to a face once it has been identified. This leads to insights about persons that have little scientific backing but high impact. History teaches that the pseudoscience of inferring character or personality from physical characteristics has often served discriminatory purposes. When thinking of ever-expanding, large-scale surveillance techniques, China’s social credit system and its surveillance activities in Xinjiang to exert control over the Uighur population come to mind. However, as the report notes, it is crucial to acknowledge that much of the same infrastructure already exists in the U.S. and is used by law enforcement – while rarely being open to public scrutiny. In an interview with MIT Technology Review, futurist Amy Webb points to this specific risk. In China, centralized authority enables the government to test and build AI services that incorporate data from 1.3 billion people. In the U.S., by contrast, this is left to the private sector: big tech companies constrained by the short-term demands of a capitalist market, which makes long-term, thoughtful planning for AI impossible. Moreover, large-scale surveillance techniques are spreading across the globe at great speed, often to countries with authoritarian regimes.
Second, 2018 showed an enormous increase in governments adopting Automated Decision Systems (ADS), which often show flaws when implemented. By implementing ADS, governments try to save costs and make systems more efficient in domains like criminal justice, child welfare, education and immigration. However, the risk these systems carry is that, once a flawed decision is made, its impact far exceeds that of human case-by-case decision-making, automatically affecting large numbers of people. Besides affecting large groups of people, ADS are mostly not designed to mitigate incorrect decisions. Moreover, their decisions are often hard to replicate or understand, creating a ‘black box’ problem.
The third risk is that for man-machine interactions, such as the Uber accident, there is no discrete regulatory or accountability category in place: the ‘accountability gap’. Another case in point is IBM’s Watson recommending unsafe and incorrect treatments for cancer patients. As we have written before, concerns are growing over the reliability, openness and fairness of these systems, which are increasingly not only informing but steering human decision-making.
The impact of these three risks can be understood in terms of “The Vulnerable World Hypothesis” of philosopher Nick Bostrom, author of the book “Superintelligence”. Bostrom states that we have reached a level of scientific and technological progress that has created unprecedented welfare and prosperity, but has also given humans the capacity to destabilize civilization, fundamentally disrupt economies or even destroy the world. Bostrom doesn’t claim that all people will use these technologies for harm, but given the huge heterogeneity of human beings, even the smallest chance of a morally corrupt or angry individual with a demonic plan (e.g. creating a huge bomb, spreading a deadly virus) significantly raises the chance of devastating results (i.e. a vulnerable world). In the WEF Global Risks Report 2019, the vulnerable world created by large-scale AI applications is also listed among the top risks. For instance, research shows that AI can be used to engineer more numerous and potentially more impactful data breaches than we saw in 2018.
Historically, the benefits of technologies have outweighed the risks and their destructive capacity. However, this balance is now tilting as the cost of many radical, exponential technologies and open-source software decreases, making it easier for individuals and groups to employ them, possibly with bad intentions (e.g. terrorist groups). Advances in DIY biohacking tools might make it easy for anybody with basic training in biology to kill millions, cyberwarfare leverages asymmetrical power relations and enables small groups to spread fake stories easily, and advances in AI could trigger an AI arms race, as we wrote in an earlier Risk Radar. As such, the vulnerable world hypothesis offers a new perspective from which to evaluate the risk-benefit balance of next-gen technologies, as well as a renewed case for preventive politics and governance.

Possible implications:

  • More systemic flaws in surveillance techniques harming civil rights
  • AI arms race
  • Other systemic, yet unforeseeable failures due to the large-scale deployment of AI across multiple domains, for instance in the Internet of Things that connects billions of devices
  • Shift towards a unipolar world order, in which a centralized body controls the development and deployment of these technologies of mass destruction


The Risk Radar is a monthly research report in which we monitor and qualify the world’s biggest risks to watch. Our updates are based on the estimated likelihood and impact of these risks. This report provides an additional ‘risk reflection’ from a political, social, economic and technological perspective.