What happened?

The AI Now Institute recently called 2018 a “dramatic year in AI”. It mentioned Microsoft’s contract with U.S. Immigration and Customs Enforcement (ICE) to use facial recognition for border control, Amazon and Google working with the Pentagon to develop a cloud-computing platform and smart weapons for the U.S. military, and IBM’s Watson recommending unsafe and incorrect treatments for cancer patients. Furthermore, AI is increasingly being deployed in public spaces (e.g. the U.K. Metropolitan Police using facial recognition in shopping areas to detect criminals) without a proper regulatory framework backing these initiatives. The annual AI Now Report 2018 identified five major flaws, including insufficient accountability and inadequate civil rights protections and regulation.

What does this mean?

Philosopher Nick Bostrom conceptualized the dangers of deploying AI in public spaces more fundamentally in his recent paper “The Vulnerable World Hypothesis”. Bostrom argues that we have reached a level of scientific and technological progress that has created unprecedented welfare and prosperity, but has also given humans the capacity to destabilize civilization, fundamentally disrupt economies, or even destroy the world. Bostrom doesn’t claim that all people will use these technologies for ill, but given the huge heterogeneity of human beings, even the smallest chance that a morally corrupt or angry individual pursues a destructive plan (e.g. building a massive bomb, spreading a deadly virus) significantly raises the likelihood of devastating results (i.e. a vulnerable world).

What’s next?

Historically, the benefits of technologies have outweighed their risks and destructive capacity. But this balance is now tilting as the falling cost of many radical, exponential technologies and open-source software makes it easier for individuals and groups (e.g. terrorist organizations) to employ them, possibly with bad intentions. Advances in DIY biohacking tools might make it easy for anybody with basic training in biology to kill millions; cyberwarfare exploits asymmetrical power relations and enables small groups to spread fake stories easily; and novel military technologies could trigger arms races in which whoever strikes first holds a decisive advantage. As such, the vulnerable world hypothesis offers a new perspective from which to evaluate the risk-benefit balance of developing next-gen technologies, and it invites renewed interest in expanded preventive politics and governance.