Written by Sjoerd Bakker
February 25, 2021

With the advent of autonomous machines, such as autonomous vehicles, robots and even weapons, comes a need to embed some kind of morality into these machines. By definition, autonomous systems have to make choices of their own accord, to go left or right, to kill or not to kill, and we want these choices to reflect our own values and norms. One way of achieving this is for developers to translate explicit normative rules into code. Another, arguably more democratic, way is to crowdsource morality: for instance, by asking the public to “vote” on all sorts of moral dilemmas (e.g. the well-known trolley problem), or by letting autonomous systems learn from our actual behavior (e.g. from observing how we drive). Interestingly, such forms of crowdsourcing could actually result in autonomous systems whose behavior aligns with local values and norms, rather than with some kind of desired universal morality. The downside, however, is that these systems, especially those that mimic our behavior, would be unable to make “better” decisions than we humans can.
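To make the voting approach concrete, here is a minimal sketch of how crowdsourced dilemma responses could be aggregated into a simple majority-vote policy. The data format, dilemma identifiers and function name are all hypothetical; real efforts along these lines use far richer preference models than a plain majority count.

```python
from collections import Counter

def crowdsourced_policy(votes):
    """Derive a per-dilemma action policy by majority vote.

    votes: list of (dilemma_id, chosen_action) pairs collected
    from the public (hypothetical data format).
    Returns a dict mapping each dilemma to its most-voted action.
    """
    tallies = {}
    for dilemma, action in votes:
        tallies.setdefault(dilemma, Counter())[action] += 1
    return {d: c.most_common(1)[0][0] for d, c in tallies.items()}

# Hypothetical trolley-style responses from three respondents.
votes = [
    ("swerve_or_stay", "swerve"),
    ("swerve_or_stay", "swerve"),
    ("swerve_or_stay", "stay"),
]
policy = crowdsourced_policy(votes)
print(policy)  # the majority choice per dilemma
```

Note how this sketch illustrates the trade-off described above: the resulting policy simply reflects whatever the surveyed population prefers, locally and at that moment, rather than any universal standard.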

Burning questions:

  • Would these forms of crowdsourcing morality lead to increased public trust in autonomous systems and allow for greater societal acceptance?
  • Is the end goal of moral AI systems to have them align with our norms and values, or is there potential for robots to behave better than we do?
  • Could machines ever become morally superior to humans and what would this mean for the future of humanity?