The Macroscope

Moral responsibility and autonomous machines

What happened?

Last year, 37,133 people were killed in motor vehicle crashes in the U.S., and human error was by far the biggest cause of these fatal accidents, contributing to an estimated 94% of them. Autonomous cars therefore have great potential to reduce the highway death toll, and one could even claim, as we have written before, that it is a moral imperative for governments and corporations to produce them, and for citizens to shift to autonomous driving as soon as possible. However, recent fatal autonomous car crashes pose the dilemma of who is responsible when autonomous cars crash: the manufacturer, the (fleet) owner or the occupant(s)? Legal expert David Vladeck has offered a fourth, and rather controversial, option: the autonomous car itself. More precisely, he argues that autonomous machines in general should obtain legal status: they are not used as tools but merely deployed by humans, and they function without human intervention, hence they could be held legally and morally responsible for the mistakes they make.

What does this mean?

We generally consider agent A to be morally responsible for outcome O if A acted in freedom (control condition), A’s actions are causally related to O happening (causal condition), A should have foreseen that its actions would lead to O (epistemic condition), and O is a morally impermissible outcome (moral condition). One could argue that autonomous cars act in freedom and are “smart” enough to know that car crashes are morally impermissible, hence that they can be held morally responsible for the crashes they cause (assuming a crash is indeed a morally impermissible outcome). This means that more nonhuman actors could become legally and morally responsible for their actions, such as autonomous weapons, self-filling fridges (e.g. failing to notice your food has rotted), or robo-cooks (e.g. cutting you instead of the meat).
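The four conditions above form a simple conjunction: responsibility holds only when all of them do. Purely as an illustration, the checklist can be sketched in code; the boolean fields below are hypothetical judgments standing in for the philosophical conditions, not anything measurable in practice.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    """Hypothetical judgments about agent A and outcome O."""
    control: bool    # A acted in freedom (control condition)
    causal: bool     # A's actions are causally related to O (causal condition)
    epistemic: bool  # A should have foreseen O (epistemic condition)
    moral: bool      # O is morally impermissible (moral condition)

def morally_responsible(a: Assessment) -> bool:
    # A is morally responsible only if all four conditions hold jointly;
    # if any one fails, responsibility does not attach.
    return a.control and a.causal and a.epistemic and a.moral

# An autonomous car that crashed while acting freely, causing the crash,
# and "knowing" crashes are impermissible, would be responsible:
crash = Assessment(control=True, causal=True, epistemic=True, moral=True)
print(morally_responsible(crash))  # True

# A car whose sensors could not have foreseen the outcome fails the
# epistemic condition, so it would not be responsible:
surprise = Assessment(control=True, causal=True, epistemic=False, moral=True)
print(morally_responsible(surprise))  # False
```

Of course, the real philosophical difficulty lies in deciding whether any of these conditions can meaningfully be true of a machine at all, which the code deliberately leaves open.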

What’s next?

Much of the money earned, and blame assigned, in our economies already goes to nonhuman entities that have legal rights: corporations. Philosopher Bruno Latour has argued that we should give rights to nonhuman objects (e.g. animals or joint ventures) and let them speak for themselves in the “Parliament of Things”. By the same logic, morally and legally responsible autonomous machines should also have extended rights so that they can pay their dues: for example, the right to take out insurance to pay liabilities to victims when found guilty, or to earn their own money (e.g. by having their own virtual wallets so that they can independently hire themselves out to human and nonhuman users). However, punishment is not only a monetary matter; it also entails “recognizing” the crime in order to restore the sense of injustice experienced by the victim. In this sense, it remains to be seen whether autonomous machines can be meaningfully punished in the eyes of victims (e.g. can they apologize?), as such recognition should come from other free and autonomous persons. Hence, attributing moral responsibility to nonhuman actors implies that we should expand our universe of moral and conscious beings, challenging our views on the nature of intelligence and autonomy.