In March this year, a self-driving car from Uber killed a pedestrian in Arizona. Reports now suggest that the incident resulted from Uber’s decision to stop letting its autonomous vehicles (AVs) brake for everything that crossed their path. Until then, AVs had been overly cautious, resulting in uncomfortable rides and potentially dangerous situations; in order to “grow up”, they had to learn to disregard “false positives”. One could thus argue that this accident is part of AVs’ maturing process. For now, however, this and other fatal incidents could hamper the further development of AVs: governments may be less welcoming to trials, and funding for startups may run dry.
What does this mean?
As we have discussed before, AVs may make roads much safer, and company data suggest they already are safer. Some caution is warranted, though, as these vehicles still operate under relatively favorable conditions (e.g. good weather, decent roads). Moreover, even when AVs become better drivers than humans, it will be difficult to accept any failures on their part. For one, we ascribe human error to the individual(s) involved, while an AV’s mishap is (rightfully) attributed to the entire fleet. Also, AVs tend to make different mistakes than humans do, mistakes that can seem ridiculous (and easily avoidable) to us.
We will see new fatal incidents with AVs, and these will continue to spark debate until we reach a point where we accept them as the new normal, just as we did with regular cars in the past. Such acceptance will, however, only be possible once autonomous vehicles offer a significant benefit over traditional cars. Mere time savings for the happy few may not suffice; ironically, it will ultimately be safety gains that make us give AVs the benefit of the doubt.