
Transparency over accuracy in artificial intelligence?

What happened?

As A.I. becomes more important in real-life applications such as healthcare and transportation, concerns are growing over the reliability, openness and fairness of these systems (e.g. at the European level). Unlike man-made algorithms, whose every line of code we (or at least some of us) can read and understand, A.I. systems in effect write their own decision rules, and much of that logic is beyond human comprehension. Moreover, not all A.I. is equal. The most advanced forms of A.I., such as deep learning, which may be the most valuable from an economic and societal perspective, are also the hardest for humans to grasp. This makes for a tricky trade-off between accuracy and transparency; we may have to settle for less advanced techniques (e.g. conventional machine learning) in order to be able to assess their results.

What does this mean?

As long as A.I. is used in gaming or relatively frivolous applications, there is little need to worry about the transparency or fairness of the system; when it works, it works, and when it doesn’t, the failure is likely a harmless bug. However, in contexts such as healthcare (e.g. Watson for Oncology) or the justice system (e.g. the case of Glenn Rodriguez), we not only want results to be accurate, we also want to understand how the machine came to its conclusion (e.g. because its suggested cancer treatment differs from what a doctor would propose). With relatively straightforward A.I. techniques, such as conventional forms of machine learning, humans can assess the outcomes quite well. With deep learning, and the neural networks and “hidden layers” that underpin it, the system’s inner workings are fundamentally different from human logic, as the sketch below illustrates.
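To make the contrast concrete, here is a minimal, hypothetical sketch in Python using scikit-learn; the data and feature names are made up for illustration. A simple logistic regression exposes one readable weight per input feature, whereas even a small network with hidden layers spreads its “knowledge” across thousands of weights that carry no individual meaning.

# A minimal sketch (hypothetical data and feature names) contrasting an
# interpretable model with an opaque one, using scikit-learn.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Toy stand-in for a clinical dataset: 500 "patients", 5 features.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = ["age", "blood_pressure", "bmi", "glucose", "smoker"]  # illustrative only

# Conventional machine learning: a logistic regression exposes one weight per
# feature, so a human can read off which factors push the prediction and how.
transparent = LogisticRegression().fit(X, y)
for name, weight in zip(feature_names, transparent.coef_[0]):
    print(f"{name}: {weight:+.2f}")

# Deep(er) learning: a network with hidden layers. Its "knowledge" is spread
# across thousands of weights, none of which maps to a single human concept.
opaque = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000).fit(X, y)
print("hidden-layer weights:", sum(w.size for w in opaque.coefs_))

The point is not that the deeper network is wrong; it may well be more accurate. The point is that there is no direct way to ask it “why”.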

What’s next?

Ultimately, it’s quite unlikely that we’ll settle for suboptimal A.I. systems in favor of transparency; too much potential value would be lost, and A.I. is of too great strategic importance for governments to enforce overly strict regulations. Moreover, efforts are ongoing to develop deep learning applications that can explain themselves better (e.g. through visualizations of their reasoning steps) and, in doing so, make themselves more trustworthy. Alternatively, despite the risks involved, we may have to make do with a trial-and-error approach to learn how to engage with deep learning systems. If so, in an extreme scenario, the catastrophic failure of an A.I. system could result in a “Hindenburg moment” that would set A.I. back for years.
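As an illustration of what such self-explanation can look like, the sketch below (a hypothetical toy model in PyTorch) computes a gradient-based saliency score, one common explainability technique, to show which input values most influenced a network’s output; real “explainable A.I.” systems build far richer visualizations on the same idea.

# A minimal, hypothetical sketch of one explainability technique: gradient-based
# saliency, which scores how strongly each input value influenced the output.
import torch
import torch.nn as nn

# Toy network and a random stand-in input (e.g. ten patient measurements).
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
x = torch.randn(1, 10, requires_grad=True)

score = model(x).sum()
score.backward()  # gradient of the output with respect to each input value

saliency = x.grad.abs().squeeze()
for i, s in enumerate(saliency):
    print(f"input {i}: influence {s.item():.3f}")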