
Computers need a language for causal reasoning

September 16, 2019

What happened?

Algorithms, as they sift through immense piles of data, can identify all sorts of unexpected patterns underlying phenomena of interest. So far, however, algorithms can only inform us about correlations, not about causality. When it comes to prediction, AI can thus tell us that something is (probably) about to happen, but not why. In most of today’s applications of machine learning this is not necessarily a problem, but it can easily introduce various forms of harmful bias and lead to detrimental outcomes. Moreover, some argue that the lack of causal reasoning is actually holding back the development of AI in general, as causal reasoning is a crucial element of genuine (human) intelligence.

What does this mean?

Because a correlation can be very strong, it is easily mistaken for a causal relationship. A cliché example is that of a Pacific island where the residents noticed that those who had fleas were healthy, whereas those without fleas were sick. They concluded that fleas cause health. The correlation is real, but the conclusion is not. Although this example might seem silly to us now, the same mistake is easily made when the difference between correlation and causality is not understood in interpreting the findings of algorithms. It is one of the sources of automation bias, and the actual causal mechanism behind a pattern is easily overlooked (e.g. a large set of data on job applications shows a correlation between applicants being male and being offered a certain position, so we conclude that men are better suited for the job, overlooking the biased hiring practices that actually produced the data).
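To make the pitfall concrete, here is a minimal sketch of how a hidden common cause can produce a flawless correlation with no causation at all. The mechanism is a hypothetical assumption for illustration: suppose a fever both drives fleas away and makes an islander sick. Fleas and health then line up perfectly, even though neither causes the other.

```python
import random

random.seed(0)

def sample_islander():
    """One islander in a world where a hidden fever is the common cause."""
    fever = random.random() < 0.3   # hidden variable, never observed directly
    has_fleas = not fever           # assumption: fleas abandon feverish hosts
    healthy = not fever             # the fever is what makes the host sick
    return has_fleas, healthy

data = [sample_islander() for _ in range(10_000)]
agree = sum(fleas == health for fleas, health in data) / len(data)
print(f"P(has_fleas == healthy) = {agree:.2f}")  # prints 1.00: perfect correlation
```

A pattern-finding algorithm fed only the `(has_fleas, healthy)` columns would report a perfect association; nothing in the data itself reveals that the fever is doing all the work.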

What’s next?

Ever since philosopher David Hume pointed out that we never actually witness one thing causing another, the concept of causality has been problematic. However, there is a working definition of causality that we use in daily life and science alike: 1) there is a consistent correlation between A and B, 2) A precedes B, and 3) hypotheses of causes other than A are eliminated. AI is currently only capable of fulfilling the first two conditions; the third requires a kind of reflection for which a “statistical language” is still lacking. In his recent book The Book of Why: The New Science of Cause and Effect, computer scientist and philosopher Judea Pearl proposes a way to endow AI with competing hypotheses of causal relations, so that it can fulfill the third condition as well. Eventually, machines with a genuine understanding of causality may also be able to explore the kind of “what if…” questions that we ask ourselves all the time. These kinds of questions, Pearl argues, are central to human creativity, invention and morality.
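Pearl’s central tool for this kind of reasoning is the “do-operator”: instead of passively conditioning on an observation, we simulate actively setting a variable and see what changes downstream. The sketch below reuses the flea anecdote under the same hypothetical assumption (a hidden fever drives fleas away and causes sickness). Observationally, flea-bearers are always healthy; but intervening to place fleas on a host does not touch the fever, so it does nothing for health.

```python
import random

random.seed(1)
N = 100_000
P_FEVER = 0.3

# Observational world: fleas are determined by the hidden fever.
observed = []
for _ in range(N):
    fever = random.random() < P_FEVER
    observed.append((not fever, not fever))   # (has_fleas, healthy)

with_fleas = [healthy for fleas, healthy in observed if fleas]
p_obs = sum(with_fleas) / len(with_fleas)

# Interventional world: we *set* has_fleas = True (Pearl's do-operator),
# cutting the arrow from fever to fleas. Fever still determines health.
healthy_under_do = []
for _ in range(N):
    fever = random.random() < P_FEVER
    has_fleas = True                          # do(has_fleas = True)
    healthy_under_do.append(not fever)
p_do = sum(healthy_under_do) / N

print(f"P(healthy | fleas)     = {p_obs:.2f}")   # prints 1.00
print(f"P(healthy | do(fleas)) = {p_do:.2f}")    # ≈ 0.70: just the base rate
```

The gap between the two numbers is exactly the gap between condition 1 (correlation) and condition 3 (eliminating rival causes): only by reasoning about the intervention, which requires a causal model rather than data alone, does the machine discover that fleas are not the cause.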