Earlier this month, a Dutch court allowed police to force a suspect to unlock their smartphone with a fingerprint scan. The court ruled that this method does not conflict with the principle that a suspect cannot be compelled to cooperate in their own conviction: since placing a thumb on the scanner is only a “limited infringement” of physical integrity and “the fingerprint was obtained with a very small amount of coercion”, the act is lawful.

A few months earlier, IBM released a dataset of almost a million photos taken from the photo hosting site Flickr, annotated with descriptions of the subjects’ physical characteristics, such as facial geometry and skin tone, details that can be used to develop facial recognition algorithms. The problem was that none of the people in the pictures had consented to this use of their images; they did not even know their photos were part of a training dataset, and it is almost impossible to get the photos removed. IBM is not the only company scrambling for pictures to feed the algorithms behind its facial recognition technology, as the internet offers fast and easy access to such images.

Both cases show that the use of biometric data is rapidly emerging in many domains, including law enforcement, border control and, as the IBM case illustrates, commercial applications, without adequate legal or ethical frameworks in place. Authentication of individuals based on physical characteristics is spreading quickly because biometric technologies are becoming better, cheaper, more reliable, more accessible and more convenient. The Dutch court’s ruling that accessing biometric data does not require “active cooperation” shows that biometric data are, in effect, up for grabs, leaving us vulnerable to external parties who may use and abuse these data.

There is a variety of concerns about the use of biometric technology. First, there are security concerns: collected biometric data can be hacked and abused. Second, there is a risk of inaccuracy: although biometric technology is improving rapidly, it remains imperfect and vulnerable to errors. To illustrate, face recognition algorithms have a poor accuracy record when it comes to identifying non-white faces. Moreover, biometric technology is still easily manipulated. For instance, when a Dutch consumers’ union tested 110 smartphone models, the facial recognition feature used for locking the device could be tricked with a photo on 42 phones (the iPhone withstood the test). Finally, the fact that we increasingly live in a sensor-based economy, with a plethora of seamless interfaces and sensors gathering biometric data, raises serious concerns over the blurring boundary between security and surveillance and the conflict between privacy and security.

Possible implications:

  • Human identity authentication based on biometric data does not guarantee the level of security that was previously hoped for.
  • More cases of misuse, hacks and errors in biometric technology could affect large groups of people, as the technology, when applied at a large scale, can create what philosopher Nick Bostrom calls a “vulnerable world”.
  • A backlash could arise against the use of biometric data without consent, possibly aimed at companies deploying the technology, along with calls for more regulation.


RISK MARKED ON THE RISK RADAR AS NUMBER 2: AI failure and arms race

The Risk Radar is a monthly research report in which we monitor and qualify the world’s biggest risks to watch. Our updates are based on the estimated likelihood and impact of these risks. This report provides an additional ‘risk reflection’ from a political, social, economic and technological perspective.
Click here to see the context of this Risk Radar.