When thinking or talking about technology, we often use words that describe its mechanical characteristics. In books and movies, technology is frequently depicted as machines or robots, and this machine image is also widely present in our language. The machine metonym is closely connected to an ontological assumption: a machine is assembled by humans, built up from parts into a whole, and it has controllable and predictable characteristics. This is a classic-modern view of technology.
However, this view becomes problematic when we turn to present cases of technological advance such as synthetic biology. Even if the goal were to create synthetic organisms as controllable and predictable entities, a living organism, whether “natural” or a product of human intervention, by definition evolves and interacts with other organisms and with its environment in multiple ways. These characteristics do not fit the part-whole view and make organisms less controllable and predictable than machines. The interaction of such technology with other technological or living systems creates complexity. In addition, organisms reproduce and grow, something the machine metonym does not imply either. As a result, machine metonyms might blind us to the implications of creating new life forms, such as the synthetic organisms of synthetic biology. Similar problems arise when the machine metonym is applied to Artificial Intelligence. AI, and more specifically machine learning, confronts us with technology that shows more autonomy than the machine metonym suggests. So what are these cases of technological innovation showing us? How do they differ from technologies that better fit our mechanistic, predictable view of technology?
Already in 1985, philosopher Hans Jonas envisioned a historically new technoscientific era in which technologies would show characteristics different from those of earlier technologies, such as a certain degree of autonomy and limited predictability. In current debates in the philosophy of technology, scholars differentiate between modern (or classic-modern) technology and late-modern technology. We can understand synthetic biology and AI as cases of the latter. Late-modern technologies differ from classic-modern technologies in two fundamental ways.
First, they show self-organization, autonomous behavior, or agency. In the case of AI, an autonomous system goes beyond the behavior programmed into its initial algorithm: it can learn by itself from data and from its environment, its behavior transgresses the initial objectives and conditions set by its creators (human engineers, computer scientists), and it therefore becomes less predictable. Similarly, an organism created by means of synthetic biology starts to interact with and “learn” from its environment in a way that makes its behavior hard to predict. In both cases, the technology autonomously interacts with an open-ended and uncertain context, the real-world environment, and is thus less predictable than technological systems that merely react to human input and are otherwise passive. In that sense, such technologies are sometimes regarded as “black boxes”: insight into the processes that connect their inputs to their outputs is difficult to acquire.
Second, late-modern technology no longer appears in its modern guise; rather, its technological traces are disappearing. Culturally established borders and modern dichotomies such as “natural” versus “artificial” are becoming blurred. For instance, a synthetic cell may contain an artificial pathway yet show no traces of technology: it cannot easily be distinguished from “natural” cells. Similarly, the “thinking” of AI can sometimes hardly be separated from human thinking or decision-making. In 2018, Google demonstrated its voice assistant calling a hairdresser to make an appointment and shocked the audience: the hairdresser did not notice that she was not talking to a human. This novel kind of technology appears human or natural to us, a phenomenon called the naturalization of technology. However, moral debates about such technologies, such as the debate about the acceptance of GMOs, are often still framed in modern terms, with a strict distinction between us humans, the technology we use, and the natural environment.
Late-modern technology is thus difficult to predict and control, and difficult to separate from the context and environment of its application; it can be said to “have a life of its own”. The fact that human beings are surrounding themselves with more and more technologies that are less controllable and show autonomous features inevitably gives us the sense that we face greater technological complexity, that we are losing control over our technology, and that our notion of autonomy, which we regard as a fundamental human trait, is being challenged. Late-modern technologies such as AI could even undermine our autonomy, as their ubiquitous deployment could implicitly and explicitly steer our behavior. As is often the case with new technological developments, late-modern technologies force us to define and reframe values and views that used to be implicit and unchallenged.