One effect of digitization is that consumers are no longer passive buyers or users of a service. Instead, they’ve become much more active participants in the delivery of that service (e.g. ordering a cab through an app instead of waiting in line at a taxi stand). This has far-reaching consequences for the design of the systems that provide those services, as they now have to take the “human-in-the-loop” into account and engage in continuous, though often implicit, negotiations with their users. Moreover, we have to come to terms with ourselves as unpredictable and non-rational “problems to be solved”.

Our observations

  • The value of (consumer) data is often understood in terms of targeted advertising or other routes to increasing sales. Somewhat overlooked in this respect is the value of data in incentivizing consumers to use a service in a specific way or at specific moments. This is, for instance, relevant to ride-hailing services (e.g. Uber or Lyft), public transport (e.g. on-demand buses) and energy providers. Each of these has an interest in spreading the use of its services throughout the day, or more precisely, in better matching demand with supply.
  • Digital marketplaces of all sorts have opened doors to collecting more data and influencing user behavior in more intricate ways (e.g. navigation systems in cars rerouting traffic). In part because of this, the field of behavioral economics has flourished and multiple Nobel prizes in Economics have been awarded to scholars in this discipline: Daniel Kahneman won it in 2002 and Richard Thaler in 2017.
  • Digital systems have far greater possibilities for guiding or influencing our behavior. They can be designed in such a way as to only allow specific uses (e.g. geo-fenced drones) and such “scripts” are becoming increasingly detailed and forceful. Even when technology still allows divergent behavior, we tend to follow the advice given by digital systems anyway (known as automation bias or, in philosophical terms, technological decisionism).
  • We also noted how consumer behavior should not be understood as a series of separate, real-time decisions, but rather as highly routinized practices that are deeply rooted in a material (e.g. technological) and cultural (e.g. societal norms) context.

Connecting the dots

In the past, and in some cases still today, service providers offered their services on a take-it-or-leave-it basis. Competition with other providers pushed them to optimize their offerings in terms of costs and quality, but consumers had little to no say in how these services were offered. In other words, there was a clear divide between the user on the one hand and the service provider (and its operations) on the other; the human was “out-of-the-loop”. Service providers could optimize their operations (e.g. their supply chains, or allocation of assets) to align with their own interests and with those of rather stereotypical users (and typical patterns of use). To illustrate, in public transport, almost all buses still follow fixed routes with fixed frequencies, based on typical travel patterns, and customers are expected to adjust their travel plans accordingly. In terms of operations, once those routes are set (typically for multiple years), a transport operator only has to optimize the allocation of vehicles and drivers to minimize costs.
The shift towards more flexible forms of public transportation (e.g. on-demand bus services, ride-hailing and, ultimately, mobility-as-a-service) means that operators will have to adapt to demand in real time; they’ll have to adjust to the “human-in-the-loop”. This presents practical challenges in the allocation of assets, but is also likely to increase costs, as fluctuating demand can easily lead, alternatingly, to idle and excess capacity. To reduce uncertainty and spread demand more evenly, service providers seek to influence users by providing them with the right kinds of positive (“carrots”) or negative (“sticks”) incentives and nudges. Uber and other ride-hailing platforms use dynamic tariffs to encourage users to travel off-peak and drivers to work during peak hours. Energy providers are experimenting with dynamic tariffs and reward systems to prompt households to change their patterns of energy consumption; with increasing shares of intermittent renewables, balancing energy supply and demand will be even more challenging than it is today. Such monetary incentives in themselves are not new (e.g. off-peak rates in public transportation and energy are quite common), but today these incentives are becoming much more fine-grained (varying between use cases and individual users), dynamic (adjusted in real time) and geared towards more specific outcomes (e.g. rerouting traffic).
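The core logic of such a dynamic tariff can be made concrete with a minimal sketch. This is an illustrative toy model, not the actual pricing algorithm of Uber or any energy provider; the function name, the demand/supply ratio heuristic and the floor/cap parameters are all our own assumptions.

```python
# Toy model of a dynamic tariff: the price multiplier tracks the
# demand/supply ratio, clamped between a floor (off-peak "carrot",
# i.e. a discount) and a cap (peak "stick", i.e. a surcharge).

def dynamic_multiplier(demand: float, supply: float,
                       floor: float = 0.8, cap: float = 2.5) -> float:
    """Return a price multiplier based on the demand/supply ratio."""
    if supply <= 0:
        return cap  # no capacity available: charge the maximum
    ratio = demand / supply
    return max(floor, min(cap, ratio))

base_fare = 10.0
print(dynamic_multiplier(50, 100) * base_fare)   # off-peak: 8.0
print(dynamic_multiplier(180, 100) * base_fare)  # peak: 18.0
```

The cap matters in practice: an unclamped multiplier is exactly how surge pricing “gets out of hand”, producing the extreme fares that caused Uber’s public backlash.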
There are, however, numerous challenges in designing such incentives. First of all, different stakeholders have different interests and those of the service provider (e.g. shaving the peaks of energy consumption) are seldom aligned with those of users or society at large (e.g. freedom to use energy anytime). Second, incentives can have adverse effects (e.g. the London congestion charge drove up home prices in the city center) or get out of hand (e.g. Uber’s surge pricing). Third, many incentives only work temporarily (e.g. until users simply accept higher costs for their default behavior) and, over time, when a “carrot” that starts out as a bonus for “good” behavior is withheld, it ends up being regarded as punishment for “bad” behavior (e.g. not receiving a discount is experienced as a penalty). Fourth, people often find ways to “hack” incentives and render them useless or use them to their own benefit (e.g. Uber drivers found ways to drive up surge pricing).
Most of these problems boil down to a lack of understanding of human behavior and, more specifically, a lack of information about users’ motivations. Service providers designing incentives tend to have little information about their customers other than superficial usage statistics. For example, they may know when people use energy or drive on certain roads, but not why they do so or what it is worth to them. Because of this, the design of incentives is based on guesswork, modelling exercises and experiments. Testing the workings of incentives in real use cases through A/B testing (i.e. trial-and-error) is tricky, as it can easily lead to the kind of backlash that Uber experienced with its surge pricing.
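To show what such an A/B test boils down to statistically, here is a minimal sketch using a standard two-proportion z-test. The scenario and all numbers are hypothetical: a control group and a group offered an off-peak discount, compared on the share of users who actually shifted their trip.

```python
import math

def two_proportion_z(shifted_a: int, n_a: int,
                     shifted_b: int, n_b: int) -> float:
    """z-statistic for the difference between two proportions
    (e.g. the share of users who moved their trip off-peak)."""
    p_a, p_b = shifted_a / n_a, shifted_b / n_b
    p_pool = (shifted_a + shifted_b) / (n_a + n_b)  # pooled proportion
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical numbers: 120 of 1,000 control users travelled off-peak,
# versus 180 of 1,000 users who were offered a discount.
z = two_proportion_z(120, 1000, 180, 1000)
print(round(z, 2))  # |z| > 1.96 → significant at the 5% level
```

The statistics are the easy part; the hard part, as noted above, is that running such experiments on real customers risks exactly the backlash the providers are trying to avoid.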
Ultimately, more detailed data and a deeper understanding of consumer practices (i.e. why people do what they do) would help service providers to design better-functioning incentives; some trips can easily be postponed or rerouted, others cannot, and the same principle applies to other practices. The problem, of course, is that users are seldom willing to provide such information and that this data could easily be used against them. Some kind of “meta-incentive” would thus be necessary to convince them to share such data. It should be clear to users that their data will not be used against them, but rather to help them display “better” behavior in cooperation with their service providers, and, most of all, that cost savings on the side of operators are shared with users in exchange for data and reasonable changes in their behavior.

Implications
  • Consumers are already “spoiled” by cheap on-demand services, often offered by “digital disruptors” that are not profitable (yet). It is questionable whether these can ever become profitable without offering their users the kind of positive or negative incentives that would enable them to better match demand with supply (without all the additional costs that come with the level of flexibility they offer today).
  • Yet, these service providers will need to be open about the need for, and workings of, incentive schemes if users are to share data and comply with such schemes.
  • A recent review paper presents “human-in-the-loop” as an engineering problem; how to deal with the unpredictable and sometimes irrational behavior of humans. Something similar is happening in the field of autonomous vehicles; the biggest challenge for these vehicles is dealing with other, human, road users. As such, new technology is increasingly forcing us to consider ourselves as “problems to be solved”.