Artificial Intelligence for monitoring behavior is making its way into the workplace and the classroom. While employees and students have long been monitored (e.g. with timesheets and attendance lists), the minute-to-minute observations of AI technology are heavily contested for their questionable impact on people’s privacy and psychological wellbeing. Is this simply a more extensive way of monitoring behavior, or can it be characterized as a fundamental game changer?

Our observations

  • An increasing number of firms are using algorithms to scrutinize staff behavior minute-to-minute by collecting data on their working activities (e.g. emails, file-editing). The Isaak system, for example, enables employers to monitor their employees in real time and rank their attributes. Amazon tracks the productivity of its warehouse workers and uses AI to manage and monitor them automatically, generating warnings or even terminations without any input from supervisors.
  • Schools in the U.S. are turning to software companies such as Gaggle or Securly to surface potentially worrisome communications by students. These Safety Management Platforms (SMPs) use natural-language AI to scan school computers and flag “bad” phrases (e.g. references to bullying or self-harm); a simplified sketch of how such phrase flagging might work follows this list. Schools are also interested in AI video surveillance that recognizes when students are getting into trouble (e.g. a fight).
  • In China, schools have installed facial recognition technology in classrooms to monitor how attentive students are. The system scans the classroom every 30 seconds, recording students’ facial expressions and categorizing them as happy, angry, fearful, confused or upset. It also records student actions such as writing, reading, raising a hand and sleeping at a desk. Students who stay focused are marked an A, while students whose attention wanders are marked a B.
  • In her book Uberland: How Algorithms are Rewriting the Rules of Work, Alex Rosenblat finds that, while Uber claims its drivers are entrepreneurs and classifies them as independent contractors, they are in fact managed by a boss, albeit an algorithmic one, and that algorithms are basically just rules encoded in software.
  • Unions warn that systems such as Isaak may only increase pressure on workers and can cause significant distress. In a similar vein, human rights groups fear that such “big data” monitoring systems may violate privacy and be misused to track the activities of vulnerable ethnic minorities deemed “politically threatening”.
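
To illustrate how simple such flagging can be under the hood, the sketch below shows a naive keyword scanner of the kind an SMP might be built around. It is a hypothetical, bare-bones illustration in Python, not any vendor’s actual method; real platforms use far larger lexicons and trained language models, and every phrase and name here is invented:

```python
import re

# Invented watchlist for illustration; real platforms rely on far larger
# lexicons plus trained language models, not a hard-coded dictionary.
FLAGGED_PHRASES = {
    "hurt myself": "self-harm",
    "you're worthless": "bullying",
    "bring a weapon": "violence",
}

def scan_message(text: str) -> list[tuple[str, str]]:
    """Return (phrase, category) pairs found in a student's message."""
    lowered = text.lower()
    hits = []
    for phrase, category in FLAGGED_PHRASES.items():
        # Whole-phrase match, ignoring case and surrounding punctuation.
        if re.search(r"\b" + re.escape(phrase) + r"\b", lowered):
            hits.append((phrase, category))
    return hits

# A hit would typically be routed to a human reviewer or school official.
print(scan_message("You're worthless and everyone knows it."))
# -> [("you're worthless", "bullying")]
```

Even this toy version exposes the core problem: a phrase match carries no context, so a quoted song lyric or a joke between friends is flagged exactly like a genuine threat.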

Connecting the dots

Ever since Napoleon, our society has grown more and more accustomed to the collection of personal data by governments and, later on, corporations and educational institutions, albeit within the confines of privacy laws. The first and simplest kind of employee monitoring appeared in the late 19th century with the timesheet, invented in 1888 by a jeweler whose time-recording company later merged with other time equipment companies into what would become IBM. Analyses of the psychological effects of systems like timesheets or productivity tracking boards have shown that, while workers sometimes experience more pressure and stress, a workplace atmosphere based on objective verification is often also perceived as fairer. In general, these monitoring methods have been accepted by employees and students. Today, the rise of highly advanced AI monitoring technologies, such as facial recognition software in schools, is causing controversy because of its chilling effects on people’s freedom of expression, creativity, trust and, ultimately, productivity.
One of the main differences with traditional methods is that AI technology can collect data at a continuous, minute-to-minute rate. In the Isaak system, for example, input on a computer is categorized as working, while five minutes without input when logged in is interpreted as not working (a simplified sketch of such a rule follows this paragraph). Furthermore, these systems often lack any human interaction, because the algorithms themselves determine whether someone is following the rules and requirements. This has been exemplified by Ibrahim Diallo, who was fired from his job by a machine over a broken key card, or by transgender Uber drivers who were locked out of the app because its security checks failed to recognize their changing appearance. In such cases, important decisions are made automatically by the firm’s algorithm without any human involvement. What is more, whereas employers used to collect basic data such as attendance or sales figures per worker, they can now look into every minor activity or highly personal information (e.g. biometric data) and even monitor a worker’s emotions. Such data is not collected solely to monitor workers, but also to influence them through, for example, gamification. Finally, the collected data is so detailed that it can be of value to parties beyond the workplace or classroom, which leaves those who are monitored, but have no control over their personal data, in a vulnerable position.
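
The reductiveness of such a rule is easy to see once it is written out. The following is a hypothetical reconstruction in Python, not Isaak’s actual code; only the five-minute threshold comes from the description above, and all names and structure are assumed:

```python
from datetime import datetime, timedelta

# Threshold taken from the rule described above; everything else is invented.
IDLE_THRESHOLD = timedelta(minutes=5)

def classify_activity(logged_in: bool, last_input: datetime, now: datetime) -> str:
    """Toy reconstruction of a minute-to-minute monitoring rule.

    Any keyboard or mouse input counts as "working"; being logged in with
    no input for five minutes counts as "not working".
    """
    if not logged_in:
        return "offline"
    if now - last_input >= IDLE_THRESHOLD:
        return "not working"
    return "working"

now = datetime(2020, 1, 1, 9, 30)
print(classify_activity(True, now - timedelta(minutes=6), now))  # -> not working
```

Everything the rule cannot observe, such as thinking, sketching on paper or a hallway conversation, silently collapses into “not working”.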
Monitoring behavior with AI technology can still be characterized as a more advanced, but not fundamentally different, method for governments, companies or educational institutions to collect personal data for organizational purposes, as they have done for ages. However, the position of those being monitored might undergo a more fundamental change. This is caused not only by the decline of human interference and the collection of very detailed personal data, but also by automation bias: the tendency of people (in this case employers, teachers or officials) to trust the recommendations of algorithms over human testimony (e.g. an employee who claims he was not skipping work but taking time to find inspiration for a work-related matter). An employee’s testimony might pale in comparison to conclusions drawn by an algorithm, with its superior calculating power and its apparent lack of human subjectivity. A lack of (verbal) skills, context or time to evaluate whether the computed conclusion is the right “verdict” can be a problem for employee and employer alike. Those being monitored might feel powerless against the reduction of their actual activities to whatever an algorithm can measure, and employers might be unable to justify a conclusion about their employees that differs from the one the algorithm has given.

Implications

  • Although we have grown accustomed to organizations collecting highly detailed personal data (e.g. Google, Facebook), our tolerance for such practices is likely to reach a limit at some point. The unequal relationship, in terms of dependency, between employee and employer or student and teacher might be what triggers that limit. Contrary to the (often invisible) consequences of Google collecting our data, the impact of these systems is very concrete and the stakes are high: we might lose our job or fail to graduate because of misinterpreted data. What can be measured may therefore automatically be valued more highly than activities that are hard to capture in measurable standards.
  • The degree of tolerance for such monitoring systems differs greatly per region. China’s social credit system has expanded the idea of monitoring people’s behavior to many aspects of life, judging citizens’ behavior and trustworthiness, whereas the West is still hesitant about such developments. However, the Royal Society of Arts predicts that within the next 15 years, life insurance premiums will be based on data from wearable monitors and workers in retail and hospitality will be tracked for time spent inactive. As gig economy work spreads, only people whose performance and empathy metrics pass a high threshold will qualify for the best jobs, while others will have access only to the most menial tasks. China is already barring people with low social credit from the best jobs and hotels, and their children from the best schools.