Making sense of automation tax

Automation plays a substantial role in today’s economy and its impact will only increase in the future. One risk associated with this process is that automation could outpace societies’ capacity to deal with large-scale job displacement. If so, mass unemployment and structural public budget deficits would result. An oft-heard solution entails slowing down the pace of automation through changes in our tax regimes.

Our observations

  • The most automated country in the world, South Korea, was the first to implement indirect taxation on automation in 2018. The Korean government reduced incentives (i.e. tax benefits) for automation due to fears of unemployment and losing tax revenue to automation.
  • In 2017, the European Parliament rejected a motion to implement a robot tax and economist Lawrence Summers has criticized Bill Gates for suggesting a similar strategy. Nonetheless, the discourse on AI taxation is evolving. At the EmTech Next 2019 conference, pro-tax scholar Ryan Abbott and anti-tax journalist Ryan Avent led a debate on taxing AI. Similarly, the DG of ECFIN recently discussed potential strategies to regulate the impact of automation on the labor market by taxation in Europe.
  • Former U.S. presidential candidate Bill de Blasio proposed introducing an automation tax in the U.S. during his campaign. His aim was to target only the investments that increase unemployment by creating a new federal agency that would assess the automation process and its effects on workers.
  • Labor taxes are a crucial part of government revenue compared to capital taxes. In the UK, income taxes make up 25% of the government’s tax revenue; in the U.S., payroll taxes account for more than one third of federal tax revenue and, in combination with income taxes, they amount to more than 80% of federal revenue.
  • The Ex’tax Project is a plan to shift the burden of taxation from labor to resources. Even though its focus differs from that of AI taxes, both proposals signal increasing attention on possible tax reforms in the coming years.


Connecting the dots

As French intellectual André Gorz stated in 1988, “The abolition of work is a process already underway … The manner in which [it] is to be managed … constitutes the central political issue of the coming decades.” In fact, automation and job displacement have been commonplace since the start of the Industrial Revolution and, so far, mass unemployment has never been the result (although one might question whether displaced workers are actually better off in their new jobs). Yet, this time around, with AI and robotization threatening an unprecedented number of jobs, concerns foster a debate on possible actions to shield our economies from the dangers of automation. Among the proposed solutions, the introduction of automation taxes stands out.
Currently, most countries incentivize automation via depreciations and tax deductions on capital investments. The idea behind such incentives is that governments should stimulate innovation and productivity to support GDP growth. AI-tax opponents contend that automation increases productivity and that taxing innovation will slow down economic growth. Interestingly, one of the main arguments for taxing automation stems from this exact concern for ensuring a thriving economy. Professor Daron Acemoglu argues that automation does not necessarily favor efficient investments in productivity. Indeed, the structure of capital taxation favors automation even when it is not the most efficient tool to achieve growth. Implementing a tax on automation would make investments in AI more efficient, by eliminating their distorted comparative advantage with respect to labor investments.
Another economic justification for implementing automation taxes is the imbalanced composition of government tax revenues. AI-tax enthusiasts worry that automation will create massive unemployment and that payroll and employment taxes will be lost as sources of government revenue. To make matters worse, public expenditures for unemployment schemes, social security and re-training of displaced workers will grow accordingly. On the other side, AI-tax opponents claim that taxing automation would only lead to increased outsourcing of labor to developing countries and hence to decreasing job security and ultimately a further reduction of tax revenue and increase of aforementioned expenditures. According to them, we should focus instead on alternative measures such as wealth taxation, in-firm re-training of workers and new forms of labor compensation (e.g. universal basic income and minimum wages).
Most of today’s proposals to tax automation are quite moderate and seek a balance between protecting employment and stimulating economic growth. These are mostly based on indirect taxation, i.e. on reductions of current incentives for investments in automation. For instance, Professors Abbott and Bogenschneider suggest that firms with high levels of worker automation should have less tax depreciation on capital investments. Similarly, a recent study by the University of Oxford and the Singapore Management University proposes cutting the depreciations on investments depending on their effect on employment. They contend that some automation processes substitute employment while others complement it and that only the former should be taxed.
Other proposals are more radical. For example, in The Software Society, William Meisel asserts that businesses that replace human labor with automation should continue to pay payroll taxes for displaced workers even after those workers are gone. Critics of such taxes point to the fact that it is hard and extremely costly to determine the targets of automation taxes and that we should thus find simpler solutions. Besides concerns over the practicalities of automation taxes, there are disagreements on the rationales for implementing them. Nonetheless, policy proposals on how to regulate the effects of automation share common goals and point in a common direction: more focus on redistribution and job security, and new sources of government revenue.


  • The debate on automation taxes might help tackle issues of economic inequality. Indeed, both a tax on capital investments and a tax on wealth would help reduce the rising economic inequality. On the one hand, taxing capital would make investments in labor more attractive, improving the comparative position of workers. On the other hand, taxing wealth would directly reduce the accumulation of capital at the top deciles, smoothing out inequality.
  • Taxing automation would not be enough to alleviate the damages of unemployment. Even though it might slow the pace of displacement, its positive impact depends on efficient use of the tax revenue governments would collect from it. Indeed, automation taxes should be complemented by other public policies to ensure that the revenues are used to improve the conditions of the unemployed and at-risk workers.
  • Firms could be held responsible for job displacement. They may be asked to provide in-firm re-training for workers at risk of displacement. This has some potential benefits. Firms could train their workers specifically in the skills they need. Additionally, this would (re-)establish a long-term relationship between workers and employers and provide more stability for workers. This is already happening in big firms and smaller firms might also come to apply this strategy via state incentives.

Could we live without a job?

Automation has the potential to completely change the way we conceive of work, and writers like Aaron Bastani already speak of a post-work society. To envision what this change would mean for us, we need to first understand what role work played and plays in our Western society.

Our observations

  • We’ve often written about the potential of AI and robots and how they might outperform human workers. Besides the benefits that certainly come with these technologies (easier data analysis, less heavy manual work, increasing productivity), policy analysts fear they will displace a relevant number of workers. In addition, recent interactions between AI and cognitive computing suggest that automation will not only influence the more traditional targets (retail and transport sectors) but might also substitute human workers in more complex and creative jobs.
  • There is a growing body of literature on the idea of a post-work or minimum-work society brought about by massive automation. Even though data does not suggest that we will soon experience a radical replacement of working humans by machines, considering this extreme scenario might be useful to start questioning the value we attach to our working life.
  • Interest in the changing nature of the relationship between humans and their jobs is found in art and literature as well. The art collective Lou Cantor has opened an exhibition on the post-automation future and its implications for psychology, sociality and work replacement. Moreover, 20th-century novelist J.G. Ballard explored the psychological effects of a life without jobs in Having a Wonderful Time (1978). While some of the characters in his novel embrace their free time to undertake (subjectively) meaningful activities, others do not seem to be able to find meaning in their lives beyond work.

Connecting the dots

We live in an extremely work-centered society. Jobs are not only the means with which we meet our most basic material needs; they are a way to establish status and identity. We spend more than one third of our life working and this is what we prepare for during most of our schooling years. When we are kids, we are asked: What do you want to be when you grow up? And often (if not always), the question and the answer refer to the job we believe we will do in the future. When we decide where and what to study, we look at employability rates to understand which university will guarantee us a profitable and satisfying career. “Career days” and recruitment events at universities are becoming more and more common and a sign of prestige for the organizing institutions. So even our childhood and education system are steeped in the work-centered lifestyle of our society. In addition, there seems to be a negative stereotype of those who did not manage to have a big career: the hedonists, the lazy and the demotivated.
But, has this always been the case? Historically speaking, the positive normative value we attach to work is a relatively recent development in our Western society. In ancient history, the Hebrews and the Greeks believed that work (ponos, later used in Latin as poena, i.e. sorrow) was a curse inflicted on humans by divinities. In fact, manual labor was imposed on slaves while higher social classes dedicated their time to art, warfare, philosophy and big commerce. Especially for the Greeks, wisdom and prestige were determined by the amount and quality of leisure time one could enjoy, not by one’s career achievements. With the advent of Protestantism in the 16th century, the cultural perception of physical work changed. In the Protestant ethic, hard work had a major role in giving meaning to one’s life. Indeed, Protestantism offered a religious rationale to support work as a value for everyone, independently of social classes. Later on, the philosophy of hard work spread beyond religious justifications and became part of the secularized culture of Western societies. In the late 18th century, work was not only the basic means to survival, it became an ideal citizens could strive for if they wanted a better life, a means to freedom from oppression.

In fact, central figures of the American Revolution such as Benjamin Franklin praised the liberating value of work in their writings. The industrialization of the 19th century and its continuation in the 20th century changed work ethics again. Middle and lower classes started to lose control over their jobs. Whereas before, most of the businesses had been family and home-based, technological developments radically changed the work environment: small businesses evolved into huge industrial factories owned by capital owners. In this new setting, both psychological and economic rewards for hard work were not assured anymore. The mechanization and anonymity of tasks reduced workers to appendages of machines who were not able to enjoy the benefits of their hard work due to low salaries and prolonged working hours. While some scholars worried about the disruptive effects of an obsession with work due to industrialization, Keynes was more optimistic and predicted that new technologies would reduce working hours to 15 per week and that we would be able to enjoy our free time in prosperity, assigning more value to culture, knowledge and sociality. However, the economist was not right. We still live in a work-centered society where the wealthy can strive for more intellectual careers and the poor are dependent on multiple precarious jobs to survive. The digital revolution we are experiencing has already changed our relationship to work and might change it further. One’s expectations may, nevertheless, differ. Those who fear the advent of automation and digital technologies highlight that they might increase unemployment, especially for those already struggling. The enthusiasts on the other side contend that we will finally see Keynes’ promises fulfilled: reduced working hours and more leisure time for other meaningful activities.


  • If we want automation to have a positive impact on our relationship to work, we need to ensure that the gains from enhanced productivity will be shared so that everyone can work less and still have the means to survive. The most common, yet controversial, proposal is to introduce a universal basic income or substantial benefit schemes in our welfare systems.
  • A society with no (or little) work might be a challenge not only from an economic perspective, but from a psychological one too. Since work has come to have meaning in itself and we now associate it with self-worth, it might not be easy to adapt to a life without work. For example, according to anthropologist David Graeber, our obsession with work, i.e. “workism”, led us to create “bullshit” jobs that in turn have dragged us into depression and a burnout epidemic. Thus, the transition to a post-work society should be sensitive to our psychological need for self-worth.
  • Considering that jobs have come to define one’s identity, people might experience an existential crisis if there isn’t as much work to be done, or they might simply not be able to spend their free time in a satisfying way. Indeed, we might need to think about ensuring the possibility to engage in other meaningful projects beyond jobs, such as volunteering, sports, art, social activities and cultural circles. Indeed, this is one of the promises of post-work enthusiasts.

Reinventing urban waterways

What happened?

Last week, two entrepreneurs announced they are going to build a new distribution hub on the shore of one of Amsterdam’s main waterways. Goods (e.g. parcels) from different businesses will be collected at the hub and delivered to final customers by means of electric vessels. With varying degrees of success, similar projects in European cities have also sought to reinvent waterborne urban distribution for the distribution of parcels, construction material, restaurant supplies and for garbage collection. The common rationale is that (electric) waterborne transport can be (part of) a solution to congestion and urban air pollution. Yet, the question is why this would succeed today, when we abandoned these practices decades ago.

What does this mean?

Many European cities relied on waterborne transportation until (most) ships gave way to faster, more cost-efficient trucks. Today, ships may be “clean”, but they are still slow and labor-intensive and they can only reach a limited number of locations. Moreover, urban quays have found new uses, e.g. providing space for houseboats and car parking, and are hardly available for unloading. For these reasons, it seems likely that waterborne distribution, in the near future, will mostly be limited to customers who are willing to pay a premium for green delivery options and those for whom on-time delivery is crucial and waterways offer a more reliable option than clogged-up urban roads.

What’s next?

In the longer term, autonomous vessels could play a significant role by reducing labor costs (and ships may be even easier to automate than cars). The same is true for small (automated) vehicles that could extend the “reach” of ships beyond the immediacy of a quay. Along with technological innovation, additional institutional innovation will be needed as well. Part of this may have to come from municipalities (e.g. discouraging (Diesel) truck deliveries and re-opening quays for deliveries) or from distributors and customers (e.g. shifting to bundled and less frequent deliveries).

Video: GenZ: How digital technology forms the next generation. With prof. Eveline Crone, Wouter van Noort

GenZ: How do emojis, Fortnite and the iPhone form the next generation? With professor Eveline Crone (Universiteit Leiden), emoji expert Lilian Stolk, tech journalist Wouter van Noort (NRC), researcher Alexander van Wijnen (FreedomLab Thinktank) and Emilie Notermans, sharing her experience as a member of Generation Z.

Last January, 10 million youngsters were dancing in their living rooms at a party that took place in the virtual world: a live concert by DJ Marshmello, given in Fortnite. If you have never heard of DJ Marshmello, don’t worry; he is especially popular with young people, who in turn probably don’t know who Gloria Estefan is. However, anyone who thinks Fortnite is just another video game, or a new type of disco, is likely to miss that a whole new world is emerging in which “today’s youth” gains most of its formative experiences. A world with new possibilities and rules: the virtual world.

In this video, we discuss the influence of this hyper-technological environment on growing generations. Will this generation gap be of the familiar, clichéd kind, or will there be a deeper divide between young people experiencing their formative phase in the virtual world and older generations who experienced theirs mostly in the physical world?

In Dutch with English subtitles. Credits: Lourens Aalders, Eva Wubbe (FreedomLab), Jessica van der Schalk (FreedomLab Thinktank).



Who are the African youth?

In a few decades, Africa will be home to the largest number of young people globally, set to be almost a billion under-18s by 2050. The current and future youth employment challenge provides an increasingly important focus for national and international policies and interventions. However, conceptions of African youth are marked by contradictions, as they are sometimes portrayed as the nation’s future, but mostly referred to as a threat that could destabilize the continent. What can we say about the African youth?  

Our observations


  • Around 60% of Africa’s population is currently under 25 years old, and the continent’s youth will account for twice Europe’s total population in 2100. The median age on the African continent is half the European one. In 2050, the continent will be home to the largest number of young people, amounting to nearly twice the young population of South Asia and Southeast Asia, East Asia, and Oceania.
  • Every year, 10-12 million African youth enter the job market and only about 3 million of them find a job. By 2035, Africa will contribute more people to the workforce each year than the rest of the world combined. By 2050, the continent will be home to 1.25 billion people of working age. In order to absorb these new entrants, Africa will need to create more than 18 million new jobs each year.
  • However, demographic and economic trends are not in tune. The continent’s economic growth of the last decade has been mainly jobless. The demographic dividend, the boost to economic productivity that occurs when the number of people in the workforce grows relative to the number of dependents, is not being realized, as labor supply outstrips labor demand in many African economies. As a consequence, many young Africans are formally unemployed. This vast majority of youth entering the labor market is thus often referred to as a problematic youth bulge: the situation of a country reducing infant mortality while still having a high fertility rate, leading to a large share of the population comprising children and young adults.
  • Creating jobs for Africa’s youth is one of many national plans to avoid wasting the demographic dividend. For example, many countries have appointed a minister of youth and have created national youth strategies, such as the Kenyan National Youth Policy. Also, providing decent work for all is an internationally agreed upon Sustainable Development Goal and part of the Agenda 2063 by the African Union. And in the last few years, youth has also become a trending topic in foreign policies of Western countries. The Dutch ministry of foreign affairs is prioritizing African youth in its development strategy and recently appointed a Youth Envoy, who will focus on youth unemployment in multiple African countries.
  • However, the youth are not a homogeneous group, and failing to account for differences in gender, class or religion, family heritage, communities and broader social relations has implications for the effectiveness of youth policies. Critics, such as Marc Sommers in his book The Outcast Majority (2015), have drawn attention to the problems that can arise from using a single category of youth for a heterogeneous group and from portraying the youth bulge as a risk.

Connecting the dots

An important commonality shared by African countries across the continent is the number of young people. The ten youngest countries in the world are all in Africa. But who are the African youth? They have become a project for governments, political leaders and NGOs; many interests are at stake and there is a great variety of projections in which the youth take different positions in the social order. As a result, African youth have lost their naivety. Moreover, referring to young Africans in terms of the youth bulge does not lead to a neutral demographic discourse. The concept is often associated with population explosions or a ticking time bomb. Also, it is often stated that the youth bulge leads to instability, disregarding the other factors that result in instability. On the other hand, African youth are also often framed as an untapped resource for the world’s fastest-growing economies or as Africa’s greatest economic asset. Among the leading figures giving the current African generations a voice and trying to counter these simplified narratives about the African people is Nigerian writer Chimamanda Ngozi Adichie. She cautions against “the danger of a single story” that categorizes African people. In particular, she warns of the single story of Africa as a place of “negatives”, popularized by Western literature, a critique that can also extend to framing youth solely as an underutilized asset.

What helps is involving the perceptions of the African youth themselves. A study shows that East Africans identify slightly more as being young than as being of a certain nationality. The survey among 18 to 35-year-olds in Kenya, Uganda, Tanzania and Rwanda, conducted in 2014 and 2015, showed that about 40% of the respondents saw themselves first and foremost as young people, while 34% saw themselves first as citizens of their countries. Only 11% identified themselves by their faith first and 6% identified as members of their family first.

Only 3.5% reported their tribe or ethnicity as the first dimension of their identity. Although there is not much evidence of the perceptions of the youth across countries, these results reflect the force of modernization that is affecting communities across the continent, especially as young people are more mobile and more interconnected than ever and regard the internet as a force for good. Indeed, across the continent, they are rapidly moving to cities and modernizing. They are the active agents of the continent’s radical transformation from a mainly rural to a predominantly urban region, leading to a concentration of youths in urban centers. Although leaving some traditions behind and aspiring to a modern life, they are still tied to the traditions and worldview of their rural upbringing. For instance, although there is a growing number of young, college-educated Africans who are modernizing farming and call themselves agripreneurs, integrating modern methods into the traditional profession, traditional farming is often associated with poverty and only 5% of Rwandan youth are interested in farming or agriculture as a full-time job.

In addressing the challenge of youth unemployment, it would make sense to invest in modern industries and sectors that are future-proof in order to appeal to the youth. In fact, as Brookings argues, it is in these non-smokestack industries that the biggest potential for meaningful work lies: tourism, ICT services, agribusiness and transport. These are among Africa’s most dynamic sectors and, like manufacturing, they benefit from productivity growth, scale and agglomeration economies, while outpacing the growth of manufacturing in many African countries. Between 1998 and 2015, Africa’s services exports grew more than six times faster than merchandise exports. Between 2002 and 2015, exports of tradable services and agri-business increased as a share of non-mineral exports by an average of 58%. In order to offer the African youth, no matter how diverse this group is, a real chance, African countries will have to focus on developing a variety of future-proof sectors.


  •  As this young Kenyan writer argues, African youth are becoming aware that they are being categorized and politically problematized, leading them to push against narratives that frame them as a risky bulge.
  •  Mo Ibrahim considers the new African continental free trade agreement a way to deal with the youth unemployment challenge. As we noted earlier, the AfCFTA aims to create a single continental market for goods and services as well as a customs union with free movement of capital and persons.
  • 85.8% of employment in Africa is considered informal. The informal sector is seen as a main job creator. The WEF writes that companies across the continent are pioneering business models that bridge the formal and informal sectors. The penetration of mobile technology has mobilized large numbers of informal actors in their supply chains or service delivery. These gig economy companies function as bridge companies that are pioneering new ways of injecting efficiency and higher productivity into traditional informal markets. According to the WEF, these companies are defining the future of employment in African countries.

Don Ihde and generational responses to digital technology

What has Don Ihde said?

Don Ihde is one of today’s most influential philosophers of technology. As a (post-)phenomenologist, Ihde seeks to understand the ways in which technology mediates our experience and understanding of the world around us. In his seminal book, Technology and the Lifeworld (1990), he developed a typology of four kinds of human-technology relations: embodiment (we are “one” with technology and experience the world through it, e.g. glasses), hermeneutic (technology provides a specific representation of the world, e.g. an MRI scan), alterity (we relate to the technology itself, e.g. a personal computer) and background (technology disappears in the background of our lifeworld, e.g. an air conditioner). These relations, however, are not static and the use of a technology may result in different relations, depending on the context of use and the actual user. To illustrate, when an air conditioner breaks down, we have to relate to the technology itself, and an experienced doctor will relate differently to an MRI scanner (i.e. more like he would to a pair of glasses) than a patient.

What can we learn from this?

Ihde’s typology may help us to understand the different ways in which current generations of users experience and use digital technology. The generations of “digital natives” (i.e. Gen Z and younger millennials) have grown up with (more or less fully developed) smartphones and ubiquitous connectivity and they have developed a relation of embodiment with these technologies (i.e. they “look through” the technology). For them, digital technology is a self-evident and “natural” component of meaningful practices (e.g. socializing with friends in Fortnite and other virtual environments). Previous generations (Gen X and older millennials), by contrast, experienced these technologies as they emerged, rather underdeveloped and requiring skilled users (e.g. typing MS-DOS commands instead of clicking and swiping). As a result, these older generations have had a look “under the hood” of digital technology and, in that sense, they have a better understanding of the actual technology than “digital natives”. At the same time, they continue to have more of an alterity relation with digital technology (i.e. they “look at” the technology) and, to them, digital technology is much more at odds with ideas about meaningful practices (i.e. only a face-to-face conversation is “real”).

How does this inform us about the future?

Comparable generational shifts took place in past technological revolutions and each generation has set its own requirements for technology. The first automobiles could only be operated by skilled mechanics who developed a relation of alterity with their cars. It was only after the introduction of the self-starter (the so-called “ladies’ aid”) in the 1910s that cars became easy to operate (cf. graphical user interfaces for computers) and that users could develop a relation of embodiment. It was also then that the public started to question the negative side-effects of the automobile, such as traffic fatalities and air pollution. The automotive pioneers, the ones with the “under the hood” understanding of cars, had largely ignored these problems as they were mostly concerned with making the technology work. Similar dynamics are visible in today’s debate over the detrimental side-effects of digital technology. Only now is early enthusiasm giving way to much more critical reflection and are societies considering regulation to mitigate these effects (e.g. the dominance of big tech, fake news or smartphone addiction). Interestingly, many digital pioneers are actively involved in (technological) efforts to “fix the internet”, in an “under the hood”-attempt to recreate the internet they once imagined. Digital natives, it seems, are much more concerned with “fixing” their online practices and hence with the ways in which technology is used and abused (e.g. bullying in Fortnite). The very fact that they have developed a relation of embodiment with their smartphones, implies that they no longer question the technology as such, but instead question the world and the kind of human behavior they experience through their digital interface.

Is EdTech addressing the demands of 21st century education?

Whether it concerns corporate learning or the education of our youngsters, digital technology is expected to transform education in terms of efficiency, affordability, accessibility and effectiveness. At the same time, the Fourth Industrial Revolution and the resulting changing demands of the digital era are forcing policymakers and caretakers to reconsider the focus of education. Knowledge transmission can no longer be the core purpose of education, but the desired focus on higher-order thinking skills is still highly contested from a pedagogical point of view. What does EdTech offer in this context?

Our observations

  • Global education is currently a $6T industry expected to reach $10T by 2030. Global spend on education has increased at a 5.2% CAGR over the past 10 years, with spend per student increasing 4.1%, according to Goldman Sachs. However, increased costs, such as higher teacher wages, do not lead to productivity gains, as is often the case in other sectors when wages rise (the so-called Baumol’s cost disease).
  • The main costs of education are staff costs (e.g. 75% for K-12 stage), building/maintenance and educational materials. Goldman Sachs expects that technology can lower educational costs by 20%-30%, mainly by improving students’ academic achievements through adaptive learning programs, reducing real estate costs and allowing teachers to reach more students through online learning.
  • The U.S. was long considered the leading country in EdTech: its market is worth over $8.38 billion, and in 2015, 60% of the money invested in EdTech worldwide went to the United States. However, China is competing for this title: in 2018, Chinese startups received over 50% of all the capital invested by venture capitalists in EdTech worldwide, more than the total amount invested in EdTech firms from all other countries combined. Beijing is considered the world’s preeminent hub for EdTech, because an unparalleled number of EdTech companies are headquartered in the city.
  • One of the most revolutionary aspects of EdTech is its possibility of offering education that is personalized/adaptive to each student’s needs and capabilities. Knewton, for example, is an EdTech company that develops technology to collect data on students as they complete tasks, recording their learning preferences, strengths and weaknesses. It then creates personalized lessons for each student to maximize learning. Arizona State University claims it experienced a 17% increase in pass rates after implementing this technology into its math courses.
  • As we wrote before, the changing future of work is creating demand for 21st century skills and technical skills. As Andreas Schleicher, head of PISA (OECD), puts it: “The world economy no longer pays for what you know; Google knows everything… The world economy pays for what you can do with what you know.”
  • Technical skills such as coding are relatively straightforward to teach and examine. Makeblock, for example, is an EdTech company that offers programs to children and schools worldwide to teach such skills. Currently, it has over 4,500,000 users worldwide. However, there is still much (pedagogical) debate on how to teach 21st century skills such as creative thinking, critical thinking or problem-solving skills and how to examine them.
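The adaptive learning described above (collecting data on students and picking the next task accordingly) can be illustrated with a minimal sketch. This is not Knewton’s actual algorithm; the class name, the naive mastery estimate and the difficulty-selection rule are all illustrative assumptions.

```python
# Minimal sketch of adaptive item selection, loosely in the spirit of
# platforms like Knewton. All names and thresholds are illustrative
# assumptions, not the actual algorithm of any product.
from collections import defaultdict

class AdaptiveTutor:
    def __init__(self, items):
        # items: mapping of topic -> list of questions, ordered easy -> hard
        self.items = items
        self.correct = defaultdict(int)
        self.attempts = defaultdict(int)

    def record(self, topic, was_correct):
        # Log one completed task, as the text describes data collection.
        self.attempts[topic] += 1
        if was_correct:
            self.correct[topic] += 1

    def mastery(self, topic):
        # Naive mastery estimate: fraction correct so far (0.5 prior).
        a = self.attempts[topic]
        return self.correct[topic] / a if a else 0.5

    def next_question(self, topic):
        # Pick a harder question the higher the estimated mastery.
        pool = self.items[topic]
        idx = min(int(self.mastery(topic) * len(pool)), len(pool) - 1)
        return pool[idx]

tutor = AdaptiveTutor({"math": ["2+2?", "12*7?", "Solve x^2-5x+6=0"]})
tutor.record("math", True)
tutor.record("math", True)
print(tutor.next_question("math"))  # high mastery -> hardest item
```

Real systems replace the mastery estimate with statistical learner models, but the loop is the same: record performance, update the model, select the next item.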

Connecting the dots

The implementation of digital technology in education (EdTech) might evoke a lively picture of the future of learning, with children or employees being educated with the help of VR-sets, augmented reality programs, YouTube tutorials, personalized competence-based learning or learning through computer games. When students are introduced to such new learning methods, they will evidently become more tech-savvy than when they are taught in a more traditional classroom setting with blackboards and so-called talking heads for teachers. In this sense, the implementation of technology in education will better prepare students for a world in which technology is omnipresent. EdTech furthermore offers promising programs to master some desired technical skills such as coding or engineering, which inevitably demand the use of digital technology in education. Finally, the solutions EdTech promises are very compelling, such as adaptive learning that enables students to achieve their maximum potential, efficient use of teachers, reduction of real estate costs, improved accessibility, etc. EdTech therefore has the potential to bring about a revolution in education delivery (e.g. who educates and where) and learning methods (e.g. how something is taught).
This hopeful picture translates into the growth expectations of, for example, spend on VR/AR (from $1.8B in 2018 to $12.6B in 2025) or AI (from $0.8B in 2018 to $6.1B in 2025), mostly in non-accredited and corporate sectors. Yet EdTech is still in a trial-and-error phase. Moreover, the successes or failures of EdTech applications are often perceived differently by different parties or studies. Knewton’s tech applications, for example, while reported by Arizona State University to be successful, were heavily criticized in an article in Forbes. It might therefore take a while before EdTech applications are granted a definite place in education. Yet, simply because it expands the possibilities of teaching in general (e.g. gamification, any time any place, learning through (VR) experience), EdTech will be of use in teaching delivery and teaching methods one way or another. Currently, the most promising applications are intelligent tutoring systems, automated essay scoring, and early warning systems that detect when students drop out of school, are victims of bullying, or display otherwise worrisome behavior.

The opportunities as well as the hesitations of implementing EdTech in education are a hot topic. However, whether EdTech is addressing the changing demands of the digital era is a rather underexplored subject. It might wrongly be assumed that, since EdTech will make students more tech-savvy, they will automatically meet the demands of the digital era. Being more tech-savvy, however, is only one demand of the digital era. Mastering technical and 21st century skills is viewed as equally important.
A closer look at EdTech and its content shows that it primarily focuses on traditional knowledge subjects such as math, history, language or geography. This is not surprising: these subjects have been taught for centuries, there are proven ways to teach them, their content matter is clear, and there are solid ways to examine them. The urgently needed, newer (21st century) skills such as critical thinking, creative thinking and problem-solving, on the other hand, are relatively new territory. There are no solid methods yet to teach them, nor is there consensus on how they should be examined. Some even go as far as arguing that skills are always context-dependent and are not transferable to other contexts. In other words, it will not be possible to teach these skills separately from knowledge education, and even if students master them in one (knowledge) topic, it is not likely that they can apply them in another. This implies that teaching 21st century skills will be in vain if they are taught separately from the subjects they are needed for (e.g. data analytics, creative business model development, information management, etc.), and that they need to be embedded in (school) subjects that specifically address these (new) disciplines. This seems unfeasible, since there are so many new disciplines that demand 21st century skills and consensus on this matter is still distant.
Because there is no consensus yet on teaching methods and examination, we are still unable to meet some important contemporary educational demands, and for now EdTech cannot prove its value in that area any more than the more traditional education methods can. From this perspective, EdTech is mainly revolutionizing the educational methods and delivery applied in traditional, knowledge-oriented subjects.


  • One of the biggest points of discussion on teaching 21st century skills such as critical thinking, creative thinking or problem-solving is that they cannot be taught without certain foundational knowledge that must be mastered by students. And since current curricula are already highly occupied with knowledge transmission, it seems unfeasible to add substantial new subjects to school curricula. However, if EdTech proves to offer more efficient ways to teach knowledge-oriented subjects such as math, history or geography, this would partly relieve teachers of this task and their time could be spent on practicing 21st century skills instead. This holds especially since the prospective strengths of EdTech do not lie in teaching higher-order thinking skills, which are still considered to be something only humans can teach.

  • Because of the uncertainties concerning the success of EdTech in educating our youngsters, it is understandable that the implementation of its new learning methods in school curricula will take some time. After all, no parent or policymaker is willing to experiment with the future of their children. However, in countries where educational facilities are poor (e.g. in Africa or India), the applications might improve education regardless.

  • Furthermore, since the demands of the digital era are more urgent in the current domain of work, corporate learning might be more willing to make use of the possibilities of EdTech, because of its scalability and lower cost compared to traditional upskilling programs. Moreover, the consequences of a failing EdTech application are less severe for adults than for children, whose cognitive development is not yet complete.

Future leaders get a taste of new economics

What happened?

This year at Harvard, a new introductory course in economics (“Using Big Data to Solve Economic and Social Problems”) has proven very popular among students. The university’s standard introduction to economics, “Principles of Economics”, used to be the institute’s most popular course, but it is losing ground to this new contender. In contrast to the traditional course (and theory-heavy “orthodox” economics in general), the new course is strongly focused on real-life problems and solutions and the use of (big) data. The popularity of this course may be the first step for a new generation of decisionmakers with a different, more inclusive, perspective on the economy, who will start their careers in business and politics in a few years’ time.

What does this mean?

According to Harvard, the content of the course is not new to its curriculum per se, but it is now offered in the first year because students are simply demanding more relevant and meaningful insights than traditional courses were able to offer. The professor teaching this course, Raj Chetty, is a prominent frontrunner of the empirical turn in economics and has published influential studies on the root causes of inequality in American society. While he is not a political activist himself, his work has gained prominence in political debates as it shows, by means of empirics instead of theory, how markets are failing to address societal problems and how state intervention is warranted in ways that traditional economics would disapprove of.

What’s next?

The new mindset that Harvard graduates will bring to board rooms and political debates is likely to impact American society in the coming decades. First, because it ties in with the broader trend of favoring a larger role for government that we have seen reflected in ambitious plans in relation to sustainability (e.g. the Green New Deal) and inequality (e.g. Democratic plans for Medicare-for-all). Second, the rise of the sensor-based economy is likely to generate new insights into the workings of the economy, including clear cases of market failure in which governmental action is justified and possibly inevitable. That is, these graduates will be more open to such data and less so to the orthodox theory that guides many decisionmakers today.

To measure someone is to know someone

Artificial Intelligence for monitoring behavior is making its way into the workplace and the classroom. While employees and students have long been monitored (e.g. with timesheets, attendance lists), the minute-to-minute observations of AI technology are heavily contested for their questionable impact on people’s privacy and psychological wellbeing. Is this simply a more extensive way of monitoring behavior, or can it be characterized as a fundamental game changer?

Our observations

  • An increasing number of firms are using algorithms to scrutinize staff behavior minute-to-minute by collecting data on their working activities (e.g. emails, file-editing). The Isaak system enables employers to monitor their employees in real time and rank their attributes. Amazon, for example, tracks the productivity of its warehouse workers and uses AI to manage and monitor them, and even to generate warnings or terminations, without any input from supervisors.
  • Schools in the U.S. are turning to software companies such as Gaggle or Securly to surface potentially worrisome communications by students. These Safety Management Platforms (SMPs) use natural-language AI to scan school computers and flag “bad” phrases (e.g. indicating bullying or self-harm). Schools are also interested in AI surveillance that recognizes when students are getting into trouble (e.g. a fight).
  • In China, schools have installed facial recognition technology in class to monitor how attentive students are. The classroom is scanned every 30 seconds, and students’ facial expressions are recorded and categorized as happy, angry, fearful, confused or upset. The system also records student actions such as writing, reading, raising a hand, and sleeping at a desk. Students who stay focused are subsequently marked an A, while students whose attention wanders are marked a B.
  • Alex Rosenblat wrote a book called Uberland: How Algorithms Are Rewriting the Rules of Work, in which she finds that, while Uber claims that its drivers are entrepreneurs and classifies them as independent contractors, they are actually managed by a boss – albeit an algorithmic boss – and algorithms are basically just rules encoded in software.
  • Unions warn that systems such as Isaak may only increase pressure on workers and can cause significant distress. In a similar vein, human rights groups fear that such “big data” monitoring systems may violate privacy and are misused to track the activities of vulnerable ethnic minorities who are deemed “politically threatening”.
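The phrase-flagging mechanism the observations attribute to Safety Management Platforms can be sketched in a few lines. Real products use trained NLP models rather than a fixed watchlist; the categories, phrases and function name below are purely illustrative assumptions.

```python
# Minimal sketch of the phrase-flagging idea behind safety-monitoring
# platforms like Gaggle. Real systems use trained natural-language
# models; this toy version just matches a watchlist against text.
import re

# Illustrative watchlist: category -> phrases that trigger a flag.
WATCHLIST = {
    "bullying": ["everyone hates you", "you are worthless"],
    "self-harm": ["hurt myself", "want to disappear"],
}

def flag_message(text):
    """Return (category, phrase) pairs found in the text."""
    hits = []
    lowered = text.lower()
    for category, phrases in WATCHLIST.items():
        for phrase in phrases:
            # Word-boundary match so partial words don't trigger flags.
            if re.search(r"\b" + re.escape(phrase) + r"\b", lowered):
                hits.append((category, phrase))
    return hits

print(flag_message("Nobody likes you, everyone hates you."))
# -> [('bullying', 'everyone hates you')]
```

Even this toy version shows the core weakness critics point to: the flag depends entirely on surface wording, with no understanding of context or intent.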

Connecting the dots

Ever since Napoleon, our society has grown more and more accustomed to the collection of personal data by governments and, later on, corporations and educational institutions, albeit within the confines of privacy laws. The first and simplest kind of employee monitoring occurred in the late 19th century, with the invention of the timesheet in 1888 by a clock jeweler, whose company later merged with other time equipment companies to form IBM. Analyses of the psychological effects of systems like timesheets or productivity track boards have shown that, while workers sometimes experience more pressure and stress, a workplace atmosphere based on objective verification is often also perceived as fairer. In general, these monitoring methods have been accepted by employees and students. Today, the rise of highly advanced AI monitoring technologies, such as facial recognition software in schools, is causing controversy because of their chilling effects on people’s freedom of expression, creativity, trust and, ultimately, productivity.
One of the main differences with traditional methods is that AI technology can collect data at a continuous minute-to-minute rate. In the Isaak system, for example, touching one’s computer is categorized as working, while not touching it for 5 minutes when logged in is interpreted as not working. Furthermore, these systems often lack any human interaction, because the algorithms determine whether someone is following the rules and requirements. This has been exemplified by Ibrahim Diallo, who was fired from his job by a machine because of a broken key card, or transgender Uber drivers who are kicked off the app because of their changing physical appearance. In these types of cases, important decisions are made automatically by the firm’s algorithm without any human involvement. What is more, whereas employers used to collect basic data such as attendance or sales figures per worker, they now have the ability to look into every minor activity or highly personal information (e.g. biometric information) and even monitor a worker’s emotions. Such data is not solely collected to monitor workers, but also to influence them through, for example, gamification. Finally, the collected data is so detailed that it can be of value to parties beyond the workplace or classroom, which creates a vulnerable position for the ones who are monitored but have no control over their personal data.
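The idle-time rule the text attributes to the Isaak system is simple enough to encode directly. A minimal sketch, assuming the 5-minute threshold from the text; the function name, the event representation and the session layout are illustrative assumptions, not the product’s actual implementation.

```python
# Minimal sketch of an Isaak-style rule: activity within the last
# 5 minutes counts as "working"; longer gaps while logged in count
# as "not working". Only the 5-minute threshold comes from the text.
from datetime import datetime, timedelta

IDLE_THRESHOLD = timedelta(minutes=5)

def classify_intervals(events, session_end):
    """Label the gap after each input event as working / not working."""
    labels = []
    for event, nxt in zip(events, events[1:] + [session_end]):
        gap = nxt - event
        status = "working" if gap <= IDLE_THRESHOLD else "not working"
        labels.append((event, status))
    return labels

start = datetime(2019, 6, 1, 9, 0)
events = [start, start + timedelta(minutes=2), start + timedelta(minutes=4)]
session_end = start + timedelta(minutes=15)
for ts, status in classify_intervals(events, session_end):
    print(ts.time(), status)
```

The sketch also makes the objection from the text concrete: an employee thinking through a problem away from the keyboard for 11 minutes is labeled "not working", because the rule only sees the absence of input events.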
Monitoring behavior with AI technology can still simply be characterized as a more advanced, but not fundamentally different, method for governments, companies or educational institutions to collect personal data for organizational purposes, as they have done for ages. However, the position of the ones being monitored might undergo a more fundamental change. This is not only caused by the decline of human interference or the collection of very detailed personal data, but also by automation bias: the tendency of people (in this case employers, teachers or officials) to believe in the validity of recommendations made by algorithms over human testimony (e.g. an employee who claims he was not skipping work but taking time to get inspiration for a work-related matter). An employee’s testimony might pale in comparison to conclusions drawn by an algorithm’s superior calculation power and its lack of human subjectivity. A lack of (verbal) skills, context or time to evaluate whether the computed conclusion comprises the right “verdict” can be a problem for both the employee and the employer. The ones being monitored might feel powerless against the reduction of their actual activities to activities that can be measured by an algorithm, and employers might not be able to justify a different conclusion about their employees than the one given by an algorithm.


  • Although we have grown accustomed to organizations collecting highly detailed personal data (e.g. Google, Facebook), our tolerance for such practices is likely to reach a limit at some point. In this case, the unequal relationship, in terms of dependency, between employee and employer or student and teacher might trigger such an endpoint. Contrary to the (lack of) consequences we experience when Google collects our data, the impact of these systems is very concrete and the stakes are high: we might lose our job or not graduate because of misinterpreted data. That which can be measured will therefore automatically be valued more highly than activities that are hard to capture in measurable standards.
  • The degree of tolerance for such monitoring systems differs greatly per region. China’s social credit system has expanded the idea of monitoring people’s behavior to many aspects of life, judging citizens’ behavior and trustworthiness, whereas the West is still hesitant towards such developments. However, the Royal Society of Arts predicts that in the next 15 years, life insurance premiums will be based on data from wearable monitors and workers in retail and hospitality will be tracked for time spent inactive. As gig economy work spreads, people will qualify for the best jobs only with performance and empathy metrics that pass a high threshold, while others will only have access to the most menial tasks. China is already banning people with low social credit from the best jobs and the best hotels, and their children from the best schools, and so on.

January 2019: The global mental health problem

Mental distress is a global problem. According to the World Health Organization, 450 million people worldwide are living with mental illness today. One in four people will experience a mental or neurological disorder during their lives. The Gallup 2018 Global Emotions Report indicates that the levels of negative feelings people have, such as stress, anger or physical pain, have increased over the past five years. Although not all findings suggest a global rise in mental disorders, there are indications that younger generations are experiencing an increase: especially in the U.S., levels of depression have risen sharply among young individuals. Among mental disorders, depression is the leading cause of disability around the world and represents the fourth leading cause of the global disease burden; the WHO predicts that it will rank second by 2020.

One possible driver of this apparent increase in mental distress might be that the concept of mental health is changing due to increasing awareness of mental health issues and more attention to modern influences on our mental fitness. Modern societies are undergoing transitions that cause mental distress and might further exacerbate mental health issues. The mental effects of three transitions in particular have become more visible in Western societies over the last years.

First, the social structure of societies has rapidly changed over the last decades. Earlier, we wrote about modern loneliness: an increasingly disconnected society in the digitally connected era. Currently, many speak of a loneliness epidemic in the Western world, such as in the U.S. The issue is being taken seriously, to the point of appointing a minister for loneliness, as in the U.K., since studies indicate that loneliness can lead to psychological disorders and that it increases mortality risk.

Second, societies have to deal with rapid demographic changes. Feelings of anxiety and uncertainty in the Western world stem from a considerable change in demographics. One example is the growing fear among white majorities in the U.S. of becoming a minority. The perceived threat of demographic change is making mostly white voters fearful, giving power to nationalism and to politicians responding to that fear. Tensions in Europe are also rising, as transformative demographic changes are perceived as a threat by the majority. The latest refugee crisis has further intensified the majority population’s fear and uncertainty about becoming a minority. This is especially true for Eastern European countries that are facing a depopulation crisis. It is an existential fear that leads to psychological stress.

A third strain on mental stability consists of the increasingly experienced effects of climate change. According to climate change psychology experts, the dire projections of climate change and experiences of extreme weather events, floods, wildfires and droughts lead us to feel anxious and uncertain. Furthermore, new research shows how air pollution has an emotional cost. A study published in Nature found that higher levels of air pollution are associated with a decrease in people’s happiness levels.

Although these transitions contribute to mental health issues, adequate treatment is missing in developed and developing countries. More than 40% of countries have no mental health policy and over 30% have no mental health program. Furthermore, the WHO warns of the mental health gap in developing countries. For instance, in most African countries, less than 1% of health budgets is spent on mental health care. Those living in poverty are more likely to be constantly exposed to severely stressful events, dangerous living conditions, exploitation, and poor health, contributing to their greater vulnerability and increasing the need for mental care.

Meanwhile, the increasing emotional and psychological strain that many people are experiencing is costly for societies. Already in 2010, as research by the WEF and the Harvard School of Public Health suggests, the global economic impact of mental disorders was US$2.5 trillion, with indirect costs (lost productivity, early retirement and so on) outstripping direct costs (diagnosis and treatment) by a ratio of around 2:1. Furthermore, the trend of mental distress is strongly connected to political polarization, volatile electoral results, and social unrest.


The Risk Radar is a monthly research report in which we monitor and qualify the world’s biggest risks to watch. Our updates are based on the estimated likelihood and impact of these risks. This report provides an additional ‘risk reflection’ from a political, social, economic and technological perspective.