We need more downward mobility

Written by Sebastiaan Crul
June 16, 2021

We often gauge inequality by looking at income distributions and complain about the lack of equal opportunity in society. To fight inequality, governments are in search of social measures to ease the path upwards and close the income gap. For example, better education is widely held to be the great equalizer and the best way to move up on the social ladder. Although this is presumably true, it is only half the story. Income distributions only offer a static snapshot of equality.

To fully comprehend inequality, one should also look at the dynamics of the population. Equally important is the population's capacity to keep changing positions over time, including downward mobility for the rich and established. Mathematically, this is captured by the ergodicity of the system; more intuitively, it tells us whether the rich stay rich and lock in their privileges and wealth, or whether anyone has a fair chance of becoming richer but can still end up poorer. From this point of view, Europe may be more unequal than the U.S. because, according to Nassim Taleb, Europe excels in non-ergodic systems. So the often-neglected path to a more equal society is not only to empower the lower classes, but just as much to add some skin in the game for the richest decile of the income distribution, increasing the chances that some of them actually slide down the social ladder.
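The gap between a system's ensemble average and the experience of a typical individual can be illustrated with a minimal simulation. The gamble below is a standard textbook example of non-ergodicity, not drawn from Taleb's own work, and the parameters are hypothetical: each agent's wealth is repeatedly multiplied by 1.5 or 0.6 with equal probability.

```python
import math
import random
import statistics

def simulate(agents=10_000, rounds=50, up=1.5, down=0.6, seed=42):
    """Each agent repeatedly bets: wealth x1.5 or x0.6, 50/50."""
    rng = random.Random(seed)
    wealth = [1.0] * agents
    for _ in range(rounds):
        wealth = [w * (up if rng.random() < 0.5 else down) for w in wealth]
    return wealth

wealth = simulate()
ensemble_mean = sum(wealth) / len(wealth)  # pulled up by a few lucky paths
typical = statistics.median(wealth)        # what most individuals experience

# The ensemble average grows each round (E[factor] = 1.05 > 1) ...
expected_factor = 0.5 * 1.5 + 0.5 * 0.6
# ... yet the time-average growth rate is negative, so the typical
# trajectory shrinks toward zero: the system is non-ergodic.
time_avg_growth = 0.5 * math.log(1.5) + 0.5 * math.log(0.6)
```

The average over the whole population keeps rising while the median individual ends up poorer than they started, which is exactly the sense in which "a good chance to become richer" coexists with "still ending up poorer".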

Burning questions:

  • Is the current surge in “public executions” based on someone’s private behavior a symptom of our non-ergodic system and lack of downward mobility by other means?
  • How do we make our central and bureaucratic organizations, companies and governments have more skin in the game?
  • Can the decentralized architecture of Web3 increase the ergodicity of the system in the future, or will it establish new, unforeseen mechanisms for absorbing wealth and making it sticky in a decentralized economy?

After work thoughts

Written by Pim Korsten, August 26, 2020

What happened?

Thanks to digitization, a lot of things are happening in the workplace: a new form of on-demand labor (i.e. the “gig economy”) driven by platform economics, a new push of remote working (due to the corona crisis) that brings efficiency gains, while younger generations have different working preferences and their future jobs will require different skills than our current ones. Work is highly esteemed in our societies, in contrast to aristocratic societies that favor leisure over labor. We think that work and a job build a strong character (e.g. it is what gets you out of bed and brings order to your days), that jobs give you “skin in the game” and provide an opportunity to perform social roles (e.g. paying taxes, having social contact with colleagues), and – of course – that the job market is a mechanism for reallocating wealth and opportunities.

What does this mean?

However, many of these beliefs no longer hold. For example, labor markets no longer seem to be a way to reduce inequality (e.g. during the pre-corona-post-financial crisis cycle, labor markets were very tight although wage growth was lacking). Furthermore, many of the new jobs generated by digitization pay low wages, often insufficient to even make a basic living, and lack the human contact that ordinarily makes up the social part of a job. More fundamentally, automation and AI could lead to huge technological unemployment.

What’s next?

We have already pondered the question of whether we could live without a job. Addressing these problems requires a broader view than the economic perspective alone. For example, most work is so abstract that it no longer builds our character, so we must look for other ways to build character and achieve cultural Bildung. Craftsmanship is one way we create significant value, learn how to operate in the world and conform to its reality principles. Furthermore, a strong focus on productivity and “full employment” is a disaster for the environment as well as for our mental health (e.g. we might need lower growth and less consumption). In his book This Life, Martin Hägglund argues that we need a new concept of value that stresses the spiritual and moral quality of our activities, and thus frees us from the idea that wages are the only measure of the value of our daily activity.

The Korean New Deal will build an ‘untact’ world

What happened?

In South Korea, the government announced a “New Deal”, pledging to invest $94.5 billion in the economy over the next five years. The Korean New Deal will focus on supporting innovation and improving the environment, including a plan to train 100,000 people in artificial intelligence (a reference to a Korean tale of a king who did not heed a scholar’s warning to train 100,000 soldiers, after which Japan ravaged the land) and a plan to build 230,000 energy-saving homes. Perhaps most interesting is a concept at the center of the Korean New Deal called “untact”.

What does this mean?

Untact is the idea of a future built around doing things without direct contact with others. Examples include self-service retail and contactless payment. The New Deal will promote “untact industries” (e.g. remote healthcare, virtual offices, e-commerce support for SMEs). The idea of building an ‘untact world’ is driven by much more than the coronavirus. The Korean government believes it will both strengthen the competitiveness of the economy (by becoming a leader in “untact technologies”) and improve the environment.

What’s next?

When the Korean state has a vision for the future and invests billions of dollars in it, we should pay attention. The Korean ‘developmental state’ has a strong tradition of bringing government agencies and business leaders together to rebuild the economy. Through this model, Korean giants such as Hyundai and Samsung became globally competitive. When it comes to the idea of an untact world, the Korean New Deal teaches us that it extends far beyond the current coronavirus crisis and is driven instead by technological, demographic and environmental change. That is why the idea of untact, despite its obvious drawbacks for the social fabric, could gain ground in other parts of the world.

Remote teaching: building a plane while flying

What happened?

Not since World War II have so many schools been closed at once. Unlike then, technology now makes it possible to provide education at a distance. However, in-person education can’t simply be moved wholesale to the digital realm. Physical presence is a prerequisite for a number of didactic tools teachers need to provide quality lessons, and proximity to other students is an important aspect of education. Even if the circumstances of teachers, parents and students are optimal, they are forced to make many adjustments to continue to facilitate education.

What does this mean?

Nearly all lesson content can be made digital. The challenge is mainly to do so in as stimulating a manner as possible. The perfect balance has yet to be struck between synchronous contact and subject material that students can study in their own time. Teachers are finding out that simply streaming their lessons is not ideal, not to mention the practical obstacles such as students not always having full access to a computer. From a distance it’s much more difficult, for example, to keep students engaged and to get a notion of whether the material is coming across. Moreover, it’s more challenging for students to concentrate on a screen for long stretches of time than to stay focused in the classroom. This has consequences for students’ daily schedules as well: their ordinary school schedule appears to be less than perfectly suited to remote teaching, but the optimal alternative remains elusive. Synchronous contact through the screen is now often reserved for interactive meetings and the transfer of content takes place largely through video or exercises, for example. Some subjects (mathematics) appear to demand more synchronous contact than others (languages). Furthermore, the contact between teacher and student seems to be becoming more informal, now that the participants are safely behind screens at home and communicating through, for example, chat messages.

What’s next?

Not enough research has been done on education as it’s now organized to oversee the long-term consequences for students. It’s therefore unclear how to organize it optimally: the pace of learning slows down, while organizing it takes more time. It’s as though the parties concerned were building a plane while flying it. And yet, this way of teaching will continue to play a considerable role in education until the crisis is resolved, because even though we’re gingerly considering ways of reopening the schools, we’re a long way from education as we used to know it. Moreover, since all students and teachers are now forced to adopt numerous digital educational tools, the adoption of EdTech is likely to have increased substantially by the time the crisis is over.

The need for complexity thinking

Complexity science is an interdisciplinary field of study spanning subjects from physics and biology to economics and the social world. The field aims to analyze complex systems: systems that cannot be reduced to their constitutive components and that contain many non-linear and dynamic interactions. Given the increased interconnectedness of the world (e.g. due to digitization and globalization) and the fact that many of the world’s largest challenges can be considered “wicked problems”, complexity theory will increasingly become part of the toolbox of the 21st-century researcher.

Our observations

  • Physicist Albert-László Barabási has set the research agenda for the next generation of students of complexity science through his work on scale-free networks. In his work, he provided a theoretical model for the “preferential attachment mechanism”: a process that describes how hubs (i.e. heavily connected nodes) are more likely to gain new connections. An example is found in the World Wide Web, where the links between HTML documents follow a power-law distribution: empirically, the most linked-to website is twice as likely as the second most linked-to website to receive a new link.
  • The tradition of systems thinking in sociology dates back to Niklas Luhmann, who described a “system” as a sphere of reduced complexity, separated from its chaotic environment. Social systems are constituted by an internal communicative process of information selection and meaning is created through the process of bringing some order to the virtually infinite and chaotic outside. Importantly, the systems described by Luhmann are “autopoietic”: they constitute themselves through self-referential communication. Modern bureaucracies are an example of such complexity reducing systems: by imposing a set of binding standards on their citizens, they create their own internal rules for communicating information. The act of labeling and categorizing is not merely descriptive but constitutive of citizenship as such.
  • From the 19th century on, increasingly sophisticated statistical methods and models have aimed to discover patterns in human behavior so that it can be regulated. The French statistician Adolphe Quetelet showed that factors such as age, class, and status allow for the prediction of marriage decisions. Modern dating apps essentially draw on these mechanisms by using algorithms.
  • The sociological tradition of systems thinking is continued by Armin Nassehi. In his recent book Patterns: Theory of the Digital Society, he defends the claim that the digital revolution, rather than creating new social, political and economic structures, merely reveals already existing ones. Digital technology is seen as embedded within these existing structures of behavior and its success can only be properly understood by looking at longer established economic, political and scientific forms of societal organization.
  • Two years ago, we already wrote about economic complexity and how new economic metrics and economic theories borrow from complexity studies and other disciplines, creating a new paradigm for thinking about economic development and relations.
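The preferential attachment mechanism from the observations above can be sketched in a few lines. This is a minimal, hypothetical variant in which each new node forms a single link to an existing node chosen with probability proportional to its current degree; hubs then emerge whose degree far exceeds the network average.

```python
import random
from collections import Counter

def preferential_attachment(n_nodes=2000, seed=1):
    """Grow a network where each new node links to one existing node,
    chosen with probability proportional to its current degree."""
    rng = random.Random(seed)
    degree = Counter({0: 1, 1: 1})  # seed network: two linked nodes
    for new in range(2, n_nodes):
        nodes = list(degree)
        weights = [degree[n] for n in nodes]  # "rich get richer" weighting
        target = rng.choices(nodes, weights=weights)[0]
        degree[target] += 1
        degree[new] = 1
    return degree

deg = preferential_attachment()
avg = sum(deg.values()) / len(deg)  # ≈ 2: each new node adds one edge
hub = max(deg.values())             # the biggest hub is far above average
```

Even in this toy version, the best-connected node ends up with a degree tens of times the average, which is the heavy-tailed, power-law signature of a scale-free network.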

Connecting the dots

The notion of economies as complex adaptive systems dates back to the Anglo-Austrian philosopher and economist F.A. von Hayek, who described how advanced economies can be seen as spontaneous orders, in which order emerges without central coordination from individuals pursuing their self-interest. A key characteristic of the Hayekian approach is viewing (economic) systems as wholes that cannot be understood merely from their individual parts (emergentism). Likewise, these systems follow their own rules in an evolutionary, adaptive process that can neither be understood nor predicted on the basis of knowledge about the individual elements. Spontaneous orders are scale-free networks, while organizations are hierarchical networks. Early work on spontaneous orders remained theoretical, but researchers are now increasingly developing systemic frameworks for the computational analysis of complex economies and social systems.
Similar patterns can be found in different contexts, such as in the field of quantitative linguistics: Zipf’s law describes how the most frequently used word in a language occurs approximately twice as often as the second most frequent word. The same holds for citations of the most prominent author in a scientific field. This can also be applied in social network analysis and explains, for example, why most people have fewer Facebook friends than their average Facebook friend.
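The Facebook observation, known as the friendship paradox, can be verified directly on a small hypothetical network (the names and edges below are invented for illustration): one popular hub is enough to make most people less connected than their friends are on average.

```python
from statistics import mean

# A small hypothetical friendship network; D is the popular hub.
edges = [("A", "D"), ("B", "D"), ("C", "D"), ("D", "E"), ("E", "F")]
friends = {}
for a, b in edges:
    friends.setdefault(a, set()).add(b)
    friends.setdefault(b, set()).add(a)

degree = {p: len(fs) for p, fs in friends.items()}
avg_friends_of_friends = {
    p: mean(degree[f] for f in fs) for p, fs in friends.items()
}
# How many people have fewer friends than their friends do on average?
below = sum(degree[p] < avg_friends_of_friends[p] for p in friends)  # 5 of 6
```

Only the hub itself beats its friends' average; everyone else loses the comparison, because highly connected people are over-represented in everyone's friend lists.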
Pioneer of systems thinking Scott Page gives an example of how complexity thinking can be applied in organization theory. His 2017 book The Diversity Bonus: How Great Teams Pay Off in the Knowledge Economy shows how the systems perspective can help explain the successful performance of teams: one should not only look at the structural set-up and interactions of individual parts in a reductionist and mechanistic manner. Page states that the so-called diversity bonus of teams is the result of various types of cognitive diversity, that is, differences in how people perceive, encode, analyze and organize the same information and experiences (and how these differences are in turn correlated with identity diversity, i.e. racial or gender differences). Often, we still lack the vocabulary to make sense of the dynamic interactions observed in complex adaptive systems, but this might be essential in addressing the challenges of an ever-more complex and uncertain world. Complexity thinking allows for the discovery of previously hidden structures. In many cases, it provides the link between statistics and qualitative inquiry. Patterns found by complexity scientists recur in systems and networks across contexts. As such, it makes room, at least theoretically, for a synthesis of the natural and social sciences. Given the immense amounts of data we face today, complexity science promises to provide a useful conceptual framework for a multi-disciplinary way of doing science.
Complexity economics further bears the potential of fully bringing the computer revolution to economics. It might, for example, close the gap between econometrics and behavioral economics, enabling us to explain consumer behavior from both a structure and an agency perspective. Agent-based models allow for simulations which are, for example, applied in urban planning or supply chain management, but are also used to predict the spread of epidemics or to project the future needs of the healthcare system. Evolutionary or complexity perspectives are, however, typically based on assumptions which go against the fundamentals of mainstream economics, whereby rational agents face constrained optimization problems. This divergence of theoretical assumptions makes it difficult to integrate new approaches with older ones and requires a deeper paradigm shift.
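Agent-based models of the kind mentioned above are conceptually simple: individual agents follow local rules, and aggregate dynamics emerge from their interactions. The sketch below is a toy epidemic model with invented parameters, not a calibrated forecasting tool; each infected agent meets a few random others per step and may infect or recover.

```python
import random

def abm_epidemic(pop=500, steps=40, contacts=5,
                 p_infect=0.1, p_recover=0.2, seed=7):
    """Toy agent-based epidemic: each step, every infected agent meets
    a few random others and may infect them, then may recover."""
    rng = random.Random(seed)
    state = ["S"] * pop     # S = susceptible, I = infected, R = recovered
    state[0] = "I"          # patient zero
    for _ in range(steps):
        new = state[:]
        for i, s in enumerate(state):
            if s != "I":
                continue
            for _ in range(contacts):
                j = rng.randrange(pop)
                if state[j] == "S" and rng.random() < p_infect:
                    new[j] = "I"
            if rng.random() < p_recover:
                new[i] = "R"
        state = new
    return state

final = abm_epidemic()
```

No equation describes the epidemic curve here; it emerges from the agents' local contact rules, which is what makes such models attractive where the mainstream assumption of a representative rational optimizer breaks down.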


  • Given that the large majority of work in complexity science and systems theory remains theoretical in nature, it cannot yet compete with neoclassical approaches. Most research still revolves around abstract mathematical models, while reality often turns out to be more nuanced. More computational approaches to modelling society and the economy are, however, in the making, and it is recommended to keep track of developments in that field.

  • Economists might increasingly have to borrow concepts from natural sciences as well as sociology and psychology to allow for more dynamic perspectives. We already see a number of economists adopting evolutionary perspectives on, for example, institutional development. Conceptually, this goes back to Austrian economist J. Schumpeter, who stressed the point that “capitalism can only be understood as an evolutionary process of continuous innovation and ‘creative destruction’”. The heterodox field of evolutionary economics and evolutionary game theory has popularized concepts such as bounded rationality, diffusion and path dependency.

  • Complexity economics has as yet failed to reach its full potential because, on the one hand, it tackles fundamental assumptions of neoclassical economics, while on the other hand, practical applications remain relatively rare. Computational models are, however, in the making, and complexity economics might thus soon help to fully bring the computer revolution to economics. As knowledge is becoming an ever-more important factor in economies and the amounts of data produced keep growing, researchers are advised to look out for new conceptual frameworks as well as for ways to translate new insights into practical applications.

Digital tools in the classroom

In many countries around the world, digital devices such as Chromebooks, iPads and Windows devices are making their way into the classroom. Google, Microsoft and Apple are battling for dominance in classrooms and want their devices and tools in the hands of the next generation of consumers. Many teachers and students are positive about the implementation of digital tools in the classroom. However, the performance of children who use digital devices has not improved substantially and some research has even shown that their influence is simply negative. What drives the implementation of digital devices in classrooms?


Our observations

  • Even though the education market is not particularly profitable for Big Tech companies compared to other markets they are in, companies such as Apple and Google are battling for dominance in the classroom. Critics say this is because it provides access to a relationship with customers that have much greater lifetime value for them beyond their time in elementary school or the K-12 system.
  • The world wide web has opened up the opportunity to educate whenever, wherever, and to scale up like never before. MOOCs, online tutoring, educational apps, online education platforms and even entire online schools keep expanding as digital possibilities (e.g. 4G/5G infrastructure, affordability) increase. Since traditional education is now valued less by employers, alternative education, including online education, has become more attractive (e.g. up-to-date, less expensive) as a serious preparation for a future job.
  • A recent Gallup report found that teachers, principals and administrators see great value in using digital learning tools now and in the future. The top three uses in which they experienced effectiveness are: 1) doing research or searching for information; 2) creating projects, reports or presentations; 3) providing practice lessons and exercises. At the same time, teachers, principals and administrators say there is some, but not a lot of, information available about the effectiveness of digital learning tools.
  • A study by the OECD concluded that those schools that use computers heavily at school perform a lot worse in most learning outcomes, even after accounting for social background and student demographics. Countries that invest heavily in ICT for education showed no appreciable improvements in student achievement in reading, mathematics or science. Moreover, technology appeared to be of little help in bridging the skills divide between advantaged and disadvantaged students.
  • In a report by the National Education Policy Center at the University of Colorado on personalized learning, the authors expressed their concern for the privacy of students and the lack of research support for the effectiveness of digital devices.
  • In a survey of Education Week Research Center, a strong majority of U.S. principals worried that the implementation of digital tools and devices is leading to too much screen time for students, students working alone too often, and the tech industry gaining too much influence on public education.

Connecting the dots

With technology being omnipresent in our daily lives through our smartphones and laptops, it is to be expected that classrooms around the world will adopt digital tools as well, if only to correspond with daily practices. As we wrote before, for example, YouTube is a preferred learning tool for Gen Z, which they also use extensively outside the classroom. What is more, digital learning tools can meet the needs of modern students to study whenever, wherever. The Gallup report on the use of technology in education shows that most teachers would like to make more use of digital tools in their classroom, selecting tools that can provide immediate and actionable data on students’ progress, allow for personalized instruction based on students’ skill levels and engage students with school and learning. Finally, schools often use the implementation of technology in education to promote their school as up-to-date and future-proof. This positive attitude of schools and teachers as well as students is an important driver of the implementation of digital tools in classrooms.

Big Tech companies are developing digital products just for schools and often offer them for free. Digital devices such as laptops and iPads are also offered to schools at special prices, which makes it easier for schools to buy them for their students. One of the main arguments of tech companies for doing this is that they want to provide each student a fair chance to get familiar with technology and have access to the internet. However, as the history of the usage of technological tools shows, when customers are used to a certain interface, program or brand, that is a huge advantage for a tech company in terms of customer loyalty. Providing students with tools and devices means that their operating system, their whole ecosystem, becomes ingrained in students’ minds. Either way, Big Tech companies are an important driver of the implementation of digital tools and devices in classrooms.

However, the effectiveness of digital tools and devices in classrooms is yet to be proven. Although the Gallup report shows that many teachers see value in the adoption of technology in education, convincing scientific evidence of this value has not been provided yet. Moreover, many of the more extensive empirical studies on digital tools and devices in classrooms showed no advantages, or even a deterioration in the educational performance of students. MIT recently even published an article arguing that technology in the classroom can hold students back, claiming that technology should primarily support teachers in their tasks instead of aiming to replace them. Videos and audio recordings, for example, can be used to bring topics to life, but should not replace a lesson provided by a real, live teacher.


  • The use of digital devices and tools in the classroom automatically forces students to spend more time in front of screens and work alone, compared to traditional teaching. These aspects of the usage of digital devices and tools in general are increasingly being criticized for the negative impact they have on youngsters. Moreover, more screen time is increasingly associated with poor kids and less screen time with rich kids. Along with the lack of substantial evidence that digital tools and devices are actually beneficial in the classroom, this criticism is fueling a countertrend in education: tech-free schools such as Waldorf education. Although this is a modest trend, it might be a weak signal of an upcoming reevaluation of digital tools and devices in education.
  • The drivers of digital tools and devices in education are strong: the target group (e.g. teachers, students) is eager to use them and its providers are motivated to deliver them. The lack of evidence of its effectiveness is therefore not enough to stop this trend, especially since many would perceive the absence of digital tools and devices as too big a contrast with life outside the classroom. The skepticism about their effectiveness is, however, causing uncertainty about which devices and tools are best to use inside a classroom, which makes it unclear where this trend is headed.
  • Digital tools that help with administrative tasks, for example taking attendance or grading, have shown the most immediate benefit so far. This is in line with other disciplines, in which automation of routine tasks is currently among the most successful applications as well.


Retroscope 2019

The end of the year is a time for contemplation. In this Retroscope, we look back and reflect on the ideas and insights we have published in The Macroscope throughout 2019. We have covered a wide range of events and developments in technology, global politics and society. The Macroscope is marked by our team’s diversity of perspectives, ranging from philosophy, economics, history, sociology, political sciences to engineering. Combining this interdisciplinary approach with scenario thinking, we aim to assess current affairs from a comprehensive and long-term perspective. Our retrospect of 2019 is therefore about how this year’s events tie in with or deviate from larger trends in technological, hegemonic or socio-cultural cycles. Our mission is to unlock society’s potential by decoding the future.

We hope you enjoy our reflection!

FreedomLab Thinktank

Click here for our Retroscope of 2019.


Click here for our Retroscope 2019: Hegemonic cycle

Click here for our Retroscope 2019: Technological cycle

Click here for our Retroscope 2019: Socio-cultural cycle

Click here for our Retroscope 2019: Disruption in the making

Retroscope 2019: Disruption in the making

In 2019, we have written about how four domains of our daily lives are being disrupted by digital technology: mobility, health(care), food and education. In all these cases, digital technology does not only change the competitive field and reshape value chains, it also changes consumer preferences and creates new social and ethical challenges. As digital technology, regulation, consumer practices and business models co-shape each other, disruption is a process continuously in the making.

1. Mobility

Despite all stories about new generations not caring about cars, very little is changing in our travel behavior. Youngsters have less money and study longer, but as soon as they earn a living and start a family, they display “adult travel behavior”, just like their parents.

For now, we should not expect too much of technological change either. Even though chipmakers are getting ready to build a chauffeur-on-a-chip, it’s been a challenging year for self-driving cars and it will take many years before truly autonomous vehicles hit the road for real. Until that time, they will operate in trials to collect data for training purposes and possibly, within limits, for last-mile solutions. Most of all, they will be learning about the difficult-to-predict behavior of other, human, road users. Even if self-driving cars eventually come to work perfectly, it is still questionable whether we will welcome them wholeheartedly. For now, technology developers encounter quite a bit of resistance and even technology vandalism that reminds us of the Luddite protests in the early 19th century.

On a different note, the push for electrification of road (and waterborne) transport continues, because of climate change and in response to growing awareness about the deadly impact of local air pollution. Next year, we will see a surge in the number of electric vehicles on the market, but demand remains highly dependent on local subsidies for consumers and ethical concerns over natural resources (e.g. cobalt) could dampen enthusiasm about EVs. For this reason, and because battery technology will not progress fast enough, hydrogen as a fuel-of-the-future made quite a comeback last year.

2. Health

We are in the midst of the transition to a more personalized, preventive and participatory healthcare system. Changing disease patterns and aging societies demand a different organization of the system. Naturally, all eyes are on digital technology when it comes to enabling the transition, but, as we’ve frequently noted, digital technology is not a solution in itself. To illustrate, smart home care could relieve pressure and reduce unnecessary and costly hospital visits, but we expect socio-cultural dynamics, such as the coming generation of self-conscious and tech-savvy “elastic” elderly, to also play a big role in the sustainable management of aging societies.

Furthermore, ubiquitous digital self-tracking practices empower citizens to take responsibility for their own health, keep patients better informed on their health and could thus help democratize the doctor-patient relationship. Unfortunately, the rise of self-tracking might also lead to coercive practices and exploitation of the more vulnerable groups of society and government policies could be perceived as patronizing.

On an existential level, we don’t really know what the impact of the datafication of life will be. The emergence of the quantified self might improve measurable health, the amateur athlete is starting to look like a pro and the widespread adoption of mindfulness apps might help us get rid of the self-destructive and easily distracted “ego”. At the same time, the lack of spiritual legacy in mindfulness could increase self-centeredness or lead to alienation from our very own bodies.

For these reasons, socio-cultural reflection on the role of technology in health care is indispensable. Clever algorithms are already able to outperform doctors on specific and limited tasks (e.g. diagnosing tiny lung cancers), but we don’t expect doctors to be replaced altogether. The decision-making process of doctors requires moral reflection and practical wisdom, and doctors have an important “healing role”.

3. Food

In 2019, food became a pressing geopolitical matter. We saw how the trade war between China and the U.S. disrupted food trade flows, how several conflicts around the world caused food insecurity (e.g. in Venezuela, Yemen and Sudan), and how countries increasingly looked to secure their future demand (as shown by China’s investment in agriculture in over 100 countries).

Unsustainable pressure on earth’s resources is further threatening food security. We are urged to look for ways to produce food in a climate-smart way: by adapting to climate change (e.g. saline farming, climate-resistant crops or regenerative farming practices) as well as reducing the ecological footprint of the food sector (e.g. fighting food waste or reducing food packaging). Drawing most attention this year were alternative protein products, as the plant-based protein transition is gaining speed in developed countries. Yet, since middle classes are rising across the developing world, demand for animal protein is bound to increase, as is illustrated by the rising popularity of milk in China.

Global obesity levels continued to rise in the past year and, in response, we are increasingly in search of healthier lifestyles. Since what we eat is key to our health, attempts are emerging to biohack our diets and people have sought ways to link diet to our DNA.

As more and more people are moving to cities worldwide, the question is who the next generation of farmers will be, especially on rising continents such as Africa, where the rural youth do not aspire to traditional farming and are rapidly moving to cities. The question is also how growing cities will be able to sustain themselves in the future and what role indoor farming will play in this challenge. Meanwhile, online food delivery in urban centers is disrupting the food chain by challenging the traditional middlemen and sometimes even connecting consumers to farmers directly.

4. Education

Like last year, traditional education systems are struggling to provide students with relevant qualifications for the rapidly changing labor landscape. Consequently, alternative and sometimes radical initiatives to educate future employees are on the rise and companies are increasingly hiring without demanding a conventional degree. Coding, for example, is becoming an important skill for future generations to participate in our ever-digitizing world, but it has not found its way to general education yet. Nor has formal logic, even though it is central to all programming and would help future coders, irrespective of which coding language they eventually come to use. To fill that void, many online apps, programs and games that offer the possibility to master coding skills are gaining popularity. Meanwhile, EdTech promises to bring about a revolution in traditional as well as alternative education in terms of efficiency, affordability and accessibility. Until now, EdTech has primarily offered solutions in traditional subjects such as math, language and geography and not much in the way of the desired 21st century skills.


Teaching youngsters coding skills with logic

The digital world has become omnipresent in our daily lives. Therefore, the skills to understand and create digital objects such as websites or tools are becoming increasingly important. In order to do so, one must master coding skills. After literacy, coding skills might thus become one of the most important sets of skills to teach next generations, providing them equal chances to participate in an ever more digitized world. Although these skills appear to be a brand-new educational topic, they are strongly related to ancient philosophy, namely formal logic, a discipline that formalized the rules of thought that underlie many subjects such as scientific research, grammar and chess.

Our observations

  • All sorts of initiatives to teach coding to next generations are arising. Last month, for example, Disney and Roblox teamed up to advance kids’ coding skills with the Star Wars: The Rise of Skywalker Creator Challenge, offering fans the opportunity to learn how to design and race their own spaceship. But educational coding apps (e.g. Kodable, Daisy the Dinosaur), coding programs in schools and even entire coding schools are also gaining popularity.
  • The top three coding languages in the world are currently JavaScript, Python and Java. JavaScript, developed in 1995, is one of the essential technologies of the World Wide Web. Python, developed in 1991, is known for its readability thanks to its use of elements of natural language and is currently one of the fastest growing languages. Java, also released in 1995, is used by, for example, Twitter and Netflix. In total, there are currently about 700 coding languages.
  • The only admission requirement of the famous coding school 42, a tuition-free, non-profit coding school that opened its doors in Paris in 2013, is a passing grade on a logic test. None of the traditional degrees (e.g. bachelor’s, master’s) are required, not even a primary or secondary school diploma.
  • In the West, logic was first developed by the ancient philosopher Aristotle and gradually became widely accepted in science and mathematics. Logic traditionally includes the formalization of rules of thought (e.g. a circle cannot be a square, because one of the rules of thought is that something can only be identical to itself). These rules are not freely agreed upon by their creators; they are universal principles that every valid argument necessarily adheres to. An argument or complex line of thought can be reduced to this formalized language, after which it is possible to examine whether it is coherent and/or whether the argumentation is valid. Since Aristotle, logic has deepened and expanded.
  • Today, logic is extensively applied in the field of artificial intelligence, for example in argumentation theory. One of the first programming languages, Prolog (1970s), originates directly in formal logic. Coding is intrinsically related to formal logic, for coherence and valid reasoning are crucial in coding too. Of course, both in logical reasoning and coding, rules can be applied in an invalid manner. In logic, this leads to incoherence, contradictions or invalid conclusions; in programs, it can lead to errors in performance.

Connecting the dots

Coding skills, the ability to read and write the language of computer software, are considered an important prerequisite for participating in an increasingly digitized world. In order to code, one must have knowledge of a programming or scripting language (e.g. Python or JavaScript). Python is considered one of the easiest coding languages to learn because, unlike JavaScript, it uses elements of our natural language. With coding, one translates certain tasks that are expressed in natural language (e.g. “whenever someone visits our website, ask if they want to subscribe to our newsletter”) to a line of instructions that computers can execute. These (coding) instructions need to be very precise and well-structured in order for a program to perform in the way that was intended by its developers.
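The translation step described above can be sketched in a few lines of Python. This is only an illustrative sketch: the function `handle_visit` and the `subscribers` set are hypothetical names invented for this example, not part of any real website framework.

```python
# Natural-language task: "whenever someone visits our website,
# ask if they want to subscribe to our newsletter."
# Below, that task is translated into precise, executable instructions.

def handle_visit(visitor, subscribers):
    """Decide what the site should do for a single visitor."""
    if visitor in subscribers:
        return "show_content"           # existing subscribers see no prompt
    return "show_newsletter_prompt"     # everyone else is asked to subscribe

subscribers = {"alice"}
print(handle_visit("alice", subscribers))  # show_content
print(handle_visit("bob", subscribers))    # show_newsletter_prompt
```

Note how literal the computer is: an imprecise translation (e.g. forgetting the subscriber check) would prompt every visitor on every visit, which is not what the natural-language sentence intended.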

One of the most important capacities for coding in general is a good sense of logic. This is an important skill because it enables a programmer to write universal rules that can follow their own path (e.g. whenever x happens, then y, except when z happens, then skip y), rather than being bound to static instructions (e.g. always first do x, then do y, then do z, etc.). Moreover, having a good understanding of logical reasoning is needed to translate everyday sentences so that they align with the basic patterns of code language (e.g. “dogs can run” becomes “all dogs are creatures that run” and then, in formal notation, something like ∀x(Dx → Rx)). Finally, logical reasoning is needed in order for programmers to detect and understand errors or undesired outcomes in a program. For example, when a statement is programmed as reversible, it is important to be able to comprehend whether this is correct. In the case of the sentence “all dogs are creatures that run”, the reversal is false (“all creatures that run are dogs”). However, the sentence “no time without change” could be reversed: “no change without time” (depending on your view on time). These are only very simple examples, but in a program with hundreds of instructions, this can get very complicated, and logical reasoning is necessary to keep the program from running into errors as well as to maintain a structured overview and understanding.
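The reversibility point above can be made concrete with a small Python sketch. The tiny “world” of creatures below is invented purely for illustration; it shows why “all dogs are creatures that run” can be true while its reversal is false.

```python
# A tiny made-up "world" used to test a universal statement and its reversal.
creatures = [
    {"name": "rex",    "is_dog": True,  "runs": True},
    {"name": "felix",  "is_dog": False, "runs": True},   # a running cat
    {"name": "goldie", "is_dog": False, "runs": False},  # a goldfish
]

def all_dogs_run(world):
    """'All dogs are creatures that run.'"""
    return all(c["runs"] for c in world if c["is_dog"])

def all_runners_are_dogs(world):
    """The reversed statement: 'all creatures that run are dogs.'"""
    return all(c["is_dog"] for c in world if c["runs"])

print(all_dogs_run(creatures))          # True
print(all_runners_are_dogs(creatures))  # False: the statement is not reversible
```

A single counterexample (the running cat) is enough to show that reversing the universal statement changes its truth value, which is exactly the kind of check a programmer performs when deciding whether a rule may be applied in both directions.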

In our daily lives, pure logical reasoning is not something we explicitly encounter much. The most common occasion for engaging in pure logical reasoning is when we are asked to take an IQ test, in which logical reasoning is usually tested in two ways. First, the test-taker is asked to identify (in)valid argumentation or draw conclusions (e.g. If Peter is bigger than Karen, then Peter is bigger than John. Peter is bigger than Karen. Ergo: a. Karen is bigger than John, b. Peter is bigger than John, c. Karen is smaller than John). The second way is to test the recognition of patterns in visuals. What is less known, however, is that these types of reasoning are made explicit in formal logic: the discipline in which (un)sound reasoning is captured in rules, so one can judge whether a line of thought is coherent and leads to a certain conclusion or not. Studying logic and the relationship between logic and ordinary speech can help a person better structure their own arguments and scrutinize the arguments of others. Arguments used in everyday life are often rife with errors because most people are untrained in logic and therefore unaware of how to formulate an argument correctly. Besides helping us avoid invalid arguments, as discussed, a good sense of logic is also an important competence for mastering coding skills. Moreover, coding languages are updated over time and new ones are introduced on a regular basis. Therefore, in a world in which coding is ubiquitous, teaching children formal logic provides them with a skill they can fall back on when learning any (new) coding language that might be relevant in the future.
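The kind of validity test described above can itself be written as a short program, which underlines how close formal logic and coding are. The sketch below is a minimal propositional-logic validity checker written for this article; it is not taken from any particular logic library.

```python
from itertools import product

def implies(a, b):
    """Material implication: 'if a then b' is false only when a is true and b false."""
    return (not a) or b

def is_valid(premises, conclusion):
    """An argument is valid iff no truth assignment makes all premises
    true while the conclusion is false (here for two variables, p and q)."""
    for p, q in product([False, True], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False
    return True

# Modus ponens: from "p implies q" and "p", conclude "q" -> valid
print(is_valid([lambda p, q: implies(p, q), lambda p, q: p],
               lambda p, q: q))          # True

# Affirming the consequent: from "p implies q" and "q", conclude "p" -> invalid
print(is_valid([lambda p, q: implies(p, q), lambda p, q: q],
               lambda p, q: p))          # False
```

The checker simply enumerates every possible truth assignment, which is exactly what the formal definition of validity asks for: a valid argument admits no situation in which the premises hold and the conclusion fails.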


  • Although there are already many online apps, programs, games, etc. with which youngsters can learn coding, not every child will have access to such resources (e.g. due to lack of money, parents that are unaware of these possibilities or the importance of these skills to their children). To ensure that all children have equal chances, it is likely that coding will be introduced in education systems at some point. However, as we’ve argued, coding languages change and it is uncertain which coding language will be relevant in the future. It is therefore plausible that formal logic will be introduced as well. Teaching coding skills as well as formal logic will require upskilling programs for teachers around the world.
  • Although formal logic is often considered to be complex and mainly suited for the highly educated, in the late 1970s, philosopher and founder of philosophy for children (P4C) Matthew Lipman was the first to introduce formal logic to children in primary schools through his children’s novel Harry Stottlemeier’s Discovery. He was convinced that logic was necessary to improve, for example, critical thinking, creative thinking and problem solving. P4C is gaining popularity globally and several studies have shown that engaging in P4C can permanently raise a child’s IQ by 6.5 points.
  • In many countries, the law, court rulings and government policies are public in order for citizens to monitor their functioning. If citizens become able to read code, they might demand that the code used for public affairs become publicly accessible as well. After all, digital programs are increasingly used to support or even carry out legislation (e.g. fining citizens for small offences) or to nudge us into changing our behavior. The code used for such tasks strongly determines how a policy or law is interpreted and applied by, for example, a policymaker.
  • In a more distant future, user-friendly interfaces might largely come to replace current coding languages, leaving the actual coding to computers. However, having a more in-depth understanding of logical reasoning will remain important, because it helps us see how the digital world around us functions, which is paramount, as it constitutes an increasingly large part of our lives.

New horizon of education

The changing labor landscape demands a quick response from education systems to provide people with relevant qualifications. However, traditional education systems do not allow for rapid updates to curricula. As a result, new and sometimes radical initiatives are emerging, geared towards a new horizon of what and how we educate our youngsters.

Our observations

  • This year, CBS News reported that the wages of graduates have barely increased, while student loan debt in the U.S. has climbed to a record $1.5 trillion. The average U.S. household with student debt now owes about $48,000. According to Pew Research, only about half of student loan holders think the lifetime financial benefits of their bachelor’s degree outweigh the costs.
  • A conventional degree is no longer the only way to get a job that pays well. Business Insider and Glassdoor, for example, report that many of America’s most popular companies to work for don’t require a college degree. Earlier this year, Apple CEO Tim Cook stated that half of Apple’s employees do not have a four-year degree, because there is a mismatch between the skills learned in the conventional education system and those needed in business. Moreover, LinkedIn has found that certain positions are more likely to be filled by non-college graduates than others.
  • 42 is a tuition-free, non-profit coding school that opened its doors in Paris in 2013, founded and funded by French billionaire Xavier Niel. The name is a reference to the book The Hitchhiker’s Guide to the Galaxy, in which the number 42 is “the Answer to the Ultimate Question of Life, the Universe, and Everything”. The school offers project-based learning without teachers: students proceed based on their personal level of progress, there are no classrooms, it is open 24/7 and no degree is needed to enter. It claims that its graduates have a 100% job guarantee. Worldwide, there are now 20 campuses with this model.
  • Khan Academy was created in 2008 and offers free educational material that helps in the learning of, for example, statistics, history or a foreign language. It aims to complement conventional education. Just as in many educational apps or games, the course level is personalized in the sense that students can work their way through levels. In this way, Khan Academy tries to employ some of the largely unused possibilities of (online) personalized learning that are already at hand. In the long run, it aims to make much more (online) education free in order for young people to have less debt when they start their career and to offer more possibilities to upskill one’s talent.
  • Coursera is an American online learning platform that offers MOOCs and, since 2017, programs with a degree. It offers content from high-profile universities such as Princeton, Stanford and Duke University and has partnered with high-profile organizations such as IBM, MoMA, Google and several governments. Coursera also offers courses for the corporate sector; its customers include L’Oréal, Boston Consulting Group and Axis Bank. The platform makes it easier for students to upskill their qualifications by taking courses that fit their career path instead of having to complete an entirely new and often expensive study program.
  • Ubiquity University is an online university that mostly anticipates the need for soft skills in the labor market by offering online courses in, for example, critical thinking, problem-solving or emotional intelligence. It offers so-called competency-based credentialing, which it claims “is fast gaining traction among educators, CEOs and companies worldwide as the best way to assess talent and making hiring decisions.” Anything from taking only one course to attaining a full degree is possible, and a big part of these degrees can be made up of courses from other universities. Students pay as they go and can study anywhere, at their own pace.

Connecting the dots

Initiatives that offer education outside conventional educational systems (e.g. universities, high schools) are on the rise. Many are motivated to offer education that anticipates new developments in the labor market (e.g. the need for coding skills, critical thinking skills) or meets the needs of students, such as avoiding high loans, being able to “shop” only the courses that are considered beneficial and studying whenever, wherever. These alternative ways to study break with the idea that a degree should be obtained within a traditional education system with, for example, accreditation procedures, fixed programs, three to four years of continuous studying (mostly) at one university and often high admission standards. Coding school 42, for example, doesn’t have teachers, is open 24/7 and requires no degree to enter the program. Ubiquity University responds to the trend of valuing skills more than knowledge, and Coursera offers students a collection of courses from universities and companies that they can enroll in without having to complete a whole study program. Hiring employees without a conventional degree is on the rise as well, motivated by dissatisfaction with the competences and knowledge of graduates, an observed mismatch with companies’ needs, or a skills gap.
Compared to these new initiatives, the curricula of conventional education systems often look outdated and changes are implemented far more slowly. Several factors contribute to their rather inflexible nature. First, new methods, topics or curricula need to be thoroughly tested in terms of effectiveness and pedagogical implications, and the relevance of new items needs to be agreed upon before implementation. This often takes years (e.g. new methods or topics need to be tested in pilots over a long period of time). Second, educators need to be retrained which, again, takes a lot of time, money and effort. Third, before, for example, a bachelor’s program can be offered by a university, it needs to earn accreditation, which means a lot of rules and quality standards need to be met. When a university wants to change a bachelor’s program or offer a new one, this extensive procedure needs to be followed all over again. The benefit of these thorough precautions in traditional education systems is obvious: they are the best way we know to make sure our youngsters get a solid, high-quality education. However, it is also obvious that this way of organizing education is less suited to anticipating the rapid changes that come with technological developments. This explains the surge of new initiatives that are not bound to such extensive procedures, as well as the trend of hiring employees regardless of their education.
Bridging the gap between the flexibility of alternative education and the quality guarantee of traditional education systems is still a long way off. For some specific new qualifications, such as coding, this lack of insight into the quality of alternative education is less problematic: coding is a fairly straightforward skill, and companies can easily check whether a potential employee masters it or not. The quality of competences such as critical thinking and emotional intelligence is a lot harder to estimate, which makes it harder for alternative education to achieve success in these areas. This is partly because these competences might be less generic than they are often portrayed. Critical thinking, for example, might rely on very different skills in the context of a food company than in, for example, legislation. Nevertheless, when traditional education doesn’t offer such topics, alternative education is the only option for students and employers to turn to.


  • As we wrote before, EdTech might incrementally improve basic education (primary, secondary, high school), but it is not likely that basic education will be radically disrupted by alternative education initiatives anytime soon. This is simply because the quality guarantee in terms of pedagogy and effectiveness will remain the first priority for policymakers and caregivers alike when it comes to minors. However, due to high student loans and questionable job guarantees regarding college and university degrees, it is likely that students will increasingly look for alternative options after finishing their basic education.
  • Alternative education initiatives can have great success in areas that can prove their usefulness easily (e.g. coding). In these cases, alternative educational initiatives can respond quickly to the rapidly changing demands of the labor market when, for the time being, traditional education systems cannot.
  • Single courses offered on online platforms such as Coursera anticipate the continuous need for upskilling our competences throughout our entire career. Many traditional universities or colleges do not offer such possibilities outside of their three to four year programs, mostly due to the heavy procedures and quality demands they have to meet before they can make a change.
  • Companies are now connected more easily to education through platforms such as Coursera. When education systems fail to prepare their youngsters for the future of work, companies might take on a more pro-active role, offering training programs themselves.