Automation and the future of work – understanding the numbers

13 April 2018

Professor Michael Osborne
Professor of Machine Learning

Professor Carl Benedikt Frey
Director


In 2013, we published a paper entitled “The Future of Employment: How Susceptible Are Jobs to Computerisation?”, estimating that 47% of U.S. jobs are at risk of automation.

Since then, numerous studies have emerged, arriving at very different conclusions. In particular, one study published by a group of researchers at the University of Mannheim suggests that only 9% of jobs are exposed to automation. And more recently, a study by the OECD puts the figure at 14%, adding that a further "32% of jobs have a risk of between 50 and 70% pointing to the possibility of significant change in the way these jobs are carried out as a result of automation."

Many policymakers naturally find it hard to make sense of these results. Which study is right? And why do they arrive at very different conclusions? In this article, we shall seek to explain why these estimates diverge.

For all their differences, these studies build on the same intuition: that the future of work can be inferred by observing what computers do. And there are good reasons to believe that this view is right. Back in 2003, David Autor, Frank Levy, and Richard Murnane, of the Massachusetts Institute of Technology (MIT), showed that jobs intensive in routine tasks had been disappearing since 1980. Their findings were entirely predictable.

In his 1960 essay, "The Corporation: Will It Be Managed by Machines?", Herbert Simon predicted the decline of routine jobs, arguing that computers hold the comparative advantage in "routine" rule-based activities which are easy to specify in computer code. Through a series of case studies in 1960, the U.S. Bureau of Labor Statistics (BLS) arrived at a similar conclusion, suggesting that:

A little over 80% of the employees affected by the change were in routine jobs involving posting, checking, and maintaining records; filing; computing; or tabulating, keypunch, and related machine operations.

Despite their accurate insights, Simon and the BLS were careful enough not to provide a timeline for how long it would take before routine jobs would disappear in large numbers. And neither did we back in 2013.

Our study wasn’t even a prediction. It was an estimate of how exposed existing jobs are to recent developments in artificial intelligence and mobile robotics. It said nothing about the pace at which jobs will be automated away. What it did suggest is that 47% of jobs are automatable from a technological capabilities point of view. As we pointed out back then:

we focus on estimating the share of employment that can potentially be substituted by computer capital, from a technological capabilities point of view, over some unspecified number of years. We make no attempt to estimate how many jobs will actually be automated. The actual extent and pace of computerisation will depend on several additional factors which were left unaccounted for.

Our estimates have often been taken to imply an employment apocalypse. Yet that is not what we intended or suggested. All we showed is that the potential scope of automation is vast, just as it was on the eve of the Second Industrial Revolution, before electricity and the internal combustion engine rendered many of the jobs that existed in 1900 redundant. Had our great-grandfathers tried to make a similar assessment at the turn of the twentieth century, they would probably have arrived at a similar figure. Back in 1900, over 40% of the U.S. workforce was employed in agriculture. Now it is less than 2%.

Seen through the lens of the twentieth century, our estimate that 47% of jobs are exposed to future automation does not seem extraordinarily high. On the contrary, the University of Mannheim and OECD estimates seem extremely low.

Both of these studies take their starting point from our methodology, so let’s begin with a non-technical description of it.

In 2013, we gathered a group of machine learning experts to assess the automatability of 70 occupations using detailed task descriptions. Specifically, we asked the experts to assess whether each task for these occupations was automatable, given state-of-the-art computer equipment and conditional on the availability of relevant big data for the algorithm to draw upon. The data were derived from O*Net which, through surveys of the working population, has collected some 20,000 unique task descriptions along with data on the skills, knowledge and abilities required by different occupations. Such "big data" comes with one non-negligible problem: the human brain struggles to process it. But mercifully we live in the age of AI. And AI performed most of our analysis.

The role of the experts was to provide what machine learning researchers call a "training dataset", allowing our algorithm to learn the features of automatable versus non-automatable jobs. While the task descriptions provided for each occupation in O*Net are unique, O*Net also provides a set of common features for all occupations, likewise derived from surveys, in which workers are asked how often they engage in particular activities, such as taking care of customers, negotiating, or developing novel ideas and artefacts. These features allowed our algorithm to learn the characteristics of automatable as well as non-automatable occupations, which in turn allowed us to predict the automatability of a further 632 occupations.

Thus, we were able to examine a total of 702 occupations, which in 2013 made up 97% of the U.S. workforce.
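For readers who want to see the mechanics, here is a minimal sketch of that two-step approach in Python. It is not our original code: the features and labels below are randomly generated placeholders standing in for the O*Net variables and the experts' hand labels, and scikit-learn's Gaussian process classifier stands in for the probabilistic classifier used in the paper.

```python
# Illustrative sketch of the two-step approach described above: a small
# hand-labelled set of occupations, each described by O*Net-style
# features, is used to fit a probabilistic classifier, which then
# predicts automatability for the remaining, unlabelled occupations.
# All data here are random placeholders, not the real O*Net variables.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# O*Net-style features for 702 occupations (placeholders standing in
# for variables such as finger dexterity, originality, negotiation).
n_occupations, n_features = 702, 9
X = rng.normal(size=(n_occupations, n_features))

# Hand labels from the expert workshop (1 = automatable, 0 = not),
# available for only 70 of the 702 occupations.
labelled = rng.choice(n_occupations, size=70, replace=False)
y_labelled = rng.integers(0, 2, size=70)

# Fit a probabilistic, non-linear classifier on the labelled subset...
clf = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
clf.fit(X[labelled], y_labelled)

# ...then predict a probability of computerisation for every occupation.
p_automatable = clf.predict_proba(X)[:, 1]
print(f"Share with p > 0.7 (high risk): {(p_automatable > 0.7).mean():.0%}")
```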

Using AI for our analysis had benefits beyond saving time and labour. Among the occupations the experts deemed non-automatable were waiters and waitresses. Our AI algorithm, however, gave us another answer. By examining the similarities between the tasks performed by waiters and those of other occupations, it suggested that the jobs of waiters are in fact automatable. And it was proven right: in 2016 the completely waiter-less restaurant chain Eatsa opened.

Of course, algorithms are only as good as the data we feed them, and our approach did have one drawback. As we pointed out in the paper, while "our probability estimates describe the likelihood of an occupation being fully automated, we do not capture any within-occupation variation […]." What we mean by this is that our analysis rested on a set of occupation-specific features. But every occupation employs thousands of individuals, some of whom potentially perform slightly different tasks. This is the point that the authors of both the Mannheim and the OECD studies picked up on, although within-occupation variation probably plays a minor role: the job of a taxi driver or a cashier, for example, is essentially the same across companies and locations. Still, using individual rather than occupation-level data, which allows within-occupation variation to be captured, has its merits.

However, we do have a number of other queries regarding these studies.

What all previous studies, including ours, have in common is that they infer the automatability of jobs by analysing their tasks. While the Mannheim study appears to take a task-based approach, a closer look at the variables it uses shows that this is not the case.

Instead of relying primarily on tasks, the Mannheim study uses worker and firm characteristics, as well as demographic variables such as sex, education, age, and income. According to this approach, the more an accountant earns, the less automatable his or her job is. If he or she happens to have a PhD in sociology, the job is even safer from automation. Similarly, a female taxi driver with a PhD is less likely to be displaced by a self-driving car than a man who has been driving a taxi for decades. Yet why should automation discriminate on the basis of worker characteristics?

Back in 2013, we observed that workers employed in jobs that are exposed to automation tend to have lower levels of education and (typically) lower incomes. We also noted that a disproportionate number of women work in occupations that are less exposed to automation than those in which men work. The Mannheim study, however, uses these outcomes as inputs into its analysis. In fact, jobs are not more or less exposed because of the sex of the worker, even though more male-dominated jobs do tend to be more exposed to automation.

The OECD study does not make the same mistake: it does not include demographic variables, which might explain why it finds a larger share of jobs to be exposed to automation than the Mannheim study does. It suggests that between 6% (in Norway) and 33% (in Slovakia) of jobs are automatable across OECD countries. And while its headline figure suggests that, using our definition, only 14% of jobs across these countries are automatable, its median automatability estimate is quite high: the median job is estimated to have a 48% probability of being automated.
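To see how a 14% headline and a 48% median can coexist, consider the toy calculation below. The distribution of per-job probabilities is invented purely for illustration (it is not the OECD's data): the headline counts only jobs whose probability exceeds a high-risk threshold of 70%, while the median describes the middle of the whole distribution.

```python
# Toy illustration: a headline "high risk" share and a median
# probability measure very different things. Numbers are assumed.
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical automation probabilities for 10,000 jobs, centred
# near 0.48 so the median matches the OECD's reported median.
p = np.clip(rng.normal(loc=0.48, scale=0.18, size=10_000), 0, 1)

print(f"Median probability:       {np.median(p):.0%}")     # ~48%
print(f"Headline share (p > 0.7): {(p > 0.7).mean():.0%}")  # ~11%
```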

Like the Mannheim study, the OECD study uses individual-level data from the PIAAC survey, which, it argues, explains why it finds a lower percentage of jobs to be automatable relative to our estimate. However, it provides no evidence to show that this is actually the case. To take the example of the truck driver: our approach treats all truck drivers as equal, so when autonomous vehicles arrive, all of them will become exposed to automation. The OECD study argues that its estimates are lower than ours because a large share of drivers will not find themselves exposed, but it offers no data to show that this is so. We would welcome the OECD's publication of the distribution of workers exposed to automation by occupation.

We find it hard to believe that the tasks performed by different truck drivers (or workers within any other occupation) vary that greatly. We find it even harder to believe that this would explain the staggering difference between 14% and 47% of jobs being exposed to automation. More fundamentally, even if there is variation in the tasks performed within occupations, would one not expect companies to simplify tasks in production in order to take advantage of the new technology? For example, depending on soil and weather conditions, farm labourers in 1900 would have performed slightly different tasks. But in the developed world, nearly all of them eventually adopted the tractor after it arrived.

Given that the only source data on automatability that both studies consider is our training set, the only reasonable way to check whether their model or ours is preferable is to ask how well each performs on that training set. A common way of evaluating such performance is "holding out" (hiding) elements of the training set provided to the model; the model's predictions for these unseen occupations can later be compared against their actual values. A frequently used metric for this is the AUC, which is the only comparable metric computed for both studies. By this measure, the non-linear model in our study is substantively more accurate in predicting held-out members of the training set than the linear model used in the OECD study.

The bi-modal distribution of automatability scores in our study reflects the confidence of our model: most occupations are confidently predicted to be either automatable or not. The fact that the model mostly predicts held-out elements of the training set correctly lends weight to this confidence. It is, unfortunately, not clear from the evidence in the OECD study that its model, or the results that follow from it, are more reliable.
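The sketch below illustrates this hold-out comparison on synthetic stand-in data: part of a small labelled set is hidden, a non-linear and a linear classifier are fitted to the remainder, and each is scored by its AUC on the hidden portion. It demonstrates the evaluation procedure only; the numbers it prints say nothing about the actual studies.

```python
# Hold-out evaluation sketch: hide part of the labelled data, fit a
# non-linear and a linear model on the rest, compare held-out AUC.
# Features, labels, and models are stand-ins, not either study's data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
# Synthetic features and labels for 70 "occupations", generated with a
# deliberately non-linear rule so the two model classes can differ.
X = rng.normal(size=(70, 9))
y = ((X[:, 0] * X[:, 1] + X[:, 2] ** 2) > 0.5).astype(int)

# Hold out 30% of the labelled set for evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

for name, model in [
    ("non-linear (Gaussian process)", GaussianProcessClassifier()),
    ("linear (logistic regression)", LogisticRegression(max_iter=1000)),
]:
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: held-out AUC = {auc:.2f}")
```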

Furthermore, while our analysis examines 702 occupations, the OECD study's results are broken down only by broad occupational categories. And as the OECD study accurately notes, "valuable information is lost when the risk of automation is calculated based on the skill requirements of broad occupational categories." Given that the OECD study finds only 14% of jobs to be exposed to automation, it is possible that two or three occupations drive its results entirely.

We appreciate the OECD's attempt to add to our work. The reason we made our data publicly available was to allow others to build upon it; that openness is also what enabled the OECD and the University of Mannheim to adjust our methodology. We would welcome similar transparency from these studies.

Policymakers need to understand the thinking behind the disparate numbers in these studies in order to draw their own conclusions about the scale of the changes facing us, and so be able to craft appropriate responses.

This opinion piece reflects the views of the authors, and does not necessarily reflect the position of the Oxford Martin School or the University of Oxford. Any errors or omissions are those of the authors.