The errors, insights and lessons of famous AI predictions – and what they mean for the future

25 April 2014

Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh

DOI: 10.1080/0952813X.2014.895105 | Received: 28 Mar 2013 | Accepted: 25 Apr 2013 | Published online: 25 Apr 2014


Predicting the development of artificial intelligence (AI) is a difficult project, but a vital one, according to some analysts. AI predictions already abound, but are they reliable? This paper starts by proposing a decomposition schema for classifying them. It then constructs a variety of theoretical tools for analysing, judging and improving them. These tools are demonstrated through careful analysis of five famous AI predictions: the initial Dartmouth conference, Dreyfus's criticism of AI, Searle's Chinese room paper, Kurzweil's predictions in The Age of Spiritual Machines, and Omohundro's 'AI drives' paper. These case studies illustrate several important principles, such as the general overconfidence of experts, the superiority of models over expert judgement and the need for greater uncertainty in all types of predictions. The general reliability of expert judgement in AI timeline predictions is shown to be poor, a result that fits with previous studies of expert competence.