Predicting the future of artificial intelligence has always been a fool's game

From the Dartmouth Conference to Turing's test, prophecies about AI have rarely hit the mark. But there are ways to tell the good from the bad when it comes to futurology

In 1956, a bunch of the top brains in their field thought they could crack the challenge of artificial intelligence over a single hot New England summer. Almost 60 years later, the world is still waiting.

The "spectacularly wrong prediction" of the Dartmouth Summer Research Project on Artificial Intelligence made Stuart Armstrong, research fellow at the Future of Humanity Institute at University of Oxford, start to think about why our predictions about AI are so inaccurate.

The Dartmouth Conference had predicted that over two summer months ten of the brightest people of their generation would solve some of the key problems faced by AI developers, such as getting machines to use language, form abstract concepts and even improve themselves.

If they had been right, we would have had AI back in 1957; today, the conference is mostly credited merely with having coined the term "artificial intelligence".

Their failure is "depressing" and "rather worrying", says Armstrong. "If you saw the prediction the rational thing would have been to believe it too. They had some of the smartest people of their time, a solid research programme, and sketches as to how to approach it and even ideas as to where the problems were."

Now, to help answer the question why "AI predictions are very hard to get right", Armstrong has recently analysed the Future of Humanity Institute's library of 250 AI predictions. The library stretches back to 1950, when Alan Turing, the father of computer science, predicted that a computer would be able to pass the "Turing test" by 2000. (In the Turing test, a machine has to demonstrate behaviour indistinguishable from that of a human being.)

Later experts have suggested 2013, 2020 and 2029 as dates when a machine would pass the Turing test, which gives us a clue as to why Armstrong feels that such timeline predictions -- all 95 of them in the library -- are particularly worthless. "There is nothing to connect a timeline prediction with previous knowledge as AIs have never appeared in the world before -- no one has ever built one -- and our only model is the human brain, which took hundreds of millions of years to evolve."

His research also suggests that predictions by philosophers are more accurate than those of sociologists or even computer scientists. "We know very little about the final form an AI would take, so if they [the experts] are grounded in a specific approach they are likely to go wrong, while those on a meta level are very likely to be right".

Although, he adds, that is more a reflection of how bad the rest of the predictions are than of the quality of the philosophers' contributions.

Beyond that, he believes that AI predictions as a whole have all the "characteristics of the kind of tasks that experts are going to be bad at predicting".

In particular it is the lack of feedback about the accuracy of predictions about AI that leads to what has been called the "overconfidence of experts", Armstrong argues. Such "experts" include scientists, futurologists and journalists. "When experts get immediate feedback as to whether some prediction is right or wrong then they are going to get better at predicting. Without it, everyone is overconfident as they are making quite definite predictions on pretty much no evidence at all."

It is possible to make better predictions than what is basically just "gut instinct", he says, if you "decompose the problem by saying we need this feature or that feature and then give estimates for each step".

Few experts bother to do this, he believes, "as the problem is hard, it is not taken seriously, and perhaps they don't even realise you could do better by breaking it down. So in effect your [own] prediction or an algorithm's about AI is as good as an expert's."
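Armstrong doesn't spell out what such a decomposition might look like, but the idea can be sketched in a few lines of code. The milestones and probabilities below are invented purely for illustration and are not taken from his research; the point is only that a combined estimate built from explicit steps is easier to argue with than a single gut-instinct number.

```python
# Rough sketch of "decompose the problem" forecasting.
# Milestones and probabilities are made up for illustration only.

# Estimated probability that each required capability exists by some
# target date, (naively) treated as independent.
milestones = {
    "machines use natural language": 0.6,
    "machines form abstract concepts": 0.4,
    "machines improve themselves": 0.3,
}

combined = 1.0
for name, p in milestones.items():
    print(f"{name}: {p:.0%}")
    combined *= p

print(f"Combined estimate that every milestone is met: {combined:.0%}")
```

Even with invented numbers, each step can be challenged and refined on its own, which is exactly the feedback a bare headline prediction never gets.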

Robin Hanson, however, is not so sure that we should discount expert opinion, as "the more people focus very narrowly on the thing they know about then the more reliable their predictions will be". Hanson is an associate professor at George Mason University and chief scientist at Consensus Point, a leading provider of prediction market technology based in Nashville, Tennessee.

Too often, he says, journalists expect experts only to comment on "the quick Sunday supplement style stories", meaning that they too "are more outsiders rather than researchers because these are not the topics they are really familiar with".

If you ask those actually working in the field of AI, they will say that "in the last twenty years they have seen progress of 5 percent to 10 percent towards the goal" and that means "without any acceleration it might take between 200 and 400 years to achieve the goal". Some would even argue that progress towards achieving it is actually "decelerating".
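The arithmetic behind that range is simple linear extrapolation, sketched below; the 5 to 10 percent figure is the practitioners' rough estimate as reported by Hanson, not a measurement.

```python
# Linear extrapolation behind the "200 to 400 years" figure:
# if 20 years of work covered 5-10% of the distance to the goal,
# the whole journey at that constant rate takes 20 / fraction years.

years_elapsed = 20
for fraction_done in (0.05, 0.10):
    total_years = years_elapsed / fraction_done
    print(f"{fraction_done:.0%} in {years_elapsed} years "
          f"=> about {total_years:.0f} years in total")
```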

For Hanson, one of the best ways to judge the accuracy of an expert is to look at the pundit's track record. "Prefer people who have made a bet financially rather than just saying something. Don't just ask what will happen, ask them what has happened."

Another way is to look at a futures market as the predictor. Although there isn't one for AI, Hanson suggests you "look at the demand for computers and it gives you an idea of what's coming down the line and where people are putting their money".

Armstrong reckons it is easy to "tell if a prediction is bad by comparing it with other similar predictions in the past, and if they have failed..."

Other than that, he suggests trying "to take them apart, weaken them, show that they are wrong or irrelevant -- and if you can't, then it is a stronger prediction". "Watch out too," he says, "for whether the prediction is about the behaviour of future AI rather than its inner nature. If it's about behaviour then it's a better prediction, as inner nature is a complex philosophical issue and you will never get feedback about whether it's right or wrong."

Also, the fewer assumptions a prediction makes -- such as "AI will be networked or have genetic algorithms" -- the better. If a prediction says "specific things" -- that AI will emerge in this way or that way -- then be wary of that prediction too.

And what are Armstrong's predictions about the future of AI? "My prediction is that [AI is] likely to happen sometime in the next five to 80 years. I would give a 90 percent chance [it will happen] in the next two centuries, although there is always the chance that someone could come up with an AI algorithm tomorrow."

And I guess that's what's wrong with more accurate predictions: to be accurate, they have to be that broad.

Stuart Armstrong, Kaj Sotala and Seán Ó hÉigeartaigh's paper, "The errors, insights and lessons of famous AI predictions and what they mean for the future", plus case studies, is pending publication in the proceedings of the AGI12/AGI Impacts Winter Intelligence conference.

This article was originally published by WIRED UK