

There is, in some quarters, concern about high-level machine intelligence and superintelligent AI coming up within a few decades, bringing with it significant risks for humanity; in other quarters, these issues are ignored or considered science fiction. To clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see these developing, we designed a brief questionnaire and distributed it to four groups of experts. Overall, the results show an agreement among experts that AI systems will probably reach overall human ability around 2040-2050 and move on to superintelligence in less than 30 years thereafter. The experts estimate the probability is about one in three that this development turns out to be 'bad' or 'extremely bad' for humanity.
