Might an "intelligence explosion" never occur?

Dreyfus and Penrose argued that human cognitive abilities can’t be emulated by a computational machine. Searle and Block argued that certain kinds of machines cannot have a mind (consciousness, intentionality, etc.). But these objections need not concern those who predict an intelligence explosion.

An intelligence explosion does not require an AI to be a classical computational system. Nor does an intelligence explosion depend on machines having consciousness or other properties of 'mind' — only on their being able to solve problems better than humans can across a wide variety of unpredictable environments. As Edsger Dijkstra once said, the question of whether a machine can 'really' think is "no more interesting than the question of whether a submarine can swim."

Others who doubt that an intelligence explosion will occur within the next few centuries raise no specific objection; instead, they suspect that hidden obstacles will reveal themselves and slow or halt progress toward machine superintelligence.

A global catastrophe such as nuclear war or a large asteroid impact could damage human civilization so severely that the intelligence explosion never occurs. Or a stable, global totalitarianism could prevent the technological development an intelligence explosion requires.