Is AGI even possible?

More than any other species on earth, humans have used our shared cognitive capabilities to change our environment to suit our needs and goals. We have built fleets of ships spanning the oceans, gleaming steel skyscrapers, and satellites in constant orbit around the earth. In that sense, humans are the most intelligent species[1] that we know of. Because of this, it can feel unlikely that non-human agents could ever match (or surpass) our cognitive capabilities, especially if those agents are built by humans. While AI has made great strides in recent years, exceeding human performance on many tasks, even the best current models still fail at tasks that human children generally succeed at. And while many experts believe AGI may come soon, some argue that AGI is impossible even in theory. Who is right?

Maybe real intelligence requires something like consciousness?

Scientists and philosophers such as Hubert Dreyfus, Roger Penrose, John Searle[2], Ned Block, and David Hsing have argued that non-biological substrates, such as silicon-based computers, might be unable to exhibit consciousness. Some of these critics have also argued that human-level intelligence is impossible without consciousness.

Whether only biological entities can be conscious is an active area of debate, made harder by the lack of agreement on what constitutes consciousness. But whether or not AIs can be conscious, recent progress in AI strongly suggests that consciousness is not necessary for the kind of intelligence that interests us. This line of reasoning therefore does not preclude the possibility of generally intelligent synthetic systems.

Even if AGI is possible someday, maybe it's impossible to build using current techniques?

Some people argue that current deep-learning approaches are insufficient to build AGI. This could be because building blocks such as transformers are not expressive enough to support general intelligence, because we will lack the compute, training data, or energy needed to train AGI effectively, or for some other reason that has yet to become apparent. It's hard to be very confident either way, but even if this were true, it would push timelines back rather than make AGI impossible.

We know that general intelligence is possible in principle, because humans exhibit it. If we could create a digital mind functionally equivalent to a human mind, we would have created an AGI. One way to do this is whole brain emulation: scanning a biological brain in enough detail to reproduce its structure and dynamics in software.

In summary, there seem to be no good arguments that AGI can’t ever be developed.


  1. After dolphins and mice, of course.

  2. See also his Chinese room argument.