What’s the probability that we’ll eventually wind up with brain-like AGI?

Main author for this article: Arjun Verma


While a majority of researchers across different communities don’t think it’s possible, some independent alignment researchers, such as Steve Byrnes, predict a >50% probability that we’ll eventually have a sufficiently brain-like Artificial General Intelligence (AGI).

Before we dive deeper, it is important to be on the same page about what brain-like AGI is. This article assumes the following definition of brain-like AGI:

“The human brain consists of certain key ingredients that lead to humans demonstrating general intelligence, such as common sense, planning, reasoning, etc. A brain-like AGI research track would involve figuring out what these ingredients are, how they work, and writing AI code based on them. The emphasis does not lie on whole-brain emulation as much as it does on kludgy replication of the processes that result in general intelligence.” [1]

Apart from brain-like AGI, the other routes to AGI could be via modern connectionist architectures (termed prosaic AGI) or via something radically different from both. In his series of blog posts, Steve Byrnes addresses some popular opinions around these directions in great detail. This article highlights the key points from that series.

Opinions on not winding up with a brain-like AGI

Opinion 1: “Prosaic AGI is going to happen so soon that no other research program has a chance.”

This is a pretty popular opinion within the machine learning (ML) community. If it were true, there would be no need to invest in alternate routes to AGI. However, considering that even at the scale of a trillion parameters, such connectionist architectures lack human levels of consistency in logical reasoning, commonsense reasoning, and complex math [2], it seems like a really big if.

Opinion 2: “Brains are very complicated. We understand them so little after so much effort. There is no way we’ll get brain-like AGI even in the next 100 years.”

A majority of people, both inside and outside the neuroscience community, hold this belief strongly. However, it largely stems from confusing whole-brain emulation with brain-like AGI. Keep in mind that we only need to replicate the key ingredients of the brain involved in learning to get brain-like AGI. To make an analogy to deep learning: there is no need to hand-design the complex role of each neuron in an architecture; those roles are discovered automatically by the learning algorithm, gradient descent, which is itself comparatively simple. Similarly, reverse-engineering only the learning-from-scratch algorithms found in key parts of the brain (the telencephalon and cerebellum) could be a far simpler task than is commonly assumed [3].
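To make that analogy concrete, here is a minimal sketch (illustrative only; the architecture and hyperparameters are arbitrary choices, not anything from Byrnes’ posts). The entire “learning algorithm” is a few lines of gradient descent, yet it discovers on its own what each hidden unit should do while a tiny network learns XOR:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

# Tiny 2-8-1 network; nobody designs what each hidden unit will do.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)               # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)    # backprop of squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)               # the whole "learning algorithm"
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # close to [[0], [1], [1], [0]]: the roles emerged
```

The update rule fits in four lines; the complexity lives in the learned weights, not in the algorithm that found them.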

Opinion 3: “Neuroscientists aren’t trying to invent AGI, so we shouldn’t expect them to succeed.”

There is very little truth to this opinion. Trying to understand AGI-relevant brain algorithms is a big subset of inventing brain-like AGI, regardless of the intention. Furthermore, a number of leading computational neuroscientists (the DeepMind neuroscience team, Randall O’Reilly, Jeff Hawkins, Dileep George) are in fact explicitly trying to invent AGI.

Opinion 4: “Brain-like AGI is a somewhat incoherent concept. Intelligence requires embodiment, not just a brain in a vat (or on a chip).”

While there is broad agreement that future AGIs would need some ability to execute actions, whether that requires a whole literal body is debatable. There have been humans who suffered a total loss of motor control and yet went on to become acclaimed intellectuals, such as the Irish writer Christopher Nolan (1965-2009) and Stephen Hawking (1942-2018). Even if embodiment were necessary, there are alternatives, such as running a brain-like AGI on a silicon chip while connecting it to a virtual body in a VR world.

Opinion 5: “Brain-like AGI is incompatible with conventional silicon chips. It requires a whole new hardware platform based on spiking neurons, active dendrites, etc. Neurons are just way better at energy-efficient computations than silicon chips.”

According to Byrnes, this opinion couldn’t be further from the truth. Conventional silicon chips can certainly simulate biological neurons; neuroscientists do this all the time. One could also implement “brain-like algorithms” on them using different low-level operations better suited to the hardware, just as the same C code can be compiled to different CPU instruction sets. And while biological neurons are widely acknowledged to be far more energy-efficient, the difference hardly matters economically: the human brain runs on roughly 20 W, so a silicon-chip AGI server consuming 1000× that power, about 20 kW, would still cost only a few dollars per hour in electricity at typical rates while performing comparably.
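As a concrete illustration of the first claim, here is a minimal sketch (illustrative only; the parameter values are generic textbook defaults, not from the article) of a leaky integrate-and-fire neuron, one of the standard biological neuron models, simulated on an ordinary CPU:

```python
# Leaky integrate-and-fire neuron on a conventional chip.
# All parameter values below are illustrative textbook defaults.
dt = 0.1          # timestep (ms)
tau_m = 10.0      # membrane time constant (ms)
v_rest = -70.0    # resting potential (mV)
v_thresh = -55.0  # spike threshold (mV)
v_reset = -75.0   # post-spike reset potential (mV)
R = 10.0          # membrane resistance (megaohms)
I = 2.0           # constant input current (nA)

v = v_rest
spike_times = []
for step in range(int(200 / dt)):          # simulate 200 ms
    dv = (-(v - v_rest) + R * I) / tau_m   # leak toward v_rest + R*I
    v += dv * dt
    if v >= v_thresh:                      # threshold crossing: emit a spike
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in 200 ms")
```

Established simulators such as NEST and Brian scale this kind of model to millions of neurons on conventional hardware, which is the everyday sense in which silicon “simulates biological neurons.”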

Opinions on winding up with a brain-like AGI

Opinion 1: "To emulate human-like intelligence, we need to emulate the human brain and it simply doesn't work on the same principles as today’s popular ML algorithms. Prosaic AGI could never subsume Brain-like AGI”

While some of the AGI-relevant brain-like ingredients are universal parts of today’s popular ML algorithms (e.g., learning algorithms and distributed representations), a significant portion of the other key ingredients is still absent. These include the ability to form “thoughts” that blend together immediate actions, short-term predictions, long-term predictions, and flexible hierarchical plans. There is a growing opinion that current ML algorithms cannot address all of these concurrently and may never be able to.

Opinion 2: “Brain-like AGI is possible but Prosaic AGI is not. It’s just not going to happen. Today's ML research is not a path to AGI, just as climbing a tree is not a path to the moon.”

This discussion could be considered a subset of the preceding opinions. To reiterate, this opinion stems from the question: “If we were to change nothing fundamental or structural in current connectionist architectures and just keep scaling them to unprecedented levels, would we achieve AGI?” The answer is probably not. There is broad agreement even within the ML community that prosaic AGI would need additional capabilities such as online learning, continual learning, and model-based planning to get anywhere near human intelligence, all of which find their roots naturally in brain-like AGI.
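To pin down the first of those terms, here is a minimal sketch of online learning in general (illustrative only, not any specific brain-like proposal): the model is updated one example at a time as data streams in, rather than being trained once on a fixed dataset:

```python
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0, 0.5])  # unknown target generating the stream
w = np.zeros(3)                      # the learner's weights
lr = 0.01

for t in range(10_000):              # data arrives as an endless stream
    x = rng.normal(size=3)
    y = true_w @ x + 0.1 * rng.normal()  # noisy observation
    y_hat = w @ x
    w += lr * (y - y_hat) * x        # one update per sample, then discard it

print(w.round(2))                    # drifts toward [2.0, -1.0, 0.5]
```

There is no separate training phase: learning and acting happen on the same ongoing stream, which is closer to how brains operate.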

Byrnes rejects the notion that brain-like AGI is centuries away and puts forward the optimistic view that, 20 years from now, we may understand the key learning parts of the brain well enough to eventually build AGI [3], [4].


[1] [Intro to brain-like-AGI safety] 1. What's the problem & Why work on it now? - AI Alignment Forum

[2] Reasoning with Language Model is Planning with World Model (arXiv:2305.14992)

[3] [Intro to brain-like-AGI safety] 2. “Learning from scratch” in the brain - AI Alignment Forum

[4] [Intro to brain-like-AGI safety] 3. Two subsystems: Learning & Steering - AI Alignment Forum


