How powerful would a superintelligence become?
The power of a superintelligence (an AI with cognitive abilities far greater than those of humans in a wide range of important domains) would come from three factors:
- Cognitive capabilities. A superintelligent system could be far smarter than any human. To get an intuition for what is possible, imagine the cognitive capacity of a billion perfectly coordinated human geniuses with a mix of talents and fields of expertise, each thinking at a million times human speed and using (and writing) a huge suite of powerful computer programs. A superintelligence able to design other superintelligences might do even better than those hypothetical billion geniuses by exploring a much larger space of qualitatively different AI designs. Even so, its cognitive capacity would remain bounded by some fundamental limits, such as the physical limits of computation, the (presumed) inability to solve NP-hard problems in polynomial time, and ignorance of remote parts of the universe. (A rough back-of-envelope comparison with these physical limits appears after this list.)
- Technological innovation. The technology humans have invented so far is probably nowhere near the limit of how powerful technology could be. For instance, it seems possible in principle to build machines far smaller or far larger than any we have built so far. It also seems likely that, for anything a biological system can do, technology could in principle improve on it, since biological evolution operates under constraints: it cannot explore the whole design space, cannot think ahead, and is limited to certain materials.
- Control of resources. It's impossible to say what resources a superintelligence would eventually control, because that depends on how we interact with it. At one extreme, we can imagine trying to contain it in a box from which it controls no resources at all. At the other extreme, a superintelligence that controlled a fully automated economy could self-replicate quickly and scale up to use the whole surface of the Earth, then the whole solar system, the whole galaxy, and beyond; a toy calculation of how fast such growth could go also follows this list. (Depending on the superintelligence's goals, it might choose not to exercise this control, and leave it to humans instead.)
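To make "physical limits of computation" concrete, here is a rough back-of-envelope sketch in Python. Only the Landauer and Bremermann formulas are standard physics; the power budget, the per-brain operation count, and the treatment of brain "ops" as comparable to bit operations are loose illustrative assumptions, not figures from the text.

```python
import math

# Physical constants (SI units).
K_B = 1.380649e-23   # Boltzmann constant, J/K
C = 2.99792458e8     # speed of light, m/s
H = 6.62607015e-34   # Planck constant, J*s

# Landauer's principle: minimum energy to erase one bit at temperature T.
T = 300.0  # room temperature, kelvin
landauer_joules_per_bit = K_B * T * math.log(2)

# Assumed power budget for a hypothetical computing system (illustrative).
power_watts = 1e9  # 1 GW
max_bit_ops_per_sec = power_watts / landauer_joules_per_bit

# Bremermann's limit: max bits/s processed by 1 kg of matter (m * c^2 / h).
mass_kg = 1.0
bremermann_bits_per_sec = mass_kg * C**2 / H

# The thought experiment from the text (all figures are rough assumptions).
ops_per_human_brain = 1e16  # assumed order of magnitude, ops/s
geniuses = 1e9              # a billion coordinated geniuses
speedup = 1e6               # each thinking a million times faster
thought_experiment_ops = geniuses * speedup * ops_per_human_brain

print(f"Landauer bound at {power_watts:.0e} W: {max_bit_ops_per_sec:.2e} bit-ops/s")
print(f"Bremermann limit for {mass_kg} kg:     {bremermann_bits_per_sec:.2e} bits/s")
print(f"Billion sped-up geniuses (assumed):    {thought_experiment_ops:.2e} ops/s")
```

Under these assumptions, the billion-genius scenario (~10^31 ops/s) already brushes against the Landauer bound for a 1 GW power budget (~10^29 bit operations/s), yet sits tens of orders of magnitude below Bremermann's limit for a single kilogram of matter (~10^50 bits/s), so the limits are real but leave enormous headroom.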
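And to give a feel for how fast "self-replicate quickly and scale up" could be, here is a toy exponential-growth calculation. The starting footprint and the doubling time are assumptions chosen for illustration, not claims from the text.

```python
import math

# Toy model of a self-replicating automated economy.
EARTH_SURFACE_M2 = 5.1e14     # total surface area of Earth, m^2
initial_footprint_m2 = 100.0  # assumed: one automated factory's footprint
doubling_time_days = 30.0     # assumed: capacity doubles monthly

# Number of doublings needed to grow from the initial footprint to
# Earth's entire surface, and the time that takes at the assumed rate.
doublings = math.log2(EARTH_SURFACE_M2 / initial_footprint_m2)
years = doublings * doubling_time_days / 365.25

print(f"Doublings to cover Earth's surface: {doublings:.1f}")
print(f"Time at a {doubling_time_days:.0f}-day doubling: {years:.1f} years")
```

With these (arbitrary) parameters, about 42 doublings suffice, roughly three and a half years; the point is only that exponential replication makes planet-scale growth a matter of doubling counts, not that these particular numbers are predictions.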
These factors mean a superintelligence could become powerful enough to exercise fine-grained control over astronomical-scale structures. What it would use that power for, and therefore what would happen to the world, would depend on its goals and values.