Predictions about future AI
Building a more accurate picture of how AI will develop lets us make better decisions, increasing the odds that AI transforms the world for the better and not for the worse.
One aspect of how AI will develop is “timelines”: how soon will advanced AI be created? “Advanced AI” can be further specified as “human-level AI”, “transformative AI” (AI capable of transforming society as drastically as the industrial revolution, or even more so), or “superintelligence” (AI with cognitive abilities far greater than those of humans in a wide range of important domains).
Another aspect is “takeoff speed”: how quickly the first AI with roughly human-level intelligence leads to the first AI with vastly superhuman intelligence.
In addition to questions of when we’ll have advanced AI, there are questions of what the consequences will be — for example, how likely is it that advanced AI will result in an existential disaster? This is sometimes called “P(doom)”, short for “the probability of doom”, where “doom” refers to human extinction and similarly bad outcomes (without implying inevitability). The answer depends, among other things, on how hard it is to align AI to human values, on how hard we can expect people to try, and on whether we can recover from failure.
There are many other questions about the dynamics of advanced AI: Will there end up being one superintelligent system that can prevent any threats to its control, or many superintelligent systems that compete or collaborate? What kinds of AI will we develop, and will they act autonomously in the world or be used as tools? Will similar generally intelligent systems do many different tasks, or will there be specialized systems for each? What kind of groups (corporations, governments, university research groups, international institutions) will develop the most advanced future AI systems?
All these questions are related in complicated ways. For example:
- A sudden takeoff might make it harder for humans to react, which might increase P(doom).
- Timelines affect which actors are likely to build AGI, and vice versa.
- Less agentic, less general, and more tool-like systems might make existential disasters from misalignment less likely.
- A sudden takeoff might be more likely than a slow takeoff (a gradual transition from human-level to superintelligent AI, which usually implies that we have time to react) to result in a single superintelligent system taking control.