Why might people build AGI rather than better narrow AIs?
Making a narrow AI (an AI that can only perform a limited set of tasks, e.g., a chess-playing AI) for every task would be slow and expensive; a single general system that can learn new tasks as needed could be far more efficient and economically valuable.
Of course, for generality to be a good option, some conditions need to hold. You need an architecture that is straightforward enough to scale up, such as the transformer used in GPT, whose performance follows scaling laws (the relationship between a model's performance and the amount of compute used to train it).
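To make the idea concrete, scaling laws are often reported as power laws relating training compute to loss. Below is a minimal sketch of that kind of relationship; the function name, constant, and exponent are hypothetical placeholders, not values from any published fit:

```python
# Toy power-law scaling curve, loosely in the style of published scaling laws.
# The constant and exponent are hypothetical placeholders, not fitted values.

def predicted_loss(compute: float, c_const: float = 1e8, alpha: float = 0.05) -> float:
    """Loss falls smoothly as a power law in training compute."""
    return (c_const / compute) ** alpha

if __name__ == "__main__":
    for compute in (1e6, 1e9, 1e12):
        print(f"compute={compute:.0e}  predicted loss ~ {predicted_loss(compute):.3f}")
```

The key point is not the particular numbers but the smooth, predictable improvement: if more compute reliably buys better performance on a general architecture, scaling that architecture becomes an attractive alternative to engineering a separate narrow system for each task.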
Many important actors (such as DeepMind and OpenAI) seem to believe these conditions hold and are therefore focusing on building AGI as a way to influence the future. Given that, we should take actions to make it more likely that such an AGI will be developed safely.
Additionally, even if we tried to build only narrow AIs, with enough time and computing power we might accidentally create a more general AI than intended by training a system on a task that requires a broad world model (a system's internal representation of its environment, which it uses to predict what will happen, including as a result of its own possible actions).
Further reading:
- Reframing Superintelligence - A model of AI development which proposes that we might mostly build narrow AI systems for some time.