
Why might people build AGI rather than better narrow AIs?

Making a narrow AI for every task would be extremely costly and time-consuming. By making a more general intelligence, you can apply one system to a broader range of tasks, which is economically and strategically attractive.

Of course, some conditions need to hold for generality to be a good option. You need an architecture that is straightforward to scale up, such as the transformer used for GPT, whose performance follows predictable scaling laws. It's also important that generalizing does not cost too much capability on narrow tasks, or require so much extra computing power that it stops being worthwhile.
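To make the "scaling laws" point concrete, here is a minimal illustrative sketch (not from the article): empirically, a transformer's training loss falls as a smooth power law in parameter count and training tokens, so performance can be forecast before a larger model is trained. The functional form and constants below are the fits reported by Hoffmann et al. (2022, the "Chinchilla" paper); treat the exact numbers as illustrative assumptions.

```python
def predicted_loss(params: float, tokens: float) -> float:
    """Predicted training loss for a transformer language model.

    Power-law form and constants from Hoffmann et al. (2022);
    the specific values are illustrative, not exact for any model.
    """
    E = 1.69                  # irreducible loss (entropy of text)
    A, alpha = 406.4, 0.34    # model-size term: shrinks as params grow
    B, beta = 410.7, 0.28     # data-size term: shrinks as tokens grow
    return E + A / params**alpha + B / tokens**beta

# Example: a 70-billion-parameter model trained on 1.4 trillion tokens
print(predicted_loss(70e9, 1.4e12))  # ≈ 1.94
```

The key point is predictability: because results like this can be extrapolated in advance, scaling up a single general architecture looks like a credible investment rather than a gamble.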

It seems that many important actors (such as DeepMind and OpenAI) believe these conditions hold, and they are therefore focusing on trying to build an AGI in order to shape the future. Given that AGI is being pursued, we should take actions to make it more likely that such an AGI will be developed safely.

Additionally, even if we tried to build only narrow AIs, given enough time and computing power we might accidentally create a more general AI than we intend by training a system on a task that requires a broad world model. For example, a system trained only to predict text may need to learn general facts about the world, and how to reason about them, in order to perform well at that task.



