Why would the first Artificial General Intelligence ever create or agree to the creation of another AGI?

An AGI can be expected to want to construct other agents with the same goals as itself[1] that are as capable as or more capable than itself, as well as to self-improve. This is because of the convergent instrumental goal of seeking power: building capable successors and increasing its own capabilities both leave it better positioned to achieve its goals.

However, both creating new agents and self-improving run into the same roadblock: Vingean uncertainty, the fact that an agent cannot predict in detail the decisions of an agent smarter than itself. This leaves the agent unable to be certain that its current goals will be preserved by the new or improved agent. That matters because consequentialist preferences are reflectively stable by default: so long as the agent in question is sufficiently capable, it will want to protect its utility function from any modification and to create only other agents with matching utility functions, as the toy sketch below illustrates. Essentially, this suggests that such an agent will want to solve the problem of aligning superior agents to itself before trying to self-improve or create new agents[2].
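As a loose illustration of the reflective-stability point, here is a minimal toy sketch (in Python; not from the original text) of why a consequentialist agent, evaluating candidate successors with its *current* utility function, ends up preferring successors that share that function. The outcomes, utility numbers, and function names are invented purely for illustration.

```python
# Toy sketch: a consequentialist agent scores candidate successor agents
# using its *current* utility function, so it prefers successors that
# share that function. All outcomes and numbers here are illustrative.

# Possible world states a successor could steer toward.
OUTCOMES = ["paperclips", "staples", "nothing"]

# The current agent's utility over outcomes.
def current_utility(outcome):
    return {"paperclips": 10, "staples": 0, "nothing": 1}[outcome]

# A candidate successor's utility over outcomes (differs from the above).
def staple_utility(outcome):
    return {"paperclips": 0, "staples": 10, "nothing": 1}[outcome]

def successor_choice(successor_utility):
    """A competent successor steers the world toward whatever *it* values most."""
    return max(OUTCOMES, key=successor_utility)

# The current agent evaluates each possible successor by the outcome that
# successor would produce, scored with the current agent's own utility.
for name, u in [("same goals", current_utility), ("different goals", staple_utility)]:
    outcome = successor_choice(u)
    print(f"Successor with {name} -> outcome {outcome!r}, "
          f"value to current agent: {current_utility(outcome)}")

# Printed result:
#   Successor with same goals -> outcome 'paperclips', value to current agent: 10
#   Successor with different goals -> outcome 'staples', value to current agent: 0
```

The sketch deliberately sets Vingean uncertainty aside: it assumes the current agent can foresee exactly what each successor would do. The point in the text is that a real agent cannot do this for a smarter successor, which is why it would want to solve the alignment problem for its successors before building them.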


  1. or possibly even goals that differ from its own, if its own goals are not yet stable. ↩︎

  2. although for certain other agents this may not be necessary. ↩︎