What is anthropic reasoning?
Anthropic reasoning[^1] means reasoning about events that affect which observers exist, and correcting for the biases these selection effects introduce.
For instance, our record of past supervolcano eruptions, such as the Oruanui eruption, might lead us to underestimate the likelihood of such eruptions: if an eruption of that scale had happened more recently, there might be no humans around today to reflect on it. More generally, the true base rate of human-extinction-level events might be higher than the historical record (i.e., no human extinction in 300,000 years) suggests, as the simulation below illustrates.
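To make the selection effect concrete, here is a minimal Monte Carlo sketch with made-up numbers (an illustration, not a claim from the literature): worlds in which an extinction-level event occurred leave no observers, so every surviving observer inspects a record containing zero such events, whatever the true rate.

```python
import random

# Toy model (illustrative, assumed numbers): an extinction-level event
# occurs independently each millennium with true probability P_TRUE,
# over a 300-millennium (~300,000-year) history.
P_TRUE = 0.005      # assumed true per-millennium probability
MILLENNIA = 300     # length of the historical record
WORLDS = 100_000    # simulated possible histories

random.seed(0)
survivors = sum(
    all(random.random() >= P_TRUE for _ in range(MILLENNIA))
    for _ in range(WORLDS)
)

# Observers exist only in surviving worlds, and every surviving world's
# record shows zero extinction events. An observer who estimates the
# rate from that record gets 0 no matter how large P_TRUE is; raising
# P_TRUE only shrinks the number of worlds with anyone left to estimate.
print(f"True per-millennium probability: {P_TRUE}")
print(f"Surviving worlds: {survivors}/{WORLDS}"
      f" (expected {(1 - P_TRUE) ** MILLENNIA:.1%})")
print("Extinction events in every survivor's record: 0")
```

For survivable catastrophes like supervolcano eruptions the effect is weaker but analogous: histories with more eruptions are less likely to contain observers, so observed histories under-represent them.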
Anthropic reasoning may also be relevant to estimates of risk from AGI. For example, it has been argued that anthropic selection effects:
- Explain why we have yet to see agentic AGI even though we’ve seen AI advance in other ways[^2] (see the toy calculation after this list)
- Imply that creating intelligent entities might be harder than the evolution of human intelligence would lead us to expect
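The argument in the first bullet (spelled out in footnote 2) can be made concrete with a toy Bayesian calculation. This is an illustrative sketch with made-up numbers, not an argument from the literature: it assumes agentic AGI would have been built by now with the same probability whether or not it is dangerous, and asks what our observation (being alive and seeing no agentic AGI) tells us.

```python
# Hypotheses: D = "agentic AGI would cause extinction", S = "it would be safe".
prior_D = 0.5
prior_S = 0.5
B = 0.3  # assumed probability that agentic AGI would have been built by now

# Our observation E: "we are alive and have seen no agentic AGI".
# Under D: if AGI had been built, no observers would remain, so E requires
# "not built".
p_E_given_D = 1 - B
# Under S: if AGI had been built, observers would exist but would have seen
# it, so E again requires "not built".
p_E_given_S = 1 - B

posterior_D = (p_E_given_D * prior_D) / (
    p_E_given_D * prior_D + p_E_given_S * prior_S
)
print(f"P(dangerous | alive, no AGI seen) = {posterior_D}")  # 0.5: no update
```

Under these assumptions, the absence of observed agentic AGI is no evidence of safety, because worlds where dangerous AGI appeared contain no one to notice its absence. The stronger conclusion in the footnote, that this absence positively supports the danger hypothesis, requires further assumptions about how observers are counted across possible worlds, which is part of what makes anthropic reasoning contested.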
The correct way to interpret anthropic reasoning, and whether such reasoning even applies to the world we live in, is debated.
[^1]: Not to be confused with the company of the same name.
[^2]: The argument is that if we were in a world where dangerous agentic AGI had been developed, we would not have survived to observe it. Thus, the fact that we have not observed such AGI may offer some support for agentic AGI being an existential risk.