Objections and responses


Wouldn't a superintelligence be slowed down by the need to do physical experiments?

A superintelligence will be able to carry out theoretical reasoning at millions of times human speed, while real-world experimentation can be much slower. While this might seem to make experimentation a limiting factor, several considerations suggest otherwise:

  • Experiments can often be run in approximate simulations, provided they don't depend on fine-grained physics that is unknown or impractical to compute.

  • Theory and experiment can substitute for each other to some extent. Just because humans need a certain amount of experimentation to make an advance doesn't mean a vastly more intelligent AI would need the same amount. Humans often extract far less than the maximum possible information from an experiment. In many cases, the information needed to be confident in a hypothesis isn't much more than the information needed to notice the hypothesis as a possibility in the first place (e.g., general relativity was already a good explanation of existing physics before it was specifically confirmed by experiment).

  • Experiments at the nanoscale can be extremely fast because the distances involved are so short.

  • A superintelligence that operates efficiently can run many experiments in parallel, at least when choosing which experiments to perform doesn't require the results of a long chain of previous experiments.

  • Being able to develop theory much faster means a superintelligence can search the entire tree of possible technological advances and select whichever path requires the least experimentation, even if that isn't the path humans would have chosen.



AISafety.info

© AISafety.info, 2022—2025