
What is the orthogonality thesis?

The orthogonality thesis is the claim that any[1] level of intelligence is compatible with any set of terminal goals. In other words, values and intelligence are “orthogonal” to each other, in the sense that agents can vary along one dimension while staying constant along the other.

This means we can’t assume that an AI system that is as smart as or smarter than humans will automatically be motivated by human values.
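To make the claim concrete, here is a minimal sketch (not from the article; all names are illustrative) in which a single generic search procedure stands in for “intelligence” and interchangeable objective functions stand in for “goals”. The point is that the optimizer’s competence is independent of which objective it is handed.

```python
import random

def hill_climb(objective, start, steps=2000, step_size=0.1):
    """Generic local search: climbs whatever objective it is handed.
    The search capability is fixed; the goal is a swappable parameter."""
    x = start
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if objective(candidate) > objective(x):
            x = candidate
    return x

# Two unrelated terminal goals, pursued equally competently
# by the exact same search procedure.
human_friendly_goal = lambda x: -(x - 1.0) ** 2   # maximized at x = 1
paperclip_goal      = lambda x: -(x + 7.0) ** 2   # maximized at x = -7

print(hill_climb(human_friendly_goal, start=0.0))  # converges near 1.0
print(hill_climb(paperclip_goal, start=0.0))       # converges near -7.0
```

Making the optimizer stronger (more steps, a smarter search rule) improves its performance on whichever goal it is given; nothing in the algorithm pushes it toward one goal over the other.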

On its own, the orthogonality thesis only states that unaligned superintelligence is possible, not that it is likely or that AI alignment is difficult. It is invoked to counter the idea that future AI, regardless of its design, will automatically converge toward human goals or morality as a result of becoming smarter.

In addition to this “weak” version of the thesis, people have considered stronger versions. Eliezer Yudkowsky’s “strong form” of the orthogonality thesis says that creating AI systems with arbitrary goals is not only possible, but involves no special difficulty; in other words, that “preferences are no harder to embody than to calculate”.[2]

While the orthogonality thesis is broadly accepted by the alignment research community, it has critics who come from a few distinct strands:

  • Some moral realists assert that a sufficiently intelligent entity would discover and adhere to objective moral truths that humans would endorse upon reflection.

  • Beren Millidge claims that the strong form of the orthogonality thesis is false within modern deep learning algorithms.

  • Nora Belrose contends that, depending on how the thesis is interpreted, it’s either trivial, false, or unintelligible.

  • Steve Petersen argues that an agent’s goals need to remain continuous over time, while goal representations seem intrinsically underspecified. Together, these factors might lead AIs that have complex goals, and that can understand their human creators, to regard those creators as previous versions of themselves, and to aim to further the creators’ goals.


  [1] What’s mostly meant here is that arbitrarily high levels of intelligence are compatible with any goal. Systems with low levels of intelligence might not be able to represent goals at all. For example, it’s hard to understand what it would even mean for a rock to “have a goal.”

  [2] Even in its strong form, the orthogonality thesis makes no predictions about which systems people will actually try to build, and therefore makes no predictions about which systems will end up existing in reality. People have sometimes misunderstood the orthogonality thesis to mean something like “the system resulting from a real-world AI design process is equally likely to end up with any set of goals”, but this is not implied by the thesis.
