What is a "quantilizer"?

A quantilizer is a system that doesn’t look for the single best solution; rather, it is satisfied with choosing one of the better solutions from a specific collection of possibilities. Concretely, imagine an AI trained to imitate a human expert at some task. Such an AI could take the collection of actions the human expert might plausibly take toward a specific goal and choose randomly from the most effective one percent of them. Intuitively, you can think of it as always acting like an extremely competent person having an excellent day.
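To make this concrete, here is a minimal sketch in Python. It assumes we already have a batch of sampled human-plausible actions and a utility estimate for scoring them; the function name quantilize and both inputs are illustrative stand-ins, not components of any real system:

```python
import random

def quantilize(actions, utility, q=0.01, rng=random):
    """Sample from the top q fraction of a base distribution.

    `actions` stands in for i.i.d. samples from a distribution of
    human-plausible actions, and `utility` for an estimate of how
    well each one achieves the goal; both are illustrative
    assumptions. Choosing uniformly among the best q fraction of
    the samples approximates sampling from the base distribution
    restricted to its top q quantile by utility.
    """
    ranked = sorted(actions, key=utility, reverse=True)
    top_k = max(1, int(len(ranked) * q))  # size of the top q slice
    return rng.choice(ranked[:top_k])     # uniform over that slice


# Toy demo: actions are the numbers 0..999 and utility is identity,
# so a 1% quantilizer returns one of the ten best actions, 990-999.
print(quantilize(range(1000), utility=lambda a: a, q=0.01))
```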

This combines the power of utility maximization with the safety of human imitation. Because it chooses only from the best solutions, it will be more effective than simply imitating people; but because it selects only from actions a person might plausibly take, it is less susceptible to Goodhart’s law and won’t fall into the extreme solutions a pure utility maximizer would.
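This safety intuition can be made quantitative. In the original theoretical analysis of quantilizers (Jessica Taylor, 2016), a q-quantilizer samples no action more than 1/q times as often as the base human-imitation distribution does, so for any cost function c its expected cost is bounded:

\[
\mathbb{E}_{a \sim Q_q}[c(a)] \;\le\; \frac{1}{q}\,\mathbb{E}_{a \sim \gamma}[c(a)]
\]

where \(\gamma\) is the base distribution and \(Q_q\) is the quantilizer. With q set to one percent, the quantilizer can do at most 100 times the expected harm of a randomly sampled human-plausible action, whereas a pure maximizer’s potential harm is unbounded.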

For example, if you gave an unbounded optimizer the task of collecting stamps, it might try to turn all of the solar system’s resources into stamps, whereas a human might buy some at a stamp show or collect them from letters. A quantilizer would avoid the optimizer’s dangerous methods by choosing only from human-like strategies, while avoiding human mistakes by selecting only from the most effective of those strategies.

Quantilization is currently only a theoretical tool, useful for exploring design decisions. While promising as a general strategy for mitigating the danger of an AI adopting extreme strategies, it has serious limitations. One is its limited range of application: if the system only chooses from actions that imitate humans, it fails whenever no solution lies within that distribution, such as when success requires superhuman ability.


