Might an aligned superintelligence force people to change?
We don't know whether an aligned superintelligence (an AI with cognitive abilities far greater than those of humans in a wide range of important domains) would force people to change.
For example, is it ethical to change what people want if we expect them to endorse the change in hindsight, such as curing someone of a drug or gambling addiction, or treating a patient against their will? There is currently no consensus among moral philosophers on the conditions, if any, under which this is acceptable. An AI that follows preference utilitarianism would refuse to make such changes, but one that implements hedonistic utilitarianism might consider them.
It turns out that solving moral philosophy is hard, and we don't really know what a "good" superintelligence would do. Our focus here is on preventing AIs from doing obviously very bad things.
Further reading:
- Max Tegmark’s book *Life 3.0*