Might an aligned superintelligence force people to change?
We don’t know whether an aligned superintelligence would force people to change in ways they would not presently choose. Setting aside for a moment the values of others, if a superintelligence were aligned with your values, we would expect it to change you only in ways you would approve of upon reflection; even this, however, is not guaranteed.
For example, is it ethical to change what people want if we expect them to endorse the change in hindsight, such as curing a drug or gambling addict of their addiction, or treating a patient against their will? There is currently no consensus among moral philosophers about the conditions, if any, under which this is acceptable. An AI guided by preference utilitarianism would likely refuse to do so, while one that implements hedonistic utilitarianism might consider it.
It turns out that solving moral philosophy is hard, and we don’t really know what a “good” superintelligence would do. For now, the focus here is on preventing AIs from doing obviously very bad things.
Further reading:
- Max Tegmark’s book Life 3.0