Could we tell the AI to do what's morally right?
There are a number of challenges with the proposal to tell an AI to "do what's morally right":
- Philosophers (and other people) don't agree on what actions are morally right and wrong, and many hold that human values are inherently complex (in the sense of "difficult or impossible to define in a succinct way").
- It is difficult to create a well-defined concept of what is morally right in a way we can encode into an AI.[^1]
- We currently don't know how to make an AI pursue any particular goal in a safe and reliable way.
[^1]: One proposed approach to determining what is morally right is Coherent Extrapolated Volition (CEV).