Answering questions about AI Safety
...
...
Can't we just tell an AI to do what we want?
Can we constrain a goal-directed AI using specified rules?
...
Any AI will be a computer program. Why wouldn't it just do what it's programmed to do?
Why don't we just not build AGI if it's so dangerous?
Why can’t we just do x?
Why can't we just turn the AI off if it starts to misbehave?
...