Objections
60 pages tagged "Objections"
We’re going to merge with the machines so this will never be a problem, right?
Do people seriously worry about existential risk from AI?
Isn’t it immoral to control and impose our values on AI?
Isn’t AI just a tool like any other? Won’t it just do what we tell it to?
What about technological unemployment from AI?
What about autonomous weapons?
What about AI-enabled surveillance?
Can we think of AIs as human-like?
Is the UN concerned about existential risk from AI?
What about automated AI persuasion and propaganda?
Can we list the ways a task could go disastrously wrong and tell an AI to avoid them?
Are AI self-improvement projections extrapolating an exponential trend too far?
If we solve alignment, are we sure of a good future?
If I only care about helping people alive today, does AI safety still matter?
How much computing power did evolution use to create the human brain?
How might things go wrong even without an agentic AI?
How might AI socially manipulate humans?
How might AGI kill people?
Does the importance of AI risk depend on caring about the long-term future?
Do AIs suffer?
Could we tell the AI to do what's morally right?
Why can't we just turn the AI off if it starts to misbehave?
Could AI have emotions?
Can you stop an advanced AI from upgrading itself?
Can we get AGI by scaling up architectures similar to current ones, or are we missing key insights?
Can we constrain a goal-directed AI using specified rules?
Can an AI be smarter than humans?
How can AI cause harm if it can't manipulate the physical world?
Wouldn't it be a good thing for humanity to die out?
Wouldn't a superintelligence be smart enough to avoid misunderstanding our instructions?
Why would we only get one chance to align a superintelligence?
Why would intelligence lead to power?
Why might people build AGI rather than better narrow AIs?
Why don't we just not build AGI if it's so dangerous?
Aren't there easy solutions to AI alignment?
Why can’t we just “put the AI in a box” so that it can’t influence the outside world?
Why can’t we just use Asimov’s Three Laws of Robotics?
Why can't we just make a "child AI" and raise it?
What is a "value handshake"?
What are the ethical challenges related to whole brain emulation?
Isn’t the real concern with AI something else?
Wouldn't humans triumph over a rogue AI because there are more of us?
What are some arguments why AI safety might be less important?
How can an AGI be smarter than all of humanity?
Are corporations superintelligent?
What are the "no free lunch" theorems?
Can't we limit damage from AI systems in the same ways we limit damage from companies?
Isn't capitalism the real unaligned superintelligence?
Will AI be able to think faster than humans?
Wouldn't a superintelligence be slowed down by the need to do physical experiments?
What about AI that is biased?
Are AIs conscious?
Why would a misaligned superintelligence kill everyone?
Aren't AI existential risk concerns just an example of Pascal's mugging?
What about people misusing AI?
Objections and responses
What is Vingean uncertainty?
Wouldn't AIs need to have a power-seeking drive to pose a serious risk?
Might someone use AI to destroy human civilization?
What is Moravec’s paradox?