Categories

Academia (6), Actors (5), Adversarial Training (7), Agency (6), Agent Foundations (20), AGI (19), AGI Fire Alarm (3), AI Boxing (2), AI Takeoff (8), AI Takeover (6), Alignment (5), Alignment Proposals (10), Alignment Targets (4), Anthropic (1), ARC (3), Autonomous Weapons (1), Awareness (6), Benefits (2), Brain-based AI (3), Brain-computer Interfaces (1), CAIS (2), Capabilities (20), Careers (14), Catastrophe (29), CHAI (1), CLR (1), Cognition (5), Cognitive Superpowers (9), Coherent Extrapolated Volition (2), Collaboration (6), Community (10), Comprehensive AI Services (1), Compute (9), Consciousness (5), Content (2), Contributing (29), Control Problem (7), Corrigibility (8), Deception (5), Deceptive Alignment (8), Decision Theory (5), DeepMind (4), Definitions (86), Difficulty of Alignment (8), Do What I Mean (2), ELK (3), Emotions (1), Ethics (7), Eutopia (5), Existential Risk (29), Failure Modes (13), FAR AI (1), Forecasting (7), Funding (10), Game Theory (1), Goal Misgeneralization (13), Goodhart's Law (3), Governance (25), Government (3), GPT (3), Hedonium (1), Human Level AI (5), Human Values (11), Inner Alignment (10), Instrumental Convergence (5), Intelligence (15), Intelligence Explosion (7), International (3), Interpretability (17), Inverse Reinforcement Learning (1), Language Models (13), Literature (4), Living document (2), Machine Learning (20), Maximizers (1), Mentorship (8), Mesa-optimization (6), MIRI (2), Misuse (4), Multipolar (4), Narrow AI (4), Objections (60), Open AI (2), Open Problem (4), Optimization (4), Organizations (15), Orthogonality Thesis (3), Other Concerns (8), Outcomes (5), Outer Alignment (14), Outreach (5), People (4), Philosophy (5), Pivotal Act (1), Plausibility (7), Power Seeking (5), Productivity (6), Prosaic Alignment (7), Quantilizers (2), Race Dynamics (6), Ray Kurzweil (1), Recursive Self-improvement (6), Regulation (3), Reinforcement Learning (13), Research Agendas (26), Research Assistants (1), Resources (19), Robots (7), S-risk (6), Sam Bowman (1), Scaling Laws (6), Selection Theorems (1), Singleton (3), Specification Gaming (10), Study (13), Superintelligence (34), Technological Unemployment (1), Technology (3), Timelines (14), Tool AI (2), Transformative AI (4), Transhumanism (2), Types of AI (2), Utility Functions (3), Value Learning (5), What About (9), Whole Brain Emulation (6), Why Not Just (15)

Objections

60 pages tagged "Objections"
We’re going to merge with the machines so this will never be a problem, right?
Do people seriously worry about existential risk from AI?
Isn’t it immoral to control and impose our values on AI?
Isn’t AI just a tool like any other? Won’t it just do what we tell it to?
What about technological unemployment from AI?
What about autonomous weapons?
What about AI-enabled surveillance?
Can we think of AIs as human-like?
Is the UN concerned about existential risk from AI?
What about automated AI persuasion and propaganda?
Can we list the ways a task could go disastrously wrong and tell an AI to avoid them?
Are AI self-improvement projections extrapolating an exponential trend too far?
If we solve alignment, are we sure of a good future?
If I only care about helping people alive today, does AI safety still matter?
How much computing power did evolution use to create the human brain?
How might things go wrong even without an agentic AI?
How might AI socially manipulate humans?
How might AGI kill people?
Does the importance of AI risk depend on caring about the long-term future?
Do AIs suffer?
Could we tell the AI to do what's morally right?
Why can't we just turn the AI off if it starts to misbehave?
Could AI have emotions?
Can you stop an advanced AI from upgrading itself?
Can we get AGI by scaling up architectures similar to current ones, or are we missing key insights?
Can we constrain a goal-directed AI using specified rules?
Can an AI be smarter than humans?
How can AI cause harm if it can't manipulate the physical world?
Wouldn't it be a good thing for humanity to die out?
Wouldn't a superintelligence be smart enough to avoid misunderstanding our instructions?
Why would we only get one chance to align a superintelligence?
Why would intelligence lead to power?
Why might people build AGI rather than better narrow AIs?
Why don't we just not build AGI if it's so dangerous?
Aren't there easy solutions to AI alignment?
Why can’t we just “put the AI in a box” so that it can’t influence the outside world?
Why can’t we just use Asimov’s Three Laws of Robotics?
Why can't we just make a "child AI" and raise it?
What is a "value handshake"?
What are the ethical challenges related to whole brain emulation?
Isn’t the real concern with AI something else?
Wouldn't humans triumph over a rogue AI because there are more of us?
What are some arguments why AI safety might be less important?
How can an AGI be smarter than all of humanity?
Are corporations superintelligent?
What are the "no free lunch" theorems?
Can't we limit damage from AI systems in the same ways we limit damage from companies?
Isn't capitalism the real unaligned superintelligence?
Will AI be able to think faster than humans?
Wouldn't a superintelligence be slowed down by the need to do physical experiments?
What about AI that is biased?
Are AIs conscious?
Why would a misaligned superintelligence kill everyone?
Aren't AI existential risk concerns just an example of Pascal's mugging?
What about people misusing AI?
Objections and responses
What is Vingean uncertainty?
Wouldn't AIs need to have a power-seeking drive to pose a serious risk?
Might someone use AI to destroy human civilization?
What is Moravec’s paradox?

AISafety.info

We’re a global team of specialists and volunteers from various backgrounds who want to ensure that the effects of future AI are beneficial rather than catastrophic.