Categories

Academia (6), Actors (6), Adversarial Training (7), Agency (6), Agent Foundations (18), AGI (19), AGI Fire Alarm (2), AI Boxing (2), AI Takeoff (8), AI Takeover (6), Alignment (6), Alignment Proposals (12), Alignment Targets (5), ARC (3), Autonomous Weapons (1), Awareness (5), Benefits (2), Brain-based AI (3), Brain-computer Interfaces (1), CAIS (2), Capabilities (20), Careers (16), Catastrophe (31), CHAI (1), CLR (1), Cognition (6), Cognitive Superpowers (9), Coherent Extrapolated Volition (3), Collaboration (5), Community (10), Comprehensive AI Services (1), Compute (8), Consciousness (5), Content (3), Contributing (32), Control Problem (8), Corrigibility (9), Deception (6), Deceptive Alignment (9), Decision Theory (5), DeepMind (4), Definitions (83), Difficulty of Alignment (10), Do What I Mean (2), ELK (3), Emotions (1), Ethics (7), Eutopia (5), Existential Risk (31), Failure Modes (17), FAR AI (1), Forecasting (7), Funding (10), Game Theory (1), Goal Misgeneralization (13), Goodhart's Law (3), Governance (24), Government (3), GPT (3), Hedonium (1), Human Level AI (6), Human Values (12), Inner Alignment (12), Instrumental Convergence (8), Intelligence (17), Intelligence Explosion (7), International (3), Interpretability (16), Inverse Reinforcement Learning (1), Language Models (11), Literature (5), Living document (2), Machine Learning (19), Maximizers (1), Mentorship (8), Mesa-optimization (6), MIRI (3), Misuse (4), Multipolar (4), Narrow AI (4), Objections (64), OpenAI (2), Open Problem (6), Optimization (4), Organizations (16), Orthogonality Thesis (5), Other Concerns (8), Outcomes (3), Outer Alignment (15), Outreach (5), People (5), Philosophy (5), Pivotal Act (1), Plausibility (9), Power Seeking (5), Productivity (6), Prosaic Alignment (6), Quantilizers (2), Race Dynamics (5), Ray Kurzweil (1), Recursive Self-improvement (6), Regulation (3), Reinforcement Learning (13), Research Agendas (27), Research Assistants (1), Resources (22), Robots (8), S-risk (6), Sam Bowman (1), Scaling Laws (6), Selection Theorems (1), Singleton (3), Specification Gaming (11), Study (14), Superintelligence (38), Technological Unemployment (1), Technology (3), Timelines (14), Tool AI (2), Transformative AI (4), Transhumanism (2), Types of AI (3), Utility Functions (3), Value Learning (5), What About (9), Whole Brain Emulation (5), Why Not Just (16)

Catastrophe

31 pages tagged "Catastrophe"
Isn't the real concern AI-enabled surveillance?
Is AI safety about systems becoming malevolent or conscious?
Is large-scale automated AI persuasion and propaganda a serious concern?
Can we list the ways a task could go disastrously wrong and tell an AI to avoid them?
If I only care about helping people alive today, does AI safety still matter?
How quickly could an AI go from harmless to existentially dangerous?
How likely is it that an AI would pretend to be a human to further its goals?
How can I update my emotional state regarding the urgency of AI safety?
Are Google, OpenAI, etc. aware of the risk?
Wouldn't it be a good thing for humanity to die out?
Why might a superintelligent AI be dangerous?
Why might a maximizing AI cause bad outcomes?
Why is AI alignment a hard problem?
Why would an AI do bad things?
Why does AI takeoff speed matter?
What is a "warning shot"?
How likely is extinction from superintelligent AI?
What are the differences between AI safety, AI alignment, AI control, Friendly AI, AI ethics, AI existential safety, and AGI safety?
What are accident and misuse risks?
Can't we limit damage from AI systems in the same ways we limit damage from companies?
Will AI be able to think faster than humans?
What is perverse instantiation?
Isn't the real concern with AI that it's biased?
What is reward hacking?
Why would a misaligned superintelligence kill everyone in the world?
What is the "sharp left turn"?
Wouldn't AIs need to have a power-seeking drive to pose a serious risk?
Might anyone use AI to destroy human civilization?
What is the EU AI Act?
Why would misaligned AI pose a threat that we can’t deal with?
But won't we just design AI to be helpful?