AI Safety Memes Wiki

This is a database of memes relevant to AI existential safety, especially ones which respond to common misconceptions.

Follow AI Notkilleveryoneism Memes for more!

AGI won’t be powerful enough to destroy humanity

AI is “just” X (math, matrix multiplication, etc.)

Doubts about AGI capabilities (too far away, impossible, etc.)

(animated version: https://research.aimultiple.com/wp-content/uploads/2017/08/wow.gif)

Goalpost-moving (still not AGI!)

Why would AI want to be evil?

Whatever AI does is good since they’re way smarter

AI alignment is a fringe worry

We can’t just regulate AI development

AI regulation is bad

Alignment will be solved, don’t worry

The real danger is from modern AI, not ASI (myopia)

You’re saying other risks aren’t important

Debates about consciousness, sentience, qualia, etc.

F*ck Moloch!

Misc other takes

“AI acceleration is good”

“Yudkowskian scenarios are unlikely”

“Death is inevitable”

“Start more AI labs”

“AI is like electricity”

“I only thought about it for 5 seconds”

“AI safety is a Pascal’s Wager!”

“AI safety is an infohazard!”

“You can’t predict the future exactly”

Rapid-fire bad solutions (bro just…)

Bad Yann takes

Just train multiple AIs

Just box the AI

Just turn it off if it turns against us

Just use Asimov’s three laws

Just don’t give AI access to the “real world”

Just merge with the AIs

Just legally mandate that AIs must be aligned

Just raise the AI like a child