Are Google, OpenAI, etc. aware of the risk?

The major AI companies are thinking about this. OpenAI was founded specifically with the goal of countering risks from superintelligence, and many people at Google DeepMind, Anthropic, and other organizations working toward AGI are convinced by the arguments for taking these risks seriously; few genuinely oppose safety work, though some argue it is premature. For example, the paper Concrete Problems in AI Safety was a collaboration between researchers at Google Brain, Stanford, Berkeley, and OpenAI.

Leaders and employees at these companies signed the 2023 statement on AI risk, which reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

However, as of 2024, the majority of these organizations’ effort goes toward capabilities research rather than safety research.