Are Google, OpenAI, etc. aware of the risk?
The major AI companies are thinking about existential risk (a risk of human extinction or the destruction of humanity's long-term potential).
However, as of 2024, most of these organizations' effort goes toward capabilities research (research aimed at making AI more capable, sometimes contrasted with research aimed at safety).
Further reading:
- AI Lab Watch ranks labs on various criteria related to AI safety.
- SaferAI produces a similar ranking.
The paper Concrete Problems in AI Safety was a collaboration between researchers at Google Brain (now Google DeepMind), Stanford, Berkeley, and OpenAI. ↩︎