Might an aligned superintelligence immediately kill everyone and then go on to create a "hedonium shockwave"?

It's possible, but very unlikely. This scenario assumes that the superintelligence wants to tile the universe with hedonium, which is a very specific failure mode. The more details a story has, the less likely it is to be true (adding details lowers probability; treating a detailed story as more likely is known as the conjunction fallacy).

A hedonium (or, more generally, utilitronium) shockwave is best viewed as a thought experiment illustrating the shortcomings of simply trying to maximise happiness in the universe, rather than as a realistic danger. It's a bit like a physicist building a very simple model in order to better understand the underlying laws.

AISafety.info