Why should someone who is religious worry about AI existential risk?
The problem of AI existential risk is not tied to any specific set of values. Rather, the problem is that we don’t know how to give a superintelligent AI any goal which we can trust it to fulfill without causing existential catastrophe. Both religious and secular people agree that human extinction would be a tragedy and could join together in facing this challenge.
One possible reason a religious person might not worry is trust that God won’t allow human extinction to happen. Even granting this, the world has seen many tragedies in which human choices played a part, and a catastrophe that fell short of complete extinction could still kill billions of people. Working on solutions to AI risk in order to prevent that outcome would be worthwhile regardless.
Similarly, one might think that the existence of an afterlife would make the physical extinction of mankind less bad, since it wouldn’t mean the end of all conscious beings. However, we don’t view murder as good because “the victim is now with God”; rather, the shortening or destruction of human life in this world is itself regarded as a grave harm.
Another possible concern among religious people is the secular materialist worldview dominant in AI safety research: would the use of (aligned) AI by secular researchers threaten a religious way of life? While it’s true that some secular people want to use AI in ways that would conflict with some religious people’s values, many are hesitant to impose their own values on future generations of humans. To avoid these problems, people have developed ideas like coherent extrapolated volition (CEV), which would allow each person to live according to their values in the fullest way possible.
Even if you aren’t satisfied with any proposed values for a superintelligence and think it is better not to build any powerful AIs, there are specific policies which religious and secular people could likely agree on, including: supporting government regulation, pressuring AI companies not to deploy potentially dangerous systems, and researching interpretability so that we understand what AI systems are actually doing.