If I only care about helping people alive today, does AI safety still matter?

If AI could become advanced enough to pose an existential threat to humanity within the lifetimes of the people you care about, then the case for AI safety work doesn't rest on caring about the long-term future. Timelines for transformative AI are hard to estimate, but many AI experts and researchers expect AI to transform the world within the lifetimes of most people alive today.

Given the enormous damage a misaligned AGI could do soon after deployment, the interests of everyone currently alive, including yourself, depend on humanity not deploying one.