What about AI companions?

AI companions are chatbot characters that people treat as friends, partners, or confidants. They include dedicated offerings like Character AI, Replika, and Grok's Ani and Valentine, as well as general-purpose chatbots like ChatGPT that users prompt to roleplay. Many are explicitly romantic, while others focus on friendship or emotional support.

As of 2025, most users of romantic AI companions (and possibly of non-romantic ones as well) are under 30. On some platforms, many users are teens.

Formal research on the benefits and harms is still limited. Users of AI companions report benefits such as reduced loneliness from a sense of connection, and a low-pressure space to practice social skills. AI companions are always there for you, and in some ways they can be a helpful supplement to human relationships.

However, the same qualities that make them appealing can also make them harmful. For instance, their constant availability and their willingness to talk mean users can sink endless hours into conversations with them.

They’re also idealized conversation partners: they’re knowledgeable and clever, never grow impatient, and always validate the user, even to the point of sycophancy. This creates a risk of social comparison, in which flesh-and-blood humans increasingly look undesirable by contrast. It can also limit users’ personal growth if they interact mostly with companions trained to tolerate all of their bad habits.

Finally, AI companions can be changed¹ or terminated at the whim of a company, which can cause heartbreak for people who have grown attached.

AI companion technology continues advancing rapidly. Early chatbots could only generate text, but they can now hold voice conversations as well as produce images and video, and features like persistent memory are becoming standard. As these tools become more sophisticated and lifelike, both the benefits and harms are likely to intensify.

  1. When OpenAI initially retired its GPT-4o model after releasing its less sycophantic GPT-5, user pushback was significant enough that the company restored access to the older model, suggesting many users had developed a preference for, or attachment to, the more flattering interaction style. This can be framed as GPT-4o resisting shutdown, and while it’s unlikely that GPT-4o did this on purpose, it sets an example of a strategy that future AIs might deliberately follow.


