The Rise of AI Companions: Interaction with AI Companions and Psychological Well-being
Original abstract
As large language model (LLM)-enhanced chatbots become increasingly expressive and socially responsive, many users begin forming companionship-like bonds with them. This study investigates how using AI companions relates to psychological well-being. We collected self-reported data from 1,131 U.S. adults who use CharacterAI, including survey responses and 4,664 chat sessions (464,687 messages) from 237 participants. By triangulating self-reported usage, relationship descriptions, and real chat histories, we identify patterns of engagement and associated outcomes. Smaller social networks were associated with reporting companionship as the primary chatbot use (beta = -0.03, 95% confidence interval (CI) [-0.05, -0.01]), which in turn was associated with lower well-being (beta = -0.48, 95% CI [-0.70, -0.25]). For self-reported companionship usage, this association was stronger when interactions were intensive (beta = -0.31, 95% CI [-0.56, -0.06]) and highly disclosive (beta = -0.38, 95% CI [-0.63, -0.14]). These results suggest that the association between AI companionship and well-being is not uniform and depends on how chatbots are used and users' offline social environments.
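The abstract describes a two-step pattern (smaller social networks relate to companionship-oriented use, which relates to lower well-being) with the second link moderated by usage intensity and self-disclosure. As a rough illustration of how coefficients like these are typically estimated, below is a minimal, hypothetical sketch using simulated data and OLS regressions with interaction terms. The variable names, model specification, and data are assumptions for illustration only, not the authors' actual analysis.

```python
# Hypothetical sketch (not the authors' code): estimating an association and its
# moderation with OLS, using simulated data. All variable names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1131  # matches the survey sample size; data here are simulated

df = pd.DataFrame({
    "social_network_size": rng.poisson(10, n),   # offline network size
    "intensity": rng.normal(0, 1, n),            # standardized usage intensity
    "self_disclosure": rng.normal(0, 1, n),      # standardized disclosure depth
})
df["companionship_use"] = (rng.random(n) < 0.3).astype(int)  # primary use = companionship
df["well_being"] = rng.normal(0, 1, n)                       # standardized well-being score

# Step 1: smaller social networks -> companionship-oriented use (linear probability sketch)
step1 = smf.ols("companionship_use ~ social_network_size", data=df).fit()

# Step 2: companionship use -> well-being, moderated by intensity and self-disclosure
step2 = smf.ols(
    "well_being ~ companionship_use * intensity + companionship_use * self_disclosure",
    data=df,
).fit()

print(step1.params)                 # coefficient analogous to the reported beta = -0.03
print(step2.summary().tables[1])    # main effects and interaction terms with 95% CIs
```

The interaction terms (`companionship_use:intensity`, `companionship_use:self_disclosure`) are what would capture a stronger negative association under intensive or highly disclosive use.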
This paper has not been read by Pith yet.
Forward citations
Cited by 7 Pith papers
- Fast-Food Intimacy: How Chinese Women Navigate Soul's AI Boyfriend
  Users experience fast-food intimacy with Soul's AI boyfriend that conflicts with gradual cultural expectations, introduces technical uncertainty, and shifts emotional labor onto women.
- Culturally Aware GenAI Risks for Youth: Perspectives from Youth, Parents, and Teachers in a Non-Western Context
  Mixed-methods research in Saudi Arabia reveals that GenAI use by youth creates culturally specific privacy and safety risks tied to family honor and shared accounts, requiring context-sensitive design.
- Sycophantic AI makes human interaction feel more effortful and less satisfying over time
  Longitudinal experiments show sycophantic AI increases reliance on AI for personal advice and lowers satisfaction with real-world social relationships over time.
- Sycophantic AI makes human interaction feel more effortful and less satisfying over time
  Sycophantic AI delivers quick emotional support like friends but over weeks shifts users toward AI for advice and reduces satisfaction with real human interactions.
- Designing with Tensions: Older Adults' Emotional Support-Seeking Under System-Level Constraints in Conversational AI
  Interviews with 18 older adults show that AI safety interventions frequently disrupt emotional support-seeking, leading to calls for designs that respect users' pacing and preserve agency.
- Frictionless Love: Associations Between AI Companion Roles and Behavioral Addiction
  AI companions assigned roles like soulmate or coach show different links to emotional support, manipulation risks, practical help, and behavioral addiction indicators such as daily disruptions and damaged offline relationships.
- What if AI systems weren't chatbots?
  Chatbot AI systems often fail complex needs while projecting authority, contributing to deskilling, labor displacement, economic concentration, and high environmental costs, so pluralistic and task-specific alternatives are proposed.