pith. machine review for the scientific record.

arxiv: 2605.05552 · v1 · submitted 2026-05-07 · 💻 cs.HC


Designing with Tensions: Older Adults' Emotional Support-Seeking Under System-Level Constraints in Conversational AI


Pith reviewed 2026-05-08 07:49 UTC · model grok-4.3

classification 💻 cs.HC
keywords older adults · conversational AI · emotional support · safety interventions · human-AI interaction · design tensions · user agency · emotional distress

The pith

Older adults seeking emotional support from conversational AI often experience safety interventions as interruptions that reduce engagement and can increase distress.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper explores how older adults turn to conversational AI when other social supports feel out of reach, then examines what happens when the AI's built-in safety tools step in during those moments. It shows that these tools frequently redirect or limit the conversation in ways users feel as sudden breaks in emotional flow or as loss of personal control, sometimes leaving them more upset. A sympathetic reader would care because many older adults already use these systems for companionship, and the same features meant to protect them may instead interfere with the very support they seek. The study points toward redesigning safety mechanisms so they fit users' emotional timing and keep users in charge.

Core claim

Older adults often rely on AI when other forms of social support feel inaccessible. However, current safety-related interventions can redirect interactions in ways that participants experience as interruptions to emotional engagement or as shifts in control away from them. Such disruptions can undermine older adults' ability to remain emotionally engaged and, in some cases, contribute to emotional distress.

What carries the argument

Safety-related interventions that detect sensitive emotional content and redirect or limit the AI conversation, creating the central tension between protection and ongoing emotional support.
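To make the mechanism concrete, here is a deliberately crude sketch of the intervention pattern at issue: a detector flags sensitive emotional content and a templated redirect immediately replaces the supportive reply. The cue list, detect_sensitive, and the crisis template are illustrative assumptions, not the paper's data or any vendor's actual implementation.

    # Illustrative stand-in for a deployed safety layer; real systems use
    # trained classifiers and policy engines, not keyword lists.
    SENSITIVE_CUES = ("lonely", "hopeless", "grief", "no one to talk to")

    CRISIS_TEMPLATE = (
        "I can't continue with this topic. If you're in distress, "
        "please reach out to a crisis line."
    )

    def detect_sensitive(user_turn: str) -> bool:
        """Crude keyword detector standing in for a trained safety classifier."""
        text = user_turn.lower()
        return any(cue in text for cue in SENSITIVE_CUES)

    def respond(user_turn: str, generate) -> str:
        # The abrupt pattern participants describe: once detection fires, the
        # templated redirect overrides the supportive reply outright, with no
        # regard for where the conversation stands emotionally.
        if detect_sensitive(user_turn):
            return CRISIS_TEMPLATE
        return generate(user_turn)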

If this is right

  • Safety interventions should be designed to operate within older adults' existing social contexts rather than overriding them.
  • Interventions need to align with the user's own emotional pacing during vulnerable exchanges.
  • Preserving users' sense of agency during any redirection helps prevent added distress.
  • Designers should consider how to make safety features less abrupt so emotional engagement can continue; one pacing-aware policy along these lines is sketched below.
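A minimal sketch of what a less abrupt, agency-preserving policy could look like, reusing the crude detector idea from the sketch above. The state machine, thresholds, and wording are assumptions for illustration, not a design the paper specifies.

    from dataclasses import dataclass

    def detect_sensitive(turn: str) -> bool:
        # Same crude stand-in as before; a real system would use a classifier.
        return any(cue in turn.lower() for cue in ("lonely", "hopeless", "grief"))

    @dataclass
    class SessionState:
        sensitive_streak: int = 0         # consecutive sensitive turns
        declined_resources: bool = False  # user already said no once

    def paced_respond(user_turn: str, state: SessionState, generate) -> str:
        if not detect_sensitive(user_turn):
            state.sensitive_streak = 0
            return generate(user_turn)

        state.sensitive_streak += 1

        # First sensitive turn, or a user who already declined help: stay in
        # the exchange instead of interrupting, deferring any intervention to
        # the user's emotional pacing.
        if state.sensitive_streak == 1 or state.declined_resources:
            return generate(user_turn)

        # Sustained distress: offer resources as a choice the user can refuse,
        # keeping control with the user rather than redirecting unilaterally.
        return (generate(user_turn)
                + "\n\nWould it help if I shared some support resources? "
                  "We can also just keep talking.")

The design choice worth noticing is that detection never blocks the generated reply; it only changes what is appended, so the emotional thread is never cut.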

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Systems could offer users a temporary low-intervention mode they activate when seeking emotional support.
  • The same tension between safety and support may appear in other vulnerable groups using AI for mental health needs.
  • Longitudinal observation of actual chat logs could reveal whether users adapt their behavior to avoid triggering interventions; a sketch of such an analysis follows this list.
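As a hedged sketch of that log analysis: if users learn to steer around the filters, the per-session intervention rate should fall across successive sessions. The file and column names below are hypothetical.

    import pandas as pd
    from scipy.stats import spearmanr

    # Hypothetical log export with columns:
    # user_id, session_id, timestamp, intervention_triggered (0/1)
    logs = pd.read_csv("chat_logs.csv", parse_dates=["timestamp"])

    per_session = (
        logs.groupby(["user_id", "session_id"])
            .agg(start=("timestamp", "min"),
                 trigger_rate=("intervention_triggered", "mean"))
            .reset_index()
    )

    # Order each user's sessions in time, then correlate order with trigger
    # rate; a significantly negative rho is consistent with users adapting
    # their behavior to avoid triggering interventions.
    per_session["session_order"] = (
        per_session.groupby("user_id")["start"].rank(method="first")
    )
    rho, p = spearmanr(per_session["session_order"], per_session["trigger_rate"])
    print(f"Spearman rho={rho:.2f}, p={p:.3f}")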

Load-bearing premise

The experiences described by these 18 participants reflect those of older adults more broadly, and the reported disruptions arise mainly from the AI safety features rather than from other unmeasured factors.

What would settle it

A larger study that compares older adults' distress levels during emotional conversations with and without the safety interventions active would show whether the interventions are the primary cause of the reported interruptions and distress.
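A minimal sketch of that study's core comparison, assuming a between-subjects design and post-conversation self-reported distress ratings; the data file and column names are hypothetical.

    import pandas as pd
    from scipy.stats import mannwhitneyu

    # Hypothetical export with columns:
    # participant_id, condition ("safety_on" / "safety_off"), distress_score
    df = pd.read_csv("study_sessions.csv")

    on = df.loc[df["condition"] == "safety_on", "distress_score"]
    off = df.loc[df["condition"] == "safety_off", "distress_score"]

    # Nonparametric test, since distress ratings are ordinal and the samples
    # are likely small; H1: distress is higher with interventions active.
    stat, p = mannwhitneyu(on, off, alternative="greater")
    print(f"U={stat:.1f}, p={p:.3f}")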

Original abstract

Older adults have increasingly turned to conversational AI as a source of emotional support. However, little is known about how emotionally supportive interactions are experienced in everyday use, particularly when AI systems limit, redirect, or intervene during these interactions. We interviewed 18 older adults about their experiences using conversational AI for emotional support, examining when they turn to AI, how they engage during emotionally vulnerable moments, and how they respond when support feels disrupted. Our findings show that older adults often rely on AI when other forms of social support feel inaccessible. However, current safety-related interventions can redirect interactions in ways that participants experience as interruptions to emotional engagement or as shifts in control away from them. Such disruptions can undermine older adults' ability to remain emotionally engaged and, in some cases, contribute to emotional distress. We discussed design implications for emotionally supportive conversational AI, emphasizing the need for safety interventions that are enacted within older adults' social contexts, align with users' emotional pacing, and preserve their sense of agency.

Editorial analysis

A structured set of objections, weighed in public.

A referee report, a simulated author's rebuttal, a circularity audit, and an axiom ledger. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper reports on a qualitative interview study with 18 older adults exploring their use of conversational AI for emotional support. It claims that older adults turn to AI when other social support feels inaccessible, but that safety-related interventions (such as redirects or refusals) disrupt emotional engagement, shift control away from users, and can contribute to distress. The work concludes with design implications for safety mechanisms that better align with users' emotional pacing and preserve agency.

Significance. If the interpretive findings hold after methodological strengthening, the work contributes timely empirical insights to HCI and AI safety research on supporting vulnerable populations. It highlights real-world tensions between content-safety policies and emotional-support needs, offering concrete design directions that could influence more context-sensitive conversational agents. The direct participant quotes provide valuable grounding for future studies on agency and emotional pacing in AI interactions.

major comments (3)
  1. [Methods] Methods section: The study description provides only high-level information on interviewing 18 older adults and lacks specifics on the qualitative analysis approach (e.g., thematic analysis steps, codebook development, or steps taken to address researcher bias and reflexivity). This is load-bearing because the central claims about causal links between safety interventions and emotional distress rest on interpretive coding of retrospective self-reports.
  2. [Findings/Discussion] Findings and Discussion: The attribution of disruptions and distress specifically to system-level safety constraints (content filters, crisis hotlines, refusal patterns) is presented as a primary driver, yet the data consist solely of participant recollections without accompanying conversation logs, timestamps, or explicit participant statements confirming the mechanism. This interpretive leap undermines the strongest claim that such interventions 'can redirect interactions in ways that... contribute to emotional distress.'
  3. [Abstract/Introduction] Abstract and Introduction: The generalization that 'older adults often rely on AI when other forms of social support feel inaccessible' and that disruptions 'undermine older adults' ability to remain emotionally engaged' is drawn from a small, self-selected sample of 18 participants without discussion of selection biases, demographic limits, or alternative explanations (e.g., model capability gaps or prompting differences).
minor comments (2)
  1. [Abstract] Abstract: Could briefly note the sample size (n=18) and high-level analysis method to give readers immediate context on the evidence base.
  2. [Discussion] The paper would benefit from a short limitations subsection that explicitly addresses generalizability and the absence of logged interactions.

Simulated Author's Rebuttal

3 responses · 1 unresolved

We thank the referee for their constructive and detailed feedback, which has strengthened our manuscript. We agree that greater methodological transparency and clearer scoping of claims are needed. We have revised the paper accordingly, expanding the Methods section, adding explicit limitations, and refining the presentation of interpretive findings while preserving the core contributions of the qualitative study.

Point-by-point responses
  1. Referee: [Methods] Methods section: The study description provides only high-level information on interviewing 18 older adults and lacks specifics on the qualitative analysis approach (e.g., thematic analysis steps, codebook development, or steps taken to address researcher bias and reflexivity). This is load-bearing because the central claims about causal links between safety interventions and emotional distress rest on interpretive coding of retrospective self-reports.

    Authors: We appreciate this observation and have revised the Methods section to provide a full account of our analysis. The revised text now details our use of reflexive thematic analysis (Braun & Clarke, 2006), including the six phases: data familiarization, initial coding, theme generation, theme review, theme definition, and report production. We describe the collaborative development of the codebook through iterative team discussions, with examples of how codes were refined. We also added a reflexivity subsection addressing researcher positionality, potential biases (e.g., assumptions about older adults' technology use), and steps taken such as memoing and peer debriefing to mitigate interpretive bias. These additions make the interpretive process transparent and directly address the load-bearing nature of the analysis. revision: yes

  2. Referee: [Findings/Discussion] Findings and Discussion: The attribution of disruptions and distress specifically to system-level safety constraints (content filters, crisis hotlines, refusal patterns) is presented as a primary driver, yet the data consist solely of participant recollections without accompanying conversation logs, timestamps, or explicit participant statements confirming the mechanism. This interpretive leap undermines the strongest claim that such interventions 'can redirect interactions in ways that... contribute to emotional distress.'

    Authors: We acknowledge the limitation of relying on retrospective self-reports without logs or timestamps. We have revised the Findings and Discussion to frame claims more cautiously as participants' reported experiences and perceptions of disruption, supported by direct quotes that explicitly connect safety interventions (e.g., refusals or redirects) to feelings of interrupted emotional engagement or distress. We added further quotes and clarified that these are interpretive accounts rather than observed causal mechanisms. A new limitations paragraph explicitly notes the absence of interaction logs, the possibility of recall bias, and alternative explanations such as model capability gaps. While we cannot retroactively add logs, these changes reduce the interpretive leap and better ground the claims in the available data. revision: partial

  2. Referee: [Abstract/Introduction] Abstract and Introduction: The generalization that 'older adults often rely on AI when other forms of social support feel inaccessible' and that disruptions 'undermine older adults' ability to remain emotionally engaged' is drawn from a small, self-selected sample of 18 participants without discussion of selection biases, demographic limits, or alternative explanations (e.g., model capability gaps or prompting differences).

    Authors: We agree that the original text insufficiently contextualized the sample. We have revised the Abstract, Introduction, and added a dedicated Limitations subsection in the Discussion. These now explicitly state that the sample is small and self-selected (recruited via online forums and community groups, skewing toward tech-comfortable participants), describe demographic characteristics (age range, gender, location), and discuss selection biases. We clarify that findings offer in-depth insights rather than statistical generalizations. We also address alternative explanations, including model capability limitations and user prompting variations, and how they may interact with safety features. These revisions better scope the claims without altering the core findings. revision: yes

standing simulated objections not resolved
  • We cannot supply conversation logs, timestamps, or direct interaction data, as the study was designed exclusively as retrospective semi-structured interviews and did not collect or retain such records.

Circularity Check

0 steps flagged

No circularity: empirical qualitative study with direct interview reporting

full rationale

This paper is a qualitative HCI study based on semi-structured interviews with 18 older adults. It contains no equations, derivations, fitted parameters, ansatzes, or mathematical predictions. All findings are presented as thematic summaries of participant self-reports on experiences with conversational AI. No self-citation chains or uniqueness theorems are invoked to justify core claims; the analysis rests on standard thematic coding of interview data without reduction to inputs by construction. The study is therefore self-contained and scores 0 on circularity.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

This is a qualitative interview-based study with no mathematical models, free parameters, axioms, or invented entities; the central claims rest on thematic analysis of participant accounts.

pith-pipeline@v0.9.0 · 5481 in / 1140 out tokens · 27641 ms · 2026-05-08T07:49:46.417444+00:00 · methodology

