Trust as a Situated User State in Social LLM-Based Chatbots: A Longitudinal Study of Snapchat's My AI
Pith reviewed 2026-05-08 09:40 UTC · model grok-4.3
The pith
Trust in social LLM chatbots like Snapchat's My AI evolves as a changing user state through ongoing interactions rather than forming as a fixed initial judgment.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Trust in social LLM-based chatbots is a situated user state that develops and changes through repeated interactions, as users adapt expectations, refine prompting strategies, and actively regulate reliance on the system; it is shaped by perceived ability, conversational behavior, human-likeness, transparency, privacy concerns, and trust in the host platform rather than remaining a one-time evaluation.
What carries the argument
The conceptual model framing trust as a dynamic user state shaped by interaction context and expectations, built from longitudinal qualitative observations of user adaptations.
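The paper's model is verbal, but its core mechanism (a trust level nudged by each interaction's evidence on the six named factors) can be sketched as a toy state update. Everything below is an illustrative operationalization, not the authors' formalism: the numeric encoding of the factors, the smoothing rule, and the learning rate are all assumptions.

```python
from dataclasses import dataclass

# The six trust antecedents named in the paper; the numeric encoding is ours.
FACTORS = [
    "perceived_ability",
    "conversational_behavior",
    "human_likeness",
    "transparency",
    "privacy_concern",   # scored inversely: more concern, less trust
    "platform_trust",
]

@dataclass
class TrustState:
    """Trust as a mutable user state, nudged by each interaction
    rather than fixed after an initial judgment."""
    level: float = 0.5          # current trust in [0, 1]
    learning_rate: float = 0.2  # how far one interaction moves trust

    def update(self, signals: dict) -> float:
        """Fold one interaction's factor signals (each in [0, 1]) into trust."""
        score = 0.0
        for name in FACTORS:
            value = signals.get(name, 0.5)      # missing signal = neutral
            if name == "privacy_concern":
                value = 1.0 - value             # concern pulls trust down
            score += value
        score /= len(FACTORS)
        # Exponential smoothing toward this interaction's evidence.
        self.level += self.learning_rate * (score - self.level)
        self.level = min(1.0, max(0.0, self.level))
        return self.level
```

Under this reading, "continuous negotiation" means `update` keeps running for the life of the relationship; a static-trust account would instead freeze `level` after the first few calls.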
If this is right
- Excessive human-likeness in the chatbot can reduce trust over time even if it boosts initial engagement.
- Users refine their prompting strategies as they learn the system's limits and strengths.
- Designers of conversational agents should support ongoing adjustment of expectations rather than treating trust as set after onboarding.
- Privacy and transparency features need to address how trust shifts rather than only initial perceptions.
Where Pith is reading between the lines
- The same dynamic negotiation process could appear in other embedded AI chatbots on social media platforms.
- Future designs might include user controls for adjusting how much the chatbot reveals about its capabilities at different stages of use.
- Similar patterns of expectation adjustment may occur when users interact with non-chatbot LLM tools over extended periods.
Load-bearing premise
That self-reported experiences from a small sample of 27 Snapchat users over four weeks can be generalized to trust formation in other social LLM chatbots without major platform-specific or selection biases affecting the conceptual model.
What would settle it
A larger study in which most users report forming a stable trust level after initial interactions and show no significant changes in prompting strategies or reliance patterns over subsequent weeks would challenge the claim that trust is a continuous negotiation.
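That settling experiment can be made concrete as a simple drift check on weekly self-reported trust ratings. The tolerance, the rating scale, and the example cohort below are all hypothetical, chosen only to show the shape of the analysis:

```python
def trust_is_stable(weekly_ratings, tolerance=0.5):
    """True if every post-baseline rating stays within `tolerance`
    of the week-1 level, i.e. trust settled after initial use."""
    baseline = weekly_ratings[0]
    return all(abs(r - baseline) <= tolerance for r in weekly_ratings[1:])

def fraction_stable(cohort, tolerance=0.5):
    """Share of users whose trust never drifts beyond `tolerance`.
    A value near 1.0 would support the static-trust account; the
    paper's continuous-negotiation claim predicts a low value."""
    return sum(trust_is_stable(u, tolerance) for u in cohort) / len(cohort)

# Illustrative 4-week ratings (1-5 scale) for three hypothetical users.
cohort = [
    [3.0, 3.0, 3.2, 3.0],   # stable after week 1
    [3.0, 4.0, 5.0, 4.0],   # trust grows, then dips: negotiation
    [4.0, 4.2, 3.8, 4.0],   # stable within tolerance
]
```

Here `fraction_stable(cohort)` returns 2/3; a large-sample result near 1.0 on real ratings would be the disconfirming outcome described above.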
Original abstract
Social chatbots based on large language models are increasingly embedded in everyday platforms, yet how users develop trust in these systems over time remains unclear. We present a four-week longitudinal qualitative survey study (N = 27) of trust formation in Snapchat's My AI, a socially embedded conversational agent. Our findings show that trust is shaped by perceived ability, conversational behavior, human-likeness, transparency, privacy concerns, and trust in the host platform. Trust does not remain stable, but evolves through interaction as users adapt their expectations, refine their prompting strategies, and actively regulate how and when they rely on the system. These processes reflect a continuous negotiation of trust, not a one-time evaluation. While conversational fluency supports engagement, excessive anthropomorphism and limited transparency can undermine trust over time. We synthesize these findings into a conceptual model that frames trust as a dynamic user state shaped by interaction context and expectations, with implications for the design of human-centered and adaptive conversational agents.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript reports a four-week longitudinal qualitative survey study (N=27) of Snapchat's My AI, a socially embedded LLM chatbot. It claims that trust is shaped by factors including perceived ability, conversational behavior, human-likeness, transparency, privacy concerns, and host-platform trust. Trust is not stable but evolves dynamically as users adapt expectations, refine prompting, and regulate reliance, reflecting continuous negotiation rather than one-time evaluation. The authors synthesize these user-reported themes into a conceptual model framing trust as a situated, interaction-dependent user state, with design implications for human-centered conversational agents.
Significance. If the core findings hold, the work provides a useful longitudinal perspective on trust dynamics in social LLM chatbots, moving beyond static models. The emphasis on adaptation processes and the proposed conceptual model could guide design of adaptive agents, particularly in platform-embedded contexts. The longitudinal qualitative approach is a positive feature for capturing temporal change, though the small, platform-specific sample constrains broader claims.
Major comments (2)
- [Methods] The Methods section does not provide details on the survey instruments (e.g., the exact questions or prompts used each week), the qualitative coding procedures, inter-rater reliability, or the handling of dropouts and incomplete responses. These omissions make it impossible to fully evaluate the support for the central claim that trust evolves as a 'continuous negotiation' rather than a static state.
- [Discussion] Discussion and Conclusion: the conceptual model is framed as applicable to 'social LLM-based chatbots' broadly, yet all evidence derives from a self-selected sample of 27 Snapchat My AI users. Platform-specific elements (social embedding, Snapchat privacy norms, host-platform trust) and selection effects are not explicitly tested or bounded, so the generalizability of the dynamic-state model remains unaddressed.
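On the inter-rater reliability point: for a thematic coding scheme, a revision would typically report agreement between two independent coders as Cohen's kappa, which can be computed from the raw label lists alone. The sketch below is a generic stdlib implementation; the label names used to exercise it are invented, not taken from the paper's codebook.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two coders over the same items:
    observed agreement corrected for chance agreement."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement from each coder's marginal label proportions.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(
        (counts_a[c] / n) * (counts_b[c] / n)
        for c in set(labels_a) | set(labels_b)
    )
    if expected == 1.0:       # degenerate case: both coders use one label
        return 1.0
    return (observed - expected) / (1.0 - expected)
```

Kappa of 1.0 means perfect agreement, 0.0 means agreement no better than chance; reviews in this area commonly treat values above roughly 0.6 as acceptable.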
Minor comments (2)
- [Abstract] Abstract: the phrase 'qualitative longitudinal survey study' could be clarified to specify data collection frequency and format (e.g., weekly open-ended responses).
- [Related Work] Related Work: additional citations to longitudinal trust studies in other AI or chatbot contexts would strengthen positioning.
Simulated Author's Rebuttal
We thank the referee for their thoughtful and constructive comments, which help clarify areas where the manuscript can be strengthened. We address each major comment below and outline the specific revisions we will make.
Point-by-point responses
Referee: [Methods] The Methods section does not provide details on the survey instruments (e.g., the exact questions or prompts used each week), the qualitative coding procedures, inter-rater reliability, or the handling of dropouts and incomplete responses. These omissions make it impossible to fully evaluate the support for the central claim that trust evolves as a 'continuous negotiation' rather than a static state.
Authors: We agree that the Methods section requires greater transparency to support evaluation of our claims. The current version summarizes the four-week longitudinal qualitative survey design at a high level but does not include the requested specifics. In the revised manuscript we will expand this section to provide the exact weekly survey questions and prompts, a step-by-step description of the qualitative coding and theme-development process, details on inter-rater reliability or the validation procedures used by the research team, and explicit information on participant retention, including how incomplete responses and dropouts were handled and which cases were retained for the longitudinal analysis. These additions will directly address the concern and allow readers to assess the evidence for the dynamic, negotiated character of trust.
Revision: yes
Referee: [Discussion] Discussion and Conclusion: the conceptual model is framed as applicable to 'social LLM-based chatbots' broadly, yet all evidence derives from a self-selected sample of 27 Snapchat My AI users. Platform-specific elements (social embedding, Snapchat privacy norms, host-platform trust) and selection effects are not explicitly tested or bounded, so the generalizability of the dynamic-state model remains unaddressed.
Authors: We acknowledge the sample limitations and the risk of overgeneralization. The study is confined to a self-selected group of Snapchat My AI users, and platform-specific factors such as social embedding and host-platform trust are central to the observed processes. The conceptual model is intended as a context-grounded framework rather than a universal claim. In the revision we will add an explicit Limitations subsection that bounds the model, discusses selection effects and platform-specific influences, and clarifies that while the core insight of trust as a situated, continuously negotiated state may have relevance for other social LLM chatbots, empirical testing in additional platforms is required. This will tighten the scope without diminishing the contribution of the longitudinal perspective.
Revision: yes
Circularity Check
No circularity: conceptual model synthesized inductively from qualitative user data
Full rationale
This is a longitudinal qualitative survey study (N=27) that collects self-reported experiences via surveys over four weeks and synthesizes observed themes (perceived ability, conversational behavior, human-likeness, transparency, privacy, host-platform trust, expectation adaptation, prompting refinement, and reliance regulation) into a conceptual model framing trust as a dynamic situated state. No equations, fitted parameters, predictions, uniqueness theorems, or ansatzes appear. The central claim is not derived by reducing to prior self-citations or by construction from inputs; it is an inductive summary of the collected data. The derivation chain is therefore self-contained and does not match any of the enumerated circularity patterns.