Recognition: no theorem link
Polite But Boring? Trade-offs Between Engagement and Psychological Reactance to Chatbot Feedback Styles
Pith reviewed 2026-05-16 10:38 UTC · model grok-4.3
The pith
Chatbot feedback styles create trade-offs where politeness lowers reactance but reduces engagement, while verbal leakage increases both surprise and reactance.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The study found that the politeness style reduced psychological reactance and increased behavioral intentions relative to direct feedback, yet participants viewed it as boring and unengaging. The verbal leakage style, by contrast, elicited higher feelings of surprise, engagement, and humour even though it also provoked reactance. These results indicate that chatbot feedback for behaviour change inherently involves balancing reactance against engagement rather than eliminating one at the expense of the other.
What carries the argument
Comparison of Direct, Politeness, and Verbal Leakage feedback styles measured against reactance, surprise, engagement, humour, and behavioural intentions.
If this is right
- Direct feedback should be avoided because it increases reactance and reduces intentions to follow the suggested behaviour.
- Polite feedback can raise short-term compliance but risks leaving users disengaged.
- Verbal leakage can be used when designers want to heighten surprise and engagement even if reactance rises.
- Effective chatbot interventions must weigh reactance and engagement together rather than optimise for either alone.
Where Pith is reading between the lines
- Verbal leakage could be tested in domains such as fitness apps or habit trackers to check whether the added engagement produces longer-lasting change.
- A hybrid style that softens leakage with polite framing might lower reactance while keeping engagement high.
- The same reactance-engagement tension may appear in non-chatbot settings such as email reminders or coaching conversations.
- Cultural or individual differences in tolerance for leakage versus politeness could alter which style works best.
Load-bearing premise
Participants' reported feelings of reactance, surprise, and engagement accurately reflect the psychological effects that would occur in real use without study-specific influences.
What would settle it
A follow-up study that tracks actual behaviour change, such as measured steps or diet adherence, after repeated exposure to each feedback style over weeks.
Figures
read the original abstract
As conversational agents become increasingly common in behaviour change interventions, understanding optimal feedback delivery mechanisms becomes increasingly important. However, choosing a style that both lessens psychological reactance (perceived threats to freedom) while simultaneously eliciting feelings of surprise and engagement represents a complex design problem. We explored how three different feedback styles: 'Direct', 'Politeness', and 'Verbal Leakage' (slips or disfluencies to reveal a desired behaviour) affect user perceptions and behavioural intentions. Matching expectations from literature, the 'Direct' chatbot led to lower behavioural intentions and higher reactance, while the 'Politeness' chatbot evoked higher behavioural intentions and lower reactance. However, 'Politeness' was also seen as unsurprising and unengaging by participants. In contrast, 'Verbal Leakage' evoked reactance, yet also elicited higher feelings of surprise, engagement, and humour. These findings highlight that effective feedback requires navigating trade-offs between user reactance and engagement, with novel approaches such as 'Verbal Leakage' offering promising alternative design opportunities.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript reports an empirical comparison of three chatbot feedback styles (Direct, Politeness, and Verbal Leakage) in behavior-change interventions. It finds that Direct feedback produces higher reactance and lower behavioral intentions, Politeness produces lower reactance and higher intentions but is rated low on surprise and engagement, and Verbal Leakage produces higher reactance yet also higher surprise, engagement, and humor. The authors conclude that feedback design requires navigating reactance-engagement trade-offs and that Verbal Leakage represents a promising alternative approach.
Significance. If the results hold after addressing methodological gaps, the work usefully extends reactance theory into conversational-agent design by demonstrating concrete trade-offs and by introducing Verbal Leakage as a novel, implementable style. The empirical pattern aligns with prior literature on direct versus polite feedback while adding a disfluency-based alternative that could inform more engaging behavior-change systems.
major comments (2)
- [Methods] The central claim that Verbal Leakage trades higher reactance for higher surprise/engagement rests on participants perceiving the disfluencies exactly as intended slips. No manipulation check, scenario-realism validation, or control for demand characteristics is described, leaving open the possibility that the observed pattern arises from unintended interpretations or study artifacts rather than the style itself.
- [Results] The abstract and reported findings provide no information on sample size, statistical tests, effect sizes, or controls for confounds (e.g., chatbot voice, individual differences, or self-report biases). These omissions make it impossible to evaluate whether the data actually support the trade-off conclusions at a load-bearing level.
minor comments (1)
- [Abstract] The abstract omits basic methodological details (N, analysis approach) that readers need to assess the strength of the claims.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback. We agree that strengthening the methodological transparency and reporting details will improve the manuscript and plan revisions accordingly. Below we respond point-by-point to the major comments.
read point-by-point responses
-
Referee: [Methods] The central claim that Verbal Leakage trades higher reactance for higher surprise/engagement rests on participants perceiving the disfluencies exactly as intended slips. No manipulation check, scenario-realism validation, or control for demand characteristics is described, leaving open the possibility that the observed pattern arises from unintended interpretations or study artifacts rather than the style itself.
Authors: We acknowledge this is a valid concern. The original study design relied on the distinctiveness of the disfluency manipulations (e.g., explicit slips revealing the target behavior) to differentiate Verbal Leakage from the other conditions, but we did not include an explicit manipulation check. In the revision we will add a post-experiment item asking participants to rate how intentional versus accidental the chatbot's statements appeared, plus a brief scenario-realism scale. We will also expand the limitations section to discuss demand characteristics and note that future work could include a no-feedback control condition. These additions will allow readers to better evaluate whether the engagement/reactance trade-off is driven by the intended mechanism. revision: yes
-
Referee: [Results] The abstract and reported findings provide no information on sample size, statistical tests, effect sizes, or controls for confounds (e.g., chatbot voice, individual differences, or self-report biases). These omissions make it impossible to evaluate whether the data actually support the trade-off conclusions at a load-bearing level.
Authors: We agree the abstract and results summary were insufficiently detailed. The full manuscript reports a sample of 180 participants, one-way ANOVAs with post-hoc Tukey tests, and effect sizes (partial eta-squared) for the key outcomes; individual-difference measures (reactance proneness) were collected but showed no significant interactions. The chatbots were text-only to eliminate voice confounds. In the revision we will (1) move sample size, test statistics, and effect sizes into the abstract, (2) add a dedicated “Statistical Analysis” subsection, and (3) report any covariate checks for self-report bias. These changes will make the evidential basis for the trade-off claims fully transparent. revision: yes
Circularity Check
No significant circularity: purely empirical comparison of experimental conditions
full rationale
The paper reports results from a between-subjects experiment comparing three chatbot feedback styles (Direct, Politeness, Verbal Leakage) on measures of reactance, surprise, engagement, humour, and behavioural intentions. No equations, derivations, fitted parameters, or predictive models are present. All central claims rest on observed participant data rather than any self-referential construction, self-citation chain, or renaming of prior results. The study is self-contained against external benchmarks (standard Likert scales and behavioural intention items) with no load-bearing steps that reduce to inputs by definition.
Axiom & Free-Parameter Ledger
axioms (2)
- domain assumption Psychological reactance can be validly measured via self-report scales
- domain assumption Self-reported behavioral intentions serve as a reasonable proxy for actual behavior
Forward citations
Cited by 1 Pith paper
-
The Differential Effects of Agreeableness and Extraversion on Older Adults' Perceptions of Conversational AI Explanations in Assistive Settings
High agreeableness in LLM voice assistants increases older adults' empathy perceptions and real-time explanations outperform history-based ones, but personality does not affect perceived intelligence.
Reference graph
Works this paper leans on
-
[1]
Adnan Abbas, Caleb Wohn, Donghan Hu, Eugenia H Rho, and Sang Won Lee
-
[2]
InProceedings of the 7th ACM Conference on Conversational User Interfaces (CUI ’25)
PITCH: Designing Agentic Conversational Support for Planning and Self-reflection. InProceedings of the 7th ACM Conference on Conversational User Interfaces (CUI ’25). Association for Computing Machinery, New York, NY, USA, Article 62, 22 pages. doi:10.1145/3719160.3736634
-
[3]
Lize Alberts, Ulrik Lyngs, and Max Van Kleek. 2024. Computers as bad social actors: Dark patterns and anti-patterns in interfaces that act socially.Proceedings of the ACM on Human-Computer Interaction8, CSCW1 (2024), 1–25. doi:10.1145/ 3653693
work page 2024
-
[4]
Deepali Aneja, Rens Hoegen, Daniel McDuff, and Mary Czerwinski. 2021. Un- derstanding Conversational and Expressive Style in a Multimodal Embodied Conversational Agent. InProceedings of the 2021 CHI Conference on Human Factors in Computing Systems(Yokohama, Japan)(CHI ’21). New York, NY, USA, Article 102, 10 pages. doi:10.1145/3411764.3445708
-
[5]
Scott Appling, Amy Bruckman, and Munmun De Choudhury. 2022. Reactions to Fact Checking.Proc. ACM Hum.-Comput. Interact.6, CSCW2, Article 403 (Nov. 2022), 17 pages. doi:10.1145/3555128
-
[6]
Franziska Babel, Robin Welsch, Linda Miller, Philipp Hock, Sam Thellman, and Tom Ziemke. 2024. A Robot Jumping the Queue: Expectations About Politeness and Power During Conflicts in Everyday Human-Robot Encounters. InProceedings of the 2024 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA)(CHI ’24). Association for Computing Machine...
-
[7]
Francesco Bartolucci, Donata Favaro, Fulvia Pennoni, and Dario Sciulli. 2024. An Analysis of the Effect of Streaming on Civic Participation Through a Causal Hidden Markov Model.Social Indicators Research172, 1 (2024), 163–190. doi:10. 1007/s11205-023-03261-z
work page 2024
-
[8]
Debra Z Basil, Nancy M Ridgway, and Michael D Basil. 2008. Guilt and giving: A process model of empathy and efficacy.Psychology & Marketing25, 1 (2008), 1–23. doi:10.1002/mar.20200
-
[9]
Marshall H. Becker and Lois A. Maiman. 1975. Sociobehavioral Determinants of Compliance with Health and Medical Care Recommendations.Medical Care 13, 1 (2024/10/18/ 1975), 10–24. doi:10.1097/00005650-197501000-00002
-
[10]
Timothy W. Bickmore and Rosalind W. Picard. 2004. Towards caring machines. InCHI ’04 Extended Abstracts on Human Factors in Computing Systems(Vienna, Polite But Boring? Trade-offs Between Engagement and Psychological Reactance to Chatbot Feedback Styles CHI ’26, April 13–17, 2026, Barcelona, Spain Austria)(CHI EA ’04). Association for Computing Machinery,...
-
[11]
Shoshana Blum-Kulka. 1987. Indirectness and politeness in requests: Same or different?Journal of Pragmatics11, 2 (1987), 131–146. doi:10.1016/0378- 2166(87)90192-5
-
[12]
Annika Boos, Olivia Herzog, Jakob Reinhardt, Klaus Bengler, and Markus Zim- mermann. 2022. A Compliance–Reactance Framework for Evaluating Human- Robot Interaction.Frontiers in Robotics and AI9 (2022). doi:10.3389/frobt.2022. 733504
-
[13]
Robert Bowman, Orla Cooney, Joseph W Newbold, Anja Thieme, Leigh Clark, Gavin Doherty, and Benjamin Cowan. 2024. Exploring how politeness impacts the user experience of chatbots for mental health support.International Journal of Human-Computer Studies184 (2024), 103181. doi:10.1016/j.ijhcs.2023.103181
-
[14]
Jack W Brehm. 1966. A theory of psychological reactance. (1966)
work page 1966
-
[15]
1981.Psychological reactance: A theory of freedom and control
Sharon S Brehm and Jack W Brehm. 1981.Psychological reactance: A theory of freedom and control. Academic Press
work page 1981
-
[16]
Noel T Brewer, Humberto Parada Jr, Marissa G Hall, Marcella H Boynton, Seth M Noar, and Kurt M Ribisl. 2019. Understanding Why Pictorial Cigarette Pack Warnings Increase Quit Attempts.Annals of Behavioral Medicine53, 3 (2019), 232–243. doi:10.1093/abm/kay032
-
[17]
1987.Politeness: Some universals in language usage
Penelope Brown. 1987.Politeness: Some universals in language usage. Vol. 4. Cambridge University Press
work page 1987
-
[18]
Judee K Burgoon, Joseph A Bonito, Paul Benjamin Lowry, Sean L Humpherys, Gregory D Moody, James E Gaskin, and Justin Scott Giboney. 2016. Application of Expectancy Violations Theory to communication with and judgments about embodied agents during a decision-making task.International Journal of Human- Computer Studies91 (2016), 24–36. doi:10.1016/j.ijhcs.2...
-
[19]
Christopher J Carpenter and Alexandre Pascual. 2016. Testing the reactance vs. the reciprocity of politeness explanations for the effectiveness of the “but you are free” compliance-gaining technique.Social Influence11, 2 (2016), 101–110. doi:10.1080/15534510.2016.1156569
-
[20]
Ana Paula Chaves and Marco Aurelio Gerosa. 2021. How Should My Chatbot Interact? A Survey on Social Characteristics in Human–Chatbot Interaction Design.International Journal of Human–Computer Interaction37, 8 (2021), 729–758. doi:10.1080/10447318.2020.1841438
-
[21]
Eugene Cho and S. Shyam Sundar. 2022. Should Siri be a Source or Medium for Ads? The Role of Source Orientation and User Motivations in User Responses to Persuasive Content from Voice Assistants. InExtended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems(New Orleans, LA, USA)(CHI EA ’22). Association for Computing Machinery, New...
-
[22]
Leigh Clark, Nadia Pantidi, Orla Cooney, Philip Doyle, Diego Garaialde, Justin Edwards, Brendan Spillane, Emer Gilmartin, Christine Murad, Cosmin Munteanu, Vincent Wade, and Benjamin R. Cowan. 2019. What Makes a Good Conversation? Challenges in Designing Truly Conversational Agents. InPro- ceedings of the 2019 CHI Conference on Human Factors in Computing ...
-
[23]
Michael J Cody, Peter J Marston, and Myrna Foster. 2012. Deception: Paralin- guistic and Verbal Leakage. InCommunication Yearbook 8. Routledge, 464–490
work page 2012
-
[24]
Samuel Rhys Cox, Yi-Chieh Lee, and Wei Tsang Ooi. 2023. Comparing How a Chatbot References User Utterances from Previous Chatting Sessions: An Investigation of Users’ Privacy Concerns and Perceptions. InProceedings of the 11th International Conference on Human-Agent Interaction. doi:10.1145/3623809. 3623875
-
[25]
Samuel Rhys Cox and Wei Tsang Ooi. 2022. Does Chatbot Language For- mality Affect Users’ Self-Disclosure?. InProceedings of the 4th Conference on Conversational User Interfaces(Glasgow, United Kingdom)(CUI ’22). Associ- ation for Computing Machinery, New York, NY, USA, Article 1, 13 pages. doi:10.1145/3543829.3543831
-
[26]
Samuel Rhys Cox, Yunlong Wang, Ashraf Abdul, Christian von der Weth, and Brian Y. Lim. 2021. Directed Diversity: Leveraging Language Embed- ding Distances for Collective Creativity in Crowd Ideation. InProceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–35. doi:10.1145/3411764.3445782
-
[27]
Roelof A.J. de Vries, Khiet P. Truong, Sigrid Kwint, Constance H.C. Drossaert, and Vanessa Evers. 2016. Crowd-Designed Motivation: Motivational Messages for Exercise Adherence Based on Behavior Change Theory. InProceedings of the 2016 CHI Conference on Human Factors in Computing Systems(San Jose, California, USA)(CHI ’16). Association for Computing Machin...
-
[28]
Bella M DePaulo, James J Lindsay, Brian E Malone, Laura Muhlenbruck, Kelly Charlton, and Harris Cooper. 2003. Cues to Deception.Psychological Bulletin 129, 1 (2003), 74. doi:10.1037/0033-2909.129.1.74
-
[29]
Stephanie Diepeveen, Tom Ling, Marc Suhrcke, Martin Roland, and Theresa M Marteau. 2013. Public acceptability of government intervention to change health-related behaviours: a systematic review and narrative synthesis.BMC Public Health13 (2013), 1–11. doi:10.1186/1471-2458-13-756
-
[30]
James Price Dillard and Claire Dzur Harkness. 1992. Exploring the Affective Impact of Interpersonal Influence Messages.Journal of Language and Social Psychology11, 3 (1992), 179–191. doi:10.1177/0261927X92113004
-
[31]
James Price Dillard, Terry A Kinney, and Michael G Cruz. 1996. Influence, appraisals, and emotions in close relationships.Communications Monographs 63, 2 (1996), 105–130. doi:10.1080/03637759609376382
-
[32]
James Price Dillard and Lijiang Shen. 2005. On the Nature of Reactance and its Role in Persuasive Health Communication.Communication Monographs72, 2 (2005), 144–168. doi:10.1080/03637750500111815
-
[33]
James Price Dillard, Lijiang Shen, and Renata Grillova Vail. 2007. Does Perceived Message Effectiveness Cause Persuasion or vice Versa? 17 Consistent Answers. Human Communication Research33, 4 (2007), 467–488. doi:10.1111/j.1468-2958. 2007.00308.x
work page internal anchor Pith review Pith/arXiv arXiv doi:10.1111/j.1468-2958 2007
-
[34]
Yujie Dong, Wu Li, and Meng Chen. 2024. Personalization reactance in online medical consultations: effects of two-sided personalization and health topic sensitivity on reactance.Human Communication Research50, 1 (2024), 66–78. doi:10.1093/hcr/hqad039
-
[35]
Paul Ekman and Wallace V Friesen. 1969. Nonverbal leakage and clues to deception.Psychiatry32, 1 (1969), 88–106. doi:10.1080/00332747.1969.11023575
-
[36]
Nina Errey, Christy Jie Liang, Tuck Wah Leong, Yongqing Chen, Hassan Vally, and Catherine M. Bennett. 2024. Nudging with Narrative Visualization: Commu- nicating to a Young Adult Audience in the Pandemic.Proc. ACM Hum.-Comput. Interact.8, CSCW2, Article 411 (Nov. 2024), 21 pages. doi:10.1145/3686950
-
[37]
2011.Rhetorical Style: The Uses of Language in Persuasion
Jeanne Fahnestock. 2011.Rhetorical Style: The Uses of Language in Persuasion. OUP USA. doi:10.1093/acprof:oso/9780199764129.001.0001
work page doi:10.1093/acprof:oso/9780199764129.001.0001 2011
-
[38]
Christopher M Federico and Pierce D Ekstrom. 2018. The Political Self: How Identity Aligns Preferences With Epistemic Needs.Psychological Science29, 6 (2018), 901–913. doi:10.1177/0956797617748679
-
[39]
B. J. Fogg. 2002. Persuasive technology: using computers to change what we think and do.Ubiquity2002, December, Article 5 (Dec. 2002), 32 pages. doi:10.1145/764008.763957
-
[40]
Celina R Furman, Ethan Kross, and Ashley N Gearhardt. 2020. Distanced Self- Talk Enhances Goal Pursuit to Eat Healthier.Clinical Psychological Science8, 2 (2020), 366–373. doi:10.1177/2167702619896366
-
[41]
Andrew Gambino, Jesse Fox, and Rabindra A Ratan. 2020. Building a stronger CASA: Extending the computers are social actors paradigm.Human-Machine Communication1 (2020), 71–85. doi:10.30658/hmc.1.5
-
[42]
Aimi S. Ghazali, Jaap Ham, Emilia I. Barakova, and Panos Markopoulos. 2017. Pardon the rude robot: Social cues diminish reactance to high controlling lan- guage. In2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). 411–417. doi:10.1109/ROMAN.2017.8172335
-
[43]
Mathyas Giudici, Pietro Crovari, and Franca Garzotto. 2025. Persuasive Con- versational Agents for Environmental Sustainability: A Survey.ACM Comput. Surv.(Nov. 2025). doi:10.1145/3774751
-
[44]
Joseph Grandpre, Eusebio M Alvaro, Michael Burgoon, Claude H Miller, and John R Hall. 2003. Adolescent Reactance and Anti-Smoking Campaigns: A Theoretical Approach.Health Communication15, 3 (2003), 349–366. doi:10.1207/ S15327027HC1503_6
work page 2003
-
[45]
Huan Guo, Hang Song, Yuanyuan Liu, Kai Xu, and Heyong Shen. 2019. Social distance modulates the process of uncertain decision-making: evidence from event-related potentials.Psychology Research and Behavior Management(2019), 701–714. doi:10.2147/PRBM.S210910
-
[46]
Jaap Ham and Cees JH Midden. 2014. A Persuasive Robot to Stimulate Energy Conservation: The Influence of Positive and Negative Social Feedback and Task Similarity on Energy-Consumption Behavior.International Journal of Social Robotics6 (2014), 163–171. doi:10.1007/s12369-013-0205-z
-
[47]
Kyle Hamilton, Luca Longo, and Bojan Bozic. 2024. GPT Assisted Annotation of Rhetorical and Linguistic Features for Interpretable Propaganda Technique Detection in News Text. InCompanion Proceedings of the ACM on Web Conference
work page 2024
-
[48]
1431–1440. doi:10.1145/3589335.3651909
-
[49]
Xu Han, Michelle Zhou, Matthew J. Turner, and Tom Yeh. 2021. Designing Ef- fective Interview Chatbots: Automatic Chatbot Profiling and Design Suggestion Generation for Chatbot Debugging. InProceedings of the 2021 CHI Conference on Human Factors in Computing Systems(Yokohama, Japan)(CHI ’21). Asso- ciation for Computing Machinery, New York, NY, USA, Articl...
-
[50]
Katrin Hartwig, Tom Biselli, Franziska Schneider, and Christian Reuter. 2024. From Adolescents’ Eyes: Assessing an Indicator-Based Intervention to Combat Misinformation on TikTok. InProceedings of the 2024 CHI Conference on Human Factors in Computing Systems(Honolulu, HI, USA)(CHI ’24). Association for Computing Machinery, New York, NY, USA, Article 905, ...
-
[51]
Hekler, Predrag Klasnja, Jon E
Eric B. Hekler, Predrag Klasnja, Jon E. Froehlich, and Matthew P. Buman. 2013. Mind the Theoretical Gap: Interpreting, Using, and Developing Behavioral The- ory in HCI Research. InProceedings of the SIGCHI Conference on Human Factors in Computing Systems(Paris, France)(CHI ’13). Association for Computing Machinery, New York, NY, USA, 3307–3316. doi:10.114...
-
[52]
Sander Hermsen, Jeana Frost, Reint Jan Renes, and Peter Kerkhof. 2016. Using feedback through digital technology to disrupt and change habitual behavior: A critical review of current literature.Computers in Human Behavior57 (2016), 61–74. doi:10.1016/j.chb.2015.12.023
-
[53]
Thomas Holtgraves. 1997. Styles of Language Use: Individual and Cultural Variability in Conversational Indirectness.Journal of Personality and Social Psychology73, 3 (1997), 624. doi:10.1037/0022-3514.73.3.624
-
[54]
Yaxin Hu, Yuxiao Qu, Adam Maus, and Bilge Mutlu. 2022. Polite or Direct? Conversation Design of a Smart Display for Older Adults Based on Politeness Theory. InProceedings of the 2022 CHI Conference on Human Factors in Computing Systems(New Orleans, LA, USA)(CHI ’22). Article 307, 15 pages. doi:10.1145/ 3491102.3517525
-
[55]
Hudecek, Eva Lermer, Susanne Gaube, Julia Cecil, Silke F
Matthias F.C. Hudecek, Eva Lermer, Susanne Gaube, Julia Cecil, Silke F. Heiss, and Falk Batz. 2024. Fine for others but not for me: The role of perspective in patients’ perception of artificial intelligence in online medical platforms. Computers in Human Behavior: Artificial Humans2, 1 (2024), 100046. doi:10. 1016/j.chbah.2024.100046
-
[56]
Danette Ifert Johnson. 2008. Modal expressions in refusals of friends’ interper- sonal requests: Politeness and effectiveness.Communication Studies59, 2 (2008), 148–163. doi:10.1080/10510970802062477
-
[57]
Brennan Jones, Yan Xu, Qisheng Li, and Stefan Scherer. 2024. Designing a Proactive Context-Aware AI Chatbot for People’s Long-Term Goals. InExtended Abstracts of the CHI Conference on Human Factors in Computing Systems(Hon- olulu, HI, USA)(CHI EA ’24). Association for Computing Machinery, New York, NY, USA, Article 104, 7 pages. doi:10.1145/3613905.3650912
-
[58]
Kyuha Jung, Gyuho Lee, Yuanhui Huang, and Yunan Chen. 2025. ’I’ve talked to ChatGPT about my issues last night. ’: Examining Mental Health Conversations with Large Language Models through Reddit Analysis.Proc. ACM Hum.-Comput. Interact.9, 7, Article CSCW356 (Oct. 2025), 25 pages. doi:10.1145/3757537
-
[59]
Elise Karinshak, Sunny Xun Liu, Joon Sung Park, and Jeffrey T Hancock. 2023. Working With AI to Persuade: Examining a Large Language Model’s Ability to Generate Pro-Vaccination Messages.Proceedings of the ACM on Human- Computer Interaction7, CSCW1 (2023), 1–29. doi:10.1145/3579592
-
[60]
Geoff Kaufman and Mary Flanagan. 2015. A psychologically “embedded” ap- proach to designing games for prosocial causes.Cyberpsychology: Journal of Psychosocial Research on Cyberspace9, 3 (2015). doi:10.5817/CP2015-3-5
-
[61]
Pranav Khadpe, Ranjay Krishna, Li Fei-Fei, Jeffrey T. Hancock, and Michael S. Bernstein. 2020. Conceptual Metaphors Impact Perceptions of Human-AI Col- laboration.Proc. ACM Hum.-Comput. Interact.4, CSCW2, Article 163 (Oct. 2020), 26 pages. doi:10.1145/3415234
-
[62]
Jieun Kim and Susan R. Fussell. 2025. Should Voice Agents Be Polite in an Emergency? Investigating Effects of Speech Style and Voice Tone in Emergency Simulation. InProceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI ’25). Association for Computing Machinery, New York, NY, USA, Article 61, 17 pages. doi:10.1145/3706598.3714203
-
[63]
Minji Kim. 2019. When Similarity Strikes Back: Conditional Persuasive Ef- fects of Character-Audience Similarity in Anti-Smoking Campaign.Human Communication Research45, 1 (2019), 52–77. doi:10.1093/hcr/hqy013
-
[64]
Min-Sun Kim, Karadeen Y Kam, William F Sharkey, and Theodore M Singelis
-
[65]
Deception: Moral Transgression or Social Necessity?
“Deception: Moral Transgression or Social Necessity?”: Cultural-Relativity of Deception Motivations and Perceptions of Deceptive Communication.Journal of International and Intercultural Communication1, 1 (2008), 23–50. doi:10.1080/ 17513050701621228
work page 2008
-
[66]
Rafal Kocielnik and Gary Hsieh. 2017. Send Me a Different Message: Utilizing Cognitive Space to Create Engaging Message Triggers. InProceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (Portland, Oregon, USA)(CSCW ’17). Association for Computing Machinery, New York, NY, USA, 2193–2207. doi:10.1145/2998181.2998324
-
[67]
Rafal Kocielnik, Lillian Xiao, Daniel Avrahami, and Gary Hsieh. 2018. Reflection Companion: A Conversational System for Engaging Users in Reflection on Physical Activity.Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.2, 2, Article 70 (July 2018), 26 pages. doi:10.1145/3214273
-
[68]
Hana Kopecka, Jose Such, and Michael Luck. 2024. Preferences for AI Ex- planations Based on Cognitive Style and Socio-Cultural Factors.Proc. ACM Hum.-Comput. Interact.8, CSCW1, Article 109 (April 2024), 32 pages. doi:10. 1145/3637386
work page 2024
-
[69]
Nikola Kovacevic, Tobias Boschung, Christian Holz, Markus Gross, and Rafael Wampfler. 2024. Chatbots With Attitude: Enhancing Chatbot Interactions Through Dynamic Personality Infusion. InProceedings of the 6th ACM Con- ference on Conversational User Interfaces(Luxembourg, Luxembourg)(CUI ’24). Association for Computing Machinery, New York, NY, USA, Articl...
-
[70]
Ann Kronrod, Amir Grinstein, and Luc Wathieu. 2011. Enjoy! Hedonic Consump- tion and Compliance with Assertive Messages.Journal of Consumer Research 39, 1 (08 2011), 51–61. doi:10.1086/661933
-
[71]
Ann Kronrod, Amir Grinstein, and Luc Wathieu. 2012. Go Green! Should Environmental Messages be So Assertive?Journal of Marketing76, 1 (2012), 95–102. doi:10.1509/jm.10.0416
-
[72]
Yi Li, Xuanxuan Ding, Yifan Chen, Yeye Li, and Nan Ma. 2025. Customizable AI for Depression Care: Improving the User Experience of Large Language Model-Driven Chatbots. InProceedings of the 2025 ACM Designing Interactive Systems Conference (DIS ’25). Association for Computing Machinery, New York, NY, USA, 1844–1866. doi:10.1145/3715336.3735795
-
[73]
Christine Lisetti, Reza Amini, Ugan Yasavur, and Naphtali Rishe. 2013. I Can Help You Change! An Empathic Virtual Agent Delivers Behavior Change Health Interventions.ACM Trans. Manage. Inf. Syst.4, 4, Article 19 (Dec. 2013), 28 pages. doi:10.1145/2544103
-
[74]
Xun Sunny Liu and Jeff Hancock. 2024. Social robots are good for me, but better for other people: The presumed allo-enhancement effect of social robot perceptions.Computers in Human Behavior: Artificial Humans2, 2 (2024), 100079. doi:10.1016/j.chbah.2024.100079
-
[75]
Jeffrey Loewenstein. 2019. Surprise, Recipes for Surprise, and Social Influence. Topics in Cognitive Science11, 1 (2019), 178–193. doi:10.1111/tops.12312
-
[76]
Adaptable Commitment Interfaces
Kai Lukoff, Ulrik Lyngs, Karina Shirokova, Raveena Rao, Larry Tian, Himanshu Zade, Sean A. Munson, and Alexis Hiniker. 2023. SwitchTube: A Proof-of- Concept System Introducing “Adaptable Commitment Interfaces” as a Tool for Digital Wellbeing. InProceedings of the 2023 CHI Conference on Human Factors in Computing Systems(Hamburg, Germany)(CHI ’23). Associa...
-
[78]
Jingbo Meng and Yue (Nancy) Dai. 2021. Emotional Support from AI Chatbots: Should a Supportive Partner Self-Disclose or Not?Journal of Computer-Mediated Communication26, 4 (05 2021), 207–222. doi:10.1093/jcmc/zmab005
-
[79]
Luise Metzger, Linda Miller, Martin Baumann, and Johannes Kraus. 2024. Em- powering Calibrated (Dis-)Trust in Conversational Agents: A User Study on the Persuasive Power of Limitation Disclaimers vs. Authoritative Style. InProceed- ings of the CHI Conference on Human Factors in Computing Systems(Honolulu, HI, USA)(CHI ’24). Article 481, 19 pages. doi:10.1...
-
[80]
Josh Aaron Miller, Kutub Gandhi, Matthew Alexander Whitby, Mehmet Kosa, Seth Cooper, Elisa D. Mekler, and Ioanna Iacovides. 2024. A Design Framework for Reflective Play. InProceedings of the 2024 CHI Conference on Human Factors in Computing Systems(Honolulu, HI, USA)(CHI ’24). Association for Computing Machinery, New York, NY, USA, Article 519, 21 pages. ...
-
[81]
Terran Mott, Aaron Fanganello, and Tom Williams. 2024. What a Thing to Say! Which Linguistic Politeness Strategies Should Robots Use in Noncompliance Interactions?. InProceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction(Boulder, CO, USA)(HRI ’24). 501–510. doi:10.1145/ 3610977.3634943
discussion (0)
Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.