Proactive AI Adoption can be Threatening: When Help Backfires
Artificial intelligence (AI) assistants are increasingly embedded in workplace tools, raising the question of how initiative-taking shapes adoption. Prior work highlights trust and expectation mismatches as barriers, but the underlying psychological mechanisms remain unclear. Drawing on self-affirmation and social exchange theories, we theorize that unsolicited help elicits self-threat, thereby reducing willingness to accept help, likelihood of future use, and performance expectancy of AI. We report two vignette-based experiments (Study 1: N = 761; Study 2: N = 571, preregistered). Study 1 compared anticipatory and reactive help provided by an AI vs. a human, while Study 2 distinguished between offering help (suggesting it) and providing help (acting automatically). In Study 1, reactive AI help was more threatening than reactive human help. Across both studies, anticipatory help increased users' self-threat and reduced adoption outcomes. Our findings identify self-threat as a mechanism through which anticipatory help, a proactive AI feature, may backfire, and suggest design implications to be tested in interactive systems.
This paper has not been read by Pith yet.
Forward citations
Cited by 1 Pith paper
- Toward a Unified Framework for Collaborative Design of Human-AI Interaction
A framework unifies multimodal intent interpretation, interaction-centric explainability, and agency-preserving controls as interdependent requirements for trustworthy Human-AI collaboration.