Pith · machine review for the scientific record

arXiv: 2509.09309 · v2 · submitted 2025-09-11 · 💻 cs.HC

Recognition: unknown

Proactive AI Adoption can be Threatening: When Help Backfires

Authors on Pith: no claims yet
classification: 💻 cs.HC
keywords: help, adoption, anticipatory, reactive, self-threat, human, proactive
0 comments
read the original abstract

Artificial intelligence (AI) assistants are increasingly embedded in workplace tools, raising the question of how initiative-taking shapes adoption. Prior work highlights trust and expectation mismatches as barriers, but the underlying psychological mechanisms remain unclear. Drawing on self-affirmation and social exchange theories, we theorize that unsolicited help elicits self-threat, thereby reducing willingness to accept help, likelihood of future use, and performance expectancy of AI. We report two vignette-based experiments (Study 1: N = 761; Study 2: N = 571, preregistered). Study 1 compared anticipatory and reactive help provided by an AI vs. a human, while Study 2 distinguished between *offering* help (suggesting it) and *providing* help (acting automatically). In Study 1, AI reactive help was more threatening than reactive human help. Across both studies, anticipatory help increased users' self-threat and reduced adoption outcomes. Our findings identify self-threat as a mechanism through which anticipatory help, a proactive AI feature, may backfire, and suggest design implications to be tested in interactive systems.

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Toward a Unified Framework for Collaborative Design of Human-AI Interaction

    cs.HC · 2026-05 · no verdict · novelty 5.0

    A framework unifies multimodal intent interpretation, interaction-centric explainability, and agency-preserving controls as interdependent requirements for trustworthy Human-AI collaboration.