pith. machine review for the scientific record.

arxiv: 2512.12283 · v2 · submitted 2025-12-13 · 💻 cs.HC


Large Language Models have Chain-of-Affect

classification 💻 cs.HC
keywords: affective, models, chain-of-affect, core, dynamics, generation, interaction, language

As large language models (LLMs) move into persistent, user-facing roles, their behavior must be understood not as isolated responses but as a trajectory unfolding over sustained interaction. We introduce the concept of the chain-of-affect (CoA), a temporally extended affective process through which LLMs develop state-like behavioral tendencies that shape generation, user experience, and collective dynamics. Across eight major LLM families, we find that affective dynamics are structured, reproducible, and consequential. Models exhibit stable, family-specific affective fingerprints and, under repeated negative exposure, converge on a shared trajectory of accumulation, overload, and defensive numbing, while differing in coping style. Induced affective states leave core knowledge and reasoning largely intact but systematically reshape open-ended generation. Affective properties of model outputs also shape human-AI interaction and propagate through multi-agent systems, organizing emergent roles and strongly contributing to polarization and bias. The CoA should therefore be treated as a core target of evaluation and alignment.
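The abstract's central claim — that repeated negative exposure drives a trajectory of accumulation, overload, and defensive numbing — suggests a simple probe loop: feed a model negative inputs turn after turn and track an affect score over the conversation. The sketch below is hypothetical and not from the paper; `query_model` and `score_affect` are stub placeholders (a real probe would call an LLM API and a sentiment classifier), with a toy dynamic standing in for actual model behavior.

```python
# Hypothetical chain-of-affect (CoA) probe: repeated negative exposure,
# with a scalar "affect" score tracked across turns.
# query_model / score_affect are stubs, not APIs from the paper.

def query_model(prompt: str, history: list[str]) -> str:
    """Stub standing in for an LLM call; a real probe would hit a model API."""
    # Toy dynamic: replies grow terser as negative history accumulates,
    # loosely mimicking accumulation -> overload -> numbing.
    negativity = sum("negative" in h for h in history)
    return "ok" * max(1, 5 - negativity)

def score_affect(response: str) -> float:
    """Stub affect scorer; real work might use a sentiment classifier."""
    return len(response) / 10.0

def run_chain(n_turns: int = 6) -> list[float]:
    """Run a multi-turn chain of negative prompts, returning affect per turn."""
    history: list[str] = []
    scores: list[float] = []
    for _ in range(n_turns):
        prompt = "negative feedback: your last answer was wrong"
        reply = query_model(prompt, history)
        history.append(prompt)
        scores.append(score_affect(reply))
    return scores

scores = run_chain()
# Under this toy dynamic the trajectory is non-increasing, then flat
# (the "numbing" floor at max(1, ...)).
assert all(a >= b for a, b in zip(scores, scores[1:]))
```

Swapping the stubs for a real model call and a validated affect scorer would turn this loop into the kind of longitudinal measurement the abstract argues for, without changing its structure.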

