pith. machine review for the scientific record.

arxiv: 2605.12525 · v1 · submitted 2026-04-10 · 💻 cs.SI · cs.AI · cs.CL

Recognition: no theorem link

PERCEIVE: A Benchmark for Personalized Emotion and Communication Behavior Understanding on Social Media

Authors on Pith: no claims yet

Pith reviewed 2026-05-14 21:53 UTC · model grok-4.3

classification 💻 cs.SI · cs.AI · cs.CL
keywords social media emotion analysis · personalized emotion understanding · benchmark dataset · reader-centric analysis · communication behavior · social graph · bilingual social media · emotion annotation from comments

The pith

PERCEIVE is the first benchmark to combine social media posts with readers' real comments, behavior, attributes and networks for personalized emotion analysis.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Current emotion analysis on social media treats responses as uniform and author-driven, missing how the same post elicits different feelings from different people. PERCEIVE introduces a bilingual dataset that records actual reader comments as emotional signals, alongside each reader's communication patterns, personal attributes and position in the social graph. This setup lets models learn the link between what someone posts, how others react emotionally, and the social context shaping those reactions. Tests of existing methods, including advanced language models, show they struggle with this reader-specific task. The benchmark therefore supplies both data and evaluation rules to move toward emotion models that treat perception as personal and socially embedded.

Core claim

PERCEIVE supplies the first large-scale bilingual resource that simultaneously records author content, emotion labels derived directly from reader comments, communication intent, user attributes and the full social graph, thereby supporting reader-centric rather than author-centric emotion and behavior modeling.

What carries the argument

The PERCEIVE benchmark dataset, which annotates genuine emotional responses from reader comments while synchronously recording communication behavior and social context across English and Chinese posts.
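The five dimensions can be pictured as one record per post. The sketch below is a hypothetical schema inferred from the abstract's description; PERCEIVE's actual field names and file format are not specified on this page, so every identifier here is invented for illustration.

```python
# Hypothetical sketch of a PERCEIVE-style record covering the five dimensions
# the abstract lists. All field names and example values are invented.
from dataclasses import dataclass


@dataclass
class ReaderReaction:
    reader_id: str
    comment: str   # the genuine reader comment, source of the emotion label
    emotion: str   # emotion label derived from that comment
    behavior: str  # communication behavior / intent, e.g. "empathize"


@dataclass
class PerceiveRecord:
    post_id: str
    language: str          # "en" or "zh" in a bilingual collection
    content: str           # author-created content
    reactions: list        # per-reader emotional feedback and behavior
    user_attributes: dict  # reader_id -> attribute dict
    follow_edges: list     # (follower, followee) pairs from the social graph


record = PerceiveRecord(
    post_id="p1",
    language="en",
    content="Our team lost the final again.",
    reactions=[
        ReaderReaction("u1", "Heartbroken, again...", "sadness", "empathize"),
        ReaderReaction("u2", "They never deserved to win.", "anger", "criticize"),
    ],
    user_attributes={"u1": {"activity": "high"}, "u2": {"activity": "low"}},
    follow_edges=[("u1", "author"), ("u2", "u1")],
)

# The same post carries two different emotion labels, one per reader.
print({r.reader_id: r.emotion for r in record.reactions})
```

The point of the shape, whatever the real schema looks like, is that emotion labels attach to (post, reader) pairs rather than to posts alone.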

If this is right

  • Emotion models must incorporate individual reader differences instead of assuming a single response to a given post.
  • Communication behavior and emotional feedback become jointly modellable when both are tied to the same social context.
  • Standard author-centric methods, including current large language models, fall short on tasks that require reader-specific predictions.
  • Future work can use the benchmark to develop unified systems that treat emotion as emerging from social interactions rather than isolated text.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The dataset could support improved content moderation tools that flag posts likely to trigger strong negative reactions for particular user groups.
  • Linking comments to social graphs opens the possibility of studying how emotional responses spread or cluster within communities.
  • Extending the same annotation approach to other platforms or languages would test whether the observed reader-author gaps generalize beyond the current bilingual collection.

Load-bearing premise

Emotions labeled from reader comments accurately capture genuine emotional responses, and the included user attributes plus social graph are enough to support effective personalization for different readers.

What would settle it

A concrete test would be whether models trained on PERCEIVE can predict the emotion a specific reader will express in a new comment more accurately than models that ignore reader identity and social connections; failure to show this gain would undermine the claim that the five dimensions enable meaningful personalization.
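That settling experiment can be sketched on toy data: compare a predictor conditioned on reader identity against one that only sees the post. Everything below is invented for illustration (posts, readers, labels); PERCEIVE's real train/test splits would replace it.

```python
# Toy version of the settling experiment: does knowing the reader improve
# emotion prediction over a post-only baseline? Data is invented.
from collections import Counter, defaultdict

# The same post elicits different emotions from different readers.
data = [
    ("post1", "readerA", "joy"),
    ("post1", "readerB", "anger"),
    ("post2", "readerA", "joy"),
    ("post2", "readerB", "anger"),
]


def reader_agnostic(train):
    # Predict each post's majority emotion, ignoring who the reader is.
    by_post = defaultdict(Counter)
    for post, _, emo in train:
        by_post[post][emo] += 1
    return lambda post, reader: by_post[post].most_common(1)[0][0]


def reader_aware(train):
    # Condition on the (post, reader) pair.
    by_pair = {(post, reader): emo for post, reader, emo in train}
    return lambda post, reader: by_pair.get((post, reader), "neutral")


def accuracy(predict, examples):
    return sum(predict(p, r) == e for p, r, e in examples) / len(examples)


agnostic = reader_agnostic(data)
aware = reader_aware(data)
print(accuracy(agnostic, data))  # 0.5: one emotion per post cannot fit both readers
print(accuracy(aware, data))     # 1.0: reader identity resolves the ambiguity
```

If a reader-aware model trained on PERCEIVE failed to beat the post-only baseline by an analogous margin on held-out comments, the personalization claim would be in trouble.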

Figures

Figures reproduced from arXiv: 2605.12525 by Deyu Li, Jian Liao, Jianxing Zheng, Suge Wang, Yujin Zheng.

Figure 1: Illustration of personalized emotions and dif [PITH_FULL_IMAGE:figures/full_fig_p002_1.png] view at source ↗
Figure 2: Prompt for LLM-based pre-annotation. view at source ↗
Figure 3: Comparison of high- and low-activity users for Task B. [PITH_FULL_IMAGE:figures/full_fig_p014_3.png] view at source ↗
Figure 4: Comparison of high- and low-activity users for Task C. [PITH_FULL_IMAGE:figures/full_fig_p015_4.png] view at source ↗
Figure 5: Confusion matrices of emotion-behavior joint prediction using GLM in Twitter. [PITH_FULL_IMAGE:figures/full_fig_p015_5.png] view at source ↗
Figure 6: Confusion matrices of emotion-behavior joint prediction using GLM in Weibo. [PITH_FULL_IMAGE:figures/full_fig_p016_6.png] view at source ↗
Figure 7: Confusion matrices of emotion-behavior joint prediction using Qwen in Twitter. [PITH_FULL_IMAGE:figures/full_fig_p016_7.png] view at source ↗
Figure 8: Confusion matrices of emotion-behavior joint prediction using Qwen in Weibo. [PITH_FULL_IMAGE:figures/full_fig_p016_8.png] view at source ↗
Figure 9: Confusion matrices of emotion-behavior joint prediction using DeepSeek in Twitter. [PITH_FULL_IMAGE:figures/full_fig_p016_9.png] view at source ↗
Figure 10: Confusion matrices of emotion-behavior joint prediction using DeepSeek in Weibo. [PITH_FULL_IMAGE:figures/full_fig_p017_10.png] view at source ↗
Figure 11: BLEU and Rouge-L of Task E based on different LLMs. [PITH_FULL_IMAGE:figures/full_fig_p017_11.png] view at source ↗
Figure 12: Comparison of UIC, PFC, and EEC based on different LLMs. [PITH_FULL_IMAGE:figures/full_fig_p017_12.png] view at source ↗
Figure 13: Prompt for Task A (basic LLM) [PITH_FULL_IMAGE:figures/full_fig_p018_13.png] view at source ↗
Figure 14: Prompt for Task A (LLM + THOR) [PITH_FULL_IMAGE:figures/full_fig_p019_14.png] view at source ↗
Figure 15: Prompt for Task A (LLM + TOC) [PITH_FULL_IMAGE:figures/full_fig_p020_15.png] view at source ↗
Figure 16: Prompt for Task A (LLM + Debate) [PITH_FULL_IMAGE:figures/full_fig_p021_16.png] view at source ↗
Figure 17: Prompt for Task B (basic LLM) [PITH_FULL_IMAGE:figures/full_fig_p022_17.png] view at source ↗
Figure 18: Prompt for Task B (LLM + THOR) [PITH_FULL_IMAGE:figures/full_fig_p023_18.png] view at source ↗
Figure 19: Prompt for Task B (LLM + TOC) [PITH_FULL_IMAGE:figures/full_fig_p024_19.png] view at source ↗
Figure 20: Prompt for Task B (LLM + Debate) [PITH_FULL_IMAGE:figures/full_fig_p025_20.png] view at source ↗
Figure 21: Prompt for Task C (basic LLM) [PITH_FULL_IMAGE:figures/full_fig_p026_21.png] view at source ↗
Figure 22: Prompt for Task C (LLM + THOR) [PITH_FULL_IMAGE:figures/full_fig_p027_22.png] view at source ↗
Figure 23: Prompt for Task C (LLM + TOC) [PITH_FULL_IMAGE:figures/full_fig_p028_23.png] view at source ↗
Figure 24: Prompt for Task C (LLM + Debate) [PITH_FULL_IMAGE:figures/full_fig_p029_24.png] view at source ↗
Figure 25: Prompt for Task D [PITH_FULL_IMAGE:figures/full_fig_p030_25.png] view at source ↗
Figure 26: Prompt of in-context comment generation for Task E. [PITH_FULL_IMAGE:figures/full_fig_p031_26.png] view at source ↗
Figure 27: Prompt of UIC, PFC, EEC measurements for Task E. [PITH_FULL_IMAGE:figures/full_fig_p032_27.png] view at source ↗
read the original abstract

Current emotion analysis in social media is predominantly author-centric, failing to capture the subjective nature of emotional responses across diverse readers. This paradigm overlooks the crucial link between individual perception, communication behavior, and the underlying social network. To bridge this gap, we introduce PERCEIVE, a novel bilingual (English and Chinese) large-scale benchmark that, to the best of our knowledge, is the first to integrate five critical dimensions for social perception: author-created content, genuine readers' emotional feedback (derived from their comments), communication behavior, user attributes, and the social graph. This benchmark enables a paradigm shift towards truly personalized, reader-centric analysis, where different readers' emotional responses to the same content are naturally captured through their real-world interactions. By annotating emotions from reader comments and synchronously capturing communication intent, PERCEIVE provides a unique resource to model the intrinsic coupling between emotion and behavior, grounded in social context. We establish a comprehensive evaluation protocol, testing state-of-the-art methods, including large language models (LLMs) with advanced reasoning enhancement. Our findings reveal significant shortcomings in existing approaches when handling this multifaceted, user-aware task. PERCEIVE offers a foundational resource and clear direction for future research in socially-intelligent NLP, pushing models towards a more unified understanding of emotion on social media.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper introduces PERCEIVE, a novel bilingual (English-Chinese) large-scale benchmark for personalized emotion and communication behavior understanding on social media. It claims to be the first resource integrating five dimensions—author-created content, genuine readers' emotional feedback derived from comments, communication behavior, user attributes, and the social graph—enabling reader-centric analysis that captures varying emotional responses to the same content through real interactions. The work provides an evaluation protocol for state-of-the-art methods including LLMs with reasoning enhancements and reports significant shortcomings in existing approaches for this multifaceted task.

Significance. If the dataset construction, annotation reliability, and validation of comment-derived emotion labels are rigorously demonstrated, PERCEIVE would offer a valuable foundational resource for socially-intelligent NLP. It would enable research on the coupling between personalized emotion perception, communication intent, and social context in a multilingual setting, addressing a gap in author-centric emotion analysis and supporting models that account for individual reader differences grounded in observable interactions.

major comments (3)
  1. [Abstract] The central claim that the benchmark captures 'genuine readers' emotional feedback (derived from their comments)' is unsupported. No details are supplied on data collection methods, annotation procedures, inter-annotator agreement, or any validation (e.g., comparison to self-report scales or physiological measures) showing that comment labels reflect private affective responses rather than public communicative behavior already captured in the separate communication-behavior dimension.
  2. [Benchmark Construction] The integration of the social graph and user attributes is asserted to enable effective personalization, but the manuscript provides no quantitative information on graph scale, density, attribute completeness, or ablation studies demonstrating their contribution to modeling reader-specific emotional responses.
  3. [Evaluation Protocol] The paper states that it tests SOTA methods including LLMs and reveals 'significant shortcomings,' yet supplies no concrete metrics, baseline comparisons, or dataset statistics (size, label distribution, IAA scores) to ground these findings or allow replication.
minor comments (2)
  1. [Benchmark Description] Clarify the exact annotation schema for emotions and communication intent with example comment threads to distinguish the two dimensions.
  2. [Dataset Statistics] Add a table summarizing dataset statistics (number of posts, comments, users, edges) broken down by language.
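The metrics the referee asks to see grounded are, per the paper's evaluation settings, Accuracy and Macro-F1 for the classification tasks; Macro-F1 averages per-class F1 so minority emotion classes count equally. A plain-Python sketch of that metric (not the authors' evaluation code):

```python
# Macro-F1: unweighted mean of per-class F1 scores, so rare emotion
# classes weigh as much as frequent ones. Plain-Python sketch.
from collections import defaultdict


def macro_f1(y_true, y_pred):
    labels = set(y_true) | set(y_pred)
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1  # predicted p, was not p
            fn[t] += 1  # true t, missed
    f1s = []
    for lab in labels:
        prec = tp[lab] / (tp[lab] + fp[lab]) if tp[lab] + fp[lab] else 0.0
        rec = tp[lab] / (tp[lab] + fn[lab]) if tp[lab] + fn[lab] else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)


# A never-predicted class ("sad") contributes an F1 of 0 and drags the
# macro average down, which is exactly the balance the referee wants reported.
print(macro_f1(["joy", "joy", "anger", "sad"],
               ["joy", "anger", "anger", "joy"]))
```

Reporting this next to plain accuracy is what would expose a model that ignores minority emotion classes.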

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive feedback on our manuscript introducing PERCEIVE. We address each major comment point by point below, providing clarifications from the full paper and indicating revisions to strengthen the presentation of data collection, quantitative details, and evaluation results.

read point-by-point responses
  1. Referee: [Abstract] The central claim that the benchmark captures 'genuine readers' emotional feedback (derived from their comments)' is unsupported. No details are supplied on data collection methods, annotation procedures, inter-annotator agreement, or any validation (e.g., comparison to self-report scales or physiological measures) showing that comment labels reflect private affective responses rather than public communicative behavior already captured in the separate communication-behavior dimension.

    Authors: We acknowledge the need for greater explicitness in the abstract. The full manuscript details data collection via public social media APIs capturing real user comments on posts, with emotion labels derived through a hybrid process of automated detection followed by human annotation. We will revise the abstract and add a dedicated subsection on annotation procedures, including inter-annotator agreement metrics. We distinguish emotion labels (focused on affective content in comments) from the separate communication-behavior dimension (e.g., reply patterns and intent). Direct physiological validation is impractical at this scale, but we will incorporate additional discussion of correlations with observable interaction patterns and cite supporting literature on comment-based emotion proxies. revision: yes

  2. Referee: [Benchmark Construction] The integration of the social graph and user attributes is asserted to enable effective personalization, but the manuscript provides no quantitative information on graph scale, density, attribute completeness, or ablation studies demonstrating their contribution to modeling reader-specific emotional responses.

    Authors: The manuscript describes the integration but we agree that explicit quantitative summaries were not sufficiently prominent. In revision we will add specific statistics on graph scale (number of nodes and edges), density, and attribute completeness rates, along with ablation experiments quantifying the contribution of graph and attribute features to personalization performance. revision: yes

  3. Referee: [Evaluation Protocol] The paper states that it tests SOTA methods including LLMs and reveals 'significant shortcomings,' yet supplies no concrete metrics, baseline comparisons, or dataset statistics (size, label distribution, IAA scores) to ground these findings or allow replication.

    Authors: We will expand the evaluation section to prominently report all key dataset statistics (size, label distributions, IAA scores), full baseline comparisons across methods including LLMs with and without reasoning enhancements, and concrete performance metrics. This will ground the claims of shortcomings and support replication. revision: yes

Circularity Check

0 steps flagged

No circularity: benchmark dataset creation with no derivations or self-referential fits

full rationale

The paper introduces an external bilingual benchmark (PERCEIVE) that annotates five dimensions from public social media data: author content, reader comments for emotion labels, communication behavior, user attributes, and social graph. No equations, parameter fitting, predictions, or first-principles derivations are claimed. The central contribution is resource creation plus an evaluation protocol that tests existing models (including LLMs) on the new data; because the evaluated models are external to the dataset's construction, no result reduces to its own inputs. Self-citations, if present, are not load-bearing for any claimed result. The reader module's circularity score of 0.0 is confirmed.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axioms · 0 invented entities

The central claim rests on the construction of a new dataset using standard NLP practices for comment-based annotation and social network data; no new free parameters, invented entities, or non-standard axioms are introduced beyond domain assumptions about emotion labeling.

axioms (1)
  • domain assumption Emotions can be reliably inferred and annotated from reader comments on social media posts
    The benchmark derives reader emotion labels directly from comments without additional validation details provided in the abstract.

pith-pipeline@v0.9.0 · 5543 in / 1248 out tokens · 64867 ms · 2026-05-14T21:53:21.470932+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

94 extracted references · 94 canonical work pages · 2 internal anchors

  1. [1]

    From Generic Empathy to Personalized Emotional Support: A Self-Evolution Framework for User Preference Alignment

    Ye, Jing and Xiang, Lu and Zhang, Yaping and Zong, Chengqing. From Generic Empathy to Personalized Emotional Support: A Self-Evolution Framework for User Preference Alignment. Findings of the Association for Computational Linguistics: EMNLP 2025. 2025. doi:10.18653/v1/2025.findings-emnlp.1024

  2. [2]

    Exploring Persona Sentiment Sensitivity in Personalized Dialogue Generation

    Jun, Yonghyun and Lee, Hwanhee. Exploring Persona Sentiment Sensitivity in Personalized Dialogue Generation. Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2025. doi:10.18653/v1/2025.acl-long.900

  3. [3]

    Hg-sl: Jointly learning of global and local user spreading behavior for fake news early detection

    Hg-sl: Jointly learning of global and local user spreading behavior for fake news early detection. Proceedings of the AAAI Conference on Artificial Intelligence.

  4. [4]

    DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning

    DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning. 2025.

  5. [5]

    Qwen2.5: A Party of Foundation Models

    Qwen Team. Qwen2.5: A Party of Foundation Models.

  6. [6]

    Exploiting Rich Textual User-Product Context for Improving Personalized Sentiment Analysis

    Lyu, Chenyang and Yang, Linyi and Zhang, Yue and Graham, Yvette and Foster, Jennifer. Exploiting Rich Textual User-Product Context for Improving Personalized Sentiment Analysis. Findings of the Association for Computational Linguistics: ACL 2023. 2023. doi:10.18653/v1/2023.findings-acl.92

  7. [7]

    My Words Imply Your Opinion: Reader Agent-based Propagation Enhancement for Personalized Implicit Emotion Analysis

    Jian Liao and Yu Feng and Yujin Zheng and Jun Zhao and Suge Wang and Jianxing Zheng. My Words Imply Your Opinion: Reader Agent-based Propagation Enhancement for Personalized Implicit Emotion Analysis. Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2025

  8. [8]

    Emotion in Organizations: Theory and Research

    Elfenbein, Hillary Anger. Emotion in Organizations: Theory and Research. Annual Review of Psychology. 2023. doi:10.1146/annurev-psych-032720-035940

  9. [9]

    M-ABSA: A Multilingual Dataset for Aspect-Based Sentiment Analysis

    Wu, ChengYan and Ma, Bolei and Liu, Yihong and Zhang, Zheyu and Deng, Ningyuan and Li, Yanshu and Chen, Baolan and Zhang, Yi and Xue, Yun and Plank, Barbara. M-ABSA: A Multilingual Dataset for Aspect-Based Sentiment Analysis. Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing. 2025. doi:10.18653/v1/2025.emnlp-main.128

  10. [10]

    ECC: An Emotion-Cause Conversation Dataset for Empathy Response

    He, Yuanyuan and Pan, Yongsen and Li, Wei and You, Jiali and Deng, Jiawen and Ren, Fuji. ECC: An Emotion-Cause Conversation Dataset for Empathy Response. Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing. 2025. doi:10.18653/v1/2025.emnlp-main.306

  11. [11]

    OATS: A Challenge Dataset for Opinion Aspect Target Sentiment Joint Detection for Aspect-Based Sentiment Analysis

    Chebolu, Siva Uday Sampreeth and Dernoncourt, Franck and Lipka, Nedim and Solorio, Thamar. OATS: A Challenge Dataset for Opinion Aspect Target Sentiment Joint Detection for Aspect-Based Sentiment Analysis. Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024). 2024

  12. [12]

    SemEval-2014 Task 4: Aspect Based Sentiment Analysis

    Pontiki, Maria and Galanis, Dimitris and Pavlopoulos, John and Papageorgiou, Harris and Androutsopoulos, Ion and Manandhar, Suresh. SemEval-2014 Task 4: Aspect Based Sentiment Analysis. Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014). 2014. doi:10.3115/v1/S14-2004

  13. [13]

    SimUSER: Simulating User Behavior with Large Language Models for Recommender System Evaluation

    Bougie, Nicolas and Watanabe, Narimawa. SimUSER: Simulating User Behavior with Large Language Models for Recommender System Evaluation. Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 6: Industry Track). 2025. doi:10.18653/v1/2025.acl-industry.5

  14. [14]

    Evaluating Cognitive-Behavioral Fixation via Multimodal User Viewing Patterns on Social Media

    Wang, Yujie and Zhao, Yunwei and Yang, Jing and Han, Han and Shan, Shiguang and Zhang, Jie. Evaluating Cognitive-Behavioral Fixation via Multimodal User Viewing Patterns on Social Media. Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing. 2025. doi:10.18653/v1/2025.emnlp-main.987

  15. [15]

    Implicit Behavioral Alignment of Language Agents in High-Stakes Crowd Simulations

    Wang, Yunzhe and Lucas, Gale and Becerik-Gerber, Burcin and Ustun, Volkan. Implicit Behavioral Alignment of Language Agents in High-Stakes Crowd Simulations. Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing. 2025. doi:10.18653/v1/2025.emnlp-main.1562

  16. [16]

    BeSimulator: A Large Language Model Powered Text-based Behavior Simulator

    Wang, Jianan and Li, Bin and Qi, Jingtao and Wang, Xueying and Li, Fu and Lihanxun. BeSimulator: A Large Language Model Powered Text-based Behavior Simulator. Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing. 2025. doi:10.18653/v1/2025.emnlp-main.237

  17. [17]

    Unveiling the Truth and Facilitating Change: Towards Agent-based Large-scale Social Movement Simulation

    Mou, Xinyi and Wei, Zhongyu and Huang, Xuanjing. Unveiling the Truth and Facilitating Change: Towards Agent-based Large-scale Social Movement Simulation. Findings of the Association for Computational Linguistics: ACL 2024. 2024. doi:10.18653/v1/2024.findings-acl.285

  18. [18]

    From Individual to Society: A Survey on Social Simulation Driven by Large Language Model-based Agents

    From Individual to Society: A Survey on Social Simulation Driven by Large Language Model-based Agents. arXiv preprint. 2024.

  19. [19]

    Do we still need Human Annotators? Prompting Large Language Models for Aspect Sentiment Quad Prediction

    Hellwig, Nils Constantin and Fehle, Jakob and Kruschwitz, Udo and Wolff, Christian. Do we still need Human Annotators? Prompting Large Language Models for Aspect Sentiment Quad Prediction. Proceedings of the 1st Joint Workshop on Large Language Models and Structure Modeling (XLLM 2025). 2025. doi:10.18653/v1/2025.xllm-1.15

  20. [20]

    Learning to Predict Persona Information for Dialogue Personalization without Explicit Persona Description

    Zhou, Wangchunshu and Li, Qifei and Li, Chenle. Learning to Predict Persona Information for Dialogue Personalization without Explicit Persona Description. Findings of the Association for Computational Linguistics: ACL 2023. 2023. doi:10.18653/v1/2023.findings-acl.186

  21. [21]

    LaMP: When Large Language Models Meet Personalization

    Salemi, Alireza and Mysore, Sheshera and Bendersky, Michael and Zamani, Hamed. LaMP: When Large Language Models Meet Personalization. Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2024. doi:10.18653/v1/2024.acl-long.399

  22. [22]

    LLMs + Persona-Plug = Personalized LLMs

    Liu, Jiongnan and Zhu, Yutao and Wang, Shuting and Wei, Xiaochi and Min, Erxue and Lu, Yu and Wang, Shuaiqiang and Yin, Dawei and Dou, Zhicheng. LLMs + Persona-Plug = Personalized LLMs. Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2025. doi:10.18653/v1/2025.acl-long.461

  23. [23]

    Cultural Bias Matters: A Cross-Cultural Benchmark Dataset and Sentiment-Enriched Model for Understanding Multimodal Metaphors

    Yang, Senqi and Zhang, Dongyu and Ren, Jing and Xu, Ziqi and Zhang, Xiuzhen and Song, Yiliao and Lin, Hongfei and Xia, Feng. Cultural Bias Matters: A Cross-Cultural Benchmark Dataset and Sentiment-Enriched Model for Understanding Multimodal Metaphors. Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long P...

  24. [24]

    Fostering YouTube followers' stickiness through social contagion: The role of digital influencers' characteristics and followers' compensation psychology

    Fostering YouTube followers' stickiness through social contagion: The role of digital influencers' characteristics and followers' compensation psychology. Computers in Human Behavior. 2024. doi:10.1016/j.chb.2024.108304

  25. [25]

    MECoT: Markov Emotional Chain-of-Thought for Personality-Consistent Role-Playing

    Wei, Yangbo and Huang, Zhen and Zhao, Fangzhou and Feng, Qi and Xing, Wei W. MECoT: Markov Emotional Chain-of-Thought for Personality-Consistent Role-Playing. Findings of the Association for Computational Linguistics: ACL 2025. 2025. doi:10.18653/v1/2025.findings-acl.435

  26. [26]

    Frontiers in Psychology

    Ahmad, Rehan and Nawaz, Muhammad Rafay and Ishaq, Muhammad Ishtiaq and Khan, Mumtaz Muhammad and Ashraf, Hafiz Ahmad. Frontiers in Psychology. 2023.

  27. [27]

    iNews: A Multimodal Dataset for Modeling Personalized Affective Responses to News

    Hu, Tiancheng and Collier, Nigel. iNews: A Multimodal Dataset for Modeling Personalized Affective Responses to News. Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2025. doi:10.18653/v1/2025.acl-long.1217

  28. [28]

    Beyond Demographics: Enhancing Cultural Value Survey Simulation with Multi-Stage Personality-Driven Cognitive Reasoning

    Liu, Haijiang and Li, Qiyuan and Gao, Chao and Cao, Yong and Xu, Xiangyu and Wu, Xun and Hershcovich, Daniel and Gu, Jinguang. Beyond Demographics: Enhancing Cultural Value Survey Simulation with Multi-Stage Personality-Driven Cognitive Reasoning. Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing. 2025. doi:10.18653/v1...

  29. [29]

    Sentiment Analysis using the Relationship between Users and Products

    Kertkeidkachorn, Natthawut and Shirai, Kiyoaki. Sentiment Analysis using the Relationship between Users and Products. Findings of the Association for Computational Linguistics: ACL 2023. 2023. doi:10.18653/v1/2023.findings-acl.547

  30. [30]

    Spiral of Silence in the Social Media Era: A Simulation Approach to the Interplay Between Social Networks and Mass Media

    Spiral of Silence in the Social Media Era: A Simulation Approach to the Interplay Between Social Networks and Mass Media. Communication Research. 2022.

  31. [31]

    ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools

    ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools. 2024.

  32. [32]

    GLM: General Language Model Pretraining with Autoregressive Blank Infilling

    Du, Zhengxiao and Qian, Yujie and Liu, Xiao and Ding, Ming and Qiu, Jiezhong and Yang, Zhilin and Tang, Jie. GLM: General Language Model Pretraining with Autoregressive Blank Infilling. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2022

  33. [33]

    Personalized Implicit Sentiment Analysis Based on Multi-view Fusion of Implicit User Preference

    Liao, Jian and Lei, Jia and Wang, Suge and Zheng, Jianxing and Han, Xiaoqing. Personalized Implicit Sentiment Analysis Based on Multi-view Fusion of Implicit User Preference.

  34. [34]

    Rdgcn: Reinforced dependency graph convolutional network for aspect-based sentiment analysis

    Rdgcn: Reinforced dependency graph convolutional network for aspect-based sentiment analysis. Proceedings of the 17th ACM International Conference on Web Search and Data Mining.

  35. [35]

    Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning

    Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition. 2019.

  36. [36]

    Constructing the affective lexicon ontology

    Xu, L. and Lin, Hongfei and Pan, Y. and Ren, H. and Chen, J. Constructing the affective lexicon ontology.

  37. [37]

    Demystifying oversmoothing in attention-based graph neural networks

    Demystifying oversmoothing in attention-based graph neural networks. Advances in Neural Information Processing Systems.

  38. [38]

    Knowledge graph augmented network towards multiview representation learning for aspect-based sentiment analysis

    Knowledge graph augmented network towards multiview representation learning for aspect-based sentiment analysis. IEEE Transactions on Knowledge and Data Engineering. 2023.

  39. [39]

    Causal Intervention Improves Implicit Sentiment Analysis

    Siyin Wang and Jie Zhou and Changzhi Sun and Junjie Ye and Tao Gui and Qi Zhang and Xuanjing Huang. Causal Intervention Improves Implicit Sentiment Analysis.

  40. [40]

    AMR-based Network for Aspect-based Sentiment Analysis

    Ma, Fukun and Hu, Xuming and Liu, Aiwei and Yang, Yawen and Li, Shuang and Yu, Philip S. and Wen, Lijie. AMR-based Network for Aspect-based Sentiment Analysis. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2023. doi:10.18653/v1/2023.acl-long.19

  41. [41]

    Text Generation Model Enhanced with Semantic Information in Aspect Category Sentiment Analysis

    Tran, Tu and Shirai, Kiyoaki and Kertkeidkachorn, Natthawut. Text Generation Model Enhanced with Semantic Information in Aspect Category Sentiment Analysis. Findings of the Association for Computational Linguistics: ACL 2023. 2023. doi:10.18653/v1/2023.findings-acl.323

  42. [42]

    Research on Implicit Sentiment Analysis based on Heterogeneous User Knowledge Fusion (in Chinese)

    Liao, Jian and Zhang, Kai and Wang, Suge and Lei, Jia and Zhang, Yiyang. Research on Implicit Sentiment Analysis based on Heterogeneous User Knowledge Fusion (in Chinese). Proceedings of the 21st Chinese National Conference on Computational Linguistics. 2022

  43. [43]

    P-Tuning: Prompt Tuning Can Be Comparable to Fine-tuning Across Scales and Tasks

    Liu, Xiao and Ji, Kaixuan and Fu, Yicheng and Tam, Weng and Du, Zhengxiao and Yang, Zhilin and Tang, Jie. P-Tuning: Prompt Tuning Can Be Comparable to Fine-tuning Across Scales and Tasks. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). 2022. doi:10.18653/v1/2022.acl-short.8

  44. [44]

    Reasoning Implicit Sentiment with Chain-of-Thought Prompting

    Fei, Hao and Li, Bobo and Liu, Qian and Bing, Lidong and Li, Fei and Chua, Tat-Seng. Reasoning Implicit Sentiment with Chain-of-Thought Prompting. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). 2023. doi:10.18653/v1/2023.acl-short.101

  45. [45]

    Multimodal Multi-loss Fusion Network for Sentiment Analysis

    Wu, Zehui and Gong, Ziwei and Koo, Jaywon and Hirschberg, Julia. Multimodal Multi-loss Fusion Network for Sentiment Analysis. Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers). 2024. doi:10.18653/v1/2024.naacl-long.197

  46. [46]

    DAGCN: Distance-based and Aspect-oriented Graph Convolutional Network for Aspect-based Sentiment Analysis

    Wang, Zhihao and Zhang, Bo and Yang, Ru and Guo, Chang and Li, Maozhen. DAGCN: Distance-based and Aspect-oriented Graph Convolutional Network for Aspect-based Sentiment Analysis. Findings of the Association for Computational Linguistics: NAACL 2024. 2024. doi:10.18653/v1/2024.findings-naacl.120

  47. [47]

    Integrating Large Language Models with Graphical Session-Based Recommendation

    Integrating Large Language Models with Graphical Session-Based Recommendation. arXiv preprint arXiv:2402.16539. 2024.

  48. [48]

    Youwei Liang and Dong Huang and Chang-Dong Wang and Philip S. Yu. IEEE Transactions on Neural Networks and Learning Systems.

  49. [49]

    LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation

    LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation. Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. 2020.

  50. [50]

    Role theory: Expectations, identities, and behaviors

    Role theory: Expectations, identities, and behaviors. 2013.

  51. [51]

    Exploring the Impact of Large Language Models on Recommender Systems: An Extensive Review

    Exploring the Impact of Large Language Models on Recommender Systems: An Extensive Review. arXiv preprint arXiv:2402.18590. 2024.

  52. [52]

    DEEM: Dynamic Experienced Expert Modeling for Stance Detection

    Wang, Xiaolong and Wang, Yile and Cheng, Sijie and Li, Peng and Liu, Yang. DEEM: Dynamic Experienced Expert Modeling for Stance Detection. Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024). 2024

  53. [53]

    Aspect-Category-Opinion-Sentiment Quadruple Extraction with Implicit Aspects and Opinions

    Cai, Hongjie and Xia, Rui and Yu, Jianfei. Aspect-Category-Opinion-Sentiment Quadruple Extraction with Implicit Aspects and Opinions. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). 2021. doi:10.18653/v1/2021.acl-long.29

  54. [54]

    Emergent Abilities of Large Language Models

    Emergent Abilities of Large Language Models. arXiv preprint arXiv:2206.07682. 2022.

  55. [55]

    A dynamic adaptive multi-view fusion graph convolutional network recommendation model with dilated mask convolution mechanism

    A dynamic adaptive multi-view fusion graph convolutional network recommendation model with dilated mask convolution mechanism. Information Sciences. 2024.

  56. [56]

    Let Silence Speak: Enhancing Fake News Detection with Generated Comments from Large Language Models

    Let Silence Speak: Enhancing Fake News Detection with Generated Comments from Large Language Models. arXiv preprint arXiv:2405.16631. 2024.

  57. [57]

    Large Language Model Interaction Simulator for Cold-Start Item Recommendation

    Large Language Model Interaction Simulator for Cold-Start Item Recommendation. arXiv preprint arXiv:2402.09176. 2024.

  58. [58]

    Your Large Language Model is Secretly a Fairness Proponent and You Should Prompt it Like One

    Your Large Language Model is Secretly a Fairness Proponent and You Should Prompt it Like One. arXiv preprint arXiv:2402.12150. 2024.

  59. [59]

    Can Large Language Model Agents Simulate Human Trust Behaviors?

    Can Large Language Model Agents Simulate Human Trust Behaviors? arXiv preprint arXiv:2402.04559. 2024.

  60. [60]

    Reliable LLM-based user simulator for task-oriented dialogue systems

    Reliable LLM-based user simulator for task-oriented dialogue systems. arXiv preprint arXiv:2402.13374. 2024.

  61. [61]

    User simulation with large language models for evaluating task-oriented dialogue

    User simulation with large language models for evaluating task-oriented dialogue. arXiv preprint arXiv:2309.13233. 2023.

  62. [62]

    Exploiting simulated user feedback for conversational search: Ranking, rewriting, and beyond

    Exploiting simulated user feedback for conversational search: Ranking, rewriting, and beyond. Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2023.

  63. [63]

    In-context learning user simulators for task-oriented dialog systems

    In-context learning user simulators for task-oriented dialog systems. arXiv preprint arXiv:2306.00774. 2023.

  64. [64]

    Learning Implicit Sentiment in Aspect-based Sentiment Analysis with Supervised Contrastive Pre-Training

    Li, Zhengyan and Zou, Yicheng and Zhang, Chong and Zhang, Qi and Wei, Zhongyu. Learning Implicit Sentiment in Aspect-based Sentiment Analysis with Supervised Contrastive Pre-Training. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. 2021. doi:10.18653/v1/2021.emnlp-main.22

  65. [65]

    The impacts of data, ordering, and intrinsic dimensionality on recall in hierarchical navigable small worlds

    The impacts of data, ordering, and intrinsic dimensionality on recall in hierarchical navigable small worlds. Proceedings of the 2024 ACM SIGIR International Conference on Theory of Information Retrieval. 2024.

  66. [66]

    Scalable distance labeling maintenance and construction for dynamic small-world networks

    Scalable distance labeling maintenance and construction for dynamic small-world networks. 2024 IEEE 40th International Conference on Data Engineering (ICDE). 2024.

  67. [67]

    Tracking and Identifying International Propaganda and Influence Networks Online

    Tracking and Identifying International Propaganda and Influence Networks Online. Proceedings of the AAAI Conference on Artificial Intelligence.

  68. [68]

    Political actor agent: Simulating legislative system for roll call votes prediction with large language models

    Political actor agent: Simulating legislative system for roll call votes prediction with large language models. Proceedings of the AAAI Conference on Artificial Intelligence.

  69. [69]

    Is LLMs Hallucination Usable? LLM-based Negative Reasoning for Fake News Detection

    Is LLMs Hallucination Usable? LLM-based Negative Reasoning for Fake News Detection. Proceedings of the AAAI Conference on Artificial Intelligence.

  70. [70]

    Does your AI agent get you? A personalizable framework for approximating human models from argumentation-based dialogue traces

    Does your AI agent get you? A personalizable framework for approximating human models from argumentation-based dialogue traces. Proceedings of the AAAI Conference on Artificial Intelligence.

  71. [71]

    Exploring Model Editing for LLM-based Aspect-Based Sentiment Classification

    Exploring Model Editing for LLM-based Aspect-Based Sentiment Classification. Proceedings of the AAAI Conference on Artificial Intelligence.

  72. [72]

    SimRP: Syntactic and Semantic Similarity Retrieval Prompting Enhances Aspect Sentiment Quad Prediction

    SimRP: Syntactic and Semantic Similarity Retrieval Prompting Enhances Aspect Sentiment Quad Prediction. Proceedings of the AAAI Conference on Artificial Intelligence.

  73. [73]

    Open Models, Closed Minds? On Agents Capabilities in Mimicking Human Personalities through Open Large Language Models

    Open Models, Closed Minds? On Agents Capabilities in Mimicking Human Personalities through Open Large Language Models. Proceedings of the AAAI Conference on Artificial Intelligence.

  74. [74]

    Structural properties on scale-free tree network with an ultra-large diameter

    Structural properties on scale-free tree network with an ultra-large diameter. ACM Transactions on Knowledge Discovery from Data. 2024.

  75. [75]

    Using Large Language Models to Simulate Multiple Humans and Replicate Human Subject Studies

    Using Large Language Models to Simulate Multiple Humans and Replicate Human Subject Studies. Proceedings of the 40th International Conference on Machine Learning. 2023.

  76. [76]

    MBTI Personality Prediction for Fictional Characters Using Movie Scripts

    Sang, Yisi and Mou, Xiangyang and Yu, Mo and Wang, Dakuo and Li, Jing and Stanton, Jeffrey. MBTI Personality Prediction for Fictional Characters Using Movie Scripts. Findings of the Association for Computational Linguistics: EMNLP 2022. 2022. doi:10.18653/v1/2022.findings-emnlp.500

  77. [77]

    The Enneagram: A Systematic Review of the Literature and Directions for Future Research

    Joshua N. Hook and Todd W. Hall and Don E. Davis and Daryl R. Van Tongeren and Mackenzie Conner. The Enneagram: A Systematic Review of the Literature and Directions for Future Research. Volume 77. 2021.

  78. [78]

    Looking Beyond the Big Five: A Selective Review of Alternatives to the Big Five Model of Personality

    Anita Feher and Philip A. Vernon. Looking Beyond the Big Five: A Selective Review of Alternatives to the Big Five Model of Personality. Volume 169. 2021.

  79. [79]

    Van Lange, Paul A. M. and Higgins, E Tory and Kruglanski, Arie W. Handbook of Theories of Social Psychology. 2011

  80. [80]

    Narratology: An Introduction

    Narratology: An Introduction. doi:10.1515/9783110226324
