pith. machine review for the scientific record.

arxiv: 2605.07692 · v1 · submitted 2026-05-08 · 💻 cs.AI

Recognition: unknown

GASim: A Graph-Accelerated Hybrid Framework for Social Simulation

Allen He, Hantao Yao, Wu Liu, Xuan Zhou, Yanhui Sun, Yongdong Zhang

Authors on Pith: no claims yet.

Pith reviewed 2026-05-11 03:01 UTC · model grok-4.3

classification 💻 cs.AI
keywords social simulation · hybrid multi-agent framework · graph-optimized memory · graph message passing · entropy-driven grouping · large language models · agent-based modeling · public opinion simulation

The pith

Graph optimizations let hybrid social simulators run nearly 10 times faster while using under 20 percent of the tokens.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

GASim addresses the high latency in hybrid social simulators that mix LLM-driven agents with simpler numerical models by swapping costly memory retrieval for graph propagation on core agents and replacing sequential model steps with parallel graph updates on the rest. Entropy measures decide which agents count as core and need the heavier treatment. A sympathetic reader would care because this change makes it feasible to run much larger simulations of social patterns such as public opinion shifts at far lower time and money cost while the results still track real-world data.
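The memory substitution described here can be pictured as replacing a per-query LLM retrieval pass with a sparse matrix-vector product over a memory graph. A minimal sketch, assuming a row-normalized adjacency matrix over memory nodes and a seed relevance vector; the graph construction, damping, and weights are illustrative assumptions, not the paper's actual GOM design.

```python
import numpy as np
from scipy import sparse

def propagate_relevance(adj, seed, steps=2, damping=0.5):
    """Spread relevance from seed memories over a sparse memory graph.

    adj: sparse adjacency over memory nodes (illustrative weights).
    seed: initial relevance, e.g. 1.0 on memories touched this turn.
    Each step mixes a memory's own score with its neighbors' scores,
    standing in for the expensive LLM-based retrieval pipeline.
    """
    scores = seed.astype(float)
    for _ in range(steps):
        scores = (1 - damping) * scores + damping * adj.T @ scores
    return scores

# Toy memory graph: a 0-1-2 chain plus an isolated node 3.
rows, cols = [0, 1, 1, 2], [1, 0, 2, 1]
adj = sparse.csr_matrix(([1.0, 0.5, 0.5, 1.0], (rows, cols)), shape=(4, 4))
seed = np.array([1.0, 0.0, 0.0, 0.0])
scores = propagate_relevance(adj, seed)
top = np.argsort(-scores)[:2]  # "retrieve" the two most relevant memories
```

A two-step propagation reaches the direct neighbor strongly, the two-hop neighbor weakly, and never touches the disconnected node, which is the qualitative behavior a retrieval pass would be approximating.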

Core claim

GASim replaces LLM-based memory retrieval for core agents with lightweight propagation over a sparse memory graph, replaces sequential ABM execution for ordinary agents with parallel updates via fine-grained feature aggregation and a Graph Attention Network, and coordinates the split through Entropy-Driven Grouping, which identifies emergent core agents in information-diverse neighborhoods. Together these deliver a 9.94-fold end-to-end speedup and under 20 percent of baseline token use while preserving alignment with real-world public opinion trends.

What carries the argument

Graph-Optimized Memory (GOM) for core LLM agents together with Graph Message Passing (GMP) and Entropy-Driven Grouping (EDG) that partitions agents by information entropy.
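The grouping component can be sketched as Shannon entropy over the opinion labels in each agent's neighborhood, promoting high-entropy agents to core status. The discrete-opinion encoding and the threshold are assumptions for illustration; the paper's exact entropy definition is not reproduced here.

```python
import math
from collections import Counter

def neighborhood_entropy(agent, neighbors, opinions):
    """Shannon entropy of the discrete opinion labels around an agent.

    opinions: dict agent -> opinion label. High entropy means the agent
    sits in an information-diverse neighborhood.
    """
    labels = [opinions[a] for a in neighbors[agent]] + [opinions[agent]]
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def select_core(agents, neighbors, opinions, threshold):
    """Entropy-driven grouping: agents above threshold get the LLM treatment."""
    return {a for a in agents
            if neighborhood_entropy(a, neighbors, opinions) > threshold}

# Toy star network: agent 0 sees three conflicting opinions, the leaves
# each see only one. With an illustrative threshold, only 0 becomes core.
agents = [0, 1, 2, 3]
neighbors = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
opinions = {0: "pro", 1: "con", 2: "pro", 3: "neutral"}
core = select_core(agents, neighbors, opinions, threshold=1.2)
```

The design intuition this illustrates: agents surrounded by homogeneous information can be handled by cheap numerical updates, while agents at the confluence of conflicting signals are where richer LLM reasoning plausibly pays off.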

If this is right

  • End-to-end simulation runtime drops by a factor of 9.94 relative to the traditional hybrid baseline.
  • Token consumption for the LLM component falls below 20 percent of the original level.
  • Alignment between simulated opinion dynamics and observed real-world public opinion trends is maintained.
  • The hybrid framework can therefore support larger agent populations without proportional growth in latency or cost.
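The bullets above combine multiplicatively in a back-of-envelope cost model: if EDG keeps the core set to a small fraction of agents and GOM trims the tokens each core agent consumes, total token use falls as the product of the two reductions. All numbers below are made up for illustration, not figures from the paper.

```python
def relative_cost(core_fraction, tokens_per_core,
                  baseline_core_fraction, baseline_tokens_per_core):
    """Token use of the accelerated run as a share of the baseline.

    All arguments are illustrative knobs: the fraction of agents treated
    as core, and the average tokens per core agent per step.
    """
    return (core_fraction * tokens_per_core) / (
        baseline_core_fraction * baseline_tokens_per_core)

# Hypothetical: a baseline hybrid promotes 10% of agents at 2,000 tokens
# each; EDG keeps 5% core and GOM cuts their budget to 600 tokens.
share = relative_cost(0.05, 600, 0.10, 2000)  # 0.15, i.e. below 20%
```

Under this toy model the sub-20-percent token claim requires both levers at once; either reduction alone would not reach it, which is consistent with the paper presenting GOM and EDG as complementary.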

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same substitution of graph operations for sequential retrieval and execution steps might apply to other mixed-agent systems that combine reasoning-heavy and rule-based components.
  • Entropy-based identification of core agents could be reused in network simulations to locate nodes where richer modeling yields the largest accuracy gains.
  • Lower overall resource demands open the possibility of running social models interactively or over extended time periods that were previously impractical.

Load-bearing premise

The graph approximations for memory propagation and message passing plus the entropy-based partitioning keep the behavioral fidelity of both LLM and ordinary agents intact across the tested scenarios.

What would settle it

Direct comparison of simulated public opinion trends against real-world data or against an unapproximated baseline at substantially larger population sizes or longer time horizons showing clear divergence.
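The comparison described here reduces to standard statistics between two opinion trajectories. A minimal sketch, assuming trends are time series of aggregate support and distributions are discretized opinion bins; the metric choices (Pearson correlation on trends, KL divergence on distributions) are common options, not necessarily the paper's.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two opinion-trend time series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete opinion distributions."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical trends: simulated vs. observed support over five steps.
simulated = [0.30, 0.42, 0.55, 0.61, 0.58]
observed = [0.28, 0.40, 0.57, 0.60, 0.59]
r = pearson_r(simulated, observed)
kl = kl_divergence([0.5, 0.3, 0.2], [0.48, 0.32, 0.20])
```

Divergence growing with population size or horizon under metrics like these, against an unapproximated baseline, is exactly the kind of evidence that would settle the fidelity question either way.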

Figures

Figures reproduced from arXiv:2605.07692 by Allen He, Hantao Yao, Wu Liu, Xuan Zhou, Yanhui Sun, Yongdong Zhang.

Figure 1: Comparison between the traditional hybrid framework and GASim.
Figure 2: Overall pipeline of GASim. At each step, Entropy-Driven Grouping (EDG) identifies emergent core agents.
Figure 3: Total token consumption across agent scales.
Figure 4: Visualization of trend alignment results.
read the original abstract

Large-scale social simulators are essential for studying complex social patterns. Prior work explores hybrid methods to scale up simulations, combining large language models (LLM)-based agents with numerical agent-based models (ABM). However, this incurs high latency due to expensive memory retrieval and sequential ABM execution. To address this challenge, we propose GASim, a graph-accelerated hybrid multi-agent framework for large-scale social simulations. For core agents driven by LLM, GASim introduces Graph-Optimized Memory (GOM) to replace intensive LLM-based retrieval pipelines with lightweight propagation over a sparse memory graph. For the majority of ordinary agents, GASim employs Graph Message Passing (GMP), substituting sequential ABM execution with parallel updates by fine-grained feature aggregation and Graph Attention Network. We further introduce Entropy-Driven Grouping (EDG) that coordinates this hybrid partitioning, leveraging information entropy to dynamically identify emergent core agents situated in information-diverse neighborhoods. Extensive experiments show that GASim not only delivers a substantial 9.94-fold end-to-end speedup over the traditional hybrid framework but also consumes less than 20% of baseline tokens, significantly reducing costs while preserving strong alignment with real-world public opinion trends. Our code is available at https://github.com/Jasmine0201/GASim.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper proposes GASim, a graph-accelerated hybrid multi-agent framework for large-scale social simulations. It replaces LLM-based memory retrieval for core agents with Graph-Optimized Memory (GOM) propagation over a sparse graph, substitutes sequential ABM execution for ordinary agents with parallel Graph Message Passing (GMP) using Graph Attention Networks, and introduces Entropy-Driven Grouping (EDG) to dynamically partition agents based on information entropy. Experiments claim a 9.94-fold end-to-end speedup and less than 20% baseline token consumption while preserving alignment with real-world public opinion trends; code is released at https://github.com/Jasmine0201/GASim.

Significance. If the graph approximations maintain behavioral fidelity, the framework could substantially lower the cost of hybrid LLM-ABM social simulations, enabling larger agent populations and longer horizons that are currently prohibitive. The public code release is a clear strength that supports reproducibility and community validation of the reported speedups and alignment.

major comments (3)
  1. [Abstract] Abstract: the central claim that GASim 'preserves strong alignment with real-world public opinion trends' while delivering the 9.94-fold speedup is load-bearing, yet the abstract (and by extension the experimental section) provides no quantitative fidelity metrics such as KL divergence on opinion distributions, Pearson correlation with ground-truth trends, or per-step trajectory error accumulation to demonstrate statistical indistinguishability from the non-approximated baseline.
  2. [§3] §3 (GOM and GMP descriptions): the Graph-Optimized Memory propagation and GMP updates with GAT are presented as faithful substitutes for full LLM retrieval and sequential ABM, but no ablation or direct divergence comparison (e.g., opinion-distribution KL or agent-behavior correlation) is reported to bound behavioral drift across the tested agent counts and time scales.
  3. [§4] §4 (experimental results): the reported 9.94× speedup and <20% token figures lack error bars, run-to-run variance, or statistical significance tests, and no component-wise ablations isolate the contribution of GOM, GMP, and EDG, leaving open whether the gains reflect acceleration or altered dynamics.
minor comments (2)
  1. [Abstract] Abstract: the scale of the simulations (agent count, number of time steps) and the specific real-world datasets used for trend alignment are not stated, which would help readers assess the scope of the claims.
  2. [§3] Notation: the definitions of information entropy in EDG and the exact GAT update rules in GMP could be clarified with explicit equations to aid reproducibility.
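For reference, the standard forms the referee is asking to see written out: Shannon entropy over an agent's neighborhood, and the classic graph-attention update. These are the usual starting points (the paper cites GAT-family work, including Brody et al.'s GATv2); its exact variants may differ.

```latex
% Information entropy of agent i's neighborhood, with p_k the share of
% neighbors holding information category k (assumed form, not the paper's):
H_i = -\sum_{k} p_k \log p_k

% Classic GAT update: attention coefficients and the aggregated hidden
% state for agent i over its neighbors j \in \mathcal{N}(i):
e_{ij} = \mathrm{LeakyReLU}\!\left(\mathbf{a}^{\top}
         \left[\mathbf{W}\mathbf{h}_i \,\Vert\, \mathbf{W}\mathbf{h}_j\right]\right),
\qquad
\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{j' \in \mathcal{N}(i)} \exp(e_{ij'})},
\qquad
\mathbf{h}_i' = \sigma\!\Bigl(\sum_{j \in \mathcal{N}(i)} \alpha_{ij}\,\mathbf{W}\mathbf{h}_j\Bigr)
```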

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive and detailed feedback. The comments highlight important aspects of result presentation and validation that we will address in the revision. We respond point-by-point below.

read point-by-point responses
  1. Referee: [Abstract] Abstract: the central claim that GASim 'preserves strong alignment with real-world public opinion trends' while delivering the 9.94-fold speedup is load-bearing, yet the abstract (and by extension the experimental section) provides no quantitative fidelity metrics such as KL divergence on opinion distributions, Pearson correlation with ground-truth trends, or per-step trajectory error accumulation to demonstrate statistical indistinguishability from the non-approximated baseline.

    Authors: We agree that the abstract does not report quantitative fidelity metrics and that this weakens the central claim. The experimental section presents visual comparisons of opinion trends against real-world data and states qualitative alignment, but does not include the suggested statistical measures. In the revised manuscript we will add Pearson correlation, KL divergence on opinion distributions, and per-step error metrics computed against both the non-approximated baseline and ground-truth trends. revision: yes

  2. Referee: [§3] §3 (GOM and GMP descriptions): the Graph-Optimized Memory propagation and GMP updates with GAT are presented as faithful substitutes for full LLM retrieval and sequential ABM, but no ablation or direct divergence comparison (e.g., opinion-distribution KL or agent-behavior correlation) is reported to bound behavioral drift across the tested agent counts and time scales.

    Authors: Section 3 focuses on the design rationale for GOM and GMP. While end-to-end comparisons between GASim and the baseline hybrid framework are shown in §4, we did not include component-specific divergence metrics or ablations that isolate behavioral drift. We will add these analyses (KL divergence and agent-behavior correlation) across varying agent counts and horizons in the revised version to bound any approximation error. revision: yes

  3. Referee: [§4] §4 (experimental results): the reported 9.94× speedup and <20% token figures lack error bars, run-to-run variance, or statistical significance tests, and no component-wise ablations isolate the contribution of GOM, GMP, and EDG, leaving open whether the gains reflect acceleration or altered dynamics.

    Authors: The reported 9.94× speedup and token consumption are based on the experimental runs described in §4, but we acknowledge the absence of error bars, variance reporting, significance tests, and component-wise ablations. We will re-run the experiments with multiple random seeds, report means and standard deviations, include statistical tests, and add ablations that isolate the contribution of each component (GOM, GMP, EDG) to both speedup and behavioral fidelity. revision: yes
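The re-run protocol the authors commit to is straightforward to script. A sketch, assuming per-seed end-to-end runtimes are available for both systems; the seed values and timings below are hypothetical, and Welch's t-test is one reasonable choice of significance test, not necessarily the authors' eventual one.

```python
import numpy as np
from scipy import stats

def summarize(runtimes):
    """Mean and sample standard deviation across seeds."""
    arr = np.asarray(runtimes, float)
    return float(arr.mean()), float(arr.std(ddof=1))

# Hypothetical per-seed wall-clock times (minutes) over 5 random seeds.
baseline = [118.0, 121.5, 119.2, 122.8, 120.1]
gasim = [12.1, 11.8, 12.5, 12.0, 12.3]

b_mean, b_std = summarize(baseline)
g_mean, g_std = summarize(gasim)
speedup = b_mean / g_mean
# Welch's t-test: are the two runtime distributions distinguishable?
t_stat, p_value = stats.ttest_ind(baseline, gasim, equal_var=False)
```

Reporting mean, standard deviation, and a p-value per configuration, plus the same protocol for the fidelity metrics, would directly answer the referee's variance concern.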

Circularity Check

0 steps flagged

No circularity: empirical performance claims only

full rationale

The paper's load-bearing claims are measured experimental outcomes (9.94-fold speedup, <20% token usage, alignment with real-world opinion trends) obtained by running the proposed GASim components against baselines and external data. No first-principles derivation, fitted-parameter prediction, or self-citation chain is presented that reduces the reported results to the inputs by construction. GOM, GMP, and EDG are introduced as new algorithmic approximations whose behavioral fidelity is assessed via external benchmarks rather than internal redefinition or renaming of known quantities.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

The framework rests on standard assumptions from graph neural networks and multi-agent modeling; no new physical entities or ad-hoc constants are introduced in the abstract.

pith-pipeline@v0.9.0 · 5534 in / 1145 out tokens · 23453 ms · 2026-05-11T03:01:39.070349+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

60 extracted references · 60 canonical work pages · 3 internal anchors

  1. [1]

    Shaked Brody, Uri Alon, and Eran Yahav. 2022. How attentive are graph attention networks? In The Tenth International Conference on Learning Representations, pages 1--26

  2. [2]

    Jinyuan Chen, Jiuchen Shi, Quan Chen, and Minyi Guo. 2025. Kairos: Low-latency multi-agent serving with shared llms and excessive loads in the public cloud. CoRR, abs/2508.06948

  3. [3]

    Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, and Jie Zhou. 2024. Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors. In The Twelfth International Conference on Learning Representations

  4. [4]

    Prateek Chhikara, Dev Khant, Saket Aryan, Taranjeet Singh, and Deshraj Yadav. 2025. Mem0: Building production-ready AI agents with scalable long-term memory. CoRR, abs/2504.19413

  5. [5]

    Yun-Shiuan Chuang, Agam Goyal, Nikunj Harlalka, Siddharth Suresh, Robert Hawkins, Sijia Yang, Dhavan Shah, Junjie Hu, and Timothy T. Rogers. 2024. Simulating opinion dynamics with networks of llm-based agents. In Findings of the Association for Computational Linguistics, pages 3326--3346

  6. [6]

    Guillaume Deffuant, Frédéric Amblard, Gérard Weisbuch, and Thierry Faure. 2002. How can extremism prevail? A study based on the relative agreement interaction model. Journal of Artificial Societies and Social Simulation, 5(04)

  7. [7]

    Matthijs Douze, Alexandr Guzhva, Chengqi Deng, Jeff Johnson, Gergely Szilvasy, Pierre-Emmanuel Mazaré, Maria Lomeli, Lucas Hosseini, and Hervé Jégou. 2024. The faiss library. CoRR, abs/2401.08281

  8. [8]

    Andrew B. Goldberg, Xiaojin Zhu, and Stephen Wright. 2007. Dissimilarity in graph-based semi-supervised classification. In Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, pages 155--162

  9. [9]

    Rainer Hegselmann and Ulrich Krause. 2002. Opinion dynamics and bounded confidence models, analysis and simulation. Journal of Artificial Societies and Social Simulation, 5(03)

  10. [10]

    Zhiwei Jin, Juan Cao, Yongdong Zhang, and Jiebo Luo. 2016. News verification by exploiting conflicting social viewpoints in microblogs. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pages 2972--2978

  11. [11]

    Bai Jinbo and Li Hongbo. 2019. Study on a pareto principle case of social network. In Proceedings of the 2019 4th International Conference on Social Sciences and Economic Development, pages 113--117

  12. [12]

    Paul F. Lazarsfeld, Bernard Berelson, and Hazel Gaudet. 2021. The People's Choice: How the Voter Makes Up His Mind in a Presidential Campaign. Columbia University Press

  13. [13]

    Kun Liu, Qi Liu, Xinchen Liu, Jie Li, Yongdong Zhang, Jiebo Luo, Xiaodong He, and Wu Liu. 2025a. HOIGen-1M: A large-scale dataset for human-object interaction video generation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 24001--24010

  14. [14]

    Kun Liu, Mengxue Qu, Yang Liu, Yunchao Wei, Wenming Zhe, Yao Zhao, and Wu Liu. 2025b. Single-frame supervision for spatio-temporal video grounding. IEEE Transactions on Pattern Analysis and Machine Intelligence, 47(7):5177--5191

  15. [15]

    Yijun Liu, Wu Liu, Xiaoyan Gu, Yong Rui, Xiaodong He, and Yongdong Zhang. 2026. LMAgent: A large-scale multimodal agents society for multi-user simulation. IEEE Transactions on Multimedia, pages 1--12

  16. [16]

    Jan Lorenz, Martin Neumann, and Tobias Schröder. 2021. Individual attitude change and societal dynamics: Computational experiments with psychological theories. Psychological Review, 128(04):623--642

  17. [17]

    Adyasha Maharana, Dong-Ho Lee, Sergey Tulyakov, Mohit Bansal, Francesco Barbieri, and Yuwei Fang. 2024. Evaluating very long-term conversational memory of LLM agents. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, pages 13851--13870

  18. [18]

    Huiyu Min, Jiuxin Cao, Jiawei Ge, and Bo Liu. 2024. A multi-agent system for fine-grained opinion dynamics analysis in online social networks. IEEE Trans. Comput. Soc. Syst., 11(1):815--828

  19. [19]

    Xinyi Mou, Zhongyu Wei, and Xuanjing Huang. 2024. Unveiling the truth and facilitating change: Towards agent-based large-scale social movement simulation. In Findings of the Association for Computational Linguistics, pages 4789--4809

  20. [20]

    Joon Sung Park, Joseph O'Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. 2023. Generative Agents: Interactive simulacra of human behavior. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, pages 1--22

  21. [21]

    Preston Rasmussen, Pavlo Paliychuk, Travis Beauvais, Jack Ryan, and Daniel Chalef. 2025. Zep: A temporal knowledge graph architecture for agent memory. CoRR, abs/2501.13956

  22. [22]

    Yanhui Sun, Wu Liu, Wentao Wang, Hantao Yao, Jiebo Luo, and Yong-Dong Zhang. 2025. DynamiX: Large-scale dynamic social network simulator. CoRR, abs/2507.19929

  23. [23]

    Víctor Vargas-Pérez, Jesús Giráldez-Cru, Pablo Mesejo, and Oscar Cordón. 2025. Unveiling agents' confidence in opinion dynamics models via graph neural networks. IEEE Trans. Comput. Soc. Syst., 12(2):725--737

  24. [24]

    Kun Xiang, Zhili Liu, Terry Jingchen Zhang, Yinya Huang, Yunshuang Nie, Kaixin Cai, Yiyang Yin, Runhui Huang, Hanhui Li, Yihan Zeng, Yu-Jie Yuan, Jianhua Han, Lanqing Hong, Hang Xu, and Xiaodan Liang. 2026. AtomThink: Multimodal slow thinking with atomic step reasoning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 48(5):5725--5741

  25. [25]

    Wujiang Xu, Zujie Liang, Kai Mei, Hang Gao, Juntao Tan, and Yongfeng Zhang. 2025. A-Mem: Agentic memory for LLM agents. In The Thirty-ninth Annual Conference on Neural Information Processing Systems, pages 1--28

  26. [26]

    Ziyi Yang, Zaibin Zhang, Zirui Zheng, Yuxian Jiang, Ziyue Gan, Zhiyu Wang, Zijian Ling, Jinsong Chen, Martz Ma, Bowen Dong, Prateek Gupta, Shuyue Hu, Zhenfei Yin, Guohao Li, Xu Jia, Lijun Wang, Bernard Ghanem, Huchuan Lu, Chaochao Lu, and 4 others. 2024. OASIS: open agent social interaction simulations with one million agents. CoRR, abs/2411.11581

  27. [27]

    Jun Zhang, Yuwei Yan, Junbo Yan, Zhiheng Zheng, Jinghua Piao, Depeng Jin, and Yong Li. 2025. A parallelized framework for simulating large-scale LLM agents with realistic environments and interactions. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics, pages 1339--1349

  40. [40]

    Xuchen Pan, Dawei Gao, Yuexiang Xie, Zhewei Wei, Yaliang Li, Bolin Ding, et al. Very large-scale multi-agent simulation in AgentScope

  52. [52]

    Zeyu Zhang, Quanyu Dai, Xiaohe Bo, Chen Ma, Rui Li, Xu Chen, Jieming Zhu, Zhenhua Dong, et al. A survey on the memory mechanism of large language model-based agents

  55. [55]

    Haoyu Han, Yu Wang, Harry Shomer, Kai Guo, Jiayuan Ding, Yongjia Lei, Mahantesh Halappanavar, Ryan A. Rossi, Subhabrata Mukherjee, Xianfeng Tang, Qi He, Zhigang Hua, Bo Long, Tong Zhao, Neil Shah, Amin Javari, Yinglong Xia, and Jiliang Tang. CoRR

  58. [58]

    Thomas N. Kipf and Max Welling. In 5th International Conference on Learning Representations