pith. machine review for the scientific record. sign in

arxiv: 2605.12512 · v1 · submitted 2026-03-31 · 💻 cs.SI · cs.AI

Recognition: unknown

Beyond Individual Mimicry: Constructing Human-Like Social network with Graph-Augmented LLM Agents

Chuxuan Zhang, Haoran Bu, Hui Pang, Litian Zhang, Xi Zhang, Zhanyuan Liu

Authors on Pith no claims yet

Pith reviewed 2026-05-14 21:15 UTC · model grok-4.3

classification 💻 cs.SI cs.AI
keywords social botsLLM agentsgraph augmentationbot detectionsocial networksGraphMind-BotnetGNN detection
0
0 comments X

The pith

GraphMind augments LLMs with graph learning so social bots can build human-like global network structures and evade detection.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

LLM-driven social bots can copy local human conversation patterns but remain graph-unaware, so they cannot coordinate the overall shape of their connections and stay visible to graph neural network detectors. The paper introduces GraphMind to give these bots explicit mechanisms for learning and reproducing the statistical properties of real social networks. Using this approach the authors build GraphMind-Botnet and test it against both text-only and graph-based detectors. The resulting networks cause clear drops in detection accuracy, showing that global link structure is the missing piece for realistic LLM-generated social activity.

Core claim

GraphMind equips LLM-driven social bots to explicitly learn and fit human-like social network structures. When these bots are assembled into GraphMind-Botnet, both text-based and graph-based detection models show substantially degraded performance in distinguishing them from real users.

What carries the argument

GraphMind, a graph-augmentation layer that lets LLM agents learn global network topology and coordinate link formation across the bot population.

If this is right

  • Text-based detectors lose effectiveness once bots coordinate their connections globally.
  • Graph neural network detectors also suffer reduced accuracy on the augmented networks.
  • Social link construction becomes the decisive factor in whether an LLM botnet appears human-like.
  • Current bot detection approaches contain fundamental weaknesses when global topology is deliberately matched to human patterns.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Detection systems will likely need hybrid models that jointly examine local text and learned network topology.
  • The same graph-augmentation technique could be tested on other multi-agent simulations such as opinion dynamics or information spread.
  • Platforms may face pressure to collect richer structural signals if generated networks become harder to separate from organic ones.
  • Scalability limits of the augmentation method remain untested at the size of major social platforms.

Load-bearing premise

That adding graph learning to LLMs will produce networks whose global statistics match real human social graphs at scale without the need for explicit fitting metrics or validation controls.

What would settle it

Compare the degree distribution, clustering coefficient, and community structure of networks generated by GraphMind-Botnet against the same metrics from a large real-world social graph dataset; if the distributions diverge significantly, the indistinguishability claim fails.

Figures

Figures reproduced from arXiv: 2605.12512 by Chuxuan Zhang, Haoran Bu, Hui Pang, Litian Zhang, Xi Zhang, Zhanyuan Liu.

Figure 1
Figure 1. Figure 1: The comparation between existing LLM￾driven social bot and GraphMind (ours). Our botnet resembles more human-like social networks, leading to significantly improved evasion performance against GNN-based detection. However, while these approaches are effective at the level of individual behaviors and local interaction pat￾terns, they often overlook a fundamental characteristic of real-world social systems: … view at source ↗
Figure 2
Figure 2. Figure 2: Overview of the framework. (Left) GraphMind Framework, where two modules are employed to enable LLMs to generate diverse, strength-aware interactions and to construct multi-hop follow chains, thereby mitigating isolated nodes. (Right) Botnet simulation, in which GraphMind social bots autonomously build human-like social networks, improving structural realism and enhancing robustness against GNN-based detec… view at source ↗
Figure 3
Figure 3. Figure 3: Structural property analysis of different network [PITH_FULL_IMAGE:figures/full_fig_p007_3.png] view at source ↗
Figure 4
Figure 4. Figure 4: Degree distributions simulations (OASIS: 80.5% acc, BotSim: 72.7% acc). 5.2.2 Network Structure Analysis To further analyze the differences between our botnet, human networks, and prior methods, we qualitatively evaluate their structural commonalities using standard network analysis metrics. Path Length. The distribution of shortest-path dis￾tances between node pairs is a fundamental macroscopic property o… view at source ↗
Figure 5
Figure 5. Figure 5: Network visualizations comparing human and bot follow networks across different datasets. [PITH_FULL_IMAGE:figures/full_fig_p008_5.png] view at source ↗
Figure 6
Figure 6. Figure 6: Loss function of FIM D.1.2 Dataset and Prompt We sample 3,000 node pairs with existing follow rela￾tionships from the TwiBot-22 dataset. Based on their historical interaction frequencies, we annotate the social relationship strength of one node with respect to the other. We then employ DeepSeek to complete the cor￾responding reasoning process for each annotated pair. Representative examples are shown in [… view at source ↗
Figure 7
Figure 7. Figure 7: Loss function of GSI Hyperparameter Value Optimizer AdamW Learning rate 5 × 10−5 Training epochs 5 LR scheduler Cosine Warmup steps 100 Random seed 42 [PITH_FULL_IMAGE:figures/full_fig_p013_7.png] view at source ↗
Figure 8
Figure 8. Figure 8: Botnet visualization of Twibot-20 [PITH_FULL_IMAGE:figures/full_fig_p015_8.png] view at source ↗
Figure 9
Figure 9. Figure 9: Human network visualization of Twibot-20. [PITH_FULL_IMAGE:figures/full_fig_p016_9.png] view at source ↗
Figure 10
Figure 10. Figure 10: Botnet visualization of Twibot-22 [PITH_FULL_IMAGE:figures/full_fig_p017_10.png] view at source ↗
Figure 11
Figure 11. Figure 11: Visualization of EvoBot (human + bot) [PITH_FULL_IMAGE:figures/full_fig_p018_11.png] view at source ↗
Figure 12
Figure 12. Figure 12: Botnet Visualization of EvoBot [PITH_FULL_IMAGE:figures/full_fig_p019_12.png] view at source ↗
read the original abstract

Driven by large language models (LLMs), social bot can autonomously engage in local interactions, whose human-like behaviors enable them to evade social bot detection. However, while these botnets exhibit realistic local social interactions, they fail to preserve human-like social network. This is because LLM-based bots are graph-unaware and cannot coordinate over global interactions, which makes those botnets vulnerable to graph neural network (GNN)-based detection. To address this limitation, we propose GraphMind, which equips LLM-driven social bots to explicitly learn and fit human-like social network structures. Building on this foundation, we further construct GraphMind-Botnet, a LLM-driven botnet designed to evaluate the performance of existing social bot detection algorithms. Experiments on datasets derived from GraphMind-Botnet show that both text-based and graph-based detection models show substantially degraded performance in distinguishing. Our results highlight the critical role of social link construction in LLM-driven social network generation, while exposing fundamental weaknesses in existing bot detection mechanisms.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The manuscript proposes GraphMind, a graph-augmented LLM framework that enables social bots to learn and fit global human-like network structures beyond local interactions. It constructs GraphMind-Botnet as a testbed and reports that both text-based and GNN-based bot detectors exhibit substantially degraded performance on the resulting datasets, underscoring the role of social link construction in evading detection.

Significance. If the empirical claims hold with proper validation, the work would demonstrate a meaningful advance in LLM-driven social simulation by addressing the gap between local mimicry and global network fidelity. This could expose systematic weaknesses in current bot detection pipelines and motivate new graph-aware countermeasures. The absence of reported fitting metrics, baselines, and statistical controls currently prevents a full assessment of whether the result is robust or merely local-pattern matching.

major comments (3)
  1. [Abstract / Experiments] Abstract and Experiments section: The headline result that 'both text-based and graph-based detection models show substantially degraded performance' is stated without any implementation details, network statistics (e.g., degree distribution, clustering coefficient, modularity, average path length), baselines, error bars, dataset descriptions, or reference real-world networks. This leaves the central claim unsupported by visible evidence.
  2. [Methods] Methods section: The description of GraphMind claims it 'explicitly learn[s] and fit[s] human-like social network structures,' yet no fitting procedure, target metrics, loss functions, or validation against real datasets (e.g., Twitter or Facebook snapshots) is provided. Without these, it is impossible to verify whether global signatures are truly matched or only local interactions are approximated.
  3. [Results] Results section: The claim that GraphMind-Botnet evades GNN detectors requires quantitative comparisons showing that global topological features remain indistinguishable; the current text supplies none of the necessary statistical tests or overfitting controls, undermining the assertion that the degradation is due to successful global mimicry rather than other factors.
minor comments (2)
  1. [Abstract] Abstract: The final sentence ends abruptly with 'in distinguishing.' and should be completed (e.g., 'in distinguishing bots from humans').
  2. [Introduction / Methods] Notation: The terms 'GraphMind' and 'GraphMind-Botnet' are introduced without an explicit definition or diagram showing how the graph augmentation module interfaces with the LLM agent loop.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive comments. We agree that the current manuscript lacks sufficient implementation details, network statistics, fitting procedures, and quantitative validations to fully support the central claims. We will perform a major revision to address these issues by expanding the relevant sections with the requested evidence and clarifications.

read point-by-point responses
  1. Referee: [Abstract / Experiments] Abstract and Experiments section: The headline result that 'both text-based and graph-based detection models show substantially degraded performance' is stated without any implementation details, network statistics (e.g., degree distribution, clustering coefficient, modularity, average path length), baselines, error bars, dataset descriptions, or reference real-world networks. This leaves the central claim unsupported by visible evidence.

    Authors: We acknowledge that the abstract and experiments section require more supporting details. In the revised manuscript, we will expand the Experiments section to include full implementation details of the detection models, comprehensive network statistics (degree distribution, clustering coefficient, modularity, average path length), baselines, error bars with statistical significance, dataset descriptions, and direct comparisons to reference real-world networks such as Twitter and Facebook snapshots. This will provide visible evidence for the performance degradation results. revision: yes

  2. Referee: [Methods] Methods section: The description of GraphMind claims it 'explicitly learn[s] and fit[s] human-like social network structures,' yet no fitting procedure, target metrics, loss functions, or validation against real datasets (e.g., Twitter or Facebook snapshots) is provided. Without these, it is impossible to verify whether global signatures are truly matched or only local interactions are approximated.

    Authors: We agree that the Methods section needs elaboration on the fitting process. The revised version will include a detailed account of GraphMind's fitting procedure, specifying the target metrics for human-like global structures (e.g., degree distribution and clustering), the loss functions used for optimization, and validation results against real-world datasets including Twitter and Facebook snapshots. This will demonstrate how global signatures are explicitly matched rather than relying solely on local approximations. revision: yes

  3. Referee: [Results] Results section: The claim that GraphMind-Botnet evades GNN detectors requires quantitative comparisons showing that global topological features remain indistinguishable; the current text supplies none of the necessary statistical tests or overfitting controls, undermining the assertion that the degradation is due to successful global mimicry rather than other factors.

    Authors: We recognize the importance of rigorous quantitative support. In the revised Results section, we will add direct quantitative comparisons of global topological features between GraphMind-Botnet and real networks, along with statistical tests (such as Kolmogorov-Smirnov tests for distribution similarity) and explicit controls for overfitting. These additions will strengthen the evidence that performance degradation stems from successful global mimicry. revision: yes

Circularity Check

0 steps flagged

No self-referential fitting or load-bearing self-citation in the derivation chain

full rationale

The paper proposes GraphMind to equip LLM agents with graph awareness for fitting human-like network structures and then builds GraphMind-Botnet for detection experiments. No equations, fitted parameters, or predictions are presented that reduce by construction to the same inputs (e.g., no fitted network statistics renamed as predictions). The abstract and described approach rely on external LLM and graph tools without invoking self-citations for uniqueness theorems or ansatzes. The central experimental claim of degraded detector performance is framed as an outcome of the construction rather than a definitional tautology, leaving the derivation self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axioms · 2 invented entities

The central claim rests on the domain assumption that LLM agents can be made graph-aware enough to fit human-like global structures and that this fitting directly causes degraded detector performance; no free parameters or invented entities are specified in the abstract.

axioms (1)
  • domain assumption LLM-driven bots can learn and fit human-like social network structures when augmented with graph mechanisms
    Invoked as the foundation for GraphMind in the abstract.
invented entities (2)
  • GraphMind no independent evidence
    purpose: Graph-augmented LLM system for constructing human-like social networks
    New system introduced to address the stated limitation of prior botnets.
  • GraphMind-Botnet no independent evidence
    purpose: LLM-driven botnet for evaluating social bot detection algorithms
    Constructed using GraphMind to test detector robustness.

pith-pipeline@v0.9.0 · 5488 in / 1322 out tokens · 43866 ms · 2026-05-14T21:15:50.365549+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

86 extracted references · 86 canonical work pages · 7 internal anchors

  1. [1]

    Neurocomputing , pages=

    Mgtab: A multi-relational graph-based twitter account detection benchmark , author=. Neurocomputing , pages=. 2025 , publisher=

  2. [2]

    , author=

    Kronecker graphs: an approach to modeling networks. , author=. Journal of Machine Learning Research , volume=

  3. [3]

    Networks , volume=

    Generating large scale-free networks with the Chung--Lu random graph model , author=. Networks , volume=. 2021 , publisher=

  4. [4]

    , author=

    Lora: Low-rank adaptation of large language models. , author=

  5. [5]

    OpenAI blog , volume=

    Language models are unsupervised multitask learners , author=. OpenAI blog , volume=

  6. [6]

    Proceedings of the 23rd international conference on world wide web , pages=

    Information network or social network? The structure of the Twitter follow graph , author=. Proceedings of the 23rd international conference on world wide web , pages=

  7. [7]

    Proceedings of the 36th annual acm symposium on user interface software and technology , pages=

    Generative agents: Interactive simulacra of human behavior , author=. Proceedings of the 36th annual acm symposium on user interface software and technology , pages=

  8. [8]

    Discrete Applied Mathematics , volume=

    Reconstructing markov processes from independent and anonymous experiments , author=. Discrete Applied Mathematics , volume=. 2016 , publisher=

  9. [9]

    International conference on machine learning , pages=

    Anonymous walk embeddings , author=. International conference on machine learning , pages=. 2018 , organization=

  10. [10]

    Computer Communications , volume=

    Ego network structure in online social networks and its impact on information diffusion , author=. Computer Communications , volume=. 2016 , publisher=

  11. [11]

    Defence Strategic Communications , volume=

    Examining the use of botnets and their evolution in propaganda dissemination , author=. Defence Strategic Communications , volume=

  12. [12]

    American journal of sociology , volume=

    The strength of weak ties , author=. American journal of sociology , volume=. 1973 , publisher=

  13. [13]

    The Anatomy of the Facebook Social Graph

    The anatomy of the facebook social graph , author=. arXiv preprint arXiv:1111.4503 , year=

  14. [14]

    arXiv preprint arXiv:2411.11581 , year=

    Oasis: Open agent social interaction simulations with one million agents , author=. arXiv preprint arXiv:2411.11581 , year=

  15. [15]

    Proceedings of the SIGCHI conference on human factors in computing systems , pages=

    Predicting tie strength with social media , author=. Proceedings of the SIGCHI conference on human factors in computing systems , pages=

  16. [16]

    Sociometry , pages=

    Interaction theory and the social network , author=. Sociometry , pages=. 1967 , publisher=

  17. [17]

    Advances in Neural Information Processing Systems , volume=

    Llms as zero-shot graph learners: Alignment of gnn representations with llm token embeddings , author=. Advances in Neural Information Processing Systems , volume=

  18. [18]

    Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery , volume=

    Social network analysis: An overview , author=. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery , volume=. 2018 , publisher=

  19. [19]

    arXiv preprint arXiv:2506.18019 , year=

    Graphs Meet AI Agents: Taxonomy, Progress, and Future Opportunities , author=. arXiv preprint arXiv:2506.18019 , year=

  20. [20]

    Big Data and Cognitive Computing , volume=

    Llm fine-tuning: Concepts, opportunities, and challenges , author=. Big Data and Cognitive Computing , volume=. 2025 , publisher=

  21. [21]

    Proceedings of the 17th international conference on World Wide Web , pages=

    Statistical properties of community structure in large social and information networks , author=. Proceedings of the 17th international conference on World Wide Web , pages=

  22. [22]

    2018 second international conference on electronics, communication and aerospace technology (ICECA) , pages=

    LoRa technology-an overview , author=. 2018 second international conference on electronics, communication and aerospace technology (ICECA) , pages=. 2018 , organization=

  23. [23]

    Authorea Preprints , year=

    Prompt engineering for ChatGPT: a quick guide to techniques, tips, and best practices , author=. Authorea Preprints , year=

  24. [24]

    Model Context Protocol (MCP): Landscape, Security Threats, and Future Research Directions

    Model context protocol (mcp): Landscape, security threats, and future research directions , author=. arXiv preprint arXiv:2503.23278 , year=

  25. [25]

    arXiv preprint arXiv:2508.17711 , year=

    Enhancing llm-based social bot via an adversarial learning framework , author=. arXiv preprint arXiv:2508.17711 , year=

  26. [26]

    , author=

    Honeybot, Your Man in the Middle for Automated Social Engineering. , author=. LEET , pages=

  27. [27]

    Proceedings of the 30th ACM international conference on information & knowledge management , pages=

    Twibot-20: A comprehensive twitter bot detection benchmark , author=. Proceedings of the 30th ACM international conference on information & knowledge management , pages=

  28. [28]

    Decision Support Systems , volume=

    Fame for sale: Efficient detection of fake Twitter followers , author=. Decision Support Systems , volume=. 2015 , publisher=

  29. [29]

    Proceedings of the 26th international conference on world wide web companion , pages=

    The paradigm-shift of social spambots: Evidence, theories, and tools for the arms race , author=. Proceedings of the 26th international conference on world wide web companion , pages=

  30. [30]

    Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery , author=

  31. [31]

    Proceedings of the 30th ACM international conference on information & knowledge management , pages=

    Satar: A self-supervised approach to twitter account representation learning and its application in bot detection , author=. Proceedings of the 30th ACM international conference on information & knowledge management , pages=

  32. [32]

    Companion proceedings of the 2019 world wide web conference , pages=

    Detect me if you can: Spam bot detection using inductive representation learning , author=. Companion proceedings of the 2019 world wide web conference , pages=

  33. [33]

    arXiv preprint arXiv:2310.11667 , year=

    Sotopia: Interactive evaluation for social intelligence in language agents , author=. arXiv preprint arXiv:2310.11667 , year=

  34. [34]

    arXiv preprint arXiv:2307.14984 , year=

    S3: Social-network simulation system with large language model-empowered agents , author=. arXiv preprint arXiv:2307.14984 , year=

  35. [35]

    Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology , pages=

    Social simulacra: Creating populated prototypes for social computing systems , author=. Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology , pages=

  36. [36]

    2003 , publisher=

    Theories of communication networks , author=. 2003 , publisher=

  37. [37]

    Statistics and its Interface , volume=

    Multi-class adaboost , author=. Statistics and its Interface , volume=

  38. [38]

    Proceedings of the fifth annual workshop on Computational learning theory , pages=

    A training algorithm for optimal margin classifiers , author=. Proceedings of the fifth annual workshop on Computational learning theory , pages=

  39. [39]

    Semi-Supervised Classification with Graph Convolutional Networks

    Semi-Supervised Classification with Graph Convolutional Networks , author=. arXiv preprint arXiv:1609.02907 , year=

  40. [40]

    Graph Attention Networks

    Graph attention networks , author=. arXiv preprint arXiv:1710.10903 , year=

  41. [41]

    Proceedings of the 27th ACM SIGKDD conference on knowledge discovery & data mining , pages=

    Are we really making much progress? revisiting, benchmarking and refining heterogeneous graph neural networks , author=. Proceedings of the 27th ACM SIGKDD conference on knowledge discovery & data mining , pages=

  42. [42]

    Proceedings of the 2021 IEEE/ACM international conference on advances in social networks analysis and mining , pages=

    BotRGCN: Twitter bot detection with relational graph convolutional networks , author=. Proceedings of the 2021 IEEE/ACM international conference on advances in social networks analysis and mining , pages=

  43. [43]

    Advances in Neural Information Processing Systems , volume=

    Augmenting language models with long-term memory , author=. Advances in Neural Information Processing Systems , volume=

  44. [44]

    IEEE Transactions on Knowledge and Data Engineering , year=

    Large language models on graphs: A comprehensive survey , author=. IEEE Transactions on Knowledge and Data Engineering , year=

  45. [45]

    ArXiv , volume=

    Anatomy of an AI-powered malicious social botnet , author=. ArXiv , volume=

  46. [46]

    arXiv preprint arXiv:2412.03563 , year=

    From individual to society: A survey on social simulation driven by large language model-based agents , author=. arXiv preprint arXiv:2412.03563 , year=

  47. [47]

    Sage Publications , year=

    Demystifying Social Bots: On the Intelligence of Automated Social Media Actors , author=. Sage Publications , year=

  48. [48]

    Csi Transactions on Ict , year=

    Digital forensic research: current state of the art , author=. Csi Transactions on Ict , year=

  49. [49]

    Social Science Electronic Publishing , year=

    Reuters Institute Digital News Report 2017 Asia-Pacific Supplementary Report , author=. Social Science Electronic Publishing , year=

  50. [50]

    Proceedings of the 2019 4th International Conference on Social Sciences and Economic Development (ICSSED 2019) , year=

    Study on a Pareto Principle Case of Social Network , author=. Proceedings of the 2019 4th International Conference on Social Sciences and Economic Development (ICSSED 2019) , year=

  51. [51]

    Psychol Sci , number=

    Language-Style Similarity and Social Networks: , author=. Psychol Sci , number=

  52. [52]

    2025 , journal=

    Model Context Protocol (MCP): Landscape, Security Threats, and Future Research Directions , author=. 2025 , journal=

  53. [53]

    2022 , title=

    Feng, Shangbin and Tan, Zhaoxuan and Wan, Herun and Wang, Ningnan and Chen, Zilong and Zhang, Binchi and Zheng, Qinghua and Zhang, Wenqian and Lei, Zhenyu and Yang, Shujie and others , booktitle=. 2022 , title=

  54. [54]

    ArXiv , volume=

    Toolformer: Language Models Can Teach Themselves to Use Tools , author=. ArXiv , volume=

  55. [55]

    DeepSeek-V3 Technical Report

    Deepseek-v3 technical report , author=. arXiv preprint arXiv:2412.19437 , year=

  56. [56]

    arXiv preprint arXiv:2404.15070 , year=

    Botdgt: Dynamicity-aware social bot detection with dynamic graph transformers , author=. arXiv preprint arXiv:2404.15070 , year=

  57. [57]

    International Conference on Pattern Recognition , pages=

    Botscl: Heterophily-aware social bot detection with supervised contrastive learning , author=. International Conference on Pattern Recognition , pages=. 2024 , organization=

  58. [58]

    arXiv preprint arXiv:2401.00893 , year=

    Social-llm: Modeling user behavior at scale using language models and social network data , author=. arXiv preprint arXiv:2401.00893 , year=

  59. [59]

    Proceedings of the AAAI conference on artificial intelligence , volume=

    Scalable and generalizable social bot detection through data selection , author=. Proceedings of the AAAI conference on artificial intelligence , volume=

  60. [60]

    2019 First IEEE International conference on trust, privacy and security in intelligent systems and applications (TPS-ISA) , pages=

    Twitter bot detection using bidirectional long short-term memory neural networks and word embeddings , author=. 2019 First IEEE International conference on trust, privacy and security in intelligent systems and applications (TPS-ISA) , pages=. 2019 , organization=

  61. [61]

    Advances in Neural Information Processing Systems , volume=

    Twibot-22: Towards graph-based twitter bot detection , author=. Advances in Neural Information Processing Systems , volume=

  62. [62]

    IEEE Transactions on Neural Networks and Learning Systems , volume=

    Dispelling the fake: Social bot detection based on edge confidence evaluation , author=. IEEE Transactions on Neural Networks and Learning Systems , volume=. 2024 , publisher=

  63. [63]

    Bots and Misinformation Spread on Social Media: Implications for COVID-19 , volume=

    Himelein-Wachowiak, McKenzie and Giorgi, Salvatore and Devoto, Amanda and Rahman, Muhammad and Ungar, Lyle and Schwartz, H Andrew and Epstein, David H and Leggio, Lorenzo and Curtis, Brenda , year=. Bots and Misinformation Spread on Social Media: Implications for COVID-19 , volume=. Journal of Medical Internet Research , publisher=. doi:10.2196/26933 , ab...

  64. [64]

    Assessing the risks of ‘infodemics’ in response to COVID-19 epidemics , volume=

    Gallotti, Riccardo and Valle, Francesco and Castaldo, Nicola and Sacco, Pierluigi and De Domenico, Manlio , year=. Assessing the risks of ‘infodemics’ in response to COVID-19 epidemics , volume=. Nature Human Behaviour , publisher=. doi:10.1038/s41562-020-00994-6 , number=

  65. [65]

    Proceedings of the ACM Web Conference 2024 , pages=

    Bots, elections, and controversies: Twitter insights from Brazil's polarised elections , author=. Proceedings of the ACM Web Conference 2024 , pages=

  66. [66]

    Nature , volume=

    The next-generation bots interfering with the US election , author=. Nature , volume=

  67. [67]

    A decade of social bot detection , volume=

    Cresci, Stefano , year=. A decade of social bot detection , volume=. Communications of the ACM , publisher=. doi:10.1145/3409116 , abstractNote=

  68. [68]

    arXiv preprint arXiv:2311.09618 , year=

    Simulating opinion dynamics with networks of llm-based agents , author=. arXiv preprint arXiv:2311.09618 , year=

  69. [69]

    arXiv preprint arXiv:2402.16333 , year=

    Unveiling the truth and facilitating change: Towards agent-based large-scale social movement simulation , author=. arXiv preprint arXiv:2402.16333 , year=

  70. [70]

    arXiv preprint arXiv:2403.09498 , year=

    From skepticism to acceptance: Simulating the attitude dynamics toward fake news , author=. arXiv preprint arXiv:2403.09498 , year=

  71. [71]

    Journal of mathematical sociology , volume=

    Dynamic models of segregation , author=. Journal of mathematical sociology , volume=. 1971 , publisher=

  72. [72]

    Computational Economics , volume=

    Opinion dynamics driven by various ways of averaging , author=. Computational Economics , volume=. 2005 , publisher=

  73. [73]

    arXiv preprint arXiv:2306.03446 , year=

    Computational agent-based models in opinion dynamics: A survey on social simulations and empirical studies , author=. arXiv preprint arXiv:2306.03446 , year=

  74. [74]

    Proceedings of the AAAI Conference on Artificial Intelligence , volume=

    BotSim: LLM-Powered Malicious Social Botnet Simulation , author=. Proceedings of the AAAI Conference on Artificial Intelligence , volume=

  75. [75]

    Advances in neural information processing systems , volume=

    Walklm: A uniform language model fine-tuning framework for attributed graph embedding , author=. Advances in neural information processing systems , volume=

  76. [76]

    arXiv preprint arXiv:2506.22084 , year=

    Transformers are graph neural networks , author=. arXiv preprint arXiv:2506.22084 , year=

  77. [77]

    Psychology today , volume=

    The small world problem , author=. Psychology today , volume=. 1967 , publisher=

  78. [78]

    Annual review of sociology , volume=

    Birds of a feather: Homophily in social networks , author=. Annual review of sociology , volume=. 2001 , publisher=

  79. [79]

    Cli- matebert: A pretrained language model for climate-related text,

    Climatebert: A pretrained language model for climate-related text , author=. arXiv preprint arXiv:2110.12010 , year=

  80. [80]

    arXiv preprint arXiv:2110.06500 , year=

    Differentially private fine-tuning of language models , author=. arXiv preprint arXiv:2110.06500 , year=

Showing first 80 references.