pith. machine review for the scientific record.

arxiv: 2605.02335 · v1 · submitted 2026-05-04 · 💻 cs.MA · cs.AI


LLM-enabled Social Agents

Authors on Pith: no claims yet

Pith reviewed 2026-05-08 02:10 UTC · model grok-4.3

classification 💻 cs.MA cs.AI
keywords: LLM agents · social agents · persona descriptions · role definitions · multi-agent systems · social behavior · language models

The pith

Persona-based role definitions are required to turn LLM language competence into socially intelligible agent behavior.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Large language models allow agents to communicate fluently, yet this does not automatically produce behavior aligned with social norms and contexts. The authors argue that agents must be grounded in explicit role definitions, which are made operational through detailed persona descriptions. These descriptions supply the necessary information about intentions, constraints, and expectations. Establishing this baseline matters because it sets the stage for developing agents that can participate meaningfully in human or simulated social environments. The paper then points to research needs in how to represent these roles, how to combine them with LLM control, and how to assess the resulting social performance.

Core claim

LLM-enabled social agents should be grounded in role definitions operationalized through persona descriptions because fluent language use does not by itself yield socially intelligible behaviour. This approach provides the norms, intentions, and contextual constraints needed for meaningful participation. The paper outlines research directions for representation, hybrid control, and evaluation on this basis.

What carries the argument

Role definitions operationalized through persona descriptions, which provide the social grounding absent from raw language models.
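As a concrete reading of this mechanism (a hypothetical sketch; the paper does not prescribe a schema, and the field names below are invented for illustration), a persona description can be represented as structured fields that are rendered into an LLM system prompt:

```python
from dataclasses import dataclass, field

@dataclass
class PersonaRole:
    """Hypothetical persona-based role definition: structured fields that,
    per the paper's argument, supply the social grounding raw LLMs lack."""
    role: str                                        # the social role the agent occupies
    intentions: list = field(default_factory=list)   # what the agent is trying to achieve
    norms: list = field(default_factory=list)        # behavioural rules it must respect
    constraints: list = field(default_factory=list)  # contextual limits on its actions

    def to_system_prompt(self) -> str:
        """Render the role into a system prompt for an LLM."""
        lines = [f"You are acting as: {self.role}."]
        if self.intentions:
            lines.append("Your intentions: " + "; ".join(self.intentions) + ".")
        if self.norms:
            lines.append("Norms you must follow: " + "; ".join(self.norms) + ".")
        if self.constraints:
            lines.append("Constraints: " + "; ".join(self.constraints) + ".")
        return "\n".join(lines)

mediator = PersonaRole(
    role="neutral mediator in a team negotiation",
    intentions=["surface shared interests", "reach a durable agreement"],
    norms=["never take sides", "let every party speak"],
    constraints=["cannot impose a settlement"],
)
print(mediator.to_system_prompt())
```

The point of the sketch is only that norms, intentions, and constraints become explicit, inspectable slots rather than free-form prompt text.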

If this is right

  • Agents achieve better alignment with social norms and expectations.
  • Hybrid control systems can integrate language generation with structured role constraints.
  • Evaluation methods can focus on social intelligibility rather than just linguistic fluency.
  • Representation techniques for roles become central to agent design.
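One hedged sketch of the hybrid-control bullet above (the paper only names the direction; `is_permitted`, the candidate list, and the fallback are invented for illustration): the LLM proposes candidate utterances freely, while a structured role layer vets each one against explicit norms before anything is emitted.

```python
def is_permitted(utterance: str, forbidden_norms: list) -> bool:
    """Toy structured check: reject any candidate that violates a role
    norm, here modelled crudely as forbidden phrases."""
    return not any(phrase in utterance.lower() for phrase in forbidden_norms)

def hybrid_step(candidates: list, forbidden_norms: list, fallback: str) -> str:
    """Hybrid control loop: free language generation (stand-in: a list of
    sampled LLM outputs), filtered by role constraints; fall back to a
    safe utterance if every candidate is rejected."""
    for utterance in candidates:
        if is_permitted(utterance, forbidden_norms):
            return utterance
    return fallback

# A mediator role forbids siding with any party.
norms = ["you are right and they are wrong", "i side with"]
candidates = [
    "I side with the engineering team here.",
    "Let's list the interests both teams share.",
]
print(hybrid_step(candidates, norms, "Could each side restate its goal?"))
# → "Let's list the interests both teams share."
```

Real systems would replace the phrase match with a norm model, but the division of labour — generation by the LLM, legitimacy by the role layer — is the shape the paper's research direction suggests.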

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Current LLM agent systems may need retrofitting with persona layers to improve social coherence.
  • Testing in specific domains like negotiation or team collaboration could validate the approach.
  • Connections to traditional multi-agent system theories might strengthen the grounding provided by personas.

Load-bearing premise

Operationalizing social roles through persona descriptions supplies enough information about norms, intentions, and contextual constraints for language models to generate socially intelligible behavior.

What would settle it

A demonstration that LLM agents produce consistent, norm-following social interactions in complex scenarios using only general language instructions without any persona or role definitions would falsify the necessity claim.

Figures

Figures reproduced from arXiv:2605.02335 by Moharram Challenger and Önder Gürcan.

Figure 1
Figure 1: An LLM-enabled Social Agent’s Internal Architecture (improved from [50]). The key conceptual step is to define LLM-enabled social agents through roles rather than generic capability bundles. In social environments, an agent is a situated participant that occupies one or more roles, and these roles shape what it treats as relevant, what it prioritizes, what actions are legitimate, and how its behaviour is in…
Original abstract

Large Language Models (LLMs) have transformed agent-agent and human-agent interaction by enabling software, physical, and simulation agents to communicate and deliberate through natural language. Yet fluent language use does not by itself yield socially intelligible behaviour. Most current systems remain weakly grounded in roles, norms, intentions, and contextual constraints, limiting their capacity for meaningful participation in social environments. This paper develops a conceptual baseline for LLM-enabled social agents by arguing that they should be grounded in role definitions operationalized through persona descriptions. On this basis, we outline research directions for representation, hybrid control, and evaluation. The paper concludes that persona-based role definitions are a necessary foundation for turning language competence into social behaviour.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 1 minor

Summary. The manuscript claims that while LLMs enable natural-language communication and deliberation among agents, fluent language use alone fails to produce socially intelligible behavior because current systems are weakly grounded in roles, norms, intentions, and contextual constraints. It develops a conceptual baseline by arguing that agents should be grounded in role definitions operationalized through persona descriptions, outlines research directions in representation, hybrid control, and evaluation, and concludes that persona-based role definitions constitute a necessary foundation for converting language competence into social behavior.

Significance. If the central claim holds, the work could supply a useful organizing framework for future empirical and engineering efforts in LLM-based multi-agent systems and human-AI interaction by highlighting the need for explicit social grounding. The paper receives credit for clearly identifying a limitation of existing language-only agents and for sketching concrete research directions rather than stopping at critique.

major comments (2)
  1. [Abstract] Abstract: the claim that 'persona-based role definitions are a necessary foundation' is presented without a mechanism, formal distinction, or comparative argument showing why natural-language persona specifications escape the weak-grounding problem the paper attributes to language use in general (including system prompts and few-shot examples). This distinction is load-bearing for the necessity assertion.
  2. [Abstract] The manuscript provides no empirical data, formal derivation, or falsifiable test of the sufficiency of persona descriptions for producing socially intelligible behavior, leaving the central claim as an untested assertion rather than a demonstrated result.
minor comments (1)
  1. [Abstract] The abstract and conclusion could more explicitly link the three outlined research directions (representation, hybrid control, evaluation) to concrete ways of validating the persona-grounding hypothesis.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and insightful comments, which help sharpen the scope of our conceptual contribution. We address each major comment below and indicate the revisions made to the manuscript.

Point-by-point responses
  1. Referee: [Abstract] Abstract: the claim that 'persona-based role definitions are a necessary foundation' is presented without a mechanism, formal distinction, or comparative argument showing why natural-language persona specifications escape the weak-grounding problem the paper attributes to language use in general (including system prompts and few-shot examples). This distinction is load-bearing for the necessity assertion.

    Authors: The manuscript distinguishes persona-based role definitions from general language mechanisms by arguing that personas explicitly operationalize stable roles, norms, intentions, and contextual constraints, whereas typical system prompts and few-shot examples remain ad-hoc and lack such anchoring. This is developed through the paper's analysis of current LLM agent limitations. We agree that the necessity claim would benefit from a more explicit comparative argument and illustrative contrast. We have therefore expanded the abstract and the section on representation to include a clearer mechanism and side-by-side comparison of persona specifications versus generic prompts. revision: partial

  2. Referee: [Abstract] The manuscript provides no empirical data, formal derivation, or falsifiable test of the sufficiency of persona descriptions for producing socially intelligible behavior, leaving the central claim as an untested assertion rather than a demonstrated result.

    Authors: The paper is framed as a conceptual baseline that identifies a limitation in existing systems and proposes persona-based grounding as a necessary foundation, together with research directions for representation, hybrid control, and evaluation. It does not claim empirical sufficiency or provide tests, as these are positioned as future work. We have revised the abstract and conclusion to state this scope more explicitly and to emphasize that the contribution is the identification of the grounding requirement rather than a demonstration of sufficiency. revision: yes

Circularity Check

0 steps flagged

No circularity: conceptual proposal from observed limitation to suggested remedy

Full rationale

The paper begins from the stated limitation that fluent language use in LLMs does not produce socially intelligible behavior due to weak grounding in roles, norms, intentions, and constraints. It then argues for grounding via role definitions operationalized as persona descriptions and outlines future directions. This is a forward conceptual argument rather than a derivation. No equations, fitted parameters, self-referential definitions, or load-bearing self-citations appear in the abstract or described structure. The conclusion that persona-based definitions are necessary is presented as an argued position, not a reduction to the input by construction. The available text shows none of the enumerated circular patterns.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The paper rests on domain assumptions about the nature of social behavior and the effectiveness of persona descriptions, without independent evidence or formal support.

axioms (2)
  • domain assumption Fluent language use by LLMs does not by itself produce socially intelligible behavior.
    Explicitly stated in the abstract as the core limitation motivating the work.
  • domain assumption Role definitions can be operationalized through persona descriptions to supply the missing grounding in norms, intentions, and context.
    Central premise of the proposed baseline; no supporting derivation or evidence is given.

pith-pipeline@v0.9.0 · 5400 in / 1329 out tokens · 62273 ms · 2026-05-08T02:10:29.704996+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

71 extracted references

  1. [1]

    Plan-Then-Execute: An Empirical Study of User Trust and Team Performance When Using LLM Agents As A Daily Assistant; 2025

    He G, Demartini G, Gadiraju U. Plan-Then-Execute: An Empirical Study of User Trust and Team Performance When Using LLM Agents As A Daily Assistant; 2025

  2. [2]

    Mango Mango, How to Let The Lettuce Dry Without A Spinner?

    Chan S, Li J, Yao B, Mahmood A, Huang CM, Jimison H, et al. “Mango Mango, How to Let The Lettuce Dry Without A Spinner?”: Exploring User Perceptions of Using An LLM-Based Conversational Assistant Toward Cooking Partner. Proceedings of the ACM on Human-Computer Interaction. 2025;9(7)

  3. [3]

    A Plan Reuse Mechanism for LLM-Driven Agent

    Li G, Wu R, Tan H, Chen G. A Plan Reuse Mechanism for LLM-Driven Agent. Jisuanji Yanjiu yu Fazhan/Computer Research and Development. 2024;61(11):2706 – 2720

  4. [4]

    From Prompt to Action: A Comprehensive Review of LLM Autonomous Agents; 2025

    Rafique Z, Wasim M, Hussain M, Hussain M, Memon MI. From Prompt to Action: A Comprehensive Review of LLM Autonomous Agents; 2025

  5. [5]

    LLM-Powered AI Agent Systems and Their Applications in Industry; 2025

    Liang G, Tong Q. LLM-Powered AI Agent Systems and Their Applications in Industry; 2025. p. 463 – 471

  6. [6]

    Large Language Model Agents for Biomedicine: A Comprehensive Review of Methods, Evaluations, Challenges, and Future Directions

    Xu X, Sankar R. Large Language Model Agents for Biomedicine: A Comprehensive Review of Methods, Evaluations, Challenges, and Future Directions. Information (Switzerland). 2025;16(10)

  7. [7]

    The rise and potential opportunities of large language model agents in bioinformatics and biomedicine

    Yang T, Xiao Y, Bao Z, Hao J, Peng J. The rise and potential opportunities of large language model agents in bioinformatics and biomedicine. Briefings in Bioinformatics. 2025;26(6)

  8. [8]

    Leveraging Multi-Agent System Powered by Large Language Model to Improve Transparency and Reliability in Automated Supply Chain Coordination

    Zhao X, Kuang W, Kim YW. Leveraging Multi-Agent System Powered by Large Language Model to Improve Transparency and Reliability in Automated Supply Chain Coordination. Annual Conference of the International Group for Lean Construction, IGLC. 2025;33:681 – 692

  9. [9]

    Towards an LLM-powered social digital twinning platform

    Gürcan Ö, Falck V, Rousseau MG, Lima LL. Towards an LLM-powered social digital twinning platform. In: International Conference on Practical Applications of Agents and Multi-Agent Systems. Springer

  10. [10]

    LLM-Augmented Agent-Based Modelling for Social Simulations: Challenges and Opportunities

    Gürcan Ö. LLM-Augmented Agent-Based Modelling for Social Simulations: Challenges and Opportunities. Frontiers in Artificial Intelligence and Applications. 2024;386:134 – 144

  11. [11]

    Mind the (Belief) Gap: Group Identity in the World of LLMs; 2025

    Borah A, Houalla M, Mihalcea R. Mind the (Belief) Gap: Group Identity in the World of LLMs; 2025. p. 18441 – 18463

  12. [12]

    From language to action: a review of large language models as autonomous agents and tool users

    Chowa SS, Alvi R, Rahman SS, Rahman MA, Raiaan MAK, Islam MR, et al. From language to action: a review of large language models as autonomous agents and tool users. Artificial Intelligence Review. 2026;59(2)

  13. [13]

    LLM-Based Multi-agent Systems: Frameworks, Evaluation, Open Challenges, and Research Frontiers

    Shaikh SH. LLM-Based Multi-agent Systems: Frameworks, Evaluation, Open Challenges, and Research Frontiers. Communications in Computer and Information Science. 2026;2827 CCIS:149 – 170

  14. [14]

    Advancements in Multi-Agent Large Language Model Systems for Next-Generation AI; 2025

    Sissodia R, Dwivedi V, Verma T. Advancements in Multi-Agent Large Language Model Systems for Next-Generation AI; 2025

  15. [15]

    Designing Intelligent Agents for Students With Disabilities: Promoting Inclusion and Equity Through the Lens of Cultural-Historical Activity Theory

    Zhang L, Carter RA, Lim SN. Designing Intelligent Agents for Students With Disabilities: Promoting Inclusion and Equity Through the Lens of Cultural-Historical Activity Theory. Journal of Special Education Technology. 2025

  16. [16]

    Survey of Different Large Language Model Architectures: Trends, Benchmarks, and Challenges

    Shao M, Basit A, Karri R, Shafique M. Survey of Different Large Language Model Architectures: Trends, Benchmarks, and Challenges. IEEE Access. 2024;12:188664 – 188706

  17. [17]

    Are Large Language Models the New Interface for Data Pipelines?; 2024

    Barbon S, Ceravolo P, Groppe S, Jarrar M, Maghool S, Sèdes F, et al. Are Large Language Models the New Interface for Data Pipelines?; 2024

  18. [18]

    In-Context Learning in Large Language Models (LLMs): Mechanisms, Capabilities, and Implications for Advanced Knowledge Representation and Reasoning

    Mohamed A, Rashid ME, Shaalan K. In-Context Learning in Large Language Models (LLMs): Mechanisms, Capabilities, and Implications for Advanced Knowledge Representation and Reasoning. IEEE Access. 2025;13:95574 – 95593

  19. [19]

    A Review on Large Language Models: Architectures, Applications, Taxonomies, Open Issues and Challenges

    Raiaan MAK, Mukta MSH, Fatema K, Fahad NM, Sakib S, Mim MMJ, et al. A Review on Large Language Models: Architectures, Applications, Taxonomies, Open Issues and Challenges. IEEE Access. 2024;12:26839 – 26874

  20. [20]

    Large Language Models and Their Applications in the Military Field; 2025

    Zhao Z, Wu Z, Meng Q. Large Language Models and Their Applications in the Military Field; 2025. p. 311 – 319

  21. [21]

    The Applications of Large Language Models in Emergency Management; 2024

    Jiang Y. The Applications of Large Language Models in Emergency Management; 2024. p. 199 – 202

  22. [22]

    Comprehensive survey on communication structure adaptive control and collaborative optimization for multi-agent systems based on deepseek large language model

    Xiang B. Comprehensive survey on communication structure adaptive control and collaborative optimization for multi-agent systems based on deepseek large language model. vol. 13800; 2025

  23. [23]

    Enhancing Telecom Operation Support Systems with Multi-Agent Large Language Models; 2025

    Lee KY, Tseng PA, Chen WC. Enhancing Telecom Operation Support Systems with Multi-Agent Large Language Models; 2025. p. 309 – 316

  24. [24]

    Large Language Model-Enabled Multi-Agent Manufacturing Systems; 2024

    Lim J, Vogel-Heuser B, Kovalenko I. Large Language Model-Enabled Multi-Agent Manufacturing Systems; 2024. p. 3940 – 3946

  25. [25]

    Dynamic Consensus Communication Mechanism for Large Language Model-Based Multi-Agent Systems

    Yang L, Li S, Deng A. Dynamic Consensus Communication Mechanism for Large Language Model-Based Multi-Agent Systems. Journal of Signal Processing Systems. 2026;98(1)

  26. [26]

    LLM-Powered Multi-Agent Systems: A Technical Framework for Collaborative Intelligence Through Optimized Knowledge Retrieval and Communication; 2025

    Gogineni V. LLM-Powered Multi-Agent Systems: A Technical Framework for Collaborative Intelligence Through Optimized Knowledge Retrieval and Communication; 2025. p. 452 – 456

  27. [27]

    RECONCILE: Round-Table Conference Improves Reasoning via Consensus among Diverse LLMs

    Chen JCY, Saha S, Bansal M. RECONCILE: Round-Table Conference Improves Reasoning via Consensus among Diverse LLMs. vol. 1; 2024. p. 7066 – 7085

  28. [28]

    Embodied LLM Agents Learn to Cooperate in Organized Teams

    Guo X, Huang K, Liu J, Fan W, Velez N, Wu Q, et al. Embodied LLM Agents Learn to Cooperate in Organized Teams. IEEE Transactions on Computational Social Systems. 2026

  29. [29]

    The AI Hippocampus: How Far are We From Human Memory?

    Jia Z, Li J, Kang Y, Wang Y, Wu T, Wang Q, et al. The AI Hippocampus: How Far are We From Human Memory? Transactions on Machine Learning Research. 2025. Available from: https://openreview.net/forum?id=Sk7pwmLuAY

  30. [30]

    In Prospect and Retrospect: Reflective Memory Management for Long-term Personalized Dialogue Agents

    Tan Z, Yan J, Hsu IH, Han R, Wang Z, Le LT, et al. In Prospect and Retrospect: Reflective Memory Management for Long-term Personalized Dialogue Agents. vol. 1; 2025. p. 8416 – 8439

  31. [31]

    SAGE: Self-evolving Agents with Reflective and Memory-augmented Abilities

    Liang X, Tao M, Xia Y, Wang J, Li K, Wang Y, et al. SAGE: Self-evolving Agents with Reflective and Memory-augmented Abilities. Neurocomputing. 2025;647

  32. [32]

    FuseMind: Fusing reflection and prediction elevates agent’s reasoning capabilities

    Ma X, Ge X, Xu H, Fu D, Wang Z, Liu Y, et al. FuseMind: Fusing reflection and prediction elevates agent’s reasoning capabilities. Neurocomputing. 2025;658

  33. [33]

    From prompt to persona: a literature review on LLMs as single cognitive agents

    Tissaoui A. From prompt to persona: a literature review on LLMs as single cognitive agents. Journal of Ambient Intelligence and Humanized Computing. 2026;17(1):205 – 221

  34. [34]

    Generative agents: Interactive simulacra of human behavior

    Park JS, O’Brien J, Cai CJ, Morris MR, Liang P, Bernstein MS. Generative agents: Interactive simulacra of human behavior. In: Proceedings of the 36th annual acm symposium on user interface software and technology; 2023. p. 1-22

  35. [35]

    Getting LLM to think and act like a human being: logical path reasoning and replanning; 2025

    Zhang L, Li Q, Wang Y, Zhao J. Getting LLM to think and act like a human being: logical path reasoning and replanning; 2025. p. 301 – 308

  36. [36]

    Multi-Agent Autonomous Driving Systems with Large Language Models: A Survey of Recent Advances, Resources, and Future Directions; 2025

    Wu Y, Li D, Chen Y, Jiang R, Zou HP, Huang WC, et al. Multi-Agent Autonomous Driving Systems with Large Language Models: A Survey of Recent Advances, Resources, and Future Directions; 2025. p. 12756 – 12773

  37. [37]

    Harnessing the Power of LLMs for Normative Reasoning in MASs

    Savarimuthu BTR, Ranathunga S, Cranefield S. Harnessing the Power of LLMs for Normative Reasoning in MASs. Lecture Notes in Computer Science. 2025;15398 LNAI:132 – 145

  38. [38]

    A Framework for Efficient Development and Debugging of Role-Playing Agents with Large Language Models; 2025

    Takagi H, Moriya S, Sato T, Nagao M, Higuchi K. A Framework for Efficient Development and Debugging of Role-Playing Agents with Large Language Models; 2025. p. 70 – 88

  39. [39]

    Auto-scaling LLM-based multi-agent systems through dynamic integration of agents

    Perera R, Basnayake A, Wickramasinghe M. Auto-scaling LLM-based multi-agent systems through dynamic integration of agents. Frontiers in Artificial Intelligence. 2025;8

  40. [40]

    The Double-Edged Sword: Exploring Older Adults’ Interaction and Imagination with an LLM-Enhanced Health Agent; 2026

    Munz LPM, Neef C, Preusser I, Richert A. The Double-Edged Sword: Exploring Older Adults’ Interaction and Imagination with an LLM-Enhanced Health Agent; 2026. p. 95 – 104

  41. [41]

    AgentSense: Benchmarking Social Intelligence of Language Agents through Interactive Scenarios

    Mou X, Liang J, Lin J, Zhang X, Liu X, Yang S, et al. AgentSense: Benchmarking Social Intelligence of Language Agents through Interactive Scenarios. vol. 1; 2025. p. 4975 – 5001

  42. [42]

    Social agents

    Foo N. Social agents. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). 2007;4830 LNAI:14

  43. [43]

    A Theory of Social Agency for Human-Robot Interaction

    Jackson RB, Williams T. A Theory of Social Agency for Human-Robot Interaction. Frontiers in Robotics and AI. 2021;8

  44. [44]

    BIO logical agents: Norms, beliefs, intentions in defeasible logic

    Governatori G, Rotolo A. BIO logical agents: Norms, beliefs, intentions in defeasible logic. Autonomous Agents and Multi-Agent Systems. 2008;17(1):36 – 69

  45. [45]

    Accounting for social order in multi-agent systems: Preliminary report; 2004

    Fasli M. Accounting for social order in multi-agent systems: Preliminary report; 2004. p. 204 – 210

  46. [46]

    Modelling social agents: Communication as action

    Dignum F, van Linder B. Modelling social agents: Communication as action. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). 2015;1193:205 – 218

  47. [47]

    From automata to animate beings: The scope and limits of attributing socialness to artificial agents

    Hortensius R, Cross ES. From automata to animate beings: The scope and limits of attributing socialness to artificial agents. Annals of the New York Academy of Sciences. 2018;1426(1):93 – 110

  48. [48]

    From robots to chatbots: unveiling the dynamics of human-AI interaction

    Lukasik A, Gut A. From robots to chatbots: unveiling the dynamics of human-AI interaction. Frontiers in Psychology. 2025;16

  49. [49]

    Social role as one explanatory link between individual and organizational levels in meso-ergonomic frameworks

    Sanders MJ. Social role as one explanatory link between individual and organizational levels in meso-ergonomic frameworks. vol. 2014-January; 2014. p. 1400 – 1404

  50. [50]

    A Hybrid Role-Based Reference Architecture for LLM-Enhanced Multi-Agent Systems

    Asici TZ, Gürcan Ö, Kardas G. A Hybrid Role-Based Reference Architecture for LLM-Enhanced Multi-Agent Systems. In: Proceedings of the International Workshop on Engineering Multi-Agent Systems (EMAS 2026); 2026. To appear.

  51. [51]

    JSAN: A framework to implement normative agents

    Viana M, Alencar P, Guimarães E, Cunha F, Cowan D, Lucena C. JSAN: A framework to implement normative agents. vol. 2015-January; 2015. p. 660 – 665

  52. [52]

    An architectural model for autonomous normative agents

    Dos Santos Neto BF, da Silva VT, De Lucena CJP. An architectural model for autonomous normative agents. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). 2012;7589:152 – 161

  53. [53]

    Commitments and the sense of joint agency

    Fernández-Castro V, Pacherie E. Commitments and the sense of joint agency. Mind and Language. 2023;38(3):889 – 906

  54. [54]

    Coordinating Agents in Organizations Using Social Commitments

    Carabelea C, Boissier O. Coordinating Agents in Organizations Using Social Commitments. Electronic Notes in Theoretical Computer Science. 2006;150(3 SPEC. ISS.):73 – 91

  55. [55]

    On the relationship between roles and power: Preliminary report

    Fasli M. On the relationship between roles and power: Preliminary report. vol. 1; 2006. p. 313 – 318

  56. [56]

    One for all, all for one: Agents with social identities; 2013

    Dimas J, Lopes P, Prada R. One for all, all for one: Agents with social identities; 2013. p. 2195 – 2200

  57. [57]

    Machine Learning and Deep Learning frameworks and libraries for large-scale data mining: a survey

    Nguyen G, Dlugolinsky S, Bobak M, Tran V, López García A, Heredia I, et al. Machine Learning and Deep Learning frameworks and libraries for large-scale data mining: a survey. Artificial Intelligence Review. 2019;52(1):77 – 124

  58. [58]

    Guidelines to apply CBR in real-time multi-agent systems

    Navarro M, Heras S, Julián V. Guidelines to apply CBR in real-time multi-agent systems. Journal of Physical Agents. 2009;3(3):39 – 47

  59. [59]

    On Becoming Decreasingly Reactive: Learning to Deliberate Minimally; 1991

    Chien SA, Gervasio MT, DeJong GF. On Becoming Decreasingly Reactive: Learning to Deliberate Minimally; 1991. p. 288 – 292

  60. [60]

    Hierarchical planning: Relating task and goal decomposition with task sharing

    Alford R, Shivashankar V, Roberts M, Frank J, Aha DW. Hierarchical planning: Relating task and goal decomposition with task sharing. vol. 2016-January; 2016. p. 3022 – 3028

  61. [61]

    Information-based planning and strategies

    Debenham J. Information-based planning and strategies. IFIP International Federation for Information Processing. 2008;276:45 – 54

  62. [62]

    A formal argumentation framework for deliberation dialogues

    Kok EM, Meyer JJC, Prakken H, Vreeswijk GAW. A formal argumentation framework for deliberation dialogues. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). 2011;6614 LNAI:31 – 48

  63. [63]

    Argumentation-based dialogues for deliberation; 2005

    Tang Y, Parsons S. Argumentation-based dialogues for deliberation; 2005. p. 683 – 690

  64. [64]

    DynTaskMAS: A Dynamic Task Graph-driven Framework for Asynchronous and Parallel LLM-based Multi-Agent Systems

    Yu J, Ding Y, Sato H. DynTaskMAS: A Dynamic Task Graph-driven Framework for Asynchronous and Parallel LLM-based Multi-Agent Systems. vol. 35; 2025. p. 288 – 296

  65. [65]

    A pragmatic approach to build conversation protocols using social commitments

    Flores RA, Kremer RC. A pragmatic approach to build conversation protocols using social commitments. vol. 3; 2004. p. 1242 – 1243

  66. [66]

    Adapting conversational strategies in information-giving human-agent interaction

    Galland L, Pelachaud C, Pecune F. Adapting conversational strategies in information-giving human-agent interaction. Frontiers in Artificial Intelligence. 2022;5

  67. [67]

    Practical empathy: The duality of social and transactional roles of conversational agents in giving health advice; 2021

    Ghosh D, Faik I. Practical empathy: The duality of social and transactional roles of conversational agents in giving health advice; 2021

  68. [68]

    Challenges in exploiting conversational memory in human-agent interaction

    Campos J, Kennedy J, Lehman JF. Challenges in exploiting conversational memory in human-agent interaction. vol. 3; 2018. p. 1649 – 1657

  69. [69]

    ForgetMeNot: What and how users expect intelligent virtual agents to recall and forget personal conversational content

    Richards D, Bransky K. ForgetMeNot: What and how users expect intelligent virtual agents to recall and forget personal conversational content. International Journal of Human Computer Studies. 2014;72(5):460 – 476

  70. [70]

    Memory and the design of migrating virtual agents

    Aylett R, Kriegel M, Wallace I, Segura E, Mercurio J, Nylander S. Memory and the design of migrating virtual agents. vol. 2; 2013. p. 1311 – 1312

  71. [71]

    The Practical Approaches of Datasets in Machine Learning

    Shaheen H, Marimuthu P, Rashmi D. The Practical Approaches of Datasets in Machine Learning. Lecture Notes in Networks and Systems. 2024;1032 LNNS:221 – 227