Memory-Augmented LLM-based Multi-Agent System for Automated Feature Generation on Tabular Data
Pith reviewed 2026-05-10 00:35 UTC · model grok-4.3
The pith
A multi-agent LLM system with procedural, feedback, and conceptual memory generates higher-quality and more diverse features for tabular data via iterative refinement.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
MALMAS decomposes the generation process into agents with distinct responsibilities, and a Router Agent activates an appropriate subset of agents per iteration, further broadening exploration of the feature space. We further integrate a memory module comprising procedural memory, feedback memory, and conceptual memory, enabling iterative refinement that adaptively guides subsequent feature generation and improves feature quality and diversity.
What carries the argument
The memory module (procedural memory for steps, feedback memory from learning objectives, conceptual memory for task semantics) together with the Router Agent that selects which specialized agents to run each iteration.
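The abstract does not publish the control flow, but the described interplay can be sketched. Everything below (class and function names, the random subset rule standing in for the router's decision, the score threshold for procedural memory) is an assumption for illustration, not the authors' implementation.

```python
import random

class MemoryModule:
    """Hypothetical three-part memory; the names mirror the paper's description."""
    def __init__(self):
        self.procedural = []   # successful generation steps (e.g. operator sequences)
        self.feedback = []     # (feature, evaluation score) signals
        self.conceptual = []   # task-semantic notes (e.g. column descriptions)

    def context(self):
        # Assemble a prompt context from all three memories:
        # recent steps, best-scoring feedback, and all conceptual notes.
        return {
            "procedural": self.procedural[-5:],
            "feedback": sorted(self.feedback, key=lambda x: -x[1])[:5],
            "conceptual": self.conceptual,
        }

def router(agents, memory, k=2):
    """Toy router: activate a subset of agents each iteration.
    The paper does not state the selection rule; a random subset
    stands in for the learned / LLM-driven choice."""
    return random.sample(agents, k=min(k, len(agents)))

def iterate(agents, memory, evaluate, rounds=3):
    """One refinement loop: route, generate, score, write memory."""
    for _ in range(rounds):
        for agent in router(agents, memory):
            feature = agent(memory.context())
            score = evaluate(feature)
            memory.feedback.append((feature, score))
            if score > 0:                       # assumed keep rule
                memory.procedural.append(f"kept:{feature}")
    return memory
```

With three toy agents and a constant evaluator, three rounds of two activated agents each produce six feedback entries, which is the sense in which varying the active subset widens exploration.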
If this is right
- Feature generation shifts from fixed patterns to adaptive, objective-driven iteration.
- The explored feature space widens because the router activates different agent subsets each round.
- Feature quality rises as feedback memory stores and reuses signals from the learning objective.
- Diversity increases through conceptual memory that injects task-specific semantics.
- Downstream tabular models achieve better accuracy and generalization without manual intervention.
Where Pith is reading between the lines
- The same memory structure could be tested in other agent workflows such as automated data cleaning to see whether feedback memory reduces repeated errors.
- If the router learns which agent combinations work best for given data characteristics, the system might scale to very large feature spaces without exhaustive search.
- Conceptual memory could be extended to pull in external domain descriptions, turning the method into a lightweight knowledge-augmented generator.
- Direct coupling of the feedback memory to a model's training loss might create a tighter optimization loop than the current post-generation evaluation.
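The last bullet's tighter loop could be sketched as a feedback-memory update keyed to observed training-loss change. The interface below (`update_feedback_memory`, the `loss_delta` field, the top-10 cutoff) is entirely hypothetical; the paper describes post-generation evaluation, not this coupling.

```python
def update_feedback_memory(memory, feature_name, loss_before, loss_after):
    """Store how much a generated feature moved the training loss, so later
    prompts can prefer loss-reducing patterns. Hypothetical interface."""
    delta = loss_before - loss_after            # positive = feature helped
    memory.append({"feature": feature_name, "loss_delta": delta})
    memory.sort(key=lambda e: -e["loss_delta"])
    return memory[:10]                          # keep only the most helpful entries
```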
Load-bearing premise
That the LLM agents will reliably interpret and apply the three memory types to produce consistent improvements, without hallucinations or erratic routing decisions eroding the gains.
What would settle it
Re-running the experiments on the same public datasets with the memory module disabled and observing no statistically significant drop in either feature diversity metrics or final model accuracy.
Original abstract
Automated feature generation extracts informative features from raw tabular data without manual intervention and is crucial for accurate, generalizable machine learning. Traditional methods rely on predefined operator libraries and cannot leverage task semantics, limiting their ability to produce diverse, high-value features for complex tasks. Recent Large Language Model (LLM)-based approaches introduce richer semantic signals, but still suffer from a restricted feature space due to fixed generation patterns and from the absence of feedback from the learning objective. To address these challenges, we propose a Memory-Augmented LLM-based Multi-Agent System (MALMAS) for automated feature generation. MALMAS decomposes the generation process into agents with distinct responsibilities, and a Router Agent activates an appropriate subset of agents per iteration, further broadening exploration of the feature space. We further integrate a memory module comprising procedural memory, feedback memory, and conceptual memory, enabling iterative refinement that adaptively guides subsequent feature generation and improves feature quality and diversity. Extensive experiments on multiple public datasets against state-of-the-art baselines demonstrate the effectiveness of our approach. The code is available at https://github.com/fxdong24/MALMAS
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes MALMAS, a Memory-Augmented LLM-based Multi-Agent System for automated feature generation on tabular data. It decomposes the generation process into agents with distinct responsibilities activated by a Router Agent per iteration to broaden feature space exploration, and integrates a memory module with procedural, feedback, and conceptual components for iterative refinement to improve feature quality and diversity. The approach is evaluated via experiments on multiple public datasets against state-of-the-art baselines, with code released at a GitHub repository.
Significance. If the empirical claims hold under rigorous controls, the work could meaningfully advance automated feature engineering by combining LLM semantic reasoning with structured multi-agent decomposition and memory-driven iteration, addressing gaps in traditional operator-based methods and fixed-pattern LLM approaches. The open-source code release supports reproducibility and is a clear strength.
Major comments (2)
- Abstract and Experiments section: The central claim that the memory-augmented multi-agent system produces higher-quality and more diverse features rests on experimental results, yet the manuscript provides no details on the specific baselines, evaluation metrics, statistical significance tests, number of runs, or controls for LLM stochasticity and hallucinations, leaving the effectiveness assertion with limited verifiable support.
- Memory module and Router Agent description: The claim that procedural, feedback, and conceptual memory enable adaptive iterative refinement is load-bearing for the novelty, but the manuscript lacks concrete implementation details on memory update/retrieval mechanisms, interaction with the Router Agent, or safeguards against inconsistent routing and hallucinations, making it difficult to assess whether the system reliably broadens exploration.
Minor comments (2)
- Abstract: The acronym MALMAS is introduced in boldface, but its expansion should be restated on first use in the main body for reader clarity.
- Related work section: Ensure comprehensive citations to prior LLM-based feature generation and multi-agent systems to clearly delineate the incremental contribution.
Simulated Author's Rebuttal
We are grateful to the referee for the thoughtful and constructive review of our manuscript on MALMAS. We address each of the major comments below and outline the revisions we plan to implement to strengthen the paper.
Point-by-point responses
- Referee: Abstract and Experiments section: The central claim that the memory-augmented multi-agent system produces higher-quality and more diverse features rests on experimental results, yet the manuscript provides no details on the specific baselines, evaluation metrics, statistical significance tests, number of runs, or controls for LLM stochasticity and hallucinations, leaving the effectiveness assertion with limited verifiable support.
Authors: We agree that the current level of detail is insufficient for full verifiability. In the revised manuscript, we will expand the Experiments section to specify all baselines with their configurations and references, the full evaluation metrics (AUC-ROC, F1-score, and others), the number of independent runs (10 per dataset), statistical significance testing via paired t-tests with reported p-values, and controls for LLM stochasticity including fixed temperature settings, seed values, and multi-sample generation with validation to reduce hallucinations. We will also add a brief mention of the evaluation rigor to the abstract. revision: yes
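The promised protocol (fixed seeds, independent runs, summary statistics) might be wired up as follows. `run_fn` is a placeholder for one full generate-and-evaluate run; nothing here is an interface from the paper.

```python
import statistics

def evaluate_protocol(run_fn, n_runs=10, base_seed=0):
    """Repeated evaluation with fixed, distinct seeds, as the rebuttal
    promises: n independent runs per dataset, reported as mean and std.
    `run_fn(seed)` is a hypothetical stand-in for one full run."""
    scores = [run_fn(base_seed + i) for i in range(n_runs)]
    return {
        "mean": statistics.mean(scores),
        "std": statistics.stdev(scores),
        "runs": scores,
    }
```

Reporting the per-seed score list alongside the summary is what makes downstream paired significance tests against baselines possible.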
- Referee: Memory module and Router Agent description: The claim that procedural, feedback, and conceptual memory enable adaptive iterative refinement is load-bearing for the novelty, but the manuscript lacks concrete implementation details on memory update/retrieval mechanisms, interaction with the Router Agent, or safeguards against inconsistent routing and hallucinations, making it difficult to assess whether the system reliably broadens exploration.
Authors: We recognize the need for greater specificity to substantiate the novelty. The revised paper will add a dedicated subsection with pseudocode describing the update and retrieval mechanisms for each memory type, the Router Agent's decision logic and its queries to memory, and implemented safeguards such as output parsing, consistency verification across agents, and feature validation steps to address hallucinations and routing inconsistencies. These details will clarify how the system supports reliable iterative refinement and broader feature exploration. revision: yes
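One concrete safeguard of the kind the response promises is defensive parsing of the router's decision. The JSON schema (`{"activate": [...]}`) and the fallback rule below are assumptions for illustration, not the authors' design.

```python
import json

def safe_parse(llm_output, allowed_agents):
    """Safeguard sketch: parse the router's JSON decision, discard unknown
    agent names, and fall back to a default subset on malformed output,
    so a hallucinated or garbled routing decision can never stall the loop."""
    try:
        chosen = json.loads(llm_output).get("activate", [])
    except (json.JSONDecodeError, AttributeError):
        chosen = []                       # unparseable or non-object output
    chosen = [a for a in chosen if a in allowed_agents]
    return chosen or allowed_agents[:1]   # never route to an empty agent set
```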
Circularity Check
No significant circularity detected
Full rationale
The paper proposes an empirical multi-agent architecture (MALMAS) with router and three memory modules for tabular feature generation. No mathematical derivations, equations, fitted parameters, or predictions appear in the abstract or described structure. Claims rest on external experiments against baselines rather than internal self-definitions or self-citation chains. The design is self-contained as an engineering proposal; any potential self-citations are not load-bearing for the core claims.
Axiom & Free-Parameter Ledger
Axioms (1)
- Domain assumption: Large language models can leverage task semantics to generate diverse, high-value features when structured with agents and memory.
Invented entities (3)
- MALMAS (Memory-Augmented LLM-based Multi-Agent System) (no independent evidence)
- Router Agent (no independent evidence)
- Procedural memory, feedback memory, and conceptual memory (no independent evidence)