pith. machine review for the scientific record.

arxiv: 2510.17516 · v4 · submitted 2025-10-20 · 💻 cs.CL · cs.AI · cs.CY · cs.LG

Recognition: unknown

SimBench: Benchmarking the Ability of Large Language Models to Simulate Human Behaviors

Authors on Pith no claims yet
classification 💻 cs.CL · cs.AI · cs.CY · cs.LG
keywords simulation · human · large · simbench · ability · behaviors · diverse · fidelity
read the original abstract

Large language model (LLM) simulations of human behavior have the potential to revolutionize the social and behavioral sciences, if and only if they faithfully reflect real human behaviors. Current evaluations of simulation fidelity are fragmented, based on bespoke tasks and metrics, creating a patchwork of incomparable results. To address this, we introduce SimBench, the first large-scale, standardized benchmark for a robust, reproducible science of LLM simulation. By unifying 20 diverse datasets covering tasks from moral decision-making to economic choice across a large global participant pool, SimBench provides the necessary foundation to ask fundamental questions about when, how, and why LLM simulations succeed or fail. We show that the best LLMs today achieve meaningful but modest simulation fidelity (score: 40.80/100), with performance scaling log-linearly with model size but not with increased inference-time compute. We discover an alignment-simulation tradeoff: instruction tuning improves performance on low-entropy (consensus) questions but degrades it on high-entropy (diverse) ones. Models particularly struggle when simulating specific demographic groups. Finally, we demonstrate that simulation ability correlates most strongly with knowledge-intensive reasoning (MMLU-Pro, r = 0.939). By making progress measurable, we aim to accelerate the development of more faithful LLM simulators.
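The alignment-simulation tradeoff reported above rests on splitting questions by the entropy of their human response distributions. As a hypothetical illustration (SimBench's actual scoring procedure is not reproduced here, and the `response_entropy` helper is an assumption), Shannon entropy over answer-option frequencies is one natural way to separate consensus questions from diverse ones:

```python
import math
from collections import Counter

def response_entropy(answers):
    """Shannon entropy (bits) of a question's human answer distribution."""
    counts = Counter(answers)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Near-consensus question: 9 of 10 respondents agree -> low entropy.
low = response_entropy(["yes"] * 9 + ["no"])

# Diverse question: answers spread evenly over four options -> high entropy.
high = response_entropy(["a", "b", "c", "d"] * 5)

print(round(low, 3))   # → 0.469
print(round(high, 3))  # → 2.0 (the maximum for 4 options)
```

Under a split like this, the paper's finding is that instruction-tuned models do better on the low-entropy bucket and worse on the high-entropy one than their base counterparts.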

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 4 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Towards Real-world Human Behavior Simulation: Benchmarking Large Language Models on Long-horizon, Cross-scenario, Heterogeneous Behavior Traces

    cs.CL 2026-04 unverdicted novelty 7.0

The OmniBehavior benchmark demonstrates that LLMs simulating real human behavior converge on hyper-active, positive, average personas, losing long-tail individual differences.

  2. PrivacySIM: Evaluating LLM Simulation of User Privacy Behavior

    cs.CR 2026-05 unverdicted novelty 6.0

    PrivacySIM shows that conditioning LLMs on user personas like demographics and attitudes improves simulation of privacy choices but reaches only 40.4% accuracy against real responses from 1,000 users.

  3. LLM-Based Educational Simulation: Evaluating Temporal Student Persona Stability Across ADHD Profiles

    cs.HC 2026-05 unverdicted novelty 5.0

LLM-simulated ADHD student personas show stable self-reported traits but behavioral drift in unscripted interactions, which explicit task prompts fully eliminate.

  4. The $\textit{Silicon Society}$ Cookbook: Design Space of LLM-based Social Simulations

    cs.MA 2026-04 unverdicted novelty 5.0

    The base LLM choice dominates simulation outcomes in LLM-based social networks, while other design parameters show either additive or complex interactive effects.