pith. machine review for the scientific record.

arxiv: 2602.11354 · v2 · submitted 2026-02-11 · 💻 cs.AI · cs.CL

Recognition: unknown

ReplicatorBench: Benchmarking LLM Agents for Replicability in Social and Behavioral Sciences

Adam Gill, Anna Szabelska, Bang Nguyen, Dominik Soós, Jian Wu, Meng Jiang, Qian Ma, Rochana R. Obadage, Sai Koneru, Sarah Rajtmajer, Shakhlo Nematova, Timothy M. Errington, Zack Ranjan

Authors on Pith: no claims yet
classification: 💻 cs.AI · cs.CL
keywords: agents, data, replication, research, code, computational, evaluate, replicatorbench
original abstract

The literature has witnessed emerging interest in AI agents for the automated assessment of scientific papers. Existing benchmarks focus primarily on the computational aspect of this task, testing agents' ability to reproduce or replicate research outcomes when given access to the code and data. This setting, while foundational, (1) fails to capture the inconsistent availability of new data for replication, as opposed to reproduction, and (2) lacks ground-truth diversity by focusing only on reproducible papers, thereby failing to evaluate an agent's ability to identify non-replicable research. Furthermore, most benchmarks evaluate only outcomes rather than the replication process. In response, we introduce ReplicatorBench, an end-to-end benchmark of human-verified replicable and non-replicable research claims in the social and behavioral sciences for evaluating AI agents on research replication across three stages: (1) extraction and retrieval of replication data; (2) design and execution of computational experiments; and (3) interpretation of results. This setup tests AI agents' capability to mimic the activities of human replicators in the real world. To establish a baseline, we develop ReplicatorAgent, an agentic framework equipped with the necessary tools, such as web search and iterative interaction with sandboxed environments, to accomplish the tasks in ReplicatorBench. We evaluate ReplicatorAgent across four underlying large language models (LLMs), as well as different design choices for programming language and level of code access. Our findings reveal that while current LLM agents can effectively design and execute computational experiments, they struggle to retrieve the resources, such as new data, needed to replicate a claim. All code and data are publicly available at https://github.com/CenterForOpenScience/llm-benchmarking.
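As a rough illustration of the three-stage setup the abstract describes, the sketch below wires the stages into one pipeline. Every name in it (Claim, retrieve_data, and so on) is a hypothetical stand-in, not the benchmark's actual API; the real interfaces are in the linked repository.

```python
# Hypothetical sketch of the three-stage replication pipeline described in
# the abstract. All names are illustrative stand-ins, not the actual
# ReplicatorBench API; see the linked repository for the real interfaces.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Claim:
    paper_id: str
    statement: str       # the research claim under test
    ground_truth: bool   # human-verified label: replicable or not


def retrieve_data(claim: Claim) -> Optional[str]:
    """Stage 1: locate new replication data via web search / repositories.

    Returns a local path to the dataset, or None on failure -- the stage
    the paper reports current LLM agents struggle with most.
    """
    return None  # stub: a real agent would search and download here


def run_experiment(claim: Claim, data_path: str) -> dict:
    """Stage 2: design and execute a computational experiment inside a
    sandboxed environment, iterating on runtime errors."""
    return {"effect_replicated": False}  # stub result


def interpret(claim: Claim, results: dict) -> bool:
    """Stage 3: judge whether the experimental results support the claim."""
    return bool(results.get("effect_replicated"))


def replicate(claim: Claim) -> Optional[bool]:
    """End-to-end run: returns the agent's verdict, or None if no data
    could be retrieved (mirroring the paper's observed bottleneck)."""
    data_path = retrieve_data(claim)
    if data_path is None:
        return None
    return interpret(claim, run_experiment(claim, data_path))
```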

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. ARA: Agentic Reproducibility Assessment For Scalable Support Of Scientific Peer-Review

    cs.DL · 2026-05 · unverdicted · novelty 6.0

    ARA extracts workflow graphs from papers and scores reproducibility, reaching 61% accuracy on 213 ReScience C articles and outperforming prior methods on ReproBench and GoldStandardDB.