pith. machine review for the scientific record.

arxiv: 1802.03311 · v1 · submitted 2018-02-09 · 💻 cs.DL

Recognition: unknown

Terminologies for Reproducible Research

Authors on Pith: no claims yet
classification: 💻 cs.DL
keywords: same · terminologies · across · calls · data · different · disciplines · group
Original abstract

Reproducible research---by its many names---has come to be regarded as a key concern across disciplines and stakeholder groups. Funding agencies and journals, professional societies and even mass media are paying attention, often focusing on the so-called "crisis" of reproducibility. One big problem keeps coming up among those seeking to tackle the issue: different groups are using terminologies in utter contradiction with each other. Looking at a broad sample of publications in different fields, we can classify their terminology via decision tree: they either, A---make no distinction between the words reproduce and replicate, or B---use them distinctly. If B, then they are commonly divided in two camps. In a spectrum of concerns that starts at a minimum standard of "same data+same methods=same results," to "new data and/or new methods in an independent study=same findings," group 1 calls the minimum standard reproduce, while group 2 calls it replicate. This direct swap of the two terms aggravates an already weighty issue. By attempting to inventory the terminologies across disciplines, I hope that some patterns will emerge to help us resolve the contradictions.
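The abstract's decision tree can be sketched in code. This is a minimal illustrative model, not anything from the paper itself: the function name `classify` and the labels `"A"`, `"B1"`, `"B2"` are hypothetical shorthand for the groups the abstract describes.

```python
def classify(distinguishes_terms, minimum_standard_word=None):
    """Classify a publication's terminology usage per the abstract's tree.

    "A"  -- uses "reproduce" and "replicate" interchangeably.
    "B1" -- distinguishes the terms and calls the minimum standard
            ("same data + same methods = same results") *reproduce*.
    "B2" -- distinguishes the terms and calls that minimum standard
            *replicate* (the direct swap the abstract warns about).
    """
    if not distinguishes_terms:
        return "A"
    if minimum_standard_word == "reproduce":
        return "B1"
    if minimum_standard_word == "replicate":
        return "B2"
    raise ValueError("expected 'reproduce' or 'replicate'")


# Example: two publications in camp B using opposite terms for the
# same minimum standard land in different groups.
print(classify(True, "reproduce"))   # B1
print(classify(True, "replicate"))   # B2
```

The swap the abstract describes is exactly the `B1`/`B2` split: both groups draw the same distinction but attach opposite words to it.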

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Automating Computational Reproducibility in Social Science: Comparing Prompt-Based and Agent-Based Approaches

    cs.SE · 2026-02 · unverdicted · novelty 6.0

    Agent-based AI workflows repair injected reproducibility failures in R social-science code at 69-96% success, substantially outperforming prompt-based LLM approaches at 31-79%.