pith. machine review for the scientific record.

arxiv: 2604.23827 · v1 · submitted 2026-04-26 · 💻 cs.DL

Recognition: unknown

Are Digital Humanities really committed to open? An exploratory study on the availability of methodological workflows and open peer review practices

Authors on Pith: no claims yet

Pith reviewed 2026-05-08 04:42 UTC · model grok-4.3

classification 💻 cs.DL
keywords: digital humanities · open science · open peer review · research data management · methodological workflows · scholarly communication · transparency

The pith

Digital humanities shows limited openness in data workflow documentation and peer review practices.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper examines whether digital humanities research applies open science principles to its methods and evaluation processes rather than only to final outputs. It checks how often articles explicitly point to reusable external records of how data were created and managed, and it surveys the review models used by DH conferences and journals. The analysis finds that only a small share of papers supply such documentation and none mention formal data management plans. At the same time, nearly all venues rely on traditional blind review, with open peer review appearing in just a handful of cases. These patterns matter because they test whether the field's stated commitment to openness extends to the steps that produce and validate new knowledge.

Core claim

The exploratory study finds limited adoption of open methodological practices in digital humanities. Only a small fraction of the analysed articles provided explicit, reusable documentation of data creation workflows, and no references to data management plans or formal research data management documentation were found. An even more critical picture emerges from the analysis of peer review practices: the vast majority of DH venues continue to rely on traditional single- or double-blind review models, with open peer review adopted in only a few isolated cases.

What carries the argument

Exploratory analysis of explicit external references to data creation and management workflows in DH articles, paired with a survey of peer review models across DH conferences and journals.
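The extracted excerpt in the reference graph below suggests the first analysis reduces to a keyword tally over article full texts (48 JOHD Data Papers, of which 17 matched at least one searched term). A minimal sketch of that kind of tally follows, assuming case-insensitive substring matching for phrases and whole-word matching for acronyms; the term list is reconstructed from the excerpt, and the corpus directory, file format, and matching rules are illustrative assumptions, not the paper's actual protocol.

# Sketch of a term-matching tally over article full texts.
# Assumptions (not confirmed by the paper): plain-text article files, a term
# list reconstructed from the extracted excerpt, and simple case-insensitive
# matching; the study's actual search protocol and coding scheme are not given.
import re
from pathlib import Path

PHRASES = ["data management plan", "research data management"]  # illustrative
ACRONYMS = ["DMP", "RDM"]  # matched as whole words to avoid false hits

def mentions_searched_terms(text: str) -> bool:
    """Return True if the article text mentions at least one searched term."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in PHRASES):
        return True
    return any(re.search(rf"\b{acronym}\b", text) for acronym in ACRONYMS)

def tally(corpus_dir: str) -> tuple[int, int]:
    """Count (matching, total) articles in a directory of plain-text files."""
    paths = sorted(Path(corpus_dir).glob("*.txt"))
    matching = sum(mentions_searched_terms(p.read_text(encoding="utf-8")) for p in paths)
    return matching, len(paths)

if __name__ == "__main__":
    hits, total = tally("johd_data_papers")  # hypothetical corpus directory
    print(f"{hits} of {total} articles mention at least one searched term")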

If this is right

  • Reproducibility of DH research is reduced when data creation steps lack reusable documentation.
  • Open science commitments in the field remain focused on outputs rather than on the full research process.
  • Scholarly assessment in DH continues to use opaque review models in most venues.
  • The community may need to introduce requirements or incentives for workflow documentation and open review.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • If the pattern holds, DH openness may be narrower than in fields that already mandate data management plans or open review.
  • One testable extension is to check whether DH papers that do provide workflow links receive higher rates of reuse or citation.
  • Publishers and funders in the field could require workflow documentation as a condition for publication to close the observed gap.

Load-bearing premise

That the selected sample of DH publications and venues is representative of the broader field, and that the absence of explicit external references reliably indicates a lack of open practices.

What would settle it

A larger or randomly chosen set of DH articles that contains explicit, reusable links to data creation documentation in more than half the cases, or a survey showing open peer review policies in a majority of major DH venues, would contradict the reported pattern of limited adoption.

read the original abstract

Open Science has become a central framework for promoting transparency, accessibility, and inclusiveness in scholarly research. While the Digital Humanities (DH) community has long embraced openness in terms of research outputs, less attention seems to have been paid to the openness of the methodological and evaluative processes underlying knowledge production. This paper presents an exploratory study that investigates the current state of openness in DH research practices, focusing specifically on research data management documentation and peer review processes. In particular, this study addresses two research questions: (1) to what extent DH publications that describe data explicitly reference external documentation detailing data creation and management processes; and (2) how widely open peer review practices are adopted across DH conferences and journals. The results revealed a limited adoption of open methodological practices. Only a small fraction of the analysed articles provided explicit, reusable documentation of data creation workflows, and no references to data management plans or formal research data management documentation were found. An even more critical picture emerges from the analysis of peer review practices: the vast majority of DH venues continue to rely on traditional single- or double-blind review models, with open peer review adopted in only a few isolated cases.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper presents an exploratory study investigating openness in Digital Humanities research practices, addressing two questions: (1) the extent to which DH publications that describe data explicitly reference external documentation for data creation and management processes, and (2) the adoption of open peer review across DH conferences and journals. It concludes that adoption is limited, with only a small fraction of analyzed articles providing explicit reusable documentation of workflows, no references to data management plans or formal RDM documentation, and the vast majority of venues relying on traditional single- or double-blind review with open peer review occurring in only isolated cases.

Significance. If the empirical findings hold after methodological details are supplied, the study would offer a useful baseline on gaps between DH's general embrace of open outputs and its practices around methodological transparency and evaluation. This could inform community efforts to strengthen open science in the field by identifying specific areas (workflow documentation and peer review models) where adoption lags.

major comments (2)
  1. [Methods] The manuscript provides no information on sample size, selection criteria, inclusion/exclusion rules, search protocol, coding scheme, or inter-rater reliability for the analysis of articles and venues. This information belongs in the Methods section and is load-bearing: without it, the quantitative claims (small fraction, no references, vast majority) cannot be evaluated for representativeness or reliability, undermining the inference from textual absence to non-adoption of open practices.
  2. [Results and Discussion] The central inference equates absence of explicit external references (to DMPs, reusable workflows, or open review) with limited adoption of open methodological practices. This proxy is fragile because many legitimate open practices (internal version control, ad-hoc sharing on request, non-textual documentation) would not generate the searched textual markers; the paper should either justify the proxy or supplement it with additional evidence.
minor comments (2)
  1. [Abstract] The abstract would be strengthened by reporting at least the number of articles and venues examined, giving readers an immediate sense of scale for the reported fractions.
  2. [Throughout] Claims about 'the analysed articles' and 'DH venues' should be consistently qualified with the exploratory scope and any limitations on generalizability to avoid implying field-wide coverage.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the detailed and constructive feedback. We agree that methodological transparency is essential and that the interpretation of our findings requires careful qualification. We will revise the manuscript to strengthen both areas while preserving the exploratory character of the study. Our point-by-point responses follow.

read point-by-point responses
  1. Referee: [Methods] The manuscript provides no information on sample size, selection criteria, inclusion/exclusion rules, search protocol, coding scheme, or inter-rater reliability for the analysis of articles and venues. This information belongs in the Methods section and is load-bearing: without it, the quantitative claims (small fraction, no references, vast majority) cannot be evaluated for representativeness or reliability, undermining the inference from textual absence to non-adoption of open practices.

    Authors: We acknowledge that the current manuscript lacks an explicit Methods section with these details. We will add a dedicated Methods section that reports the total number of publications and venues examined, the criteria used to select DH conferences and journals, inclusion and exclusion rules applied to articles that mention data, the search strategy and databases consulted, the coding protocol for identifying workflow documentation and peer-review models, and any steps taken to ensure coding consistency. revision: yes

  2. Referee: [Results and Discussion] The central inference equates absence of explicit external references (to DMPs, reusable workflows, or open review) with limited adoption of open methodological practices. This proxy is fragile because many legitimate open practices (internal version control, ad-hoc sharing on request, non-textual documentation) would not generate the searched textual markers; the paper should either justify the proxy or supplement it with additional evidence.

    Authors: We agree that textual absence is an imperfect proxy and cannot rule out unobservable practices such as internal repositories or sharing on request. Our study deliberately targeted explicit, reusable, externally referenced documentation because this form directly supports the reusability and transparency goals of open science. In the revision we will (a) state this rationale more clearly in the Discussion, (b) explicitly list the proxy’s limitations, and (c) frame the results as evidence of a gap in documented, shareable workflows rather than a claim of zero openness. We will not add new empirical data in this revision but will position the work as a baseline for future studies that could employ surveys or direct observation. revision: partial

Circularity Check

0 steps flagged

No circularity: direct empirical counts with no derivations or self-referential modeling

full rationale

This is an exploratory survey study reporting observed frequencies from manual analysis of DH publications and venues. No equations, parameters, predictions, or derivations exist in the provided text or abstract. Claims rest on explicit textual markers found (or not found) in the sample, without any reduction of outputs to inputs by construction. No self-citation chains, uniqueness theorems, or ansatzes are invoked to support central results. As a descriptive, count-based report, the work stands on directly observed frequencies that can be checked against external benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

This is an empirical exploratory survey study. It introduces no mathematical models, free parameters, or new postulated entities. The central claims rest entirely on the unstated sampling and measurement assumptions of the survey design.

pith-pipeline@v0.9.0 · 5502 in / 1165 out tokens · 39012 ms · 2026-05-08T04:42:09.268904+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

2 extracted references · 1 canonical work page

  1. [1]

    From these, 17 articles matched at least one of the searched words, distributed as shown in the left graph in Figure 1

    RESULTS From JOHD, we downloaded 48 Data Papers, all published in 2025. From these, 17 articles matched at least one of the searched words, distributed as shown in the left graph in Figure 1. The terms data management plan , DMP , research data management , and RDM were not mentioned in any article. Out of these 17 articles, only 5 articles (around the 10...

  2. [2]

    people and purpose, rather than data and technology

    DISCUSSIONS AND CONCLUSIONS The results introduced by our exploratory study seem to suggest that there is still work to do by the DH community to properly address the openness of the Open Science area dedicated to the processes that regulate knowledge creation and evaluation. Indeed, looking with more detail at the results obtained by analysing the JOHD a...