The State of Scientific Poster Sharing and Reuse
Pith reviewed 2026-05-09 21:55 UTC · model grok-4.3
The pith
Scientific posters are shared across 86 identified platforms, yet the 43 platforms where counting was possible hold only about 150,000 posters as of 2024, and many platforms assign no persistent identifiers or structured metadata.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Posters are one of the most common forms of scholarly communication and contain early-stage insights with potential to accelerate scientific discovery. The study identified 86 platforms hosting posters, many of which do not assign persistent identifiers. A total of 150k posters are shared as of 2024 on the 43 platforms where counts were possible. Repositories do not always support structured metadata critical for poster discovery, such as conference information, and researchers do not provide such metadata even when it is supported. There is some engagement in terms of views and downloads, but citing posters is not yet a common practice. The recommendation is for the scientific community to encourage poster sharing and reuse and to establish clear guidelines to make posters FAIR.
What carries the argument
A survey that locates and counts posters across 86 hosting platforms, followed by targeted metadata audits on Zenodo and Figshare to check persistent identifier use, metadata completeness, and reuse signals.
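No code accompanies this summary; as a rough illustration, a metadata audit over a repository's public API could look like the minimal sketch below. It assumes Zenodo's public records endpoint and an Elasticsearch-style resource-type filter; the query syntax and the field names (e.g., "meeting" as the conference-details block) are assumptions, not the paper's actual method.

```python
import requests

# Hypothetical audit: fetch poster records from Zenodo's public records API
# and tally how many carry a DOI and structured conference metadata.
# The search filter and metadata field names are assumptions about the
# API's schema, not verified against the paper's pipeline.
BASE = "https://zenodo.org/api/records"

def audit_posters(pages=2, size=100):
    total = with_doi = with_conference = 0
    for page in range(1, pages + 1):
        resp = requests.get(BASE, params={
            "q": 'resource_type.type:"poster"',  # assumed filter syntax
            "size": size,
            "page": page,
        })
        resp.raise_for_status()
        for hit in resp.json().get("hits", {}).get("hits", []):
            total += 1
            meta = hit.get("metadata", {})
            if hit.get("doi") or meta.get("doi"):
                with_doi += 1
            # "meeting" is treated here as the signal for structured
            # conference metadata (title, acronym, dates, place).
            if meta.get("meeting"):
                with_conference += 1
    return total, with_doi, with_conference

if __name__ == "__main__":
    n, doi, conf = audit_posters()
    print(f"records: {n}, with DOI: {doi}, with conference metadata: {conf}")
```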
Load-bearing premise
The 86 platforms represent a comprehensive and unbiased sample of all poster sharing locations, and the metadata analysis on Zenodo and Figshare accurately reflects researcher practices without significant selection or counting errors.
What would settle it
A complete census that finds substantially more than 150,000 posters across all platforms or a literature search showing frequent citations of posters would indicate that current sharing and reuse levels are higher than reported.
Original abstract
Scientific posters are one of the most common forms of scholarly communication and contain early-stage insights with potential to accelerate scientific discovery. We investigated where posters are shared, to what extent their sharing aligns with the FAIR principles, and how commonly they are reused. We identified 86 platforms hosting posters, with many not assigning persistent identifiers. A total of 150k posters are shared as of 2024 on the 43 platforms where we were able to count, which is relatively low. Looking in more detail at posters shared on Zenodo and Figshare, we found that repositories are not always supporting structured metadata critical for poster discovery, like conference information, and that researchers are not providing such metadata even if they are supported. We also observed that while there is some engagement with posters in terms of views and downloads, citing posters is not yet a common practice. Our recommendations are for the scientific community to encourage poster sharing and reuse and establish clear guidelines to make posters FAIR.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper surveys the landscape of scientific poster sharing by identifying 86 hosting platforms, reporting a cumulative total of approximately 150,000 posters across the 43 platforms where counts could be obtained, and conducting a detailed metadata and reuse analysis on Zenodo and Figshare. It finds limited use of persistent identifiers, incomplete structured metadata (e.g., conference details), modest engagement via views/downloads, and rare citation of posters, leading to recommendations for community guidelines to improve FAIR compliance for posters.
Significance. If the platform identification and counting methods can be made transparent and reproducible, the work supplies a useful empirical baseline on poster dissemination volume and metadata practices. This could inform repository policies and scholarly communication standards, particularly by quantifying gaps in persistent identification and reuse that are not well documented elsewhere.
Major comments (3)
- [Methods] Methods section: The description of how the 86 platforms were identified provides no explicit search strategy, inclusion/exclusion criteria, data sources, or handling of duplicates/niche venues, which directly undermines confidence in the completeness of the sample and the 150k count on the 43 countable platforms.
- [Results] Results section (platform counts and 'relatively low' claim): The headline total of 150k posters is presented without any external baseline (e.g., estimated annual poster output from major conferences or societies), making the qualitative judgment that this volume is 'relatively low' unsupported and non-falsifiable.
- [Zenodo and Figshare analysis] Zenodo/Figshare analysis subsection: The metadata completeness statistics and reuse metrics (views, downloads, citations) are reported without details on sampling frames, query dates, or error rates in automated extraction, leaving open the possibility of selection or measurement bias in the claim that repositories and researchers fail to supply conference metadata.
Minor comments (2)
- [Abstract and Results] The abstract states 'many not assigning persistent identifiers' but the main text should clarify the exact fraction and definition used (e.g., DOI vs. other PIDs) for consistency.
- [Results] Figure or table presenting the 86 platforms would benefit from an explicit column or note on how counts were obtained (API, manual scrape, etc.) to aid reproducibility.
Simulated Author's Rebuttal
We thank the referee for their constructive comments, which have identified key areas where greater methodological transparency and support for claims will strengthen the manuscript. We address each major comment below and will incorporate the suggested improvements in the revised version.
Point-by-point responses
Referee: [Methods] Methods section: The description of how the 86 platforms were identified provides no explicit search strategy, inclusion/exclusion criteria, data sources, or handling of duplicates/niche venues, which directly undermines confidence in the completeness of the sample and the 150k count on the 43 countable platforms.
Authors: We agree that the Methods section lacks sufficient detail on platform identification. In the revision we will add an explicit subsection describing the process: primary web searches using terms such as 'scientific poster repository' and 'academic poster sharing platform', consultation of known lists from scholarly communication resources and conference websites, inclusion criteria limited to platforms that host and allow sharing of scientific posters (excluding general social media and personal sites), exclusion of non-academic or duplicate entries verified by URL and name cross-checks, and the search timeframe (early 2024). The full list of 86 platforms will be provided as supplementary material to support reproducibility. revision: yes
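The URL-and-name cross-check the authors describe is straightforward to make concrete. Below is a minimal sketch of such deduplication, assuming plain dictionaries of platform entries; the data and helper names are hypothetical, not the study's actual pipeline.

```python
from urllib.parse import urlparse

# Hypothetical helper: normalize platform URLs and names so that
# near-duplicate entries (http vs https, "www.", trailing slashes,
# case differences) collapse to a single record.
def normalize(url: str, name: str) -> tuple[str, str]:
    host = urlparse(url.strip().lower()).netloc or url.strip().lower()
    host = host.removeprefix("www.").rstrip("/")
    return host, " ".join(name.lower().split())

def dedupe(platforms: list[dict]) -> list[dict]:
    seen, unique = set(), []
    for p in platforms:
        key = normalize(p["url"], p["name"])
        if key not in seen:
            seen.add(key)
            unique.append(p)
    return unique

# Illustrative entries only; not the study's actual platform list.
platforms = [
    {"name": "Figshare", "url": "https://figshare.com/"},
    {"name": "figshare", "url": "http://www.figshare.com"},
    {"name": "Zenodo", "url": "https://zenodo.org"},
]
print(len(dedupe(platforms)))  # -> 2
```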
Referee: [Results] Results section (platform counts and 'relatively low' claim): The headline total of 150k posters is presented without any external baseline (e.g., estimated annual poster output from major conferences or societies), making the qualitative judgment that this volume is 'relatively low' unsupported and non-falsifiable.
Authors: The referee is correct that the 'relatively low' characterization is unsupported without a baseline. We will revise the Results section to remove this qualitative judgment and present the 150,000 figure strictly as an empirical count of documented posters on the 43 platforms with available data. In the Discussion and Limitations we will explicitly note the absence of centralized statistics on total annual poster production and the resulting difficulty in establishing comparative baselines, while adding any feasible rough contextual estimates drawn from reported attendance and poster numbers at large conferences. revision: yes
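To make the proposed contextual estimate concrete, the back-of-envelope arithmetic might look like the following. Every input number here is hypothetical and chosen only to illustrate the calculation; none is drawn from the paper.

```python
# Back-of-envelope baseline with entirely hypothetical inputs: if N large
# conferences each present P posters per year over Y years, cumulative
# poster production is roughly N * P * Y, against which the 150,000
# counted shared posters can be judged.
conferences_per_year = 1_000     # hypothetical
posters_per_conference = 300     # hypothetical
years_considered = 10            # hypothetical

estimated_output = conferences_per_year * posters_per_conference * years_considered
shared = 150_000
print(f"estimated posters produced: {estimated_output:,}")
print(f"share rate: {shared / estimated_output:.1%}")  # -> 5.0%
```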
Referee: [Zenodo and Figshare analysis] Zenodo/Figshare analysis subsection: The metadata completeness statistics and reuse metrics (views, downloads, citations) are reported without details on sampling frames, query dates, or error rates in automated extraction, leaving open the possibility of selection or measurement bias in the claim that repositories and researchers fail to supply conference metadata.
Authors: We acknowledge the need for greater detail on the Zenodo and Figshare analysis. The revised subsection will specify the exact query dates in 2024, the search terms and filters applied (including any API or web-interface queries for posters), whether the collection was exhaustive or sampled, and the total numbers of records examined. We will also report results from a manual validation exercise on a random subset of 100 posters, including measured error rates for automated extraction of fields such as conference name, date, and persistent identifiers. These additions will allow readers to assess potential bias in the metadata-completeness findings. revision: yes
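A validation exercise of this kind reduces to sampling record IDs, comparing automated field values against manual labels, and reporting per-field error rates. The sketch below assumes hypothetical field names and dictionaries keyed by record ID; it is not the authors' validation code.

```python
import random

# Hypothetical validation: draw a random subset of records, compare the
# automatically extracted fields against manually verified values, and
# report a per-field error rate. Field names are illustrative.
FIELDS = ["conference_name", "conference_date", "persistent_id"]

def error_rates(automated: dict, manual: dict, sample_size=100, seed=42):
    rng = random.Random(seed)
    sample_ids = rng.sample(sorted(manual), min(sample_size, len(manual)))
    errors = {f: 0 for f in FIELDS}
    for rid in sample_ids:
        for f in FIELDS:
            if automated.get(rid, {}).get(f) != manual[rid].get(f):
                errors[f] += 1
    return {f: errors[f] / len(sample_ids) for f in FIELDS}
```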
Circularity Check
No circularity: purely observational data collection with no derivations or self-referential claims
Full rationale
The paper reports an empirical survey: identification of 86 platforms via external search and manual counting of posters on 43 of them (totaling 150k), plus metadata inspection on Zenodo/Figshare. No equations, predictions, fitted parameters, uniqueness theorems, or ansatzes appear. Platform discovery and counts are presented as direct observations from external sources, not derived from prior results by the authors or reduced to self-citation. The 'relatively low' qualifier is a qualitative judgment but does not create a circular loop because it is not used to derive any quantitative claim. Self-citations, if present, are not load-bearing for the central counts. This matches the default expectation of a non-circular empirical study.
Axiom & Free-Parameter Ledger
Reference graph
Works this paper leans on
[1] Where available, we used the platform’s search interface, applying filters to only keep posters and set the creation date (on or before December 31, 2024). On platforms without a formal date filter, we attempted several workarounds, such as editing the date directly in the search bar or using the query string "2024".
[2] For platforms without relevant filters, we manually counted posters when it was reasonable (fewer than 100 posters).
[3] When none of these approaches were possible for a given platform, we did not count. Note that for DOI-issuing platforms, we attempted to use the DataCite API to count posters, but the inconsistencies in linkage between different versions of a poster made it difficult to count each poster uniquely, so we excluded this approach. During our search, we identi...
[4] The pandas development team. pandas-dev/pandas: Pandas. Zenodo (2024). doi:10.5281/ZENODO.3509134.
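Entry [3] above notes an abandoned attempt to count posters through the DataCite API because of inconsistent version linkage. A minimal sketch of what such a count might look like, grouping versions under an "IsVersionOf" parent, follows; the query syntax and grouping rule are assumptions, not the authors' method.

```python
import requests

# Hypothetical reconstruction of the abandoned DataCite counting attempt:
# query the DataCite REST API for poster-typed DOIs, then collapse versions
# that point to a shared parent via "IsVersionOf" links. The paper reports
# this was excluded because version linkage proved inconsistent.
API = "https://api.datacite.org/dois"

def count_unique_posters(max_pages=2):
    unique = set()
    for page in range(1, max_pages + 1):
        resp = requests.get(API, params={
            "query": "types.resourceType:Poster",  # assumed query syntax
            "page[size]": 100,
            "page[number]": page,
        })
        resp.raise_for_status()
        for rec in resp.json().get("data", []):
            attrs = rec.get("attributes", {})
            # Prefer the version parent as the poster's identity, falling
            # back to the DOI itself when no parent is linked.
            parent = next(
                (r.get("relatedIdentifier")
                 for r in attrs.get("relatedIdentifiers", []) or []
                 if r.get("relationType") == "IsVersionOf"),
                rec.get("id"),
            )
            unique.add(parent)
    return len(unique)
```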