citation dossier
untitled work
1 Pith paper citing it
1 reference link
cs.CL (top field · 1 paper)
UNVERDICTED (top verdict bucket · 1 paper)
why this work matters in Pith
Pith has found this work in 1 reviewed paper. Its strongest current cluster is cs.CL (1 paper). The largest review-status bucket among citing papers is UNVERDICTED (1 paper). For highly cited works, this page shows a dossier first and a bounded explorer second; it never tries to render every citing paper at once.
fields
cs.CL (1)
years
2026 (1)
verdicts
UNVERDICTED (1)
representative citing papers
citing papers explorer
- Pretraining Exposure Explains Popularity Judgments in Large Language Models
LLM popularity judgments align more closely with pretraining data exposure counts than with Wikipedia popularity, with stronger effects in pairwise comparisons and larger models.