pith. machine review for the scientific record.

arxiv: 1905.00075 · v1 · submitted 2019-04-30 · 💻 cs.IR · cs.LG · cs.SI · physics.soc-ph

Recognition: unknown

On the Use of ArXiv as a Dataset

Authors on Pith no claims yet
classification 💻 cs.IR · cs.LG · cs.SI · physics.soc-ph
keywords arxiv · graph · articles · citation · exciting · features · million · models
0 comments
read the original abstract

The arXiv has collected 1.5 million pre-print articles over 28 years, hosting literature from scientific fields including Physics, Mathematics, and Computer Science. Each pre-print features text, figures, authors, citations, categories, and other metadata. These rich, multi-modal features, combined with the natural graph structure created by citation, affiliation, and co-authorship, make the arXiv an exciting candidate for benchmarking next-generation models. Here we take the first necessary steps toward this goal, by providing a pipeline which standardizes and simplifies access to the arXiv's publicly available data. We use this pipeline to extract and analyze a 6.7 million edge citation graph, with an 11 billion word corpus of full-text research articles. We present some baseline classification results, and motivate application of more exciting generative graph models.
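The abstract describes classifying papers over a citation graph. As a rough illustration of what a baseline on such a graph might look like, here is a minimal, self-contained sketch: it builds an adjacency map from an edge list and predicts a paper's category by majority vote over the categories of the papers it cites. The edge list, labels, and `predict_category` helper are all hypothetical placeholders, not the paper's actual pipeline or data (the real graph has roughly 6.7 million edges).

```python
from collections import Counter, defaultdict

# Synthetic placeholder citation edges: (citing paper, cited paper).
# Illustrative only; not drawn from the actual arXiv dataset.
edges = [
    ("a1", "b1"), ("a1", "b2"), ("a1", "c1"),
    ("a2", "b1"),
    ("b1", "c1"), ("b2", "c1"),
]
# Known category labels for a subset of papers (also synthetic).
labels = {"b1": "cs.IR", "b2": "cs.LG", "c1": "cs.IR"}

# Adjacency map: paper id -> list of papers it cites.
cites = defaultdict(list)
for src, dst in edges:
    cites[src].append(dst)

def predict_category(paper):
    """Naive baseline: majority category among the papers this one cites."""
    votes = Counter(labels[d] for d in cites[paper] if d in labels)
    return votes.most_common(1)[0][0] if votes else None

print(predict_category("a1"))  # votes cs.IR:2, cs.LG:1 -> cs.IR
```

Real baselines on this kind of graph would typically use learned node features rather than a citation-majority vote, but the data flow (edge list in, per-node prediction out) is the same.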

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 5 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Combining On-Policy Optimization and Distillation for Long-Context Reasoning in Large Language Models

    cs.CL 2026-05 unverdicted novelty 7.0

    dGRPO merges outcome-based policy optimization with dense teacher guidance from on-policy distillation, yielding more stable long-context reasoning on the new LongBlocks synthetic dataset.

  2. PaperMind: Benchmarking Agentic Reasoning and Critique over Scientific Papers in Multimodal LLMs

    cs.IR 2026-04 unverdicted novelty 7.0

    PaperMind is a new benchmark that evaluates integrated multimodal reasoning and critique over scientific papers through four complementary task families across seven domains.

  3. Hidden Secrets in the arXiv: Discovering, Analyzing, and Preventing Unintentional Information Disclosure in Source Files of Scientific Preprints

    cs.CR 2026-04 unverdicted novelty 7.0

    Nearly every arXiv submission leaks hidden sensitive information through its source files, existing cleaners fail, and ALC-NG provides a more reliable fix.

  4. Can We Still Hear the Accent? Investigating the Resilience of Native Language Signals in the LLM Era

    cs.CL 2026-03 unverdicted novelty 5.0

    NLI accuracy on research papers declined steadily over time, with Chinese and French showing unexpected resistance while Japanese and Korean declined more sharply in the post-LLM era.

  5. A Survey of Large Language Models

    cs.CL 2023-03 accept novelty 3.0

    This survey reviews the background, key techniques, and evaluation methods for large language models, emphasizing emergent abilities that appear at large scales.