pith. machine review for the scientific record.

arxiv: 1508.05326 · v1 · submitted 2015-08-21 · 💻 cs.CL

Recognition: unknown

A large annotated corpus for learning natural language inference

Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning

Authors on Pith: no claims yet
classification 💻 cs.CL
keywords: inference, language, natural, entailment, allows, contradiction, corpus, learning
Original abstract

Understanding entailment and contradiction is fundamental to understanding natural language, and inference about entailment and contradiction is a valuable testing ground for the development of semantic representations. However, machine learning research in this area has been dramatically limited by the lack of large-scale resources. To address this, we introduce the Stanford Natural Language Inference corpus, a new, freely available collection of labeled sentence pairs, written by humans doing a novel grounded task based on image captioning. At 570K pairs, it is two orders of magnitude larger than all other resources of its type. This increase in scale allows lexicalized classifiers to outperform some sophisticated existing entailment models, and it allows a neural network-based model to perform competitively on natural language inference benchmarks for the first time.
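The abstract describes the corpus as labeled sentence pairs under a three-way entailment / contradiction / neutral scheme. A minimal sketch of that pair format, assuming a simple premise–hypothesis record (the example sentences and the `NLIPair` class are invented for illustration, not taken from the paper or its data files):

```python
# Illustrative sketch of an SNLI-style record: a premise, a hypothesis,
# and one of three gold labels. Sentences below are made up.
from dataclasses import dataclass

LABELS = {"entailment", "contradiction", "neutral"}

@dataclass(frozen=True)
class NLIPair:
    premise: str
    hypothesis: str
    label: str

    def __post_init__(self):
        # Reject anything outside the three-way label scheme.
        if self.label not in LABELS:
            raise ValueError(f"unknown label: {self.label!r}")

pairs = [
    NLIPair("A dog is running across a grassy field.",
            "An animal is outdoors.", "entailment"),
    NLIPair("A dog is running across a grassy field.",
            "The dog is asleep indoors.", "contradiction"),
    NLIPair("A dog is running across a grassy field.",
            "The dog is chasing a ball.", "neutral"),
]
```

In the actual corpus each pair was written by a crowdworker against an image caption serving as the premise, which is what the abstract means by a "grounded task based on image captioning."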

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 3 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. C-Pack: Packed Resources For General Chinese Embeddings

    cs.CL 2023-09 accept novelty 7.0

    C-Pack releases a new Chinese embedding benchmark, large training dataset, and optimized models that outperform priors by up to 10% on C-MTEB while also delivering English SOTA results.

  2. From Articles to Premises: Building PrimeFacts, an Extraction Methodology and Resource for Fact-Checking Evidence

    cs.CL 2026-05 unverdicted novelty 6.0

    PrimeFacts extracts decontextualized premises from fact-check articles, raising evidence retrieval MRR by up to 30% and verdict prediction Macro-F1 by 10-20 points over baselines.

  3. Is Textual Similarity Invariant under Machine Translation? Evidence Based on the Political Manifesto Corpus

    cs.CL 2026-05 unverdicted novelty 6.0

    Machine translation preserves embedding similarity structure for ten languages but distorts it for four in the Manifesto Corpus, via a new non-inferiority testing framework.