pith · machine review for the scientific record

Automatic detection of generated text is easiest when humans are fooled

1 Pith paper cites this work. Polarity classification is still indexing.

fields: cs.CL (1) · years: 2020 (1) · verdicts: ACCEPT (1)

representative citing papers

Language Models are Few-Shot Learners

cs.CL · 2020-05-28 · accept · novelty 8.0

GPT-3 shows that scaling an autoregressive language model to 175 billion parameters enables strong few-shot performance across diverse NLP tasks via in-context prompting without fine-tuning.

citing papers explorer

Showing 1 of 1 citing paper.

  • Language Models are Few-Shot Learners · cs.CL · 2020-05-28 · accept · polarity: none · ref 26