Efficient Estimation of Word Representations in Vector Space

Fields: cs.CL
Year: 2013
Verdict: ACCEPT
Pith papers citing this work: 1 (polarity classification is still indexing)

CBOW and Skip-gram models learn high-quality word embeddings from billion-word datasets with far lower training cost than previous neural approaches while delivering state-of-the-art syntactic and semantic similarity performance.
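
For a concrete sense of the two architectures, here is a minimal sketch using the gensim library, which is an assumption on my part: the paper's reference implementation is the word2vec C tool, and the toy corpus and parameter values below are illustrative only. In gensim, `sg=0` trains CBOW (predict a word from its surrounding context) and `sg=1` trains Skip-gram (predict the context from the word).

```python
# Minimal sketch (assumes gensim >= 4.0): training CBOW and Skip-gram
# embeddings on a toy corpus. gensim is a stand-in here, not the paper's
# own implementation (the word2vec C tool).
from gensim.models import Word2Vec

# Toy corpus: a list of tokenized sentences. Illustrative only; the paper
# trains on corpora with billions of words.
corpus = [
    ["king", "rules", "the", "kingdom"],
    ["queen", "rules", "the", "kingdom"],
    ["man", "walks", "in", "the", "city"],
    ["woman", "walks", "in", "the", "city"],
]

# sg=0 selects CBOW: predict the current word from its context window.
cbow = Word2Vec(corpus, vector_size=50, window=2, min_count=1, sg=0)

# sg=1 selects Skip-gram: predict context words from the current word.
skipgram = Word2Vec(corpus, vector_size=50, window=2, min_count=1, sg=1)

# Both models map words to dense vectors; similarity queries come for free.
print(skipgram.wv.most_similar("king", topn=2))
print(cbow.wv["queen"][:5])  # first 5 dimensions of the "queen" vector
```

On a real corpus, the learned vectors support the syntactic and semantic analogy queries the paper evaluates, e.g. `wv.most_similar(positive=["king", "woman"], negative=["man"])` approximating "queen"; the four-sentence corpus above is far too small for that to hold.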