2017 LearnedinTranslationContextualizedWordVectors
- (McCann et al., 2017) ⇒ Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. (2017). “Learned in Translation: Contextualized Word Vectors.” In: Advances in Neural Information Processing Systems.
Subject Headings: CoVe, Contextualized Word Vectors, Sequence-to-Sequence Machine Translation, Transfer Learning in NLP.
Notes
Cited By
- http://scholar.google.com/scholar?q=%222017%22+Learned+in+Translation%3A+Contextualized+Word+Vectors
Quotes
Abstract
Computer vision has benefited from initializing multiple deep layers with weights pretrained on large supervised training sets like ImageNet. Natural language processing (NLP) typically sees initialization of only the lowest layer of deep models with pretrained word vectors. In this paper, we use a deep LSTM encoder from an attentional sequence-to-sequence model trained for machine translation (MT) to contextualize word vectors. We show that adding these context vectors (CoVe) improves performance over using only unsupervised word and character vectors on a wide variety of common NLP tasks: sentiment analysis (SST, IMDb), question classification (TREC), entailment (SNLI), and question answering (SQuAD). For fine-grained sentiment analysis and entailment, CoVe improves performance of our baseline models to the state of the art.
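The recipe described in the abstract — feed pretrained GloVe embeddings through the MT-trained LSTM encoder and concatenate the output with the original vectors — can be sketched as follows. This is a minimal illustration rather than the authors' released implementation: the weights here are randomly initialized instead of pretrained on machine translation, and the 300-dimensional GloVe input with a two-layer bidirectional LSTM follows the paper's description of the MT-LSTM encoder.

```python
import torch
import torch.nn as nn

class CoVeEncoder(nn.Module):
    """Sketch of an MT-LSTM encoder that turns GloVe vectors into
    context vectors (CoVe). In the paper this encoder comes from an
    attentional sequence-to-sequence model pretrained on MT; here
    the weights are randomly initialized for illustration."""

    def __init__(self, glove_dim=300, hidden_dim=300):
        super().__init__()
        # Two-layer bidirectional LSTM, as in the paper's MT-LSTM.
        self.lstm = nn.LSTM(glove_dim, hidden_dim, num_layers=2,
                            bidirectional=True, batch_first=True)

    def forward(self, glove_vectors):
        # glove_vectors: (batch, seq_len, glove_dim)
        cove, _ = self.lstm(glove_vectors)  # (batch, seq_len, 2*hidden_dim)
        # Downstream task models consume [GloVe(w); CoVe(w)] per token.
        return torch.cat([glove_vectors, cove], dim=-1)

# Hypothetical usage: a batch of 2 sentences, 5 tokens each,
# already mapped to 300-d GloVe vectors.
encoder = CoVeEncoder()
glove_batch = torch.randn(2, 5, 300)
features = encoder(glove_batch)
print(features.shape)  # torch.Size([2, 5, 900])
```

The concatenation is the key design choice: the unsupervised word vectors are kept alongside the MT-contextualized ones, so task models lose nothing if the context vectors are uninformative.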
References
| Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Richard Socher, James Bradbury, Caiming Xiong, Bryan McCann | | | Learned in Translation: Contextualized Word Vectors | | | | | 2017 LearnedinTranslationContextualizedWordVectors | 2017 |