2018 WhenandWhyArePreTrainedWordEmbe
- (Qi et al., 2018) ⇒ Ye Qi, Devendra Singh Sachan, Matthieu Felix, Sarguna Padmanabhan, and Graham Neubig. (2018). “When and Why Are Pre-Trained Word Embeddings Useful for Neural Machine Translation?.” In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2018) Volume 2 (Short Papers). DOI:10.18653/v1/N18-2084.
Subject Headings: Word Embeddings; Neural Machine Translation.
Cited By
- Google Scholar: ~ 72 Citations. Retrieved: 2020-06-07.
- Semantic Scholar: ~ 89 Citations. Retrieved: 2020-06-07.
- MS Academic: ~ 63 Citations. Retrieved: 2020-06-07.
Quotes
Abstract
The performance of Neural Machine Translation (NMT) systems often suffers in low-resource scenarios where sufficiently large-scale parallel corpora cannot be obtained. Pre-trained word embeddings have proven to be invaluable for improving performance in natural language analysis tasks, which often suffer from paucity of data. However, their utility for NMT has not been extensively explored. In this work, we perform five sets of experiments that analyze when we can expect pre-trained word embeddings to help in NMT tasks. We show that such embeddings can be surprisingly effective in some cases, providing gains of up to 20 BLEU points in the most favorable setting.
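To make the technique under study concrete, below is a minimal sketch (not the authors' code) of how pre-trained word embeddings are typically used in NMT: the model's embedding layer is initialized from pre-trained vectors rather than randomly, then optionally fine-tuned during training. It assumes PyTorch and a word2vec-style plain-text vectors file; the file path, vocabulary, and helper names are illustrative.

```python
# Sketch: initialize an NMT embedding layer from pre-trained word vectors.
# Assumes PyTorch; vocabulary, dimensions, and file path are illustrative.
import torch
import torch.nn as nn


def load_pretrained_vectors(path, vocab, dim):
    """Build an init matrix from a word2vec-style text file with one
    'word v1 v2 ... vd' entry per line. Words absent from the file
    keep their random initialization."""
    weights = torch.randn(len(vocab), dim) * 0.1
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            word, values = parts[0], parts[1:]
            if word in vocab and len(values) == dim:
                weights[vocab[word]] = torch.tensor([float(v) for v in values])
    return weights


vocab = {"<unk>": 0, "<pad>": 1, "hello": 2, "world": 3}  # toy vocabulary
dim = 300

# With a real vectors file (hypothetical path), one would do:
# weights = load_pretrained_vectors("vectors.txt", vocab, dim)
weights = torch.randn(len(vocab), dim)  # stand-in when no file is available

embedding = nn.Embedding(len(vocab), dim, padding_idx=vocab["<pad>"])
embedding.weight.data.copy_(weights)
# Left trainable so the NMT objective can fine-tune the pre-trained vectors;
# setting requires_grad = False would freeze them instead.
embedding.weight.requires_grad = True
```

Whether to freeze or fine-tune such embeddings, and in which language pairs and data regimes they help at all, is precisely the kind of question the paper's five experiment sets address.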
References
BibTeX
@inproceedings{2018_WhenandWhyArePreTrainedWordEmbe,
  author    = {Ye Qi and Devendra Singh Sachan and Matthieu Felix and Sarguna Padmanabhan and Graham Neubig},
  editor    = {Marilyn A. Walker and Heng Ji and Amanda Stent},
  title     = {When and Why Are Pre-Trained Word Embeddings Useful for Neural Machine Translation?},
  booktitle = {Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, (NAACL-HLT 2018) Volume 2 (Short Papers), New Orleans, Louisiana, USA, June 1-6, 2018},
  pages     = {529--535},
  publisher = {Association for Computational Linguistics},
  year      = {2018},
  url       = {https://doi.org/10.18653/v1/n18-2084},
  doi       = {10.18653/v1/n18-2084},
}
| | Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year |
|---|---|---|---|---|---|---|---|---|---|---|
| 2018 WhenandWhyArePreTrainedWordEmbe | Graham Neubig, Devendra Singh Sachan, Ye Qi, Matthieu Felix, Sarguna Padmanabhan | | | When and Why Are Pre-Trained Word Embeddings Useful for Neural Machine Translation? | | | | | | 2018 |