2017 AttentiveLanguageModels
- (Salton et al., 2017) ⇒ Giancarlo D. Salton, Robert J. Ross, and John D. Kelleher. (2017). “Attentive Language Models.” In: Proceedings of the 8th International Joint Conference on Natural Language Processing.
Subject Headings: Attention Mechanism, Neural Language Model.
Notes
Cited By
Quotes
Abstract
In this paper, we extend Recurrent Neural Network Language Models (RNN-LMs) with an attention mechanism. We show that an Attentive RNN-LM (with 14.5M parameters) achieves a better perplexity than larger RNN-LMs (with 66M parameters) and achieves performance comparable to an ensemble of 10 similar sized RNN-LMs. We also show that an Attentive RNN-LM needs less contextual information to achieve similar results to the state-of-the-art on the wikitext2 dataset.
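The abstract describes augmenting an RNN-LM with an attention mechanism, so that the prediction of the next word can draw on a weighted summary of earlier hidden states rather than the current state alone. A minimal sketch of one such attention step is below; the additive scoring form and all names (`w_h`, `w_q`, `v`) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def attentive_context(hidden_states, query, w_h, w_q, v):
    """Score each past RNN hidden state against the current (query) state
    and return the softmax-weighted context vector.

    Additive-style scoring is assumed here for illustration:
        score_t = v . tanh(W_h h_t + W_q q)
    """
    scores = np.array([v @ np.tanh(w_h @ h + w_q @ query)
                       for h in hidden_states])
    # Softmax over past positions (subtract max for numerical stability).
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Context vector: attention-weighted sum of the past hidden states.
    context = sum(a * h for a, h in zip(weights, hidden_states))
    return context, weights

# Toy usage: 5 past hidden states of dimension 8.
rng = np.random.default_rng(0)
d = 8
hs = [rng.standard_normal(d) for _ in range(5)]
q = rng.standard_normal(d)
ctx, att = attentive_context(hs, q,
                             rng.standard_normal((d, d)),
                             rng.standard_normal((d, d)),
                             rng.standard_normal(d))
```

In a full model, `ctx` would typically be combined with the current hidden state (e.g. by concatenation and a projection) before the output softmax over the vocabulary; the exact combination used in the paper is not specified in this excerpt.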
References
Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year
---|---|---|---|---|---|---|---|---|---
Giancarlo D. Salton, Robert J. Ross, and John D. Kelleher | | | Attentive Language Models | | | | | | 2017