LSTM-based Language Model (LM) Training Algorithm
An LSTM-based Language Model (LM) Training Algorithm is an RNN-based LM training algorithm that trains language models whose recurrent units are LSTM networks.
- Context:
- It can be implemented by an LSTM-based LM System (a minimal training sketch appears after this list).
- …
- Counter-Example(s):
- See: RNN-based LM Algorithm.
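For illustration, the following is a minimal sketch of the training loop such an algorithm performs, written in PyTorch. The model class, hyperparameter values, and the random toy corpus are assumptions introduced here for the example; they are not taken from the article or from any specific published system.

```python
# Minimal sketch of LSTM language-model training (assumed PyTorch setup).
import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    """Embedding -> multi-layer LSTM -> linear projection to vocabulary logits."""
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256, num_layers=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers, batch_first=True)
        self.proj = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, state=None):
        # tokens: (batch, seq_len) integer token ids
        embedded = self.embedding(tokens)
        output, state = self.lstm(embedded, state)
        return self.proj(output), state  # logits over the next token at each step

vocab_size = 1000  # assumed toy vocabulary size
model = LSTMLanguageModel(vocab_size)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Toy corpus: random token ids stand in for a real tokenized corpus.
data = torch.randint(0, vocab_size, (32, 65))  # (batch, seq_len + 1)
inputs, targets = data[:, :-1], data[:, 1:]    # train to predict the next token

for step in range(100):
    logits, _ = model(inputs)
    # Flatten (batch, seq_len, vocab) logits to (batch * seq_len, vocab)
    # for the token-level cross-entropy loss.
    loss = criterion(logits.reshape(-1, vocab_size), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The design mirrors the standard next-token objective for recurrent LMs: the LSTM reads the sequence left to right, and cross-entropy is applied at every position against the shifted targets.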
References
2017
- (Yang, Hu et al., 2017) ⇒ Zichao Yang, Zhiting Hu, Ruslan Salakhutdinov, and Taylor Berg-Kirkpatrick. (2017). “Improved Variational Autoencoders for Text Modeling Using Dilated Convolutions.” In: Proceedings of the 34th International Conference on Machine Learning (ICML-2017).
- QUOTE: Recent work on generative modeling of text has found that variational auto-encoders (VAE) incorporating LSTM decoders perform worse than simpler LSTM language models (Bowman et al., 2015). This negative result is so far poorly understood, but has been attributed to the propensity of LSTM decoders to ignore conditioning information from the encoder. …
2015
- (Bowman et al., 2015) ⇒ Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz, and Samy Bengio. (2015). “Generating Sentences from a Continuous Space.” arXiv preprint arXiv:1511.06349.