Neural Decoding Network
A Neural Decoding Network is an Artificial Neural Network that is trained to generate an output sequence, typically conditioned on an encoded representation of an input sequence.
- AKA: Decoder Neural Network.
- Context:
- It can be produced by a Decoder Neural Network Training System that implements a Decoder Neural Network Training Algorithm to solve a Decoder Neural Network Training Task.
- It can range from being a Decoding Recurrent Neural Network to being a Decoder Convolutional Neural Network.
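As a rough illustration of the recurrent case, a decoder network can be sketched as an autoregressive loop that starts from an encoder-produced context vector and emits one token per step, feeding each emitted token back in as the next input. The sketch below is minimal and hypothetical: all weight matrices, dimensions, and the simple Elman-style recurrence are illustrative stand-ins (a GRU or LSTM cell would refine the hidden-state update), not any particular published architecture.

```python
import numpy as np

# Toy, randomly initialized parameters; all names and sizes are illustrative.
np.random.seed(0)
vocab_size, hidden_size = 5, 8
W_hh = np.random.randn(hidden_size, hidden_size) * 0.1  # hidden-to-hidden weights
W_eh = np.random.randn(hidden_size, hidden_size) * 0.1  # input-embedding-to-hidden weights
W_ho = np.random.randn(vocab_size, hidden_size) * 0.1   # hidden-to-output-logits weights
embed = np.random.randn(vocab_size, hidden_size) * 0.1  # token embedding table
BOS, EOS = 0, 4                                         # hypothetical special token ids

def decode_greedy(context, max_len=10):
    """Autoregressively emit tokens, conditioned on an encoder context vector."""
    h = np.tanh(context)              # initialize the hidden state from the context
    token, output = BOS, []
    for _ in range(max_len):
        # Simple recurrent update; a GRU/LSTM cell would add gating here.
        h = np.tanh(W_hh @ h + W_eh @ embed[token])
        logits = W_ho @ h             # unnormalized scores over the vocabulary
        token = int(np.argmax(logits))  # greedy choice of the next token
        if token == EOS:
            break
        output.append(token)
    return output

tokens = decode_greedy(np.random.randn(hidden_size))
```

In practice the greedy `argmax` step is often replaced by sampling or beam search, and training substitutes the gold next token for the model's own prediction (teacher forcing).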
- Example(s):
- a Decoder GRU-RNN,
- a Decoder LSTM-RNN,
- an Attention Decoder RNN,
- a Neural Auto-Decoding Network.
- …
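The Attention Decoder RNN example above differs from a plain decoder in that, at each step, it recomputes a context vector as a weighted sum of all encoder hidden states rather than relying on a single fixed context. A minimal sketch of that per-step computation, using simple dot-product scoring (the alignment function and all variable names are assumptions for illustration, not a specific published formulation), is:

```python
import numpy as np

# Hypothetical toy states: six encoder positions, hidden size four.
np.random.seed(1)
hidden = 4
encoder_states = np.random.randn(6, hidden)  # one hidden state per source position
decoder_state = np.random.randn(hidden)      # current decoder hidden state

def attention_context(dec_h, enc_hs):
    """Weight encoder states by their relevance to the decoder state."""
    scores = enc_hs @ dec_h                  # one alignment score per source position
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ enc_hs, weights         # context vector, attention weights

context, weights = attention_context(decoder_state, encoder_states)
```

The resulting `context` vector is then fed into the decoder's next hidden-state update alongside the previously emitted token.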
- Counter-Example(s):
- a Neural Encoding Network.
- See: Neural Machine Translation System, Text Error Correction System, WikiText Error Correction System, Seq2Seq Neural Network, Natural Language Processing System.
References
2016
- (Xie et al., 2016) ⇒ Ziang Xie, Anand Avati, Naveen Arivazhagan, Dan Jurafsky, and Andrew Y. Ng. (2016). “Neural Language Correction with Character-Based Attention.” In: CoRR, abs/1603.09727.
2014
- (Cho et al., 2014a) ⇒ Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. (2014). “Learning Phrase Representations Using RNN Encoder-Decoder for Statistical Machine Translation”. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, (EMNLP-2014). arXiv:1406.1078
2011
- (Glorot et al., 2011a) ⇒ Xavier Glorot, Antoine Bordes, and Yoshua Bengio. (2011). “Domain Adaptation for Large-scale Sentiment Classification: A Deep Learning Approach.” In: Proceedings of the 28th International Conference on Machine Learning (ICML-11).