Recurrent Neural Network Training Task
A Recurrent Neural Network (RNN) Training Task is a neural network training task that accepts a recurrent neural network model.
- Context:
- It can be solved by a Recurrent Neural Network Training System (that applies an RNN training algorithm), as in the minimal sketch after this list.
- …
- Example(s):
- …
- Counter-Example(s):
- See: Parametrized Dynamical System, RNN Language Model Training.
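The following is a minimal illustrative sketch of such a training system, assuming PyTorch as the implementation library; the model class, synthetic data, and hyperparameters are hypothetical choices for illustration, not part of the task definition above.

```python
# Minimal sketch of an RNN Training System (assumes PyTorch; names such as
# SimpleRNNRegressor and the synthetic task are illustrative, not prescribed).
import torch
import torch.nn as nn

class SimpleRNNRegressor(nn.Module):
    """The RNN model accepted by the training task: maps a sequence to a scalar."""
    def __init__(self, input_size=1, hidden_size=16):
        super().__init__()
        self.rnn = nn.RNN(input_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                      # x: (batch, time, input_size)
        _, h_n = self.rnn(x)                   # final hidden state: (1, batch, hidden_size)
        return self.head(h_n.squeeze(0))       # prediction: (batch, 1)

# Synthetic training data: predict the sum of each input sequence.
torch.manual_seed(0)
x = torch.randn(256, 20, 1)                    # 256 sequences of length 20
y = x.sum(dim=1)                               # per-sequence target

model = SimpleRNNRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

# The RNN training algorithm here is gradient descent with backpropagation
# through time: backward() unrolls the gradient over all 20 time steps.
for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
print(f"final training loss: {loss.item():.4f}")
```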
References
2007
- Yoshua Bengio. http://www.iro.umontreal.ca/~bengioy/yoshua_en/research.html
- One of the highest impact results I obtained was about the difficulty of learning sequential dependencies, either in recurrent neural networks or in dynamical graphical models (such as Hidden Markov Models). The paper below suggests that with parametrized dynamical systems (such as a recurrent neural network), the error gradient propagated through many time steps is a poor source of information for learning to capture statistical dependencies that are temporally remote. The mathematical result is that either information is not easily transmitted (it is lost exponentially fast when trying to propagate it from the past to the future through a context variable, or it is vulnerable to perturbations and noise), or the gradients relating temporally remote events become exponentially small for larger temporal differences.
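The exponential decay described in this quote can be sketched with the standard backpropagation-through-time Jacobian product; the notation below (hidden state h_t, recurrent weight matrix W, activation σ with pre-activation z_i) is assumed for illustration and is not taken from the source.

```latex
% Vanishing-gradient sketch for a simple RNN with h_i = \sigma(W h_{i-1} + U x_i + b).
\[
  \frac{\partial \mathcal{L}_t}{\partial h_k}
  = \frac{\partial \mathcal{L}_t}{\partial h_t}
    \prod_{i=k+1}^{t} \frac{\partial h_i}{\partial h_{i-1}}
  = \frac{\partial \mathcal{L}_t}{\partial h_t}
    \prod_{i=k+1}^{t} \operatorname{diag}\!\bigl(\sigma'(z_i)\bigr)\, W
\]
\[
  \left\lVert \prod_{i=k+1}^{t} \frac{\partial h_i}{\partial h_{i-1}} \right\rVert
  \;\le\; \bigl(\gamma_{\sigma}\,\lVert W \rVert\bigr)^{t-k},
  \qquad \gamma_{\sigma} = \sup_z \lvert \sigma'(z) \rvert .
\]
% Whenever \gamma_\sigma \lVert W \rVert < 1, this factor shrinks exponentially in the
% temporal gap t - k, matching the "exponentially small" gradients described above.
```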