Bidirectional Recurrent Neural Network (BiRNN) Training Algorithm
A Bidirectional Recurrent Neural Network (BiRNN) Training Algorithm is a bidirectional sequence labeling algorithm that uses an RNN training algorithm to train a bidirectional RNN network (see the sketch after the list below).
- Context:
- It can be trained by a BiRNN Training System (that solves a BiRNN training task).
- Example(s):
- a BiLSTM Training Algorithm.
- Counter-Example(s):
- a Unidirectional RNN Training Algorithm.
- See: Multilayer Perceptron, Time Delay Neural Network, BiLSTM/CRF.
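As a minimal sketch of how such an algorithm pairs a standard RNN training procedure with a bidirectional network, the following uses PyTorch's nn.RNN with bidirectional=True and ordinary gradient descent; the layer sizes, loss, and toy data are hypothetical placeholders, not part of the source.

```python
import torch
import torch.nn as nn

# Minimal sketch (assumed setup): a bidirectional RNN for sequence labeling,
# trained with an ordinary RNN training algorithm (backpropagation through
# time via autograd plus SGD). All sizes and data here are hypothetical.
INPUT_SIZE, HIDDEN_SIZE, NUM_TAGS, SEQ_LEN, BATCH = 8, 16, 4, 10, 3

birnn = nn.RNN(INPUT_SIZE, HIDDEN_SIZE, batch_first=True, bidirectional=True)
# Each step's output concatenates the forward and backward hidden states,
# so the labeling head sees 2 * HIDDEN_SIZE features.
head = nn.Linear(2 * HIDDEN_SIZE, NUM_TAGS)

optimizer = torch.optim.SGD(
    list(birnn.parameters()) + list(head.parameters()), lr=0.1
)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(BATCH, SEQ_LEN, INPUT_SIZE)       # toy input sequences
y = torch.randint(0, NUM_TAGS, (BATCH, SEQ_LEN))  # toy per-step labels

for epoch in range(5):
    optimizer.zero_grad()
    states, _ = birnn(x)        # (BATCH, SEQ_LEN, 2 * HIDDEN_SIZE)
    logits = head(states)       # per-step tag scores
    loss = loss_fn(logits.reshape(-1, NUM_TAGS), y.reshape(-1))
    loss.backward()             # BPTT over both directions
    optimizer.step()
```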
References
2018
- (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/Bidirectional_recurrent_neural_networks Retrieved:2018-1-4.
- Bidirectional Recurrent Neural Networks (BRNNs) were invented in 1997 by Schuster and Paliwal.[1] BRNNs were introduced to increase the amount of input information available to the network. For example, multilayer perceptrons (MLPs) and time delay neural networks (TDNNs) have limitations on input data flexibility, as they require their input data to be fixed. Standard recurrent neural networks (RNNs) also have restrictions, as the future input information cannot be reached from the current state. By contrast, BRNNs do not require their input data to be fixed, and their future input information is reachable from the current state. The basic idea of BRNNs is to connect two hidden layers of opposite directions to the same output. With this structure, the output layer can get information from past and future states.
BRNNs are especially useful when the context of the input is needed. For example, in handwriting recognition, performance can be enhanced by knowledge of the letters located before and after the current letter.
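The idea of two hidden layers of opposite directions connected to the same output can be made concrete with a short forward-pass sketch; the weight matrices, dimensions, and plain tanh recurrence below are illustrative assumptions, not taken from the source.

```python
import numpy as np

# Sketch of a BRNN forward pass (hypothetical sizes, random weights):
# one hidden layer reads the sequence past -> future, the other
# future -> past, and each step's output combines both hidden states.
rng = np.random.default_rng(0)
T, D, H, O = 6, 4, 5, 3        # time steps, input dim, hidden dim, output dim
x = rng.normal(size=(T, D))    # toy input sequence

Wf, Uf = rng.normal(size=(H, D)), rng.normal(size=(H, H))  # forward cell
Wb, Ub = rng.normal(size=(H, D)), rng.normal(size=(H, H))  # backward cell
V = rng.normal(size=(O, 2 * H))                            # shared output layer

h_fwd = np.zeros((T, H))
h_bwd = np.zeros((T, H))

for t in range(T):                       # past -> future
    prev = h_fwd[t - 1] if t > 0 else np.zeros(H)
    h_fwd[t] = np.tanh(Wf @ x[t] + Uf @ prev)

for t in reversed(range(T)):             # future -> past
    nxt = h_bwd[t + 1] if t < T - 1 else np.zeros(H)
    h_bwd[t] = np.tanh(Wb @ x[t] + Ub @ nxt)

# Each output sees information from both past (h_fwd) and future (h_bwd).
y = np.stack([V @ np.concatenate([h_fwd[t], h_bwd[t]]) for t in range(T)])
print(y.shape)  # (6, 3)
```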
- ↑ Schuster, Mike, and Kuldip K. Paliwal. “Bidirectional Recurrent Neural Networks.” IEEE Transactions on Signal Processing 45.11 (1997): 2673-2681.
2005
- (Graves & Schmidhuber, 2005) ⇒ Alex Graves, and Jürgen Schmidhuber. (2005). “Framewise Phoneme Classification with Bidirectional LSTM and Other Neural Network Architectures.” In: Neural Networks Journal, 18(5-6). doi:10.1016/j.neunet.2005.06.042