Unsupervised Pre-Training Algorithm
An Unsupervised Pre-Training Algorithm is a pre-training algorithm that is an unsupervised learning algorithm: it fits a model's initial parameters to unlabeled data before any supervised fine-tuning.
- Context:
- It can be used for Neural Network Weight Initialization.
- …
- Counter-Example(s):
- a Supervised Pre-Training Algorithm.
- See: Deep Learning Algorithm, Feature Learning Algorithm.
References
2012
- (Dahl et al., 2012) ⇒ George E. Dahl, Dong Yu, Li Deng, and Alex Acero. (2012). “Context-Dependent Pre-trained Deep Neural Networks for Large-Vocabulary Speech Recognition.” In: IEEE Transactions on Audio, Speech, and Language Processing, 20(1). doi:10.1109/TASL.2011.2134090
- QUOTE: Recently, a major advance has been made in training densely connected, directed belief nets with many hidden layers. The resulting deep belief nets learn a hierarchy of nonlinear feature detectors that can capture complex statistical patterns in data. The deep belief net training algorithm suggested in [24] first initializes the weights of each layer individually in a purely unsupervised way and then fine-tunes the entire network using labeled data. This semi-supervised approach using deep models has proved effective in a number of applications, including coding and classification for speech, audio, text, and image data ([25]–[29]).
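The two-stage scheme the quote describes — layer-by-layer unsupervised weight initialization, followed by supervised fine-tuning on labeled data — can be sketched as follows. This is a hedged NumPy toy, not the deep belief net procedure of [24]: tied-weight linear autoencoders stand in for the RBM layers, only a logistic output layer is fine-tuned (the pre-trained stack stays frozen, for brevity), and the data, layer sizes, and learning rates are all assumptions.

```python
# Hedged sketch of greedy layer-wise unsupervised pre-training followed by
# supervised fine-tuning. Linear autoencoders replace RBMs for simplicity.
import numpy as np

rng = np.random.default_rng(1)

def pretrain_layer(H, d_out, steps=150, lr=0.01):
    """Unsupervised stage: fit tied-weight autoencoder weights W that
    minimize ||H W W^T - H||^2 on the previous layer's output H."""
    W = rng.normal(scale=0.1, size=(H.shape[1], d_out))
    for _ in range(steps):
        E = H @ W @ W.T - H
        W -= lr * (H.T @ E @ W + E.T @ H @ W) / len(H)
    return W

X = rng.normal(size=(300, 10))
X[:, 0] *= 3.0                           # give one feature extra variance
y = (X[:, 0] > 0).astype(float)          # labels, used only in stage 2

# Stage 1: purely unsupervised, greedy layer-by-layer initialization.
W1 = pretrain_layer(X, 6)                # layer 1 trained on the raw inputs
W2 = pretrain_layer(X @ W1, 3)           # layer 2 trained on layer 1's output

# Stage 2: supervised fine-tuning with labeled data (here only a logistic
# output layer on top of the frozen pre-trained stack).
H = X @ W1 @ W2
w, b = np.zeros(3), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(H @ w + b)))   # logistic predictions
    g = p - y                                # cross-entropy gradient signal
    w -= 0.1 * H.T @ g / len(X)
    b -= 0.1 * g.mean()

train_acc = float(((p > 0.5) == (y > 0.5)).mean())
```

Each `pretrain_layer` call sees only unlabeled activations, matching the quote's "purely unsupervised" first stage; labels enter only in the second, fine-tuning stage, which is why the overall scheme is described as semi-supervised.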
2010
- (Erhan et al., 2010) ⇒ Dumitru Erhan, Yoshua Bengio, Aaron Courville, Pierre-Antoine Manzagol, Pascal Vincent, and Samy Bengio. (2010). “Why Does Unsupervised Pre-training Help Deep Learning?” In: The Journal of Machine Learning Research, 11.
- QUOTE: Much recent research has been devoted to learning algorithms for deep architectures such as Deep Belief Networks and stacks of auto-encoder variants, with impressive results obtained in several areas, mostly on vision and language data sets. The best results obtained on supervised learning tasks involve an unsupervised learning component, usually in an unsupervised pre-training phase. Even though these new algorithms have enabled training deep models, many questions remain as to the nature of this difficult learning problem.