Feature Transformation Operation
A Feature Transformation Operation is a transformation operation applied to a predictor feature to produce a new feature value or representation, such as a feature normalization or a cross-product feature combination.
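For illustration, the following is a minimal sketch (using NumPy) of two common feature transformation operations applied to a numeric predictor feature: z-score normalization and a log transform. The function names and example data are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

def zscore_normalize(x: np.ndarray) -> np.ndarray:
    """Z-score normalization: rescale the feature to zero mean, unit variance."""
    return (x - x.mean()) / x.std()

def log_transform(x: np.ndarray) -> np.ndarray:
    """Log transform: compress the range of a skewed, non-negative feature."""
    return np.log1p(x)

# A raw predictor feature (e.g., a count-valued column) -- illustrative data.
raw_feature = np.array([1.0, 3.0, 10.0, 100.0, 1000.0])

print(zscore_normalize(raw_feature))  # zero mean, unit variance
print(log_transform(raw_feature))     # compressed dynamic range
```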
References
2016
- (Cheng et al., 2016) ⇒ Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, Rohan Anil, Zakaria Haque, Lichan Hong, Vihan Jain, Xiaobing Liu, and Hemal Shah. (2016). “Wide & Deep Learning for Recommender Systems.” In: Proceedings of the 1st Workshop on Deep Learning for Recommender Systems. ISBN:978-1-4503-4795-2 doi:10.1145/2988450.2988454
- QUOTE: Generalized linear models with nonlinear feature transformations are widely used for large-scale regression and classification problems with sparse inputs. Memorization of feature interactions through a wide set of cross-product feature transformations are effective and interpretable, while generalization requires more feature engineering effort.
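A hypothetical sketch of the cross-product feature transformation the quote describes: two sparse categorical features are crossed into a single binary indicator feature that a wide linear model can memorize. The function `cross_product_transform` and its signature are my own illustration; the AND(gender=female, language=en) cross follows the paper's example.

```python
def cross_product_transform(features: dict, cross: tuple) -> dict:
    """Cross-product transformation: emit a binary indicator that is 1.0
    only when every constituent categorical feature takes the named value.

    `features` maps feature names to string values for one example;
    `cross` is a tuple of (feature_name, value) pairs defining the cross."""
    crossed_name = "_AND_".join(f"{name}={value}" for name, value in cross)
    fired = all(features.get(name) == value for name, value in cross)
    return {crossed_name: 1.0 if fired else 0.0}

# Example cross from the wide-model setting: AND(gender=female, language=en).
example = {"gender": "female", "language": "en"}
print(cross_product_transform(example, (("gender", "female"), ("language", "en"))))
# {'gender=female_AND_language=en': 1.0}
```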
2013
- (Yu et al., 2013) ⇒ Dong Yu, Michael L. Seltzer, Jinyu Li, Jui-Ting Huang, and Frank Seide. (2013). “Feature Learning in Deep Neural Networks - Studies on Speech Recognition Tasks.” arXiv preprint arXiv:1301.3605
- ABSTRACT: Recent studies have shown that deep neural networks (DNNs) perform significantly better than shallow networks and Gaussian mixture models (GMMs) on large vocabulary speech recognition tasks. In this paper, we argue that the improved accuracy achieved by the DNNs is the result of their ability to extract discriminative internal representations that are robust to the many sources of variability in speech signals. We show that these representations become increasingly insensitive to small perturbations in the input with increasing network depth, which leads to better speech recognition performance with deeper networks. We also show that DNNs cannot extrapolate to test samples that are substantially different from the training examples. If the training data are sufficiently representative, however, internal features learned by the DNN are relatively stable with respect to speaker differences, bandwidth differences, and environment distortion. This enables DNN-based recognizers to perform as well or better than state-of-the-art systems based on GMMs or shallow networks without the need for explicit model adaptation or feature normalization.
2001
- (Kusiak, 2001) ⇒ Andrew Kusiak. (2001). “Feature Transformation Methods in Data Mining.” In: IEEE Transactions on Electronics Packaging Manufacturing, 24(3).