Feature Learning Task
A Feature Learning Task is a Machine Learning Task that learns data representations from input data.
- AKA: Representation Learning Task.
- Context:
- Task Input: Raw Data.
- Task Output: Data Representation.
- Task Requirements: a Feature Learning Neural Network Model, a Feature Learning Directed Graphical Model, or ...
- It can be solved by a Feature Learning System (that implements a feature learning algorithm).
- It can range from being an Unsupervised Feature Learning Task to being a Supervised Feature Learning Task.
- It can be built specifically to learn features from labeled data.
- …
- Example(s):
- an Autoencoder Training Task,
- a (Supervised) Dictionary Learning Task,
- an Independent Component Analysis (ICA) Task,
- …
- Counter-Example(s):
- a Manual Feature Engineering Task, which relies on hand-designed rather than learned features.
- See: Speech Recognition Task, Signal Processing Task, Natural Language Processing Deep Learning Algorithm, Feature Detector, Derived Feature, Cluster Analysis, Multilayer Perceptron.
References
2021
- (Wikipedia, 2021) ⇒ https://en.wikipedia.org/wiki/Feature_learning Retrieved:2021-5-29.
- In machine learning, feature learning or representation learning (Bengio et al., 2013) is a set of techniques that allows a system to automatically discover the representations needed for feature detection or classification from raw data. This replaces manual feature engineering and allows a machine to both learn the features and use them to perform a specific task.
Feature learning is motivated by the fact that machine learning tasks such as classification often require input that is mathematically and computationally convenient to process. However, real-world data such as images, video, and sensor data has not yielded to attempts to algorithmically define specific features. An alternative is to discover such features or representations through examination, without relying on explicit algorithms.
Feature learning can be either supervised or unsupervised.
- In supervised feature learning, features are learned using labeled input data. Examples include supervised neural networks, multilayer perceptrons, and (supervised) dictionary learning.
- In unsupervised feature learning, features are learned with unlabeled input data. Examples include dictionary learning, independent component analysis, autoencoders, matrix factorization,[1] and various forms of clustering.[2][3][4]
- ↑ Nathan Srebro; Jason D. M. Rennie; Tommi S. Jaakkola (2004). Maximum-Margin Matrix Factorization. NIPS.
- ↑ Coates, Adam; Lee, Honglak; Ng, Andrew Y. (2011). An analysis of single-layer networks in unsupervised feature learning (PDF). Int'l Conf. on AI and Statistics (AISTATS). Archived from the original (PDF) on 2017-08-13. Retrieved 2014-11-24.
- ↑ Csurka, Gabriella; Dance, Christopher C.; Fan, Lixin; Willamowski, Jutta; Bray, Cédric (2004). Visual categorization with bags of keypoints (PDF). ECCV Workshop on Statistical Learning in Computer Vision.
- ↑ Daniel Jurafsky; James H. Martin (2009). Speech and Language Processing. Pearson Education International. pp. 145–146.
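To make the supervised/unsupervised distinction above concrete, the following is a minimal sketch of unsupervised feature learning: a single-hidden-layer autoencoder implemented with NumPy only. All names, shapes, and hyperparameters (the synthetic data, `n_hidden`, the learning rate) are illustrative assumptions, not drawn from the cited sources.

```python
# A minimal sketch of unsupervised feature learning with a single-hidden-layer
# autoencoder. No labels are used: the reconstruction error alone drives learning.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "raw data": 200 samples of 20-dimensional input (illustrative only).
X = rng.normal(size=(200, 20))

n_in, n_hidden = X.shape[1], 5          # learn a 5-dimensional representation
W_enc = rng.normal(scale=0.1, size=(n_in, n_hidden))
b_enc = np.zeros(n_hidden)
W_dec = rng.normal(scale=0.1, size=(n_hidden, n_in))
b_dec = np.zeros(n_in)

def encode(X):
    return np.tanh(X @ W_enc + b_enc)   # the learned features (the representation)

lr = 0.01
for step in range(500):
    H = encode(X)                        # hidden representation
    X_hat = H @ W_dec + b_dec            # reconstruction of the raw input
    err = X_hat - X                      # reconstruction error drives learning
    # Backpropagate the mean-squared reconstruction loss.
    grad_W_dec = H.T @ err / len(X)
    grad_b_dec = err.mean(axis=0)
    dH = (err @ W_dec.T) * (1 - H**2)    # tanh derivative
    grad_W_enc = X.T @ dH / len(X)
    grad_b_enc = dH.mean(axis=0)
    W_dec -= lr * grad_W_dec; b_dec -= lr * grad_b_dec
    W_enc -= lr * grad_W_enc; b_enc -= lr * grad_b_enc

features = encode(X)                     # task output: a learned data representation
print(features.shape)                    # (200, 5)
```

The learned `encode` function is the task output in the sense of the Context section above: a mapping from raw data to a lower-dimensional data representation, learned without labels.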
2013
- (Bengio et al., 2013) ⇒ Yoshua Bengio, Aaron Courville, and Pascal Vincent. (2013). “Representation Learning: A Review and New Perspectives.” In: IEEE Transactions on Pattern Analysis and Machine Intelligence Journal, 35(8). doi:10.1109/TPAMI.2013.50
- QUOTE: This paper is about representation learning, i.e., learning representations of the data that make it easier to extract useful information when building classifiers or other predictors.
In the case of probabilistic models, a good representation is often one that captures the posterior distribution of the underlying explanatory factors for the observed input. A good representation is also one that is useful as input to a supervised predictor. Among the various ways of learning representations, this paper focuses on deep learning methods: those that are formed by the composition of multiple non-linear transformations, with the goal of yielding more abstract, and ultimately more useful, representations.
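The phrase "composition of multiple non-linear transformations" can be illustrated in a few lines of NumPy. The weights below are random stand-ins for what would, in practice, be learned; all sizes and names are illustrative assumptions:

```python
# A minimal sketch of a deep representation as a composition of
# non-linear transformations: h_k = tanh(h_{k-1} @ W_k).
import numpy as np

rng = np.random.default_rng(1)
layer_sizes = [20, 16, 8, 4]            # raw input -> progressively more abstract
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def deep_representation(x, weights):
    """Apply each non-linear transformation in turn."""
    h = x
    for W in weights:
        h = np.tanh(h @ W)              # one non-linear transformation
    return h                            # the most abstract representation

x = rng.normal(size=(1, 20))            # one raw input example
print(deep_representation(x, weights).shape)   # (1, 4)
```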
2008
- (Argyriou et al., 2008) ⇒ Andreas Argyriou, Theodoros Evgeniou, and Massimiliano Pontil. (2008). “Convex Multi-task Feature Learning.” In: Machine Learning Journal, 73(3). doi:10.1007/s10994-007-5040-8
- QUOTE: We present a method for learning sparse representations shared across multiple tasks. … Learning common sparse representations across multiple tasks or datasets may also be of interest for example for data compression. While the problem of learning (or selecting) sparse representations has been extensively studied either for single-task supervised learning (e.g., using 1-norm regularization) or for unsupervised learning (e.g., using principal component analysis (PCA) or independent component analysis (ICA)), there has been only limited work [3, 9, 31, 48] in the multi-task supervised learning setting.
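As a hedged illustration of the shared-sparsity setting described in the quote, scikit-learn's MultiTaskLasso fits all tasks jointly under an ℓ2,1 penalty, which zeroes out the same features across every task. This is a sketch of the general multi-task sparse feature selection idea on synthetic data, not the authors' exact algorithm:

```python
# A sketch of learning a sparse feature set shared across multiple tasks,
# using scikit-learn's MultiTaskLasso (l2,1-penalized joint regression).
# The data below is synthetic and all sizes are illustrative assumptions.
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(2)
n_samples, n_features, n_tasks = 100, 30, 4

X = rng.normal(size=(n_samples, n_features))
# Only the first 5 features are relevant, and they are shared by all tasks.
W_true = np.zeros((n_features, n_tasks))
W_true[:5] = rng.normal(size=(5, n_tasks))
Y = X @ W_true + 0.01 * rng.normal(size=(n_samples, n_tasks))

model = MultiTaskLasso(alpha=0.1).fit(X, Y)

# Features selected jointly across tasks: columns of coef_ that are non-zero
# for any task (coef_ has shape (n_tasks, n_features)).
shared_support = np.flatnonzero(np.any(model.coef_ != 0, axis=0))
print(shared_support)   # expected to recover (roughly) the first 5 features
```

Because the penalty acts on whole rows of the coefficient matrix, a feature is either kept for all tasks or discarded for all tasks, which is one concrete reading of "learning common sparse representations across multiple tasks."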