Feature Space
A Feature Space is a formal space defined by a fixed set of predictor features.
- Context:
- It can range from being a Numeric Feature Space (with feature vectors) to being a Mixed Feature Space (with feature tuples), as in the sketch after this list.
- It can range from being a Low-Dimensional Feature Space to being a High-Dimensional Feature Space.
- It can (typically) be referenced by a Feature Record.
- It can be decomposed into Feature Subspaces.
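The contrast between a numeric and a mixed feature space can be made concrete with a minimal sketch; all feature names below are hypothetical:

```python
# Minimal sketch contrasting a Numeric Feature Space, whose points are
# fixed-length numeric vectors, with a Mixed Feature Space, whose points
# are feature tuples over heterogeneous types. All names are illustrative.

# Numeric feature space: each point is a vector in R^3 over the fixed
# feature set (height_cm, weight_kg, age_years).
numeric_point = (172.0, 68.5, 41.0)

# Mixed feature space: each point is a tuple over the fixed feature set
# (age_years: numeric, smoker: boolean, blood_type: categorical).
mixed_point = (41.0, True, "A+")

# A feature space fixes the feature set, so every feature record drawn
# from it has the same arity and feature order.
assert len(numeric_point) == len(mixed_point) == 3
```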
- Example(s):
- a Text-Token Feature Space, used for task X by system Xz.
- a Text Item Feature Space, used for task Y by system Yz.
- an Image Feature Space, used for task Z by system Zz.
- …
- Counter-Example(s):
- a Metric Space.
- a Search Space.
- an Abstract Space.
- See: Feature Space Transformation Task, Feature Selection Task, Curse of Dimensionality, Dimensionality Reduction.
References
2018
- (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/feature_(machine_learning)#Extensions Retrieved:2018-4-3.
- In pattern recognition and machine learning, a feature vector is an n-dimensional vector of numerical features that represent some object. Many algorithms in machine learning require a numerical representation of objects, since such representations facilitate processing and statistical analysis. When representing images, the feature values might correspond to the pixels of an image, while when representing texts the features might be the frequencies of occurrence of textual terms. Feature vectors are equivalent to the vectors of explanatory variables used in statistical procedures such as linear regression. Feature vectors are often combined with weights using a dot product in order to construct a linear predictor function that is used to determine a score for making a prediction.
The vector space associated with these vectors is often called the feature space. In order to reduce the dimensionality of the feature space, a number of dimensionality reduction techniques can be employed.
Higher-level features can be obtained from already available features and added to the feature vector; for example, for the study of diseases the feature 'Age' is useful and is defined as Age = 'Year of death' minus 'Year of birth'. This process is referred to as feature construction.[1][2] Feature construction is the application of a set of constructive operators to a set of existing features, resulting in the construction of new features. Examples of such constructive operators include checking for the equality conditions {=, ≠}, the arithmetic operators {+, −, ×, /}, the array operators {max(S), min(S), average(S)}, as well as other more sophisticated operators, for example count(S, C),[3] which counts the number of features in the feature vector S satisfying some condition C, or, for example, distances to other recognition classes generalized by some accepting device. Feature construction has long been considered a powerful tool for increasing both accuracy and understanding of structure, particularly in high-dimensional problems.[4] Applications include studies of disease and emotion recognition from speech.[5]
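As a minimal illustration of the dot-product construction described in the quote above, the following sketch scores one feature vector against a hypothetical weight vector; all values are illustrative:

```python
# Sketch of a linear predictor function: the score is the dot product
# of a feature vector with a weight vector (plus an intercept).
# Feature and weight values are hypothetical.
features = [1.0, 0.5, 3.2]    # feature vector x (e.g., term frequencies)
weights = [0.8, -1.1, 0.25]   # learned weight vector w
bias = 0.1                    # intercept term b

score = sum(w * x for w, x in zip(weights, features)) + bias  # w . x + b
print(score)  # 1.15; thresholding this score yields a prediction
```

The quote also mentions reducing the dimensionality of the feature space; a minimal sketch of one such technique, principal component analysis via the SVD, on synthetic data might look like:

```python
import numpy as np

# Sketch of dimensionality reduction: project a 10-D feature space onto
# its top-2 principal components via the SVD (PCA). Data is synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))       # 100 points in a 10-D feature space

Xc = X - X.mean(axis=0)              # center each feature
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X_reduced = Xc @ Vt[:2].T            # coordinates in a 2-D feature subspace

print(X_reduced.shape)               # (100, 2)
```

The constructive operators listed in the quote can likewise be sketched directly; the record fields and the condition below are illustrative assumptions:

```python
# Sketch of feature construction: new features are built from existing
# ones with constructive operators and appended to the feature vector.
record = {"year_of_birth": 1921, "year_of_death": 1998,
          "weekly_visits": [2, 0, 3, 1]}

# Arithmetic operator {-}: Age = 'Year of death' - 'Year of birth'.
age = record["year_of_death"] - record["year_of_birth"]                   # 77

# Array operators over a feature list S.
max_visits = max(record["weekly_visits"])                                 # 3
avg_visits = sum(record["weekly_visits"]) / len(record["weekly_visits"])  # 1.5

# count(S, C): the number of entries of S satisfying a condition C.
busy_weeks = sum(1 for v in record["weekly_visits"] if v >= 2)            # 2

# The constructed features are appended to the original feature vector.
augmented_features = [age, max_visits, avg_visits, busy_weeks]
```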
2007
- (Jiang & Zhai, 2007) ⇒ Jing Jiang, and ChengXiang Zhai. (2007). “A Systematic Exploration of the Feature Space for Relation Extraction.” In: Proceedings of NAACL/HLT Conference (NAACL/HLT 2007).
- QUOTE: … we systematically explore a large space of features for relation extraction and evaluate the effectiveness of different feature subspaces. We present a general definition of feature spaces based on a graphic representation of relation instances …
- ↑ Liu, H., Motoda, H. (1998). Feature Selection for Knowledge Discovery and Data Mining. Kluwer Academic Publishers, Norwell, MA, USA.
- ↑ Piramuthu, S., Sikora, R. T. (2009). Iterative Feature Construction for Improving Inductive Learning Algorithms. In: Expert Systems with Applications, 36(2), pp. 3401-3406.
- ↑ Bloedorn, E., Michalski, R. (1998). Data-Driven Constructive Induction: A Methodology and Its Applications. In: IEEE Intelligent Systems, Special Issue on Feature Transformation and Subset Selection, pp. 30-37, March/April 1998.
- ↑ Breiman, L., Friedman, J., Olshen, R., Stone, C. (1984). Classification and Regression Trees. Wadsworth.
- ↑ Sidorova, J., Badia, T. (2009). Syntactic Learning for ESEDA.1, Tool for Enhanced Speech Emotion Detection and Analysis. In: Internet Technology and Secured Transactions Conference (ICITST-2009), London, November 9–12. IEEE.