Supervised Class Prediction Algorithm
A supervised class prediction algorithm is a supervised predictive algorithm that performs data-driven classification, and that can be applied by a supervised classification system to solve a supervised classification task.
- AKA: Supervised Class Prediction Technique.
- Context:
- It can range from being an Eager Supervised Classification Algorithm to being a Lazy Supervised Classification Algorithm (a minimal sketch contrasting the two follows this list).
- It can range from being a Model-based Supervised Classification Algorithm to being an Instance-based Supervised Classification Algorithm.
- It can range from being a Fully-Supervised Classification Algorithm to being a Semi-Supervised Classification Algorithm.
- It can range from being a Univariate Supervised Classification Algorithm to being a Multivariate Supervised Classification Algorithm.
- It can range from being a Supervised Binary Classification Algorithm to being a Supervised Multi-Class Classification Algorithm.
- It can range from being a Supervised Single-Label Classification Algorithm to being a Supervised Multi-Label Classification Algorithm.
- It can range from being a Generative Classification Algorithm to being a Discriminative Classification Algorithm.
- It can range from being an Imbalanced Supervised Classification Algorithm to being a Balanced Supervised Classification Algorithm (depending on whether it can handle an Imbalanced Training Dataset).
- It can range from being a Supervised Tuple Classification Algorithm, to being a Supervised Sequence Tagging Algorithm, to being a Supervised Segment Classification Algorithm, to being a Supervised Graph Classification Algorithm.
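The following is a minimal sketch (Python with scikit-learn, on a synthetic dataset; the data and parameter choices are illustrative assumptions, not part of this concept's definition) contrasting an eager learner, which induces a model at training time, with a lazy learner, which defers most work to prediction time:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier     # eager: builds a model up front
from sklearn.neighbors import KNeighborsClassifier  # lazy: stores the training instances

# Hypothetical toy dataset; any labeled (X, y) pair would do.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

eager = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)  # model induced here
lazy = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)      # fit() only memorizes

print("eager (decision tree) accuracy:", eager.score(X_test, y_test))
print("lazy (k-NN) accuracy:", lazy.score(X_test, y_test))
```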
- Example(s):
- an Eager Learning Algorithm, such as: a Decision Tree Learning Algorithm, a Neural Network Training Algorithm, a Logistic Regression Algorithm, a Naive Bayes Algorithm, a Hidden Markov Model Training Algorithm, or a Support Vector Machine Algorithm.
- a Lazy Learning Algorithm, such as a k-Nearest Neighbor Algorithm.
- an Instance-based Classification Algorithm, such as a k-Nearest Neighbor Classification Algorithm.
- a Model-based Classification Algorithm, such as: a Support Vector Machine Algorithm, a Regularized Least-Squares Algorithm, a Neural Network Training Algorithm, …
- a Domain-Specific Supervised Classification Algorithm, such as a Text Categorization Algorithm.
- …
- Counter-Example(s):
- an Unsupervised Learning Algorithm, such as a Clustering Algorithm.
- a Supervised Regression Algorithm (it predicts a continuous value rather than a class label).
- See: Detection Algorithm, Classification Model.
References
2009
- (Wikipedia, 2009) ⇒ http://en.wikipedia.org/wiki/Supervised_learning
- QUOTE: Supervised learning is a machine learning technique for deducing a function from training data. The training data consist of pairs of input objects (typically vectors), and desired outputs. The output of the function can be a continuous value (called regression), or can predict a class label of the input object (called classification). The task of the supervised learner is to predict the value of the function for any valid input object after having seen a number of training examples (i.e. pairs of input and target output). To achieve this, the learner has to generalize from the presented data to unseen situations in a "reasonable" way (see inductive bias).
(Compare with unsupervised learning.) The parallel task in human and animal psychology is often referred to as concept learning.
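As a minimal sketch of the distinction the quote draws, assuming synthetic data and scikit-learn estimators (none of which come from the quoted source), the same input vectors can be paired with a continuous target (regression) or a class-label target (classification):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                  # input objects (vectors)
y_value = X @ np.array([1.0, -2.0, 0.5])       # desired output: a continuous value
y_label = (y_value > 0).astype(int)            # desired output: a class label

regressor = LinearRegression().fit(X, y_value)                  # regression
classifier = LogisticRegression(max_iter=1000).fit(X, y_label)  # classification

x_new = rng.normal(size=(1, 3))                # a previously unseen input object
print("predicted value:", regressor.predict(x_new)[0])
print("predicted label:", classifier.predict(x_new)[0])
```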
2006
- (Caruana & Niculescu-Mizil, 2006) ⇒ Rich Caruana, and Alexandru Niculescu-Mizil. (2006). “An Empirical Comparison of Supervised Learning Algorithms.” In: Proceedings of the 23rd International Conference on Machine Learning (ICML 2006). ISBN:1-59593-383-2 doi:10.1145/1143844.1143865
- QUOTE: This paper presents results of a large-scale empirical comparison of ten supervised learning algorithms using eight performance criteria. We evaluate the performance of SVMs, neural nets, logistic regression, naive bayes, memory-based learning, random forests, decision trees, bagged trees, boosted trees, and boosted stumps on eleven binary classification problems using a variety of performance metrics: accuracy, F-score, Lift, ROC Area, average precision, precision/recall break-even point, squared error, and cross-entropy.
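A minimal sketch of this kind of comparison, assuming a synthetic binary problem and scikit-learn implementations (not the paper's eleven datasets or its full algorithm and metric suite), might score a few of the listed learners on several of the listed metrics:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "naive bayes": GaussianNB(),
    "random forest": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    score = model.predict_proba(X_te)[:, 1]    # class-1 probability, needed for ROC Area
    print(name,
          "acc=%.3f" % accuracy_score(y_te, pred),
          "F1=%.3f" % f1_score(y_te, pred),
          "AUC=%.3f" % roc_auc_score(y_te, score))
```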
2000
- (Gildea & Jurafsky, 2000) ⇒ Daniel Gildea, and Daniel Jurafsky. (2000). “Automatic Labeling of Semantic Roles.” In: Proceedings of ACL 2000.
- QUOTE: We apply statistical techniques that have been successful for these tasks, including probabilistic parsing and statistical classification. Our statistical algorithms are trained on a hand-labeled dataset, the FrameNet database (Baker et al., 1998).
1999
- (Jaakkola & Haussler, 1999) ⇒ Tommi S. Jaakkola, and David Haussler. (1999). “Exploiting Generative Models in Discriminative Classifiers.” In: Proceedings of the 1998 conference on Advances in Neural Information Processing Systems II. ISBN:0-262-11245-0
- QUOTE: Generative probability models such as hidden Markov models provide a principled way of treating missing information and dealing with variable length sequences. On the other hand, discriminative methods such as support vector machines enable us to construct flexible decision boundaries and often result in classification performance superior to that of the model based approaches. An ideal classifier should combine these two complementary approaches.
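As a minimal sketch of the generative/discriminative contrast the quote describes (not the Fisher-kernel combination the paper actually proposes), assuming synthetic data and scikit-learn models:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=300, random_state=0)

generative = GaussianNB()                 # models class-conditional densities, p(x | y)
discriminative = LinearSVC(max_iter=10000)  # models the decision boundary directly

print("generative (naive Bayes)    :", cross_val_score(generative, X, y, cv=5).mean())
print("discriminative (linear SVM) :", cross_val_score(discriminative, X, y, cv=5).mean())
```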
1995
- (Kohavi, 1995) ⇒ Ron Kohavi. (1995). “A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection.” In: Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence (IJCAI 1995).
- QUOTE: A classifier is a function that maps an unlabelled instance to a label using internal data structures. An inducer or an induction algorithm builds a classifier from a given dataset. CART and C4.5 (Breiman, Friedman, Olshen & Stone 1984; Quinlan 1993) are decision tree inducers that build decision tree classifiers. In this paper we are not interested in the specific method for inducing classifiers, but assume access to a dataset and an inducer of interest.
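A minimal sketch in Kohavi's vocabulary, assuming synthetic data and a CART-style scikit-learn decision-tree inducer: the inducer builds a classifier from each training fold, and k-fold cross-validation averages the held-out accuracies to estimate accuracy:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier  # a decision-tree inducer (CART-style)

X, y = make_classification(n_samples=300, random_state=0)
k = 10
folds = np.array_split(np.random.default_rng(0).permutation(len(y)), k)

accuracies = []
for i in range(k):
    test_idx = folds[i]
    train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
    # The inducer builds a classifier (a fitted tree) from the training fold ...
    classifier = DecisionTreeClassifier(random_state=0).fit(X[train_idx], y[train_idx])
    # ... and the classifier is evaluated on the held-out fold.
    accuracies.append(classifier.score(X[test_idx], y[test_idx]))

print("10-fold CV accuracy estimate: %.3f" % np.mean(accuracies))
```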