Supervised Learning Algorithm
A Supervised Learning Algorithm is a learning algorithm that can be implemented into a supervised learning system (to solve a supervised learning task).
- Context:
- It can range from being a Supervised Categorical Prediction Algorithm to being a Supervised Number Prediction Algorithm (such as a supervised rank prediction algorithm or a supervised continuous-prediction algorithm).
- It can range from being an Eager Learning Algorithm to being a Lazy Learning Algorithm (one that defers generalization work until a query arrives and can therefore produce a first prediction quickly).
- It can range from being a Model-based Supervised Learning Algorithm to being an Instance-based Supervised Learning Algorithm.
- It can range from being a Parametric Supervised Learning Algorithm to being a Non-Parametric Supervised Learning Algorithm.
- It can range from being a Fully-Supervised Learning Algorithm to being a Semi-Supervised Learning Algorithm.
- It can range from being an Offline Supervised Learning Algorithm to being an Online Supervised Learning Algorithm.
- It can range from being a General-Purpose Supervised Learning Algorithm to being a Domain-specific Supervised Learning Algorithm.
- It can range from being a Single-Model Supervised Learning Algorithm to being an Ensemble-based Supervised Learning Algorithm.
- It can range from being a Discriminative Learning Algorithm to being a Generative Learning Algorithm.
- ...
- Example(s):
- a Supervised Decision Tree Learning Algorithm, such as a C4.5 algorithm.
- a k-Nearest Neighbor Algorithm.
- a Statistical Regression Algorithm, such as a Logistic Regression or Linear Regression Algorithm (a few of these examples are sketched in code below).
- an Inductive Learning Algorithm.
- a Domain-specific Supervised Learning Algorithm, such as a Supervised Named Entity Recognition Algorithm.
- …
- Counter-Example(s):
- an Unsupervised Learning Algorithm, such as a clustering algorithm (which learns from unlabeled data).
- a Reinforcement Learning Algorithm.
- See: Model Selection Task, Function Fitting Algorithm, Target Attribute, Discriminatively Trained Model, Generatively Trained Model.
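The sketch below is a minimal illustration of the examples above, assuming scikit-learn is available: a decision tree learner (scikit-learn's CART, standing in for C4.5), a k-nearest neighbor learner, and a logistic regression learner are each fit to the same small labeled training set. The toy data and parameter settings are illustrative assumptions, not taken from any cited source.

```python
# Minimal sketch: three supervised learning algorithms fit to the same
# labeled training set (illustrative toy data; assumes scikit-learn).
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

# Toy training set: feature vectors x_i with categorical labels y_i.
X_train = [[0.0, 1.0], [1.0, 1.5], [2.0, 0.5], [3.0, 3.0], [4.0, 2.5], [5.0, 3.5]]
y_train = [0, 0, 0, 1, 1, 1]

# Each algorithm induces a predictive model from the same (x, y) pairs.
learners = {
    "decision tree (CART; C4.5 is a related learner)": DecisionTreeClassifier(max_depth=2),
    "k-nearest neighbor (instance-based, lazy)": KNeighborsClassifier(n_neighbors=3),
    "logistic regression (parametric, model-based)": LogisticRegression(),
}

X_new = [[1.5, 1.0], [4.5, 3.0]]  # unseen cases to predict
for name, learner in learners.items():
    learner.fit(X_train, y_train)   # supervised training on labeled examples
    print(name, "->", learner.predict(X_new))
```

Each fitted model is then applied to unseen cases via predict, which is the sense in which every one of them is a supervised learning algorithm despite their different inductive biases.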
References
2017
- (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/Gradient_boosting#Algorithm Retrieved:2017-1-21.
- In many supervised learning problems one has an output variable [math]\displaystyle{ y }[/math] and a vector of input variables [math]\displaystyle{ x }[/math] connected together via a joint probability distribution [math]\displaystyle{ P(x,y) }[/math]. Using a training set [math]\displaystyle{ \{ (x_1,y_1), \dots , (x_n,y_n) \} }[/math] of known values of [math]\displaystyle{ x }[/math] and corresponding values of [math]\displaystyle{ y }[/math], the goal is to find an approximation [math]\displaystyle{ \hat{F}(x) }[/math] to a function [math]\displaystyle{ F^*(x) }[/math] that minimizes the expected value of some specified loss function [math]\displaystyle{ L(y, F(x)) }[/math]: [math]\displaystyle{ \hat{F} = \underset{F}{\arg\min} \, \mathbb{E}_{x,y}[L(y, F(x))] }[/math].
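- Since the joint distribution [math]\displaystyle{ P(x,y) }[/math] is generally unknown, a supervised learning algorithm in practice approximates this expectation by the empirical risk over the training set, minimizing [math]\displaystyle{ \hat{F} = \underset{F}{\arg\min} \, \frac{1}{n} \sum_{i=1}^{n} L(y_i, F(x_i)) }[/math].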
2009
- http://www.nature.com/nrc/journal/v5/n11/glossary/nrc1739_glossary.html
- SUPERVISED ALGORITHM: A method of statistical or machine learning in which a model is fitted to observations. The algorithm, in effect, learns by example.
2006
- (Caruana & Niculescu-Mizil, 2006) ⇒ Rich Caruana, and Alexandru Niculescu-Mizil. (2006). “An Empirical Comparison of Supervised Learning Algorithms.” In: Proceedings of the 23rd International Conference on Machine Learning. ISBN:1-59593-383-2 doi:10.1145/1143844.1143865
- QUOTE: A number of supervised learning methods have been introduced in the last decade. Unfortunately, the last comprehensive empirical evaluation of supervised learning was the Statlog Project in the early 90's. We present a large-scale empirical comparison between ten supervised learning methods: SVMs, neural nets, logistic regression, naive bayes, memory-based learning, random forests, decision trees, bagged trees, boosted trees, and boosted stumps. We also examine the effect that calibrating the models via Platt Scaling and Isotonic Regression has on their performance.
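As a minimal sketch of the calibration step mentioned in the quote (not the study's actual setup), the snippet below assumes scikit-learn, whose CalibratedClassifierCV applies Platt Scaling when method="sigmoid" and Isotonic Regression when method="isotonic"; the random forest base learner and the synthetic data are illustrative choices.

```python
# Illustrative sketch (not the paper's experimental setup): calibrating a
# classifier's probability estimates with Platt Scaling or Isotonic Regression.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data standing in for a real benchmark.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

base = RandomForestClassifier(n_estimators=50, random_state=0)

# method="sigmoid" is Platt Scaling; method="isotonic" is Isotonic Regression.
for method in ("sigmoid", "isotonic"):
    calibrated = CalibratedClassifierCV(base, method=method, cv=3)
    calibrated.fit(X_train, y_train)
    probs = calibrated.predict_proba(X_test)[:, 1]
    print(method, "first five calibrated probabilities:", probs[:5].round(3))
```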
1998
- (Kohavi & Provost, 1998) ⇒ Ron Kohavi, and Foster Provost. (1998). “Glossary of Terms.” In: Machine Learning, 30(2-3).
- Supervised learning: Techniques used to learn the relationship between independent attributes and a designated dependent attribute (the label). Most induction algorithms fall into the supervised learning category.