Supervised Scalar Prediction Algorithm
A supervised scalar prediction algorithm is a numerical prediction algorithm that is a supervised learning algorithm.
- Context:
- It can range from being a Parametric Regression Algorithm (e.g. one that uses a Generalized Linear Model as its Hypothesis Space) to being a Nonparametric Regression Algorithm.
- It can range from being a Binomial Regression Algorithm, to being a Multinomial Regression Algorithm, to being a Continuous Regression Algorithm.
- It can range from being a Univariate Regression Algorithm to being a Multivariate Regression Algorithm.
- It can range from being an Eager Regression Algorithm to being a Lazy Regression Algorithm.
- It can range from being a Model-based Regression Algorithm to being an Instance-based Regression Algorithm (both are sketched in the code example after this list).
- It can be applied by a Supervised Point Estimation System (to solve a supervised point estimation task).
- It can range from being a Boosted Supervised Point Estimation Algorithm to being an Unboosted Supervised Point Estimation Algorithm.
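The eager/lazy and model-based/instance-based distinctions above can be made concrete with a minimal sketch (Python with NumPy; the toy data, function names, and the choice of k=3 are illustrative assumptions, not drawn from any cited source). An eager model-based regressor compresses the training set into a fixed parameter vector at training time, while a lazy instance-based regressor stores the training set and defers all computation to query time:

```python
import numpy as np

def fit_linear(X, y):
    """Eager, model-based: solve least squares once; only the weights survive."""
    Xb = np.hstack([X, np.ones((len(X), 1))])      # append an intercept column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)     # all training cost paid here
    return lambda x: float(np.append(x, 1.0) @ w)  # predictor closes over w only

def knn_predict(X_train, y_train, x, k=3):
    """Lazy, instance-based: keep all data; do the work at prediction time."""
    dists = np.linalg.norm(X_train - x, axis=1)    # distance to every stored instance
    return float(y_train[np.argsort(dists)[:k]].mean())

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 10.0, size=(50, 1))
y = 2.0 * X[:, 0] + 1.0 + rng.normal(scale=0.5, size=50)

linear = fit_linear(X, y)                  # training happens up front
print(linear(np.array([4.0])))             # roughly 2*4 + 1 = 9
print(knn_predict(X, y, np.array([4.0])))  # all cost deferred to this call
```

The trade-off shown: the linear predictor is cheap at query time but committed to a single global hypothesis, while the k-NN predictor adapts locally but must retain and scan the training data on every query.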
- Example(s):
- an Eager Model-based Regression Algorithm.
- a Lazy Model-based Regression Algorithm.
- an Eager Instance-based Regression Algorithm.
- a Lazy Instance-based Regression Algorithm.
- a Direct Point-Estimation Algorithm.
- a Bayesian-based Point-Estimation Algorithm, such as a Minimum Mean Squared Error Estimator.
- a Minimum Variance Unbiased Estimator.
- a Best Linear Unbiased Estimator.
- Counter-Example(s):
- a Supervised Categorical Prediction Algorithm (e.g. a classification algorithm), which predicts a class label rather than a scalar value.
- See: Stepwise Regression, Resampling Algorithm, Kalman Filter, Cramér–Rao Bound.
References
2011
- http://en.wikipedia.org/wiki/Regression_analysis
- In statistics, regression analysis includes any techniques for modeling and analyzing several variables, when the focus is on the relationship between a dependent variable and one or more independent variables. More specifically, regression analysis helps one understand how the typical value of the dependent variable changes when any one of the independent variables is varied, while the other independent variables are held fixed. Most commonly, regression analysis estimates the conditional expectation of the dependent variable given the independent variables — that is, the average value of the dependent variable when the independent variables are held fixed. Less commonly, the focus is on a quantile, or other location parameter of the conditional distribution of the dependent variable given the independent variables. In all cases, the estimation target is a function of the independent variables called the regression function. In regression analysis, it is also of interest to characterize the variation of the dependent variable around the regression function, which can be described by a probability distribution.
Regression analysis is widely used for prediction and forecasting, where its use has substantial overlap with the field of machine learning. Regression analysis is also used to understand which among the independent variables are related to the dependent variable, and to explore the forms of these relationships. In restricted circumstances, regression analysis can be used to infer causal relationships between the independent and dependent variables.
A large body of techniques for carrying out regression analysis has been developed. Familiar methods such as linear regression and ordinary least squares regression are parametric, in that the regression function is defined in terms of a finite number of unknown parameters that are estimated from the data. Nonparametric regression refers to techniques that allow the regression function to lie in a specified set of functions, which may be infinite-dimensional.
The performance of regression analysis methods in practice depends on the form of the data-generating process, and how it relates to the regression approach being used. Since the true form of the data-generating process is in general not known, regression analysis often depends to some extent on making assumptions about this process. These assumptions are sometimes (but not always) testable if a large amount of data is available. Regression models for prediction are often useful even when the assumptions are moderately violated, although they may not perform optimally. However, in many applications, especially with small effects or questions of causality based on observational data, regression methods can give misleading results.[1][2]
- ↑ David A. Freedman. (2005). “Statistical Models: Theory and Practice.” Cambridge University Press.
- ↑ R. Dennis Cook, and Sanford Weisberg. (1982). “Criticism and Influence Analysis in Regression.” In: Sociological Methodology, Vol. 13, pp. 313-361.
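The parametric/nonparametric contrast drawn in the quoted passage above can be illustrated with a minimal sketch of Nadaraya-Watson kernel regression, a standard nonparametric estimator (not taken from the quoted article; the Gaussian kernel and the bandwidth h=0.5 are illustrative assumptions). Rather than estimating a finite parameter vector, the regression function is estimated pointwise as a locally weighted average of the observed targets:

```python
import numpy as np

def nadaraya_watson(x_query, x_train, y_train, h=0.5):
    """Nonparametric estimate of E[y | x]: a kernel-weighted average of the
    observed targets; no finite parameter vector is ever fit."""
    w = np.exp(-0.5 * ((x_query - x_train) / h) ** 2)  # Gaussian kernel weights
    return float((w * y_train).sum() / w.sum())

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 2.0 * np.pi, 200)
y = np.sin(x) + rng.normal(scale=0.2, size=200)

# The estimated regression function can follow (almost) any shape the data supports.
print(nadaraya_watson(np.pi / 2.0, x, y))  # close to sin(pi/2) = 1.0
```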
2008
- (Yang et al., 2008) ⇒ Yi-Hsuan Yang, Yu-Ching Lin, Ya-Fan Su, and H.H. Chen. (2008). “A Regression Approach to Music Emotion Recognition.” In: IEEE Transactions on Audio, Speech, and Language Processing, 16(2). doi:10.1109/TASL.2007.911513.
- QUOTE: ... Given [math]\displaystyle{ N }[/math] inputs [math]\displaystyle{ (\mathbf{x}_i, y_i) }[/math], where [math]\displaystyle{ \mathbf{x}_i }[/math] is the feature vector for the [math]\displaystyle{ i }[/math]th input sample, and [math]\displaystyle{ y_i \in \mathbb{R} }[/math] ([math]\displaystyle{ \mathbb{R} }[/math] denotes a set of real values) is the real value to be predicted for the sample, the regression system trains a regression algorithm (regressor) [math]\displaystyle{ R(\cdot) }[/math] such that the mean squared error [math]\displaystyle{ \epsilon }[/math] is minimized [17]: [math]\displaystyle{ \epsilon = \frac{1}{N}\sum_{i=1}^{N}\big(R(\mathbf{x}_i) - y_i\big)^2 }[/math]
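The training criterion in this quote is the ordinary mean squared error; a minimal transcription follows (generic Python/NumPy, not code from the cited paper; `regressor` stands for any trained prediction function):

```python
import numpy as np

def mean_squared_error(regressor, X, y):
    """epsilon = (1/N) * sum_i (R(x_i) - y_i)^2 over the N samples."""
    preds = np.array([regressor(x) for x in X])
    return float(np.mean((preds - y) ** 2))

# Example: a regressor that always predicts 0 on targets [1, -1] has MSE 1.
print(mean_squared_error(lambda x: 0.0, np.zeros((2, 1)), np.array([1.0, -1.0])))
```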
2007
- (Caponnetto & De Vito, 2007) ⇒ Andrea Caponnetto, and Ernesto De Vito. (2007). “Optimal Rates for the Regularized Least-Squares Algorithm.” In: Foundations of Computational Mathematics. doi:10.1007/s10208-006-0196-8
- QUOTE: ... The aim of a regression algorithm is estimating a particular invariant of the unknown distribution: the regression function, using only the available empirical samples.
2005
- (Dekel et al., 2005) ⇒ Ofer Dekel, Shai Shalev-Shwartz, and Yoram Singer. (2005). “Smooth ε-Insensitive Regression by Loss Symmetrization.” In: The Journal of Machine Learning Research, 6. doi:10.1007/978-3-540-45167-9_32
- QUOTE: The focus of this paper is supervised learning of real-valued functions. We observe a sequence [math]\displaystyle{ S = \{(x_1,y_1),\ldots,(x_m,y_m)\} }[/math] of instance-target pairs, where the instances are vectors in [math]\displaystyle{ \mathbb{R}^n }[/math] and the targets are real-valued scalars, [math]\displaystyle{ y_i \in \mathbb{R} }[/math]. Our goal is to learn a function [math]\displaystyle{ f : \mathbb{R}^n \rightarrow \mathbb{R} }[/math] which provides a good approximation of the target values from their corresponding instance vectors. Such a function is often referred to as a regression function or a regressor for short.
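The ε-insensitive setting named in the paper's title charges no loss while a prediction stays within ε of its target. A minimal sketch of the standard (non-smoothed) ε-insensitive loss is shown below only to fix notation; it is not the paper's symmetrized smooth variant, and ε=0.1 is an illustrative choice:

```python
import numpy as np

def eps_insensitive_loss(y_pred, y_true, eps=0.1):
    """Zero loss inside the |y_pred - y_true| <= eps tube, linear outside it."""
    return np.maximum(0.0, np.abs(y_pred - y_true) - eps)

print(eps_insensitive_loss(np.array([1.05, 1.5]), np.array([1.0, 1.0])))  # [0.  0.4]
```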
2003
- (Davison, 2003) ⇒ Anthony C. Davison. (2003). “Statistical Models.” Cambridge University Press. ISBN:0521773393
1998
- (Kohavi & Provost, 1998) ⇒ Ron Kohavi, and Foster Provost. (1998). “Glossary of Terms.” In: Machine Learning 30(2-3).
- Regressor: A mapping from unlabeled instances to a value within a predefined metric space (e.g., a continuous range).