Model-based Supervised Numeric-Value Prediction Task
A Model-based Supervised Numeric-Value Prediction Task is a supervised numeric-value prediction task that is a model-based supervised learning task.
- AKA: Continuous Function Regression, Supervised Regression Task.
- Context:
- Input: a labeled Training Dataset of records with numeric target values.
- Output: a Fitted Real-Value Function.
- It can range from being a Model-based Parametric Estimation Task to being a Model-based Non-Parametric Point Estimation Task.
- It can range from being a Fully-Supervised Model-based Regression Task to being a Semi-Supervised Model-based Regression Task.
- It can range from being a Linear Regression Task to being a Non-Linear Regression Task.
- It can be solved by a Model-based Supervised Numeric-Value Prediction System (that implements a model-based supervised numeric-value prediction algorithm); see the sketch after this list.
- Example(s):
- a Statistical Regression Task.
- a model-based Supervised House Price Prediction Task.
- a Least-Squares Point Estimation Task.
- a Regression Tree Training Task.
- a Maximum Entropy-based Regression Task.
- …
- Counter-Example(s):
- a Model-based Supervised Categorical-Value Prediction Task (a classification task).
- an Instance-based Supervised Numeric-Value Prediction Task, such as a k-Nearest Neighbor Regression Task.
- See: Regression Model, Supervised Machine Learning Task, Dynamic Programming.
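A minimal sketch of such a task (an illustrative example under assumed data and names such as X_train and fitted_function, not a prescribed implementation): the input is a small labeled dataset with numeric target values, a model-based system fits a parametric linear model, and the output is the fitted real-value function used for prediction.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Input: an illustrative labeled training dataset (feature vectors, numeric targets).
X_train = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0], [5.0, 5.0]])
y_train = np.array([3.1, 2.9, 7.2, 6.8, 10.1])

# A model-based supervised numeric-value prediction system: it estimates model
# parameters from the training data and then predicts from the model alone.
model = LinearRegression().fit(X_train, y_train)

# Output: a fitted real-value function, here represented by the trained model.
def fitted_function(x):
    return float(model.predict(np.asarray(x, dtype=float).reshape(1, -1))[0])

print(fitted_function([2.5, 2.5]))  # numeric-value prediction for an unseen input
```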
References
2011
- http://en.wikipedia.org/wiki/Linear_least_squares_%28mathematics%29#Motivational_example
- QUOTE: As a result of an experiment, four [math]\displaystyle{ (x, y) }[/math] data points were obtained: [math]\displaystyle{ (1, 6), }[/math] [math]\displaystyle{ (2, 5), }[/math] [math]\displaystyle{ (3, 7), }[/math] and [math]\displaystyle{ (4, 10) }[/math]. It is desired to find a line [math]\displaystyle{ y=\beta_1+\beta_2 x }[/math] that best fits these four points. In other words, we would like to find the numbers [math]\displaystyle{ \beta_1 }[/math] and [math]\displaystyle{ \beta_2 }[/math] that approximately solve the overdetermined linear system [math]\displaystyle{ \begin{alignat}{3} \beta_1 + 1\beta_2 &&\; = \;&& 6 & \\ \beta_1 + 2\beta_2 &&\; = \;&& 5 & \\ \beta_1 + 3\beta_2 &&\; = \;&& 7 & \\ \beta_1 + 4\beta_2 &&\; = \;&& 10 & \\ \end{alignat} }[/math] of four equations in two unknowns in some "best" sense.
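A short numerical check of this example (a sketch added here, not part of the quoted source): solving the overdetermined system in the least-squares sense with NumPy recovers the well-known best-fit line [math]\displaystyle{ y = 3.5 + 1.4x }[/math].

```python
import numpy as np

# The four (x, y) observations from the quoted example.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([6.0, 5.0, 7.0, 10.0])

# Design matrix for y = beta_1 + beta_2 * x (intercept column plus x column).
A = np.column_stack([np.ones_like(x), x])

# Least-squares solution of the overdetermined system A @ beta ≈ y.
beta, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)
print(beta)  # [3.5, 1.4], i.e. the fitted line y = 3.5 + 1.4 x
```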
1996
- (Tibshirani, 1996) ⇒ Robert Tibshirani. (1996). “Regression Shrinkage and Selection via the Lasso.” In: Journal of the Royal Statistical Society, Series B, 58(1).
- QUOTE: Consider the usual regression situation: we have data [math]\displaystyle{ (\mathbf{x}^i, y_i), i=1,2,...,N \ , }[/math] where [math]\displaystyle{ \mathbf{x}^i=(x_{i1},..., x_{ip})^T }[/math] and [math]\displaystyle{ y_i }[/math] are the regressors and response for the ith observation. The ordinary least squares (OLS) estimates are obtained by minimizing the residual squared error.
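A minimal sketch contrasting the OLS estimate described in the quote with the lasso's L1-penalized estimate (the scikit-learn estimators, the synthetic data, and the alpha value are illustrative assumptions, not taken from the paper):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso

rng = np.random.default_rng(0)

# Synthetic regression data: N observations, p regressors (illustrative only).
N, p = 50, 5
X = rng.normal(size=(N, p))
true_beta = np.array([2.0, 0.0, -1.0, 0.0, 0.5])
y = X @ true_beta + rng.normal(scale=0.1, size=N)

# OLS estimate: obtained by minimizing the residual squared error, as in the quote.
ols = LinearRegression().fit(X, y)

# Lasso estimate: adds an L1 penalty, which can shrink some coefficients exactly to zero.
lasso = Lasso(alpha=0.1).fit(X, y)

print("OLS coefficients:  ", np.round(ols.coef_, 3))
print("Lasso coefficients:", np.round(lasso.coef_, 3))
```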