Ordinary Least Squares Estimate
An Ordinary Least Squares (OLS) Estimate is a least squares estimate produced by an unregularized least squares algorithm.
- AKA: OLS Estimate.
- See: Least Squares Estimate.
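A minimal sketch of computing an OLS estimate, assuming NumPy and a synthetic dataset chosen for illustration (the names `ols_estimate`, `X`, `y` are not from the source):

```python
# Un-regularized (ordinary) least squares estimate with NumPy.
import numpy as np

def ols_estimate(X, y):
    """Return the coefficient vector minimizing ||y - X b||^2 (no regularization)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Synthetic data: 100 observations, 3 regressors, small noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_beta = np.array([1.0, -2.0, 0.5])
y = X @ true_beta + 0.01 * rng.normal(size=100)

beta_hat = ols_estimate(X, y)  # close to true_beta when noise is small
```

Because no penalty term is added, this differs from ridge or lasso estimates only in the absence of regularization.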
References
2013
- (Dhillon et al., 2013) ⇒ Paramveer S. Dhillon, Dean P. Foster, Sham M. Kakade, and Lyle H. Ungar. (2013). “A Risk Comparison of Ordinary Least Squares Vs Ridge Regression.” In: The Journal of Machine Learning Research, 14(1).
- QUOTE: Consider the following ordinary least squares estimator on the “top” PCA subspace — it uses the least squares estimate on coordinate j if [math]\displaystyle{ \lambda_j \geq \lambda }[/math] and 0 otherwise
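A hedged sketch of the estimator the quote describes: compute the OLS estimate in the principal-coordinate basis and zero out coordinates whose eigenvalue falls below a threshold. The function name, threshold parameter `lam`, and data are illustrative assumptions, not the paper's code:

```python
# Sketch: OLS restricted to the "top" PCA subspace -- keep the least
# squares estimate on principal coordinate j only when lambda_j >= lam.
import numpy as np

def pca_truncated_ols(X, y, lam):
    n = X.shape[0]
    # Eigen-decompose the scaled second-moment matrix X^T X / n.
    eigvals, V = np.linalg.eigh(X.T @ X / n)
    Z = X @ V                                   # design in principal coordinates
    beta_pc = np.linalg.lstsq(Z, y, rcond=None)[0]
    beta_pc[eigvals < lam] = 0.0                # drop low-eigenvalue coordinates
    return V @ beta_pc                          # rotate back to original basis

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, 0.5, -0.5]) + 0.1 * rng.normal(size=200)

beta_full = pca_truncated_ols(X, y, lam=0.0)   # no truncation: plain OLS
```

With `lam=0` every coordinate survives, so the estimator reduces to ordinary least squares; larger `lam` trades variance for bias, which is the comparison the paper studies.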
1996
- (Tibshirani, 1996) ⇒ Robert Tibshirani. (1996). “Regression Shrinkage and Selection via the Lasso.” In: Journal of the Royal Statistical Society, Series B, 58(1).
- QUOTE: Consider the usual regression situation: we have data ([math]\displaystyle{ x_i, y_i }[/math]), [math]\displaystyle{ i }[/math] = 1, 2, . . ., [math]\displaystyle{ N }[/math], where [math]\displaystyle{ x_i = (x_{i1}, \ldots, x_{ip})^T }[/math] and [math]\displaystyle{ y_i }[/math] are the regressors and response for the ith observation. The ordinary least squares (OLS) estimates are obtained by minimizing the residual squared error. There are two reasons why the data analyst is often not satisfied with the OLS estimates.
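The minimization the quote refers to can be sketched directly: the OLS estimate solves the normal equations, and any perturbation of it can only increase the residual squared error. This is an illustrative sketch with assumed synthetic data, not code from the paper:

```python
# OLS via the normal equations, in the quoted setup: N observations,
# p regressors per observation.
import numpy as np

rng = np.random.default_rng(1)
N, p = 50, 2
X = rng.normal(size=(N, p))
y = X @ np.array([2.0, -1.0]) + 0.1 * rng.normal(size=N)

# Normal equations: (X^T X) beta = X^T y
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

def rss(b):
    """Residual squared error sum((y_i - x_i^T b)^2), which OLS minimizes."""
    return np.sum((y - X @ b) ** 2)
```

Evaluating `rss` at `beta_ols` and at any nearby vector confirms the minimizing property numerically.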