Generalized Linear Least-Squares Algorithm
A Generalized Linear Least-Squares Algorithm is a linear least-squares algorithm that estimates the unknown parameters of a linear regression model whose error terms have a known covariance structure (unequal variances and/or correlation between observations).
- AKA: GLS, Generalized Least Squares.
- …
- Example(s):
- Counter-Example(s):
- See: Lasso Algorithm.
References
2013
- http://en.wikipedia.org/wiki/Generalized_least_squares
- In statistics, generalized least squares (GLS) is a technique for estimating the unknown parameters in a linear regression model. GLS is applied when the variances of the observations are unequal (heteroscedasticity), or when there is a certain degree of correlation between the observations. In these cases, ordinary least squares can be statistically inefficient, or even give misleading inferences.
- http://en.wikipedia.org/wiki/Generalized_least_squares#Method_outline
- In a typical linear regression model we observe data [math]\displaystyle{ \{y_i,x_{ij}\}_{i=1..n,j=1..p} }[/math] on n statistical units. The response values are placed in a vector Y = (y_1, ..., y_n)′, and the predictor values are placed in the design matrix X = [x_{ij}], where x_{ij} is the value of the j-th predictor variable for the i-th unit. The model assumes that the conditional mean of Y given X is a linear function of X, whereas the conditional variance of the error term given X is a known matrix Ω. This is usually written as: [math]\displaystyle{ Y = X\beta + \varepsilon, \qquad \mathrm{E}[\varepsilon|X]=0,\ \operatorname{Var}[\varepsilon|X]=\Omega. }[/math] Here β is a vector of unknown “regression coefficients” that must be estimated from the data.
Suppose b is a candidate estimate for β. Then the residual vector for b will be Y − Xb. The generalized least squares method estimates β by minimizing the squared Mahalanobis length of this residual vector: [math]\displaystyle{ \hat\beta = \underset{b}{\rm arg\,min}\,(Y-Xb)'\,\Omega^{-1}(Y-Xb). }[/math] Since the objective is a quadratic form in b, the estimator has an explicit formula: [math]\displaystyle{ \hat\beta = (X'\Omega^{-1}X)^{-1} X'\Omega^{-1}Y. }[/math]
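The closed-form expression above translates directly into a few lines of linear algebra. The following NumPy sketch is illustrative only: the sample size, true coefficients, and the diagonal heteroscedastic Ω are assumptions of the example, not part of the quoted material. It simulates the model Y = Xβ + ε with known Var[ε|X] = Ω, computes the GLS estimate (X′Ω⁻¹X)⁻¹X′Ω⁻¹Y, and shows the ordinary least-squares estimate alongside it for comparison.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3

# Design matrix with an intercept column; true coefficients are assumed for illustration.
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([1.0, 2.0, -0.5])

# Known error covariance Omega: here a diagonal matrix with unequal variances
# (heteroscedasticity), one of the situations in which OLS is inefficient.
variances = np.linspace(0.5, 5.0, n)
Omega = np.diag(variances)

# Simulate Y = X beta + eps with Var[eps | X] = Omega.
eps = rng.normal(scale=np.sqrt(variances))
Y = X @ beta_true + eps

# GLS estimate: solve (X' Omega^{-1} X) beta = X' Omega^{-1} Y.
Omega_inv = np.linalg.inv(Omega)  # acceptable for this small illustration
XtOi = X.T @ Omega_inv
beta_gls = np.linalg.solve(XtOi @ X, XtOi @ Y)

# OLS estimate for comparison: solve (X'X) beta = X'Y.
beta_ols = np.linalg.solve(X.T @ X, X.T @ Y)

print("true :", beta_true)
print("GLS  :", beta_gls)
print("OLS  :", beta_ols)
```

In practice one would avoid forming Ω⁻¹ explicitly (for example by using a Cholesky factor of Ω to whiten the data), but the direct formula keeps the sketch close to the expression quoted above.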