Ordinary Kriging Regression Task
An Ordinary Kriging Regression Task is a [[]] that ...
- AKA: Ordinary Kriging.
- See:.
References
2017
- (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/Kriging#Ordinary_kriging Retrieved:2017-9-3.
- The unknown value [math]\displaystyle{ Z(x_0) }[/math] is interpreted as a random variable located at [math]\displaystyle{ x_0 }[/math], as are the values of the neighboring samples [math]\displaystyle{ Z(x_i), i=1,\cdots ,N }[/math]. The estimator [math]\displaystyle{ \hat{Z}(x_0) }[/math] is also interpreted as a random variable located at [math]\displaystyle{ x_0 }[/math], the result of a linear combination of these variables.
In order to deduce the kriging system under the assumptions of the model, the error committed when estimating [math]\displaystyle{ Z(x) }[/math] at [math]\displaystyle{ x_0 }[/math] is defined as: : [math]\displaystyle{ \epsilon(x_0) = \hat{Z}(x_0) - Z(x_0) = \begin{bmatrix}W^T&-1\end{bmatrix} \cdot \begin{bmatrix}Z(x_1)&\cdots&Z(x_N)&Z(x_0)\end{bmatrix}^T = \sum^{N}_{i=1}w_i(x_0) \times Z(x_i) - Z(x_0) }[/math] The two quality criteria referred to previously can now be expressed in terms of the mean and variance of the new random variable [math]\displaystyle{ \epsilon(x_0) }[/math]:
Lack of bias:
Since the random function is stationary, [math]\displaystyle{ E(Z(x_i))=E(Z(x_0))=m }[/math], the following constraint is observed: : [math]\displaystyle{ E\left(\epsilon(x_0)\right)=0 \Leftrightarrow \sum^{N}_{i=1}w_i(x_0) \times E(Z(x_i)) - E(Z(x_0))=0 \Leftrightarrow m\sum^{N}_{i=1}w_i(x_0) - m=0 \Leftrightarrow \sum^N_{i=1} w_i(x_0) = 1 \Leftrightarrow \mathbf{1}^T \cdot W = 1 }[/math] In order to ensure that the model is unbiased, the weights must sum to one.
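The following is a minimal numerical sketch (not part of the Wikipedia excerpt) of the error [math]\displaystyle{ \epsilon(x_0) = \sum_i w_i Z(x_i) - Z(x_0) }[/math] and of this unbiasedness condition: for a simulated stationary field with constant mean, weights that sum to one give an error with mean approximately zero, while weights that do not sum to one give a bias. All names and values in the snippet are illustrative assumptions.

```python
import numpy as np

# Illustrative check of the unbiasedness constraint (all values are assumptions):
# for a stationary field with constant mean m, any weight vector W with
# 1^T . W = 1 gives E[eps(x0)] = 0; weights that do not sum to one give a bias.
rng = np.random.default_rng(0)
m, n_samples, n_realizations = 5.0, 4, 200_000

w_ok = np.array([0.4, 0.3, 0.2, 0.1])    # sums to 1   -> unbiased
w_bad = np.array([0.4, 0.3, 0.2, 0.3])   # sums to 1.2 -> biased

# Simulated stationary values: constant mean m plus zero-mean fluctuations
Z = m + rng.normal(0.0, 1.0, size=(n_realizations, n_samples))   # Z(x_1)..Z(x_N)
Z0 = m + rng.normal(0.0, 1.0, size=n_realizations)                # Z(x_0)

print((Z @ w_ok - Z0).mean())    # ~ 0
print((Z @ w_bad - Z0).mean())   # ~ m * (1.2 - 1) = 1.0
```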
Minimum variance:
Two estimators can have [math]\displaystyle{ E\left[\epsilon(x_0)\right]=0 }[/math], but the dispersion around their mean determines the difference in quality between the estimators. To find an estimator with minimum variance, we need to minimize [math]\displaystyle{ E\left(\epsilon(x_0)^2\right) }[/math]. : [math]\displaystyle{ \begin{array}{rl} \operatorname{Var}(\epsilon(x_0)) &= \operatorname{Var}\left(\begin{bmatrix}W^T&-1\end{bmatrix} \cdot \begin{bmatrix}Z(x_1)&\cdots&Z(x_N)&Z(x_0)\end{bmatrix}^T\right) \\ &\overset{*}{=} \begin{bmatrix}W^T&-1\end{bmatrix} \cdot \operatorname{Var}\left(\begin{bmatrix}Z(x_1)&\cdots&Z(x_N)&Z(x_0)\end{bmatrix}^T\right) \cdot \begin{bmatrix}W\\-1\end{bmatrix} \end{array} }[/math] * see covariance matrix for a detailed explanation : [math]\displaystyle{ \operatorname{Var}(\epsilon(x_0)) \overset{*}{=} \begin{bmatrix}W^T&-1\end{bmatrix} \cdot \begin{bmatrix} \operatorname{Var}_{x_i}& \operatorname{Cov}_{x_ix_0}\\ \operatorname{Cov}_{x_ix_0}^T & \operatorname{Var}_{x_0}\end{bmatrix} \cdot \begin{bmatrix}W\\-1\end{bmatrix} }[/math] * where the literals [math]\displaystyle{ \left\{\operatorname{Var}_{x_i}, \operatorname{Var}_{x_0}, \operatorname{Cov}_{x_ix_0}\right\} }[/math] stand for [math]\displaystyle{ \left\{\operatorname{Var}\left(\begin{bmatrix}Z(x_1)&\cdots&Z(x_N)\end{bmatrix}^T\right), \operatorname{Var}(Z(x_0)), \operatorname{Cov} \left(\begin{bmatrix}Z(x_1)&\cdots&Z(x_N)\end{bmatrix}^T,Z(x_0)\right)\right\} }[/math].
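As a hedged illustration (not part of the excerpt), the quadratic-form expression above can be evaluated directly once a covariance model is chosen; the exponential model C(h) = exp(-|h|), the 1-D sample locations, and the candidate weights below are illustrative assumptions.

```python
import numpy as np

# Sketch of Var(eps(x0)) = [W^T  -1] . Var([Z(x_1)..Z(x_N), Z(x_0)]^T) . [W; -1],
# assuming an exponential covariance C(h) = exp(-|h|); the locations, x0 and the
# candidate weights are illustrative.
x = np.array([0.0, 1.0, 2.0])      # sample locations x_1..x_N
x0 = 1.5                           # location to estimate
w = np.array([0.2, 0.3, 0.5])      # candidate weights (sum to 1)

pts = np.append(x, x0)
cov_full = np.exp(-np.abs(pts[:, None] - pts[None, :]))  # joint covariance matrix

a = np.append(w, -1.0)             # the vector [W^T, -1]
var_eps = a @ cov_full @ a         # estimation variance of this candidate estimator
print(var_eps)
```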
Once the covariance model or variogram, [math]\displaystyle{ C(\mathbf{h}) }[/math] or [math]\displaystyle{ \gamma(\mathbf{h}) }[/math], valid over the whole field of analysis of [math]\displaystyle{ Z(x) }[/math], has been defined, we can write an expression for the estimation variance of any estimator as a function of the covariance between the samples and the covariances between the samples and the point to estimate: : [math]\displaystyle{ \left\{\begin{array}{l} \operatorname{Var}(\epsilon(x_0)) = W^T \cdot \operatorname{Var}_{x_i} \cdot W - \operatorname{Cov}_{x_ix_0}^T \cdot W - W^T \cdot \operatorname{Cov}_{x_ix_0} + \operatorname{Var}_{x_0}\\ \operatorname{Var}(\epsilon(x_0)) = C(0) + \sum_{i}\sum_j w_i w_j C(x_i,x_j) - 2 \sum_iw_i C(x_i,x_0)\end{array} \right. }[/math] Some conclusions can be asserted from this expression (a numerical sketch follows the list below). The variance of estimation:
- can be computed for any linear estimator, once the stationarity of the mean and of the spatial covariances, or variograms, is assumed.
- grows as the covariance between the samples and the point to estimate decreases. This means that the farther the samples are from [math]\displaystyle{ x_0 }[/math], the worse the estimation.
- grows with the a priori variance [math]\displaystyle{ C(0) }[/math] of the variable [math]\displaystyle{ Z(x) }[/math]. When the variable is less dispersed, the variance is lower at any point of the area [math]\displaystyle{ A }[/math].
- does not depend on the values of the samples. This means that the same spatial configuration (with the same geometrical relations between samples and the point to estimate) always reproduces the same estimation variance in any part of the area [math]\displaystyle{ A }[/math]. Thus, the variance does not measure the uncertainty of estimation produced by the local variable.
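As a minimal sketch (not part of the excerpt, same illustrative setting as the previous snippet), the expanded form of the estimation variance can be evaluated from the covariance model alone; note that the sample values never enter the computation, only the geometry of the points, which illustrates the last conclusion above.

```python
import numpy as np

# Expanded form Var(eps(x0)) = C(0) + sum_i sum_j w_i w_j C(x_i, x_j)
#                              - 2 sum_i w_i C(x_i, x_0),
# with the same assumed exponential covariance model; it uses only distances
# and weights, never the sample values Z(x_i).
def C(h):
    return np.exp(-np.abs(h))      # assumed covariance model

x = np.array([0.0, 1.0, 2.0])      # sample locations (illustrative)
x0 = 1.5                           # location to estimate
w = np.array([0.2, 0.3, 0.5])      # candidate weights (sum to 1)

C_ij = C(x[:, None] - x[None, :])  # covariances between samples
C_i0 = C(x - x0)                   # covariances between samples and x_0

var_eps = C(0.0) + w @ C_ij @ w - 2.0 * w @ C_i0
print(var_eps)                     # equals the quadratic form of the previous sketch
```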
- System of equations: : [math]\displaystyle{ \begin{align} &\underset{W}{\text{minimize}}& & W^T \cdot \operatorname{Var}_{x_i} \cdot W - \operatorname{Cov}_{x_ix_0}^T \cdot W - W^T \cdot \operatorname{Cov}_{x_ix_0} + \operatorname{Var}_{x_0} \\ &\text{subject to} & &\mathbf{1}^T \cdot W = 1 \end{align} }[/math] Solving this optimization problem (see Lagrange multipliers) results in the kriging system: : [math]\displaystyle{ \begin{bmatrix}\hat{W}\\\mu\end{bmatrix} = \begin{bmatrix} \operatorname{Var}_{x_i}& \mathbf{1}\\ \mathbf{1}^T& 0 \end{bmatrix}^{-1}\cdot \begin{bmatrix} \operatorname{Cov}_{x_ix_0}\\ 1\end{bmatrix} = \begin{bmatrix} \gamma(x_1,x_1) & \cdots & \gamma(x_1,x_n) &1 \\ \vdots & \ddots & \vdots & \vdots \\ \gamma(x_n,x_1) & \cdots & \gamma(x_n,x_n) & 1 \\ 1 &\cdots& 1 & 0 \end{bmatrix}^{-1} \begin{bmatrix}\gamma(x_1,x^*) \\ \vdots \\ \gamma(x_n,x^*) \\ 1\end{bmatrix} }[/math] The additional parameter [math]\displaystyle{ \mu }[/math] is a Lagrange multiplier used in the minimization of the kriging error [math]\displaystyle{ \sigma_k^2(x) }[/math] to honor the unbiasedness condition.
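The following is a minimal sketch (not part of the Wikipedia excerpt) of solving this kriging system numerically in 1-D; the exponential variogram model, the sample data, and the function names are illustrative assumptions, not a definitive implementation.

```python
import numpy as np

# A minimal ordinary-kriging sketch in 1-D, assuming an exponential variogram
# gamma(h) = sill * (1 - exp(-|h| / range_)); data and parameters are illustrative.

def variogram(h, sill=1.0, range_=2.0):
    """Exponential variogram model gamma(h) (an assumed choice of model)."""
    return sill * (1.0 - np.exp(-np.abs(h) / range_))

def ordinary_kriging(x, z, x0, sill=1.0, range_=2.0):
    """Estimate Z(x0) from samples (x, z) by solving the kriging system above."""
    n = len(x)
    # Left-hand side: gamma(x_i, x_j) bordered by ones (unbiasedness constraint)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(x[:, None] - x[None, :], sill, range_)
    A[n, n] = 0.0
    # Right-hand side: gamma(x_i, x0) and 1
    b = np.append(variogram(x - x0, sill, range_), 1.0)
    sol = np.linalg.solve(A, b)      # [w_1, ..., w_n, mu]
    w, mu = sol[:n], sol[n]
    z_hat = w @ z                    # linear estimator sum_i w_i Z(x_i)
    sigma2 = w @ b[:n] + mu          # kriging (estimation) variance, variogram form
    return z_hat, sigma2, w

# Usage: five samples on a line, estimate at x0 = 2.5
x = np.array([0.0, 1.0, 2.0, 4.0, 5.0])
z = np.array([1.2, 0.8, 1.1, 1.5, 1.3])
z_hat, sigma2, w = ordinary_kriging(x, z, x0=2.5)
print(z_hat, sigma2, w.sum())        # the weights sum to 1
```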