Autocovariance


An Autocovariance is the covariance between the values of the same stochastic process at two time points.

  • Context
    • It can be defined as the covariance between the stochastic processes X and Y when X=Y, i.e.
[math]\displaystyle{ cov(X,X)=cov(X_t,X_s)=E[(X_t-\mu_t)(X_s-\mu_s)]=E[X_t X_s] - \mu_t\; \mu_s }[/math]
where [math]\displaystyle{ \mu_t }[/math] and [math]\displaystyle{ \mu_s }[/math] are the values of the mean function at times t and s, [math]\displaystyle{ E[\cdot] }[/math] denotes expectation, and [math]\displaystyle{ X_t }[/math] and [math]\displaystyle{ X_s }[/math] are the values of X at times t and s.
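A minimal Python sketch of this definition follows (not from the source; the AR(1) generator, the parameter phi, and the sample sizes are all illustrative assumptions). It estimates cov(X_t, X_s) = E[X_t X_s] - μ_t μ_s from many simulated realizations and compares the estimate to the known closed form for a stationary AR(1) process:

```python
# Hedged sketch: estimate the autocovariance cov(X_t, X_s) of a toy
# AR(1) process from repeated realizations. The process, parameters,
# and names below are illustrative assumptions, not from the source.
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(n_paths, n_steps, phi=0.8):
    """Simulate independent AR(1) paths: X_t = phi * X_{t-1} + noise."""
    x = np.zeros((n_paths, n_steps))
    for t in range(1, n_steps):
        x[:, t] = phi * x[:, t - 1] + rng.standard_normal(n_paths)
    return x

x = simulate_ar1(n_paths=100_000, n_steps=50)
t, s = 30, 40

# Monte Carlo estimate of cov(X_t, X_s) = E[X_t X_s] - mu_t * mu_s.
mu_t, mu_s = x[:, t].mean(), x[:, s].mean()
cov_ts = (x[:, t] * x[:, s]).mean() - mu_t * mu_s

# For a stationary AR(1) with unit-variance noise, theory gives
# cov(X_t, X_s) = phi**|t-s| / (1 - phi**2); the two should roughly agree.
print(cov_ts, 0.8 ** abs(t - s) / (1 - 0.8 ** 2))
```

Because the simulated chain starts at zero, the comparison with the stationary formula is only approximate, but by t = 30 the transient is negligible.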


References

2016

  • (Wikipedia, 2016) ⇒ https://en.wikipedia.org/wiki/Autocovariance

    • If the process [math]\displaystyle{ X_t }[/math] has the mean function [math]\displaystyle{ \mu_t = E[X_t] }[/math], then its autocovariance is given by
[math]\displaystyle{ C_{XX}(t,s) = cov(X_t, X_s) = E[(X_t - \mu_t)(X_s - \mu_s)] = E[X_t X_s] - \mu_t \mu_s.\, }[/math]
Autocovariance is related to the more commonly used autocorrelation of the process in question.
In the case of a multivariate random vector [math]\displaystyle{ X=(X_1, X_2, … , X_n) }[/math], the autocovariance becomes a square n by n matrix, [math]\displaystyle{ C_{XX} }[/math], with entry [math]\displaystyle{ i,j }[/math] given by [math]\displaystyle{ C_{X_iX_j}(t,s) = cov(X_{i,t}, X_{j,s}) }[/math], commonly referred to as the autocovariance matrix associated with the vectors [math]\displaystyle{ X_t }[/math] and [math]\displaystyle{ X_s }[/math].
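A sketch of that matrix form follows (again a toy construction of ours; the mixing matrix, the 0.5 coefficient, and the sample sizes are assumptions). It estimates the n-by-n matrix whose (i, j) entry is cov(X_{i,t}, X_{j,s}):

```python
# Hedged sketch: estimate the autocovariance matrix C_XX(t, s) of a toy
# 3-dimensional vector process; entry (i, j) is cov(X_{i,t}, X_{j,s}).
import numpy as np

rng = np.random.default_rng(1)
n, n_steps, n_paths = 3, 20, 50_000

mix = rng.standard_normal((n, n))  # fixed mixing matrix (assumption)
x = np.zeros((n_paths, n_steps, n))
for step in range(1, n_steps):
    noise = rng.standard_normal((n_paths, n)) @ mix.T
    x[:, step] = 0.5 * x[:, step - 1] + noise

t, s = 10, 15
xt, xs = x[:, t], x[:, s]  # each of shape (n_paths, n)

# Entry (i, j): E[X_{i,t} X_{j,s}] - mu_{i,t} * mu_{j,s}.
c = (xt[:, :, None] * xs[:, None, :]).mean(axis=0) \
    - np.outer(xt.mean(axis=0), xs.mean(axis=0))
print(c.shape)  # (3, 3): the autocovariance matrix for times t and s
```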

2016

  • (Weisstein, 2016) ⇒ Weisstein, Eric W. “Covariance.” From MathWorld -- A Wolfram Web Resource. http://mathworld.wolfram.com/Covariance.html Retrieved 2016-07-10
    • Covariance provides a measure of the strength of the correlation between two or more sets of random variates. The covariance for two random variates X and Y, each with sample size N, is defined by the expectation value
[math]\displaystyle{ cov(X,Y) = \langle (X-\mu_X)(Y-\mu_Y) \rangle = \langle XY\rangle-\mu_X\;\mu_Y }[/math]
where [math]\displaystyle{ \mu_X=\langle X\rangle }[/math] and [math]\displaystyle{ \mu_Y=\langle Y\rangle }[/math] are the respective means, which can be written out explicitly as
[math]\displaystyle{ cov(X,Y)=\sum_{i=1}^N\frac{(x_i-\bar{x})(y_i-\bar{y})}{N} }[/math]
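A quick numerical check of this sum formula (with made-up data; note the formula divides by N, while np.cov divides by N-1 unless told otherwise):

```python
# Hedged sketch: the explicit sum formula for cov(X, Y) versus np.cov.
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(1000)
y = 2.0 * x + rng.standard_normal(1000)  # y is correlated with x

# (1/N) * sum_i (x_i - xbar) * (y_i - ybar), exactly as in the formula.
cov_manual = np.mean((x - x.mean()) * (y - y.mean()))

# np.cov divides by N-1 by default; bias=True makes it divide by N.
cov_numpy = np.cov(x, y, bias=True)[0, 1]
print(cov_manual, cov_numpy)  # equal up to floating-point rounding
```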
For uncorrelated variates,
[math]\displaystyle{ cov(X,Y)=\langle XY \rangle - \mu_X\; \mu_Y=\langle X\rangle\langle Y\rangle-\mu_X\;\mu_Y=0, }[/math]
so the covariance is zero. However, if the variables are correlated in some way, then their covariance will be nonzero. In fact, if [math]\displaystyle{ cov(X,Y)\gt 0 }[/math], then Y tends to increase as X increases, and if [math]\displaystyle{ cov(X,Y)\lt 0 }[/math], then Y tends to decrease as X increases. Note that while statistically independent variables are always uncorrelated, the converse is not necessarily true.
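A worked illustration of that last caveat (our own construction, not from the cited source): with X standard normal and Y = X^2, the pair is clearly dependent, yet cov(X, Y) = E[X^3] - E[X]E[X^2] = 0 by symmetry:

```python
# Hedged sketch: uncorrelated does not imply independent.
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(1_000_000)
y = x ** 2  # deterministic function of x, hence strongly dependent

# cov(X, Y) = E[X^3] - E[X] * E[X^2] = 0 for a symmetric distribution.
print(np.mean(x * y) - x.mean() * y.mean())  # ~ 0: uncorrelated

# Yet knowing x changes the distribution of y, so they are dependent:
# the conditional mean of y given |x| > 2 sits far above its overall
# mean of ~1.
print(y[np.abs(x) > 2].mean(), y.mean())
```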
In the special case of Y=X,
[math]\displaystyle{ cov(X,X)=\langle X^2 \rangle -\langle X\rangle^2=\sigma_{X}^2, }[/math]
so the covariance reduces to the usual variance [math]\displaystyle{ \sigma_X^2=var(X) }[/math]. This motivates the use of the symbol [math]\displaystyle{ \sigma_{XY}=cov(X,Y) }[/math], which then provides a consistent way of denoting the variance as [math]\displaystyle{ \sigma_{XX}=\sigma_X^2 }[/math], where [math]\displaystyle{ \sigma_X }[/math] is the standard deviation.
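A last small check (illustrative data) that cov(X, X) = ⟨X²⟩ − ⟨X⟩² coincides with the variance:

```python
# Hedged sketch: cov(X, X) reduces to the variance sigma_X**2.
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(loc=5.0, scale=2.0, size=100_000)

cov_xx = np.mean(x * x) - x.mean() ** 2  # <X^2> - <X>^2
print(cov_xx, x.var())  # both near sigma_X**2 = 4.0 (np.var divides by N)
```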