Covariance Function
A Covariance Function is a bivariate function that gives the covariance (a measure of linear relationship) between two random variables, such as the values of a random process at two input points.
- Context:
- It can range from being a Population Covariance Function to being a Sample Covariance Function.
- …
- Counter-Example(s):
- a Variance.
- See: Gaussian Covariance, Covariance Matrix, Gram Matrix, Non-Negative Definite Matrix, Stationary Process, Smoothness, Periodic Function, Marginal Likelihood, Empirical Bayes.
References
2014
- (Wikipedia, 2014) ⇒ http://en.wikipedia.org/wiki/Gaussian_process#Covariance_Functions Retrieved:2014-01-30.
- A key fact of Gaussian processes is that they can be completely defined by their second-order statistics. Thus, if a Gaussian process is assumed to have mean zero, defining the covariance function completely defines the process' behaviour. The covariance matrix K between all pairs of points x and x' specifies a distribution on functions and is known as the Gram matrix. Importantly, because every valid covariance function can be written as a scalar product of (feature) vectors, the matrix K is by construction a non-negative definite matrix. Equivalently, the covariance function K is a non-negative definite function in the sense that for every finite set of points x_1, …, x_n and real coefficients a_1, …, a_n, the quadratic form ∑_{i,j} a_i a_j K(x_i, x_j) ≥ 0; if this form is strictly positive whenever the coefficients are not all zero, K is called positive definite. Importantly, the non-negative definiteness of K enables its spectral decomposition using the Karhunen–Loève expansion. Basic aspects that can be defined through the covariance function are the process' stationarity, isotropy, smoothness and periodicity.[1] [2]
Stationarity refers to the process' behaviour regarding the separation of any two points x and x'. If the process is stationary, the covariance depends only on their separation, x - x', while if it is non-stationary it depends on the actual positions of the points x and x'. An example of a stationary process is the Ornstein–Uhlenbeck process; by contrast, Brownian motion, a special case of the Ornstein–Uhlenbeck process, is non-stationary.
If the process depends only on |x - x'|, the Euclidean distance (not the direction) between x and x', then the process is considered isotropic. A process that is concurrently stationary and isotropic is considered to be homogeneous;[3] in practice these properties reflect the differences (or rather the lack of them) in the behaviour of the process given the location of the observer.
Ultimately, Gaussian processes translate as taking priors on functions, and the smoothness of these priors can be induced by the covariance function.[1] If we expect that the output points y and y' corresponding to "near-by" input points x and x' should also be "near-by", then the assumption of smoothness is present. If we wish to allow for significant displacement then we might choose a rougher covariance function. Extreme examples of this behaviour are the Ornstein–Uhlenbeck covariance function and the squared exponential, where the former is never differentiable and the latter is infinitely differentiable.
Periodicity refers to inducing periodic patterns within the behaviour of the process. Formally, this is achieved by mapping the input x to a two-dimensional vector u(x) = (cos(x), sin(x)).
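A minimal Python/NumPy sketch (an illustration added here, not part of the quoted article) showing that applying the squared-exponential kernel to the mapped inputs u(x) reproduces the periodic kernel listed below, since |u(x) - u(x')|² = 4 sin²((x - x')/2):
```python
import numpy as np

def u(x):
    """Map a scalar input onto the unit circle: u(x) = (cos x, sin x)."""
    return np.array([np.cos(x), np.sin(x)])

def periodic_se(x, xp, l=1.0):
    """Squared-exponential kernel applied to u(x) and u(x').

    Equals exp(-2 sin^2((x - x')/2) / l^2), i.e. the periodic kernel below.
    """
    d = u(x) - u(xp)
    return np.exp(-np.dot(d, d) / (2 * l ** 2))

# Periodic in x with period 2*pi:
print(np.isclose(periodic_se(0.3, 1.0), periodic_se(0.3 + 2 * np.pi, 1.0)))  # True
```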
There are a number of common covariance functions:
- Constant : [math]\displaystyle{ K_\text{C}(x,x') = C }[/math]
- Linear: [math]\displaystyle{ K_\text{L}(x,x') = x^T x' }[/math]
- Gaussian Noise: [math]\displaystyle{ K_\text{GN}(x,x') = \sigma^2 \delta_{x,x'} }[/math]
- Squared Exponential: [math]\displaystyle{ K_\text{SE}(x,x') = \exp \Big(-\frac{|d|^2}{2l^2} \Big) }[/math]
- Ornstein–Uhlenbeck: [math]\displaystyle{ K_\text{OU}(x,x') = \exp \Big(-\frac{|d| }{l} \Big) }[/math]
- Matérn: [math]\displaystyle{ K_\text{Matern}(x,x') = \frac{2^{1-\nu}}{\Gamma(\nu)} \Big(\frac{\sqrt{2\nu}|d|}{l} \Big)^\nu K_{\nu}\Big(\frac{\sqrt{2\nu}|d|}{l} \Big) }[/math]
- Periodic: [math]\displaystyle{ K_\text{P}(x,x') = \exp\Big(-\frac{ 2\sin^2(\frac{d}{2})}{ l^2} \Big) }[/math]
- Rational Quadratic: [math]\displaystyle{ K_\text{RQ}(x,x') = (1+|d|^2)^{-\alpha}, \quad \alpha \geq 0 }[/math]
- Here [math]\displaystyle{ d = x- x' }[/math]. The parameter [math]\displaystyle{ l }[/math] is the characteristic length-scale of the process (practically, "how far apart" two points [math]\displaystyle{ x }[/math] and [math]\displaystyle{ x' }[/math] have to be for [math]\displaystyle{ X }[/math] to change significantly), δ is the Kronecker delta and σ is the standard deviation of the noise fluctuations. Here [math]\displaystyle{ K_\nu }[/math] is the modified Bessel function of order [math]\displaystyle{ \nu }[/math] and [math]\displaystyle{ \Gamma }[/math] is the gamma function evaluated at [math]\displaystyle{ \nu }[/math]. Importantly, a complicated covariance function can be defined as a linear combination of other simpler covariance functions in order to incorporate different insights about the data-set at hand.
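As an illustration of these formulas (added here, not part of the quoted article), a minimal Python/NumPy sketch of two of the kernels above and the Gram matrix they induce; the helper names are choices made for this example. It also checks numerically the non-negative definiteness discussed earlier:
```python
import numpy as np

def squared_exponential(x, xp, l=1.0):
    """K_SE(x, x') = exp(-|x - x'|^2 / (2 l^2))."""
    return np.exp(-np.abs(x - xp) ** 2 / (2 * l ** 2))

def ornstein_uhlenbeck(x, xp, l=1.0):
    """K_OU(x, x') = exp(-|x - x'| / l)."""
    return np.exp(-np.abs(x - xp) / l)

def gram_matrix(kernel, xs, **params):
    """Gram matrix K with K[i, j] = kernel(xs[i], xs[j])."""
    n = len(xs)
    return np.array([[kernel(xs[i], xs[j], **params) for j in range(n)]
                     for i in range(n)])

# Any valid covariance function yields a non-negative definite Gram matrix:
xs = np.linspace(0.0, 5.0, 20)
for k in (squared_exponential, ornstein_uhlenbeck):
    K = gram_matrix(k, xs, l=1.5)
    print(k.__name__, np.all(np.linalg.eigvalsh(K) >= -1e-10))  # True (up to round-off)
```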
Clearly, the inferential results are dependent on the values of the hyperparameters θ (e.g. [math]\displaystyle{ l }[/math] and σ) defining the model's behaviour. A popular choice for θ is to provide maximum a posteriori (MAP) estimates by maximizing the marginal likelihood of the process, the marginalization being done over the latent function values. This approach is also known as maximum likelihood II, evidence maximization, or Empirical Bayes.
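A hedged sketch of the evidence-maximization step described above, assuming a zero-mean GP with a squared-exponential kernel plus Gaussian noise; the variable names and the use of SciPy's optimizer are choices made for this illustration, not part of the source. It evaluates log p(y | x, θ) = -½ yᵀK⁻¹y - ½ log|K| - (n/2) log 2π and maximizes it over θ = (l, σ):
```python
import numpy as np
from scipy.optimize import minimize

def neg_log_marginal_likelihood(log_theta, x, y):
    """Negative log p(y | x, theta) for a zero-mean GP with an SE kernel plus noise.

    log_theta = [log l, log sigma]; optimizing in log-space keeps both positive.
    """
    l, sigma = np.exp(log_theta)
    d = x[:, None] - x[None, :]
    K = np.exp(-d ** 2 / (2 * l ** 2)) + (sigma ** 2 + 1e-10) * np.eye(len(x))
    L = np.linalg.cholesky(K)                            # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # K^{-1} y
    return (0.5 * y @ alpha                              # 0.5 * y^T K^{-1} y
            + np.sum(np.log(np.diag(L)))                 # 0.5 * log|K|
            + 0.5 * len(x) * np.log(2 * np.pi))

# Maximum likelihood II / evidence maximization over theta = (l, sigma):
x = np.linspace(0.0, 5.0, 30)
y = np.sin(x) + 0.1 * np.random.randn(30)
result = minimize(neg_log_marginal_likelihood, x0=np.log([1.0, 0.1]), args=(x, y))
l_hat, sigma_hat = np.exp(result.x)
print(l_hat, sigma_hat)
```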
- ↑ 1.0 1.1 Barber, David (2012). Bayesian Reasoning and Machine Learning. Cambridge University Press. ISBN 0-521-51814-7. http://web4.cs.ucl.ac.uk/staff/D.Barber/pmwiki/pmwiki.php?n=Brml.HomePage.
- ↑ Rasmussen, C.E.; Williams, C.K.I (2006). Gaussian Processes for Machine Learning. MIT Press. ISBN 0-262-18253-X. http://www.gaussianprocess.org/gpml/.
- ↑ Grimmett, Geoffrey; David Stirzaker (2001). Probability and Random Processes. Oxford University Press. ISBN 0198572220.
2013
- http://www.r-tutor.com/elementary-statistics/numerical-measures/covariance
- QUOTE: The covariance of two variables x and y in a data sample measures how the two are linearly related. A positive covariance would indicate a positive linear relationship between the variables, and a negative covariance would indicate the opposite.
The sample covariance is defined in terms of the sample means as: [math]\displaystyle{ s_{xy} = \frac{1}{n-1} \sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y}) }[/math] Similarly, the population covariance is defined in terms of the population means μx, μy as: [math]\displaystyle{ \sigma_{xy} = \frac{1}{N} \sum_{i=1}^N (x_i - \mu_{x})(y_i - \mu_{y}) }[/math]
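A minimal Python/NumPy sketch of these two formulas (added here for illustration; the function names and sample values are made up, and the population version simply treats the given data as the whole population):
```python
import numpy as np

def sample_covariance(x, y):
    """s_xy = (1 / (n - 1)) * sum_i (x_i - x_bar) * (y_i - y_bar)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.sum((x - x.mean()) * (y - y.mean())) / (len(x) - 1)

def population_covariance(x, y):
    """sigma_xy = (1 / N) * sum_i (x_i - mu_x) * (y_i - mu_y), data taken as the population."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.sum((x - x.mean()) * (y - y.mean())) / len(x)

x = [2.1, 2.5, 3.6, 4.0]
y = [8.0, 10.0, 12.0, 14.0]
print(sample_covariance(x, y))       # agrees with np.cov(x, y)[0, 1]
print(population_covariance(x, y))   # agrees with np.cov(x, y, bias=True)[0, 1]
```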
2012
- http://en.wikipedia.org/wiki/Covariance
- QUOTE: In probability theory and statistics, covariance is a measure of how much two random variables change together. If the greater values of one variable mainly correspond with the greater values of the other variable, and the same holds for the smaller values, i.e. the variables tend to show similar behavior, the covariance is a positive number. In the opposite case, when the greater values of one variable mainly correspond to the smaller values of the other, i.e. the variables tend to show opposite behavior, the covariance is negative. The sign of the covariance therefore shows the tendency in the linear relationship between the variables. The magnitude of the covariance is not that easy to interpret. The normalized version of the covariance, the correlation coefficient, however, shows by its magnitude the strength of the linear relation.
A distinction must be made between (1) the covariance of two random variables, which is a population parameter that can be seen as a property of the joint probability distribution, and (2) the sample covariance, which serves as an estimated value of the parameter.
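A small numeric illustration of the normalization mentioned in the quote (added here; the sample values are made up): dividing the sample covariance by the two sample standard deviations yields the correlation coefficient, whose magnitude is interpretable.
```python
import numpy as np

x = np.array([2.1, 2.5, 3.6, 4.0])
y = np.array([8.0, 10.0, 12.0, 14.0])

cov_xy = np.cov(x, y)[0, 1]                              # sample covariance
rho = cov_xy / (np.std(x, ddof=1) * np.std(y, ddof=1))   # normalized covariance
print(rho, np.corrcoef(x, y)[0, 1])                      # the two values agree
```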