Sufficiency Principle
A Sufficiency Principle is a rational rule of statistical inference which states that inferences about a parameter should depend on the data only through a sufficient statistic.
- AKA: Sufficiency.
- Context:
- This principle states that:
A statistic [math]\displaystyle{ t=T(X) }[/math] is a sufficient statistic for [math]\displaystyle{ \theta }[/math] if the statistical inference does not change when either [math]\displaystyle{ x_1 }[/math] or [math]\displaystyle{ x_2 }[/math] is observed, as long as [math]\displaystyle{ T(x_1) = T(x_2) }[/math].
[math]\displaystyle{ X }[/math] is the vector of collected data, a random variable with probability distribution [math]\displaystyle{ P(x,\theta) }[/math].
This implies that, conditional on [math]\displaystyle{ t=T(X) }[/math], the distribution of the data does not depend on the parameter [math]\displaystyle{ \theta }[/math], i.e. [math]\displaystyle{ \Pr(\theta, x\mid t) = \Pr(\theta\mid t) \Pr(x\mid t) }[/math].
- See: Sufficient Statistic, Statistical Inference, Likelihood Principle, Ancillary Statistic, Conditionality Principle, Birnbaum’s Theorem.
References
2015
- (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/Sufficiency_principle Retrieved 2016-07-30
- In statistics, a statistic is sufficient with respect to a statistical model and its associated unknown parameter if "no other statistic that can be calculated from the same sample provides any additional information as to the value of the parameter". In particular, a statistic is sufficient for a family of probability distributions if the sample from which it is calculated gives no additional information than does the statistic, as to which of those probability distributions is that of the population from which the sample was taken.
Roughly, given a set [math]\displaystyle{ \mathbf{X} }[/math] of independent identically distributed data conditioned on an unknown parameter [math]\displaystyle{ \theta }[/math], a sufficient statistic is a function [math]\displaystyle{ T(\mathbf{X}) }[/math] whose value contains all the information needed to compute any estimate of the parameter (e.g. a maximum likelihood estimate). Due to the factorization theorem (see below), for a sufficient statistic [math]\displaystyle{ T(\mathbf{X}) }[/math], the joint distribution can be written as [math]\displaystyle{ p(\mathbf{X}) = h(\mathbf{X}) \, g(\theta, T(\mathbf{X}))\, }[/math]. From this factorization, it can easily be seen that the maximum likelihood estimate of [math]\displaystyle{ \theta }[/math] will interact with [math]\displaystyle{ \mathbf{X} }[/math] only through [math]\displaystyle{ T(\mathbf{X}) }[/math]. Typically, the sufficient statistic is a simple function of the data, e.g. the sum of all the data points.
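The following is a minimal Python sketch of this point (an added illustration, not part of the quoted article; the sample and parameter values are hypothetical): for i.i.d. Bernoulli data the sum of the observations is a sufficient statistic, and the maximum likelihood estimate computed from the full sample coincides with the one computed from the sum alone.
```python
import numpy as np

# Sketch: for i.i.d. Bernoulli(theta) data, T(X) = sum(x_i) is sufficient,
# and the MLE depends on the data only through T(X).
rng = np.random.default_rng(0)
x = rng.binomial(1, 0.3, size=100)   # hypothetical sample

def mle_from_data(x):
    # Maximize the full log-likelihood over a grid of candidate theta values.
    thetas = np.linspace(0.001, 0.999, 999)
    loglik = np.array([np.sum(x * np.log(t) + (1 - x) * np.log(1 - t)) for t in thetas])
    return thetas[np.argmax(loglik)]

def mle_from_sufficient_statistic(t, n):
    # The same estimate computed from T(X) = sum(x) alone: theta_hat = t / n.
    return t / n

print(mle_from_data(x))                                 # ~0.3
print(mle_from_sufficient_statistic(x.sum(), len(x)))   # identical up to grid resolution
```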
More generally, the "unknown parameter" may represent a vector of unknown quantities or may represent everything about the model that is unknown or not fully specified. In such a case, the sufficient statistic may be a set of functions, called a jointly sufficient statistic. Typically, there are as many functions as there are parameters. For example, for a Gaussian distribution with unknown mean and variance, the jointly sufficient statistic, from which maximum likelihood estimates of both parameters can be estimated, consists of two functions, the sum of all data points and the sum of all squared data points (or equivalently, the sample mean and sample variance).
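A short sketch of the jointly sufficient statistic in the Gaussian case (again an added illustration with hypothetical values): the pair consisting of the sum of the data points and the sum of the squared data points recovers the maximum likelihood estimates of both the mean and the variance.
```python
import numpy as np

# Sketch (assumed setup): i.i.d. N(mu, sigma^2) data, for which
# (sum of x_i, sum of x_i^2) is a jointly sufficient statistic.
rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=1.5, size=1000)  # hypothetical sample
n = len(x)

# Jointly sufficient statistic: two functions of the data.
s1, s2 = x.sum(), np.sum(x ** 2)

# MLEs of mean and variance recovered from the sufficient statistic alone.
mu_hat = s1 / n
var_hat = s2 / n - mu_hat ** 2

# Same estimates computed directly from the raw data.
print(mu_hat, x.mean())   # agree
print(var_hat, x.var())   # agree (np.var uses the 1/n MLE form by default)
```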
The concept, due to Ronald Fisher, is equivalent to the statement that, conditional on the value of a sufficient statistic for a parameter, the joint probability distribution of the data does not depend on that parameter. Both the statistic and the underlying parameter can be vectors.
A related concept is that of linear sufficiency, which is weaker than sufficiency but can be applied in some cases where there is no sufficient statistic, although it is restricted to linear estimators. The Kolmogorov structure function deals with individual finite data, the related notion there is the algorithmic sufficient statistic.
The concept of sufficiency has fallen out of favor in descriptive statistics because of the strong dependence on an assumption of the distributional form (see Pitman–Koopman–Darmois theorem below), but remains very important in theoretical work.
Mathematical definition:
A statistic t=T(X) is sufficient for underlying parameter θ precisely if the conditional probability distribution of the data X, given the statistic t=T(X), does not depend on the parameter θ,
- [math]\displaystyle{ \Pr(x\mid t,\theta) = \Pr(x\mid t).\, }[/math]
- Instead of this expression, the definition still holds if one uses either of the equivalent expressions:
- [math]\displaystyle{ \Pr(\theta\mid t,x) = \Pr(\theta\mid t),\, }[/math]
- or
- [math]\displaystyle{ \Pr(\theta, x\mid t) = \Pr(\theta\mid t) \Pr(x\mid t),\, }[/math]
- which indicate, respectively, that the conditional probability of the parameter θ, given the sufficient statistic t, does not depend on the data x; and that the conditional probability of the parameter θ given the sufficient statistic t and the conditional probability of the data x given the sufficient statistic t are statistically independent.
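As a concrete worked illustration of this definition (a standard textbook example, not part of the quoted article), consider [math]\displaystyle{ n }[/math] i.i.d. Bernoulli([math]\displaystyle{ \theta }[/math]) observations with [math]\displaystyle{ T(X) = \sum_{i=1}^n X_i }[/math]. For any outcome [math]\displaystyle{ x }[/math] with [math]\displaystyle{ \sum_i x_i = t }[/math],
- [math]\displaystyle{ \Pr(X = x \mid T = t, \theta) = \frac{\theta^{t}(1-\theta)^{n-t}}{\binom{n}{t}\theta^{t}(1-\theta)^{n-t}} = \frac{1}{\binom{n}{t}}, }[/math]
which does not depend on [math]\displaystyle{ \theta }[/math], so [math]\displaystyle{ T(X) }[/math] is sufficient for [math]\displaystyle{ \theta }[/math].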
2014
- (Xu et al., 2014) ⇒ Xu, G., Zhu, S., & Chen, B. (2014). Decentralized Data Reduction With Quantization Constraints. IEEE Transactions on Signal Processing, 62(7), 1775-1784. DOI:10.1109/TSP.2014.2303432 [1]
- Suppose [math]\displaystyle{ \theta }[/math] is the parameter of inference interest and [math]\displaystyle{ X = \{ X_1, \cdots, X_n \} }[/math] is a random vector observation collected at the node, whose distribution is given by [math]\displaystyle{ p(x|\theta) }[/math]. The sufficiency principle states that a function (or statistic) of [math]\displaystyle{ X }[/math], denoted by [math]\displaystyle{ T(X) }[/math], is a sufficient statistic for [math]\displaystyle{ \theta }[/math] if the inference outcome does not change when either [math]\displaystyle{ x }[/math] or [math]\displaystyle{ y }[/math] is observed, as long as [math]\displaystyle{ T(x) = T(y) }[/math]. A useful tool to identify sufficient statistics is the Neyman-Fisher factorization theorem, which states that a statistic [math]\displaystyle{ T(X) }[/math] is sufficient for [math]\displaystyle{ \theta }[/math] if and only if there exist functions [math]\displaystyle{ g(t|\theta) }[/math] and [math]\displaystyle{ h(x) }[/math] such that
- [math]\displaystyle{ p(x|\theta) = g(T(x)|\theta)h(x) }[/math]
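As an illustration of the factorization theorem (an added sketch with a hypothetical Poisson model, not taken from the cited paper), the following Python snippet checks numerically that [math]\displaystyle{ p(x|\theta) = g(T(x)|\theta)h(x) }[/math] with [math]\displaystyle{ T(x) = \sum_i x_i }[/math], [math]\displaystyle{ g(t|\theta) = e^{-n\theta}\theta^{t} }[/math] and [math]\displaystyle{ h(x) = 1/\prod_i x_i! }[/math]:
```python
from math import exp, factorial, isclose, prod

def p(x, theta):
    # Joint Poisson(theta) likelihood of the i.i.d. sample x.
    return prod(exp(-theta) * theta ** xi / factorial(xi) for xi in x)

def g(t, theta, n):
    # Factor that depends on the data only through t = T(x) = sum(x).
    return exp(-n * theta) * theta ** t

def h(x):
    # Factor that does not depend on theta.
    return 1.0 / prod(factorial(xi) for xi in x)

x = [2, 0, 3, 1, 4]            # hypothetical sample
for theta in (0.5, 1.0, 2.5):
    # The factorization holds for every value of theta.
    assert isclose(p(x, theta), g(sum(x), theta, len(x)) * h(x))
```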