AIC Statistic
An AIC Statistic is a statistical model selection information criterion based on Kullback-Leibler divergence.
- AKA: Akaike's Information Criterion, AIC.
- Context:
- It can, for categorical labeled data, use a Likelihood-Ratio Goodness-of-Fit Statistic (see the sketch following this section).
- …
- Counter-Example(s):
- a Bayesian Information Criterion (BIC) Statistic.
- See: Statistical Goodness-of-Fit Measure, Information Entropy.
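The connection to the Likelihood-Ratio Goodness-of-Fit Statistic mentioned in the Context above can be sketched as follows. For categorical counts, [math]\displaystyle{ -2\log L }[/math] of a candidate model differs from the likelihood-ratio statistic [math]\displaystyle{ G^2 = 2\sum_c O_c \log(O_c/E_c) }[/math] only by the log-likelihood of the saturated model, which is the same for every candidate, so ranking candidates by [math]\displaystyle{ G^2 + 2K }[/math] is equivalent to ranking them by AIC. The Python sketch below illustrates this with made-up counts and two hypothetical candidate models; it is an illustration of the idea, not code from any cited source.

```python
# Illustrative sketch (assumed interpretation): AIC-style comparison of two
# hypothetical multinomial models via the likelihood-ratio statistic G^2.
import numpy as np

counts = np.array([34, 18, 26, 22])          # made-up observed counts in 4 categories
n = counts.sum()

def g2(expected):
    """Likelihood-ratio goodness-of-fit statistic G^2 = 2 * sum O * log(O / E)."""
    return 2.0 * np.sum(counts * np.log(counts / expected))

# Candidate A: all four categories equally likely (no estimated probability, K = 0).
e_uniform = np.full(4, n / 4)

# Candidate B: symmetric model with p1 = p4 and p2 = p3 (one estimated parameter, K = 1).
a = (counts[0] + counts[3]) / (2 * n)
b = (counts[1] + counts[2]) / (2 * n)
e_symmetric = n * np.array([a, b, b, a])

# Because -2*logL(model) = G^2(model) - 2*logL(saturated) and the saturated term is
# identical for every candidate, ranking by G^2 + 2K reproduces the AIC ranking.
for name, expected, k in [("uniform", e_uniform, 0), ("symmetric", e_symmetric, 1)]:
    print(f"{name}: G2 + 2K = {g2(expected) + 2 * k:.2f}")
```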
References
2012
- http://en.wikipedia.org/wiki/Akaike_information_criterion
- QUOTE: The Akaike information criterion is a measure of the relative goodness of fit of a statistical model. It was developed by Hirotsugu Akaike, under the name of "an information criterion" (AIC), and was first published by Akaike in 1974.[1] It is grounded in the concept of information entropy, in effect offering a relative measure of the information lost when a given model is used to describe reality. It can be said to describe the tradeoff between bias and variance in model construction, or loosely speaking between accuracy and complexity of the model.
AIC values provide a means for model selection. AIC does not provide a test of a model in the sense of testing a null hypothesis; i.e. AIC can tell nothing about how well a model fits the data in an absolute sense. If all the candidate models fit poorly, AIC will not give any warning of that.
- ↑ Akaike (1974)
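The relative nature of AIC described in the quote can be illustrated with a short sketch. The log-likelihoods and parameter counts below are made-up values for three hypothetical candidate models; only the differences [math]\displaystyle{ \Delta_i = AIC_i - AIC_{min} }[/math] carry information, and none of the values says anything about absolute goodness of fit.

```python
# Minimal sketch (made-up numbers): AIC used to compare candidate models.

# (maximized log-likelihood, number of estimated parameters K) per candidate model
candidates = {
    "model_A": (-130.2, 3),
    "model_B": (-128.9, 5),
    "model_C": (-135.7, 2),
}

aic = {name: -2.0 * loglik + 2 * k for name, (loglik, k) in candidates.items()}
best = min(aic.values())

# AIC is only meaningful relative to the other candidates: delta_i = AIC_i - AIC_min.
# A small delta indicates comparable support; no delta measures absolute fit.
for name, value in sorted(aic.items(), key=lambda kv: kv[1]):
    print(f"{name}: AIC = {value:.1f}   delta = {value - best:.1f}")
```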
2004
- (Burnham & Anderson, 2004) ⇒ Kenneth P. Burnham and David R. Anderson (2004). “Multimodel inference understanding AIC and BIC in model selection". Sociological methods & research, 33(2), 261-304. [doi:10.1177/0049124104268644]
- (...) K is the asymptotic bias correction term and is in no way arbitrary (as is sometimes erroneously stated in the literature). Akaike (1973,1974) multiplied this simple but profound result by –2 (for “historical reasons”), and this became Akaike’s information criterion:
[math]\displaystyle{ AIC = -2\log\big(L(\hat{\theta} \mid data)\big) + 2K. }[/math]
In the special case of least squares (LS) estimation with normally distributed errors, AIC can be expressed as
[math]\displaystyle{ AIC = n\log(\hat{\sigma}^2) + 2K, }[/math]
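As a hedged check of the least-squares special case quoted above, the following sketch (synthetic data, with the assumption that K counts the regression coefficients plus [math]\displaystyle{ \sigma^2 }[/math]) fits a straight line by ordinary least squares and confirms that [math]\displaystyle{ -2\log L + 2K }[/math] and [math]\displaystyle{ n\log(\hat{\sigma}^2) + 2K }[/math] differ only by the additive constant [math]\displaystyle{ n(\log(2\pi)+1) }[/math], which is the same for every model fitted to the same data and therefore irrelevant to model comparison.

```python
# Illustrative sketch: the least-squares form of AIC versus the full-likelihood form.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x + rng.normal(scale=0.3, size=x.size)   # synthetic straight-line data

# Ordinary least-squares fit of an intercept-and-slope model.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
rss = float(np.sum((y - X @ beta) ** 2))

n = y.size
k = X.shape[1] + 1            # intercept, slope, and sigma^2 all count toward K
sigma2_hat = rss / n          # maximum-likelihood estimate of the error variance

# Full normal log-likelihood evaluated at the MLE.
loglik = -0.5 * n * (np.log(2.0 * np.pi * sigma2_hat) + 1.0)
aic_full = -2.0 * loglik + 2 * k
aic_ls = n * np.log(sigma2_hat) + 2 * k

# The two expressions agree up to the constant n*(log(2*pi) + 1).
print(aic_full, aic_ls + n * (np.log(2.0 * np.pi) + 1.0))
```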
1974
- (Akaike, 1974) ⇒ Hirotugu Akaike. (1974). “A New Look at the Statistical Model Identification.” In: IEEE Transactions on Automatic Control, 19(6). doi:10.1109/TAC.1974.1100705.
- (...) The classical maximum likelihood estimation procedure is reviewed and a new estimate minimum information theoretical criterion (AIC) estimate (MAICE) which is designed for the purpose of statistical identification is introduced. When there are several competing models the MAICE is defined by the model and the maximum likelihood estimates of the parameters which give the minimum of AIC defined by AIC = (-2)log-(maximum likelihood) + 2(number of independently adjusted parameters within the model). MAICE provides a versatile procedure for statistical model identification which is free from the ambiguities inherent in the application of conventional hypothesis testing procedure. The practical utility of MAICE in time series analysis is demonstrated with some numerical examples.
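The MAICE procedure described in the abstract can be sketched on a time-series example: fit each candidate model, compute its AIC, and select the model that minimizes it. The sketch below is not from Akaike's paper; it uses AR models fitted by conditional least squares and the least-squares form of AIC, and the simulated series, candidate orders, and parameter counting are illustrative assumptions.

```python
# Minimal MAICE-style sketch: choose an AR order by minimum AIC (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n = 500
y = np.zeros(n)
for t in range(2, n):                           # simulate a zero-mean AR(2) process
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

def ar_aic(series, p):
    """AIC of a zero-mean AR(p) model fitted by conditional least squares."""
    m = len(series)
    target = series[p:]
    if p == 0:
        resid = target                          # no AR coefficients to estimate
        k = 1                                   # innovation variance only
    else:
        # Lagged design matrix: column j holds the series shifted by j steps.
        X = np.column_stack([series[p - j : m - j] for j in range(1, p + 1)])
        coef, *_ = np.linalg.lstsq(X, target, rcond=None)
        resid = target - X @ coef
        k = p + 1                               # p AR coefficients plus the variance
    sigma2 = np.mean(resid ** 2)
    # Conditional-likelihood simplification: the effective sample size is n - p.
    return len(target) * np.log(sigma2) + 2 * k

aics = {p: round(ar_aic(y, p), 1) for p in range(6)}
best_order = min(aics, key=aics.get)            # MAICE: the order with minimum AIC
print(aics, "selected order:", best_order)
```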