Chi-Square Statistic
A Chi-Square Statistic is a test statistic whose value is the sum of the squared differences between the observed and expected frequencies, each divided by the corresponding expected frequency.
- AKA: Chi-Square Test Statistic.
- Context:
- It is generally defined as [math]\displaystyle{ \chi^2=\sum\frac{(\text{observed}-\text{expected})^2}{\text{expected}} }[/math] (see the worked sketch after this list).
- It can be referenced by a Chi-Square Goodness-of-Fit Test, a Chi-Square Homogeneity Test, a Chi-Square Test of Independence, or a Chi-Square Analysis of Variance Test.
- The probability distribution of all possible values of a chi-square statistic is a chi-square distribution; in practice, the sampling distribution of the chi-square statistic approaches the chi-square distribution only when the sample is large enough.
- …
- Counter-Example(s)
- See: Sum of Squares, Chi-Squared Goodness-Of-Fit Testing Task, Chi-Squared Probability Function, Pearson's Chi-Squared Test.
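The following is a minimal sketch of the definitional formula above in a goodness-of-fit setting. The observed and expected counts (and the four-category layout) are purely illustrative assumptions; SciPy's chi2 distribution is used only to convert the statistic into an upper-tail p-value.

```python
# Minimal sketch: chi^2 = sum((observed - expected)^2 / expected)
# for hypothetical observed and expected frequencies.
from scipy.stats import chi2

observed = [18, 22, 27, 33]   # hypothetical observed counts
expected = [25, 25, 25, 25]   # hypothetical expected counts under H0

# Sum of squared differences, each divided by its expected frequency
chi_square = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Degrees of freedom for a goodness-of-fit test: number of categories - 1
df = len(observed) - 1

# Upper-tail p-value from the chi-square distribution with df degrees of freedom
p_value = chi2.sf(chi_square, df)

print(f"chi-square = {chi_square:.3f}, df = {df}, p-value = {p_value:.4f}")
```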
References
2017
- (Stattrek, 2017) ⇒ http://stattrek.com/m/statistics/dictionary.aspx?chi_square_statistic
- Suppose we select a random sample from a normal population. The chi-square statistic can be computed using the following equation:
- [math]\displaystyle{ \chi^2 = \frac{(n - 1)\,s^2}{\sigma^2} }[/math]
- where n is the sample size, σ is the population standard deviation, s is the sample standard deviation, and χ2 is the chi-square statistic.
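A minimal sketch of the Stattrek formula above, assuming a small hypothetical sample from a normal population and an assumed population standard deviation; the numbers are illustrative only.

```python
# Sketch of chi^2 = (n - 1) * s^2 / sigma^2 for a hypothetical sample.
import numpy as np
from scipy.stats import chi2

sigma = 2.0                                          # assumed population standard deviation
sample = np.array([4.1, 5.3, 3.8, 6.2, 5.0, 4.7])    # hypothetical sample values

n = sample.size
s_squared = sample.var(ddof=1)    # sample variance (n - 1 in the denominator)

# chi^2 = (n - 1) * s^2 / sigma^2
chi_square = (n - 1) * s_squared / sigma**2

# Under normality, this statistic follows a chi-square distribution with n - 1 df
p_value = chi2.sf(chi_square, df=n - 1)
print(f"chi-square = {chi_square:.3f}, p-value = {p_value:.4f}")
```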
2016
- (Wikipedia, 2016) ⇒ http://en.wikipedia.org/wiki/
- A chi-squared test, also written as χ2 test, is any statistical hypothesis test wherein the sampling distribution of the test statistic is a chi-squared distribution when the null hypothesis is true. Without other qualification, 'chi-squared test' often is used as short for Pearson's chi-squared test.
- Chi-squared tests are often constructed from a sum of squared errors, or through the sample variance. Test statistics that follow a chi-squared distribution arise from an assumption of independent normally distributed data, which is valid in many cases due to the central limit theorem. A chi-squared test can be used to attempt rejection of the null hypothesis that the data are independent.
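As an illustration of the passage above, the sketch below runs Pearson's chi-squared test of independence on a hypothetical 2×2 contingency table using SciPy's chi2_contingency; the counts are made up for demonstration.

```python
# Minimal sketch: Pearson chi-squared test of independence on a
# hypothetical 2x2 contingency table (counts are illustrative only).
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[30, 10],
                  [20, 40]])

# chi2_contingency applies Yates' continuity correction by default for 2x2 tables
chi_square, p_value, df, expected = chi2_contingency(table)
print(f"chi-square = {chi_square:.3f}, df = {df}, p-value = {p_value:.4f}")
print("expected counts under independence:\n", expected)
```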
1992
- (Kramer & Schmidhammer, 1992) ⇒ Kramer, Matthew, and James Schmidhammer. “The chi-squared statistic in ethology: use and misuse.” Animal Behaviour 44.5 (1992): 833-841. doi:10.1016/S0003-3472(05)80579-2
- ABSTRACT: Pearson's chi-squared and related tests are not appropriate for all frequency-type data. Lack of independence between observations can invalidate traditional contingency table analysis because sampling distributions are no longer Poisson, multinomial or product multinomial. The usual consequence is that a true null hypothesis is rejected too often, making dubious a claim of significance. If possible, counts should be verified as coming from a Poisson or multinomial distribution before conducting tests. Assuming independence is not sufficient; chi-squared and related tests are shown not to be robust to the violation of this assumption. Frequency-type ethological data, such as the number of encounters between individuals or performances of a behaviour, are likely to violate the assumption of independence. A superior approach for the analysis of these data is demonstrated using parametric and non-parametric analysis of variance (ANOVA).