Tukey's Range Test
A Tukey's Range Test is a post-hoc multiple comparison procedure and a statistical test that compares all possible pairs of sample means.
- AKA: Tukey's Test, Tukey's Method, Tukey's Honest Significance Test, Tukey's Honest Significant Difference Test, Tukey's HSD Test, Tukey–Kramer Method.
- …
- Counter-Example(s):
- See: Statistical Test, Post Hoc Analysis, Analysis of Variance, Multiple Comparisons Problem, Duncan's New Multiple Range Test, Student-Newman-Keuls Method, Familywise Error Rate.
References
2016
- (Wikipedia, 2016) ⇒ http://en.wikipedia.org/wiki/Tukey's_range_test Retrieved 2016-08-28
- Tukey's range test, also known as the Tukey's test, Tukey method, Tukey's honest significance test, Tukey's HSD (honest significant difference) test,or the Tukey–Kramer method, is a single-step multiple comparison procedure and statistical test. It can be used on raw data or in conjunction with an ANOVA (Post-hoc analysis) to find means that are significantly different from each other. Named after John Tukey, it compares all possible pairs of means, and is based on a studentized range distribution (q) (this distribution is similar to the distribution of t from the t-test. See below). The Tukey HSD tests should not be confused with the Tukey Mean Difference tests (also known as the Bland–Altman test).
- Tukey's test compares the means of every treatment to the means of every other treatment; that is, it applies simultaneously to the set of all pairwise comparisons
- [math]\displaystyle{ \mu_i-\mu_j \, }[/math]
- and identifies any difference between two means that is greater than the expected standard error. The confidence coefficient for the set, when all sample sizes are equal, is exactly 1 − α. For unequal sample sizes, the confidence coefficient is greater than 1 − α. In other words, the Tukey method is conservative when there are unequal sample sizes.
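The pairwise procedure described above can be sketched in code. This is a minimal illustration for the equal-sample-size case, assuming SciPy's `scipy.stats.studentized_range` for the q distribution mentioned in the text; the function name `tukey_hsd` and its interface are hypothetical, not part of any cited source.

```python
# Sketch of Tukey's HSD for k groups with equal sample size n.
# Assumes scipy >= 1.7 for scipy.stats.studentized_range.
from itertools import combinations
import numpy as np
from scipy.stats import studentized_range

def tukey_hsd(groups, alpha=0.05):
    """Return index pairs (i, j) whose sample means differ significantly."""
    k = len(groups)                       # number of treatments
    n = len(groups[0])                    # per-group sample size (assumed equal)
    means = [np.mean(g) for g in groups]
    df = k * (n - 1)                      # error degrees of freedom
    # Pooled within-group variance (mean squared error from one-way ANOVA)
    mse = np.mean([np.var(g, ddof=1) for g in groups])
    # Critical value of the studentized range distribution q
    q_crit = studentized_range.ppf(1 - alpha, k, df)
    hsd = q_crit * np.sqrt(mse / n)       # the "honest significant difference"
    return [(i, j) for i, j in combinations(range(k), 2)
            if abs(means[i] - means[j]) > hsd]
```

For example, with two nearly identical groups and one clearly shifted group, only the pairs involving the shifted group exceed the HSD threshold.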
1949
- (Tukey, 1949) ⇒ John W. Tukey (1949). "Comparing Individual Means in the Analysis of Variance". Biometrics, 5(2), 99-114. doi:10.2307/3001913
- The practitioner of the analysis of variance often wants to draw as many conclusions as are reasonable about the relation of the true means for individual "treatments", and a statement by the F-test (or the z-test) that they are not all alike leaves him thoroughly unsatisfied. The problem of breaking up the treatment means into distinguishable groups has not been discussed at much length, the solutions given in the various textbooks differ and, what is more important, seem solely based on intuition. After discussing the problem on a basis combining intuition with some hard, cold facts about the distributions of certain test quantities (or "statistics") a simple and definite procedure is proposed for dividing treatments into distinguishable groups, and for determining that the treatments within some of these groups are different, although there is not enough evidence to say "which is which." The procedure is illustrated on examples.
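The grouping idea in the abstract — dividing treatments into distinguishable groups — can be sketched as follows. This is a hypothetical illustration, not Tukey's original procedure: it simply sorts the sample means and places two treatments in the same group whenever their means lie within a precomputed HSD threshold of each other.

```python
# Hypothetical sketch: group treatments whose means differ by less than
# a given HSD threshold; means farther apart land in separate groups.
def distinguishable_groups(means, hsd):
    """Return lists of treatment indices that cannot be declared different."""
    order = sorted(range(len(means)), key=lambda i: means[i])
    groups = []
    for i, a in enumerate(order):
        # extend a group rightward while means stay within hsd of the start
        group = [b for b in order[i:] if means[b] - means[a] <= hsd]
        # keep the group only if it is not already contained in the last one
        if not groups or not set(group) <= set(groups[-1]):
            groups.append(group)
    return groups
```

For instance, means of 3.0, 3.1, and 12.0 with an HSD of about 2.7 yield two groups: the first two treatments are indistinguishable from each other, while the third stands alone.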