Multiple Comparison Inference Algorithm
A Multiple Comparison Inference Algorithm is a statistical inference algorithm that can be applied by a multiple comparisons inference system (for solving the multiple comparisons inference task).
References
2016
- (Wikipedia, 2016) ⇒ http://en.wikipedia.org/wiki/Multiple_comparisons_problem#Post-hoc_testing_of_ANOVAs Retrieved 2016-08-21
- Multiple comparison procedures are commonly used in an analysis of variance after obtaining a significant omnibus test result, like the ANOVA F-test. The significant ANOVA result suggests rejecting the global null hypothesis H0 that the means are the same across the groups being compared. Multiple comparison procedures are then used to determine which means differ. In a one-way ANOVA involving K group means, there are K(K − 1)/2 pairwise comparisons.
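The need for a correction can be made concrete by counting the comparisons and the resulting error inflation. A minimal sketch (the values K = 4 and α = 0.05 are illustrative assumptions, not from the source):

```python
from itertools import combinations

# Number of pairwise comparisons among K group means: K(K - 1)/2.
K = 4  # illustrative number of groups (assumption)
pairs = list(combinations(range(K), 2))
m = len(pairs)  # equals K * (K - 1) // 2, i.e. 6 for K = 4

# If each of the m comparisons were tested at level alpha independently,
# the probability of at least one false positive, the familywise error
# rate (FWER), inflates to 1 - (1 - alpha)^m.
alpha = 0.05
fwer = 1 - (1 - alpha) ** m

print(m)               # 6
print(round(fwer, 4))  # 0.2649, far above the nominal 0.05
```

This inflation is what the procedures listed below are designed to control.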
- A number of methods have been proposed for this problem, some of which are:
- Single-step procedures
- Tukey–Kramer method (Tukey's HSD) (1951)
- Scheffé's method (1953)
- Rodger's method (uses a decision-based error rate, precluding type I error rate inflation)
- Multi-step procedures based on Studentized range statistic
- Duncan's new multiple range test (1955)
- The Nemenyi test is similar to Tukey's range test in ANOVA.
- The Bonferroni–Dunn test allows pairwise comparisons while controlling the familywise error rate.
- Student–Newman–Keuls post-hoc analysis
- Dunnett's test (1955) for the comparison of a number of treatments to a single control group.
- Choosing the most appropriate multiple-comparison procedure for your specific situation is not easy. Many tests are available, and they differ in a number of ways.
- For example, if the variances of the groups being compared are similar, the Tukey–Kramer method is generally viewed as performing optimally or near-optimally in a broad variety of circumstances. The situation where the variances of the groups being compared differ is more complex, and different methods perform well in different circumstances.
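In the unequal-variance setting, pairwise comparisons are often based on Welch's t statistic rather than a pooled variance. A minimal pure-Python sketch (the data are illustrative, and the normal approximation to the p-value via `math.erf` is a simplifying assumption; an exact Welch test uses the t distribution with Welch–Satterthwaite degrees of freedom):

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two samples with possibly unequal variances."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variance of a
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)  # sample variance of b
    return (ma - mb) / math.sqrt(va / na + vb / nb)

def two_sided_p(t):
    """Two-sided p-value via a normal approximation (assumption: adequate
    for moderate samples; exact Welch uses the t distribution)."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

a = [1.0, 2.0, 3.0, 4.0]      # illustrative group data (assumption)
b = [10.0, 11.0, 12.0, 13.0]
t = welch_t(a, b)
print(round(t, 2))            # clearly separated groups give a large |t|
print(two_sided_p(t) < 0.01)
```

Applying this to each pair still requires one of the multiplicity corrections above, since the statistic alone does not control the familywise error rate.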
- The Kruskal–Wallis test is the non-parametric alternative to ANOVA. Multiple comparisons can then be carried out with pairwise tests (for example, Wilcoxon rank-sum tests) combined with a correction for multiplicity (for example, a Bonferroni correction) to determine which post-hoc comparisons are significant.
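The Bonferroni step in such a procedure simply scales each raw pairwise p-value by the number of comparisons. A minimal sketch (the raw p-values are illustrative assumptions; in practice they would come from the pairwise tests themselves):

```python
def bonferroni(p_values):
    """Bonferroni-adjusted p-values: multiply each raw p-value by the
    number of comparisons m and cap at 1. A comparison is significant at
    familywise level alpha if its adjusted p-value is <= alpha."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# Illustrative raw p-values from three pairwise post-hoc tests (assumption).
raw = [0.010, 0.040, 0.300]
adjusted = [round(p, 4) for p in bonferroni(raw)]
print(adjusted)  # [0.03, 0.12, 0.9]
```

At a familywise level of 0.05, only the first comparison remains significant after adjustment, even though two of the raw p-values fall below 0.05.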