Dixon's Q Test
A Dixon's Q Test is a statistical test for the identification and rejection of outliers.
- AKA: Q-test, Dixon Test.
- See: Statistical Test, Outlier, Grubbs' test for outliers.
References
2017
- (ITL-SED, 2017) ⇒ Retrieved 2017-01-08 from NIST (National Institute of Standards and Technology, US) website http://www.itl.nist.gov/div898/software/dataplot/refman1/auxillar/dixon.htm
2016
- (Wikipedia, 2016) ⇒ Retrieved 2016-08-21 from http://en.wikipedia.org/wiki/Dixon's_Q_test
- In statistics, Dixon's Q test, or simply the Q test, is used for identification and rejection of outliers. The test assumes a normal distribution and, per Robert Dean and Wilfrid Dixon and others, should be used sparingly and never more than once in a data set. To apply a Q test for bad data, arrange the data in order of increasing values and calculate Q as defined:
- [math]\displaystyle{ Q = \frac{\text{gap}}{\text{range}} }[/math]
- Where gap is the absolute difference between the outlier in question and the closest number to it, and range is the difference between the largest and smallest values in the data set. If Q > Qtable, where Qtable is a reference value corresponding to the sample size and confidence level, then reject the questionable point. Note that only one point may be rejected from a data set using a Q test.
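- The following is a minimal Python sketch of the procedure described above: sort the data, compute Q = gap/range for the most extreme value, and compare it against a tabulated critical value. The function name dixon_q_statistic, the sample numbers, and the Q90 critical value of 0.412 for n = 10 are illustrative assumptions, not part of the source; consult a published Q table for the sample size and confidence level you need.
<pre>
def dixon_q_statistic(values):
    """Compute Dixon's Q statistic for the most extreme value in `values`.

    Returns (q, suspect), where `suspect` is the candidate outlier
    (either the smallest or the largest value, whichever has the
    larger gap to its nearest neighbour).
    """
    data = sorted(values)
    data_range = data[-1] - data[0]          # range = max - min
    if data_range == 0:
        raise ValueError("All values are identical; Q is undefined.")
    gap_low = data[1] - data[0]              # gap for the smallest value
    gap_high = data[-1] - data[-2]           # gap for the largest value
    if gap_low > gap_high:
        return gap_low / data_range, data[0]
    return gap_high / data_range, data[-1]


# Illustrative data set (hypothetical measurements) with one suspect low value.
sample = [0.189, 0.167, 0.187, 0.183, 0.186, 0.182, 0.181, 0.184, 0.181, 0.177]
q, suspect = dixon_q_statistic(sample)

# Assumed critical value: Q90 for n = 10 is commonly tabulated as 0.412.
Q_TABLE_90 = 0.412
print(f"Q = {q:.3f} for suspect value {suspect}")
print("Reject the questionable point." if q > Q_TABLE_90 else "Keep the questionable point.")
</pre>
- For this sample, Q = 0.010/0.022 ≈ 0.455, which exceeds the assumed critical value of 0.412, so the suspect point would be rejected at that confidence level; per the note above, no further points should then be tested in the same data set.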