Non-Parametric Statistical Test
A Non-Parametric Statistical Test is a statistical hypothesis test whose test statistic is not based on hypothesized population parameters.
- AKA: Distribution-free Statistical Test.
- Context:
- It is based on a test statistic that does not depend on hypothesized population parameters.
- It does not make assumptions about the underlying probability distribution, the population variance, or relationships within the dataset. In particular, it does not require that the distribution of sample means approximate a normal distribution (an assumption that parametric tests justify via the central limit theorem).
- It uses the median as its measure of central tendency and is therefore less affected by outliers.
- It is applicable to nominal or ordinal data.
- It can use a histogram to estimate the probability distribution (see the sketch below).
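A minimal sketch of the histogram idea above, assuming Python with NumPy (the exponential sample is hypothetical, chosen only because it is clearly non-normal):

```python
import numpy as np

# Hypothetical, clearly non-normal sample (exponential data).
rng = np.random.default_rng(0)
sample = rng.exponential(scale=2.0, size=1000)

# density=True normalizes bin heights so the histogram integrates to 1,
# yielding a crude distribution-free estimate of the probability density.
density, bin_edges = np.histogram(sample, bins=30, density=True)
bin_centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])

for x, d in zip(bin_centers[:5], density[:5]):
    print(f"x = {x:.2f}: estimated density {d:.3f}")
```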
- Example(s):
- Chi-Square Goodness-of-Fit Test.
- Chi-Square Independence Test.
- Order Statistics.
- Distribution-Free Confidence Intervals for Percentiles Test.
- Randomness Test.
- Run Test.
- Rank Test.
- Wilcoxon–Mann–Whitney Tests.
- Kolmogorov-Smirnov Test.
- Goodness-of-fit Test.
- Spearman Correlation Test (sketched after this list).
- Kruskal-Wallis Test.
- Friedman's Test.
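As a concrete illustration of one item from this list, here is a hedged sketch of the Spearman correlation test via scipy.stats.spearmanr (the data are hypothetical, monotonically but not linearly related, which is exactly where a rank-based test is preferable to Pearson correlation):

```python
import numpy as np
from scipy import stats

# Hypothetical data: y grows monotonically with x, but not linearly.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=50)
y = np.exp(0.5 * x) + rng.normal(scale=1.0, size=50)

# Spearman's rho operates on ranks, so it needs no distributional assumptions.
rho, p = stats.spearmanr(x, y)
print(f"rho = {rho:.3f}, p = {p:.3g}")
```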
- Counter-Example(s):
- a Parametric Statistical Test, such as a Student's t-Test or an ANOVA F-Test.
- See: Test Statistic, Non-Parametric Model, Nonparametric Statistics, Probability Distribution, Parametrization, Descriptive Statistics.
References
2017a
- (Changing Works, 2017) ⇒ Retrieved on 2017-05-07 from http://changingminds.org/explanations/research/analysis/parametric_non-parametric.htm
- There are two types of test data and consequently different types of analysis. As the table below shows, parametric data has an underlying normal distribution which allows for more conclusions to be drawn as the shape can be mathematically described. Anything else is non-parametric.
| | Parametric Statistical Tests | Non-Parametric Statistical Tests |
|---|---|---|
| Assumed distribution | Normally Distributed | Any |
| Assumed variance | Homogeneous | Any |
| Typical data | Ratio or Interval | Ordinal or Nominal |
| Data set relationships | Independent | Any |
| Usual central measure | Mean | Median |
| Benefits | Can draw more conclusions | Simplicity; less affected by outliers |
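A toy illustration of the last two rows of this table, assuming NumPy and hypothetical measurements: a single outlier drags the mean substantially but barely moves the median.

```python
import numpy as np

clean = np.array([3.1, 2.8, 3.0, 3.2, 2.9])            # hypothetical measurements
with_outlier = np.append(clean, 30.0)                   # one gross outlier added

print(np.mean(clean), np.median(clean))                 # 3.0   3.0
print(np.mean(with_outlier), np.median(with_outlier))   # 7.5   3.05
```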
2017b
- (Jim Frost, 2015) ⇒ Retrieved on 2017-05-07 from http://blog.minitab.com/blog/adventures-in-statistics-2/choosing-between-a-nonparametric-test-and-a-parametric-test
- Nonparametric tests are like a parallel universe to parametric tests. The table shows related pairs of hypothesis tests that Minitab statistical software offers.
| Parametric tests (means) | Nonparametric tests (medians) |
|---|---|
| 1-sample t test | 1-sample Sign, 1-sample Wilcoxon |
| 2-sample t test | Mann-Whitney test |
| One-Way ANOVA | Kruskal-Wallis, Mood's median test |
| Factorial DOE with one factor and one blocking variable | Friedman test |
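A hedged sketch of how three of these pairs map onto open-source equivalents (Minitab itself is not shown; the scipy.stats functions below are assumed stand-ins, and the data are hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
a = rng.normal(10.0, 2.0, size=30)   # hypothetical sample 1
b = rng.normal(11.0, 2.0, size=30)   # hypothetical sample 2
c = rng.normal(12.0, 2.0, size=30)   # hypothetical sample 3

# 1-sample t test (mean)  vs  1-sample Wilcoxon (median):
print(stats.ttest_1samp(a, popmean=10.0))
print(stats.wilcoxon(a - 10.0))      # signed-rank test against median 10

# 2-sample t test  vs  Mann-Whitney test:
print(stats.ttest_ind(a, b))
print(stats.mannwhitneyu(a, b))

# One-Way ANOVA  vs  Kruskal-Wallis:
print(stats.f_oneway(a, b, c))
print(stats.kruskal(a, b, c))
```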
2017c
- (Surbhi, 2016) ⇒ Retrieved on 2017-05-07 from http://keydifferences.com/difference-between-parametric-and-nonparametric-test.html
| Parametric Test | Non-Parametric Test |
|---|---|
| Independent Sample t Test | Mann-Whitney test |
| Paired samples t test | Wilcoxon signed Rank test |
| One way Analysis of Variance (ANOVA) | Kruskal Wallis Test |
| One way repeated measures Analysis of Variance | Friedman's ANOVA |
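The paired and repeated-measures rows of this table can be sketched the same way, again with scipy.stats as an assumed stand-in and hypothetical data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
before = rng.normal(100.0, 15.0, size=25)        # hypothetical pre-treatment scores
after = before + rng.normal(2.0, 5.0, size=25)   # hypothetical post-treatment scores

# Paired samples t test  vs  Wilcoxon signed-rank test:
print(stats.ttest_rel(before, after))
print(stats.wilcoxon(before, after))

# Repeated-measures designs: Friedman's test needs at least 3 conditions.
third = before + rng.normal(1.0, 5.0, size=25)   # hypothetical third condition
print(stats.friedmanchisquare(before, after, third))
```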
2016a
- (Wikipedia, 2016) ⇒ https://en.wikipedia.org/wiki/Nonparametric_statistics Retrieved:2016-5-24.
- Nonparametric statistics are statistics not based on parameterized families of probability distributions. They include both descriptive and inferential statistics. The typical parameters are the mean, variance, etc. Unlike parametric statistics, nonparametric statistics make no assumptions about the probability distributions of the variables being assessed. The difference between parametric models and non-parametric models is that the former has a fixed number of parameters, while the latter grows the number of parameters with the amount of training data. Note that "non-parametric" does not mean the model has no parameters; rather, the number and nature of the parameters are determined by the training data rather than fixed in advance by the model.
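A small sketch of the fixed-versus-growing-parameters distinction, assuming SciPy: fitting a normal distribution keeps exactly two numbers regardless of sample size, while a kernel density estimate effectively keeps every observation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
data = rng.exponential(scale=1.5, size=500)      # hypothetical sample

# Parametric: the normal model is summarized by a fixed number of
# parameters (mean and standard deviation), however large the sample.
mu, sigma = data.mean(), data.std()
print(f"parametric model: 2 parameters (mu={mu:.2f}, sigma={sigma:.2f})")

# Non-parametric: a kernel density estimate retains all 500 observations;
# its effective parameter count grows with the data.
kde = stats.gaussian_kde(data)
print(f"KDE stores {kde.dataset.shape[1]} data points")
print(f"density at x=1.0: normal fit {stats.norm.pdf(1.0, mu, sigma):.3f}, "
      f"KDE {kde(1.0)[0]:.3f}")
```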
2016b
- (Encyclopedia of Math, 2016) ⇒ https://www.encyclopediaofmath.org/index.php/Non-parametric_test Retrieved:2016-9-11.
- A statistical test of a hypothesis [math]\displaystyle{ H_0:\; \theta\in\Theta_0\subset\Theta }[/math] against the alternative [math]\displaystyle{ H_1:\; \theta\in\Theta_1=\Theta\setminus\Theta_0 }[/math] when at least one of the two parameter sets [math]\displaystyle{ \Theta_0 }[/math] and [math]\displaystyle{ \Theta_1 }[/math] is not topologically equivalent to a subset of a Euclidean space. Apart from this definition there is also another, wider one, according to which a statistical test is called non-parametric if the statistical inferences obtained using it do not depend on the particular null-hypothesis probability distribution of the observable random variables on the basis of which one wants to test [math]\displaystyle{ H_0 }[/math] against [math]\displaystyle{ H_1 }[/math]. In this case, instead of the term "non-parametric test" one speaks frequently of a "distribution-free statistical test". The Kolmogorov test is a classic example of a non-parametric test.
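A hedged sketch of the Kolmogorov test mentioned above, using scipy.stats.kstest against a fully specified null distribution (the samples are hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# H0: the sample was drawn from a standard normal distribution.
normal_sample = rng.standard_normal(200)
skewed_sample = rng.exponential(size=200)

# The null distribution must be fully specified in advance; estimating its
# parameters from the same sample would invalidate the classic KS p-value.
print(stats.kstest(normal_sample, 'norm'))   # large p: no evidence against H0
print(stats.kstest(skewed_sample, 'norm'))   # tiny p: H0 rejected
```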
2016c
- (Quality Control Plan (At-PQC), 2016) ⇒ http://www.quality-control-plan.com/StatGuide/sg_glos.htm Retrieved 2016-10-12
- QUOTE: Nonparametric tests are tests that do not make distributional assumptions, particularly the usual distributional assumptions of the normal-theory based tests. These include tests that do not involve population parameters at all (truly nonparametric tests, such as the chi-square goodness-of-fit test), and distribution-free tests, whose validity does not depend on the population distribution(s) from which the data have been sampled. In particular, nonparametric tests usually drop the assumption that the data come from normally distributed populations. However, distribution-free tests generally do make some assumptions, such as equality of population variances.
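A minimal sketch of the "truly nonparametric" chi-square goodness-of-fit test named in this quote, with hypothetical die-roll counts and scipy.stats assumed available:

```python
from scipy import stats

# Hypothetical counts from 120 rolls of a die; H0: the die is fair.
observed = [25, 18, 22, 16, 24, 15]   # sums to 120
expected = [120 / 6] * 6              # 20 per face under H0

# chisquare compares observed and expected counts directly; no assumption
# is made about any underlying continuous distribution.
stat, p = stats.chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {stat:.2f}, p = {p:.3f}")
```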
2008
- (Shasha & Wilson, 2008) ⇒ Dennis Shasha and Manda Wilson. (2008). “Statistics is Easy!” Synthesis Lectures on Mathematics and Statistics. doi:10.2200/S00142ED1V01Y200807MAS001
- Abstract: Statistics is the activity of inferring results about a population given a sample. Historically, statistics books assume an underlying distribution to the data (typically, the normal distribution) and derive results under that assumption. Unfortunately, in real life, one cannot normally be sure of the underlying distribution. For that reason, this book presents a distribution-independent approach to statistics based on a simple computational counting idea called resampling.
This book explains the basic concepts of resampling, then systematically presents the standard statistical measures along with programs (in the language Python) to calculate them using resampling, and finally illustrates the use of the measures and programs in a case study. The text uses junior high school algebra and many examples to explain the concepts. The ideal reader has mastered at least elementary mathematics, likes to think procedurally, and is comfortable with computers.
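A short sketch of the resampling idea the book is built on: a permutation test for a difference in group means, written in plain Python/NumPy with hypothetical data (this illustrates the general approach, not code from the book):

```python
import numpy as np

rng = np.random.default_rng(6)
group_a = rng.normal(10.0, 2.0, size=40)   # hypothetical measurements
group_b = rng.normal(11.0, 2.0, size=40)

observed_diff = group_a.mean() - group_b.mean()
pooled = np.concatenate([group_a, group_b])

# Repeatedly shuffle the pooled data and recompute the statistic; the
# p-value is the fraction of shuffles at least as extreme as observed.
n_resamples, count = 10_000, 0
for _ in range(n_resamples):
    rng.shuffle(pooled)
    diff = pooled[:40].mean() - pooled[40:].mean()
    if abs(diff) >= abs(observed_diff):
        count += 1

print(f"observed diff = {observed_diff:.3f}, "
      f"permutation p = {count / n_resamples:.4f}")
```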