Normality Test
A Normality Test is a hypothesis test for determining whether a dataset can be modeled by a normal distribution.
- See: Null Hypothesis, Shapiro–Wilk Test, Cramér–von Mises Test, Kolmogorov–Smirnov Test, Anderson–Darling Test.
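The tests listed above are widely implemented in statistical software. As a minimal sketch (assuming Python with SciPy, which this page does not prescribe), the following runs several of the named tests on a simulated sample; the data and the plug-in parameter estimates are purely illustrative:

```python
# A minimal sketch, assuming Python with SciPy (not prescribed by this page);
# the simulated sample and parameter estimates are purely illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
data = rng.normal(loc=0.0, scale=1.0, size=200)  # illustrative sample

# Shapiro–Wilk test
w_stat, p_shapiro = stats.shapiro(data)

# Kolmogorov–Smirnov test against a normal with parameters estimated from the
# sample (illustration only; estimating parameters this way affects the
# nominal p-value, as in the Lilliefors variant of the test)
ks_stat, p_ks = stats.kstest(data, "norm", args=(data.mean(), data.std(ddof=1)))

# Anderson–Darling test (SciPy returns critical values rather than a p-value)
ad = stats.anderson(data, dist="norm")

# Cramér–von Mises test (requires SciPy >= 1.6)
cvm = stats.cramervonmises(data, "norm", args=(data.mean(), data.std(ddof=1)))

print(f"Shapiro–Wilk:       W={w_stat:.3f}, p={p_shapiro:.3f}")
print(f"Kolmogorov–Smirnov: D={ks_stat:.3f}, p={p_ks:.3f}")
print(f"Anderson–Darling:   A2={ad.statistic:.3f}, 5% critical value={ad.critical_values[2]:.3f}")
print(f"Cramér–von Mises:   T={cvm.statistic:.3f}, p={cvm.pvalue:.3f}")
```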
References
2016
- (Wikipedia, 2016) ⇒ https://www.wikiwand.com/en/Normality_test Retrieved 2016-07-31
- In statistics, normality tests are used to determine if a data set is well-modeled by a normal distribution and to compute how likely it is for a random variable underlying the data set to be normally distributed.
More precisely, the tests are a form of model selection, and can be interpreted several ways, depending on one's interpretations of probability:
- In descriptive statistics terms, one measures a goodness of fit of a normal model to the data – if the fit is poor then the data are not well modeled in that respect by a normal distribution, without making a judgment on any underlying variable.
- In frequentist statistical hypothesis testing, the data are tested against the null hypothesis that they are normally distributed (a minimal worked example follows this list).
- In Bayesian statistics, one does not "test normality" per se. Instead, one computes the likelihood that the data come from a normal distribution with given parameters μ,σ (for all μ,σ) and compares it with the likelihood that the data come from the other distributions under consideration: most simply via a Bayes factor (the relative likelihood of seeing the data under the different models), or more finely by placing a prior distribution on the possible models and parameters and computing a posterior distribution from the computed likelihoods (a numerical sketch of the Bayes-factor comparison also follows this list).
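To make the frequentist reading concrete, here is a minimal sketch, assuming SciPy's Shapiro–Wilk implementation and a conventional 0.05 significance level (both are illustrative choices, not requirements of the article): the null hypothesis is that the sample is normally distributed, and a small p-value counts as evidence against it.

```python
# A minimal sketch of the frequentist interpretation, assuming SciPy's
# Shapiro–Wilk test and an illustrative 0.05 significance level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
samples = {
    "normal": rng.normal(size=300),            # drawn from a normal distribution
    "exponential": rng.exponential(size=300),  # clearly non-normal
}

alpha = 0.05  # illustrative significance level
for name, sample in samples.items():
    stat, p_value = stats.shapiro(sample)
    if p_value < alpha:
        decision = "reject H0 (evidence of non-normality)"
    else:
        decision = "fail to reject H0 (no evidence against normality)"
    print(f"{name:11s}: W={stat:.3f}, p={p_value:.3f} -> {decision}")
```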
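The Bayesian comparison can likewise be sketched numerically. In the following grid approximation of the marginal likelihoods, the uniform priors over (location, scale) and the choice of a Laplace model as the single alternative are illustrative assumptions rather than anything specified by the article; the result is a crude Bayes factor for "normal" versus "Laplace":

```python
# A hedged numerical sketch of the Bayesian comparison: a grid-approximated
# Bayes factor between a normal model and one alternative (Laplace). Priors,
# grids, and the alternative model are all illustrative assumptions.
import numpy as np
from scipy import stats
from scipy.special import logsumexp

rng = np.random.default_rng(seed=2)
data = rng.normal(loc=0.0, scale=1.0, size=100)  # illustrative sample

# Uniform prior over a coarse grid of (location, scale) values -- purely
# illustrative; a real analysis would choose priors more carefully.
locs = np.linspace(-2.0, 2.0, 81)
scales = np.linspace(0.2, 3.0, 57)
log_prior = -np.log(locs.size * scales.size)  # uniform over the grid points

def log_marginal_likelihood(dist):
    """Grid-approximate log p(data | model) by summing likelihood x prior."""
    log_terms = [
        dist.logpdf(data, loc=m, scale=s).sum() + log_prior
        for m in locs
        for s in scales
    ]
    return logsumexp(log_terms)

log_ml_normal = log_marginal_likelihood(stats.norm)
log_ml_laplace = log_marginal_likelihood(stats.laplace)

# A Bayes factor > 1 favors the normal model under these (assumed) priors.
bayes_factor = np.exp(log_ml_normal - log_ml_laplace)
print(f"log marginal likelihood (normal):  {log_ml_normal:.2f}")
print(f"log marginal likelihood (laplace): {log_ml_laplace:.2f}")
print(f"Bayes factor (normal vs laplace):  {bayes_factor:.2f}")
```

In practice the marginal likelihoods would usually be obtained analytically (with conjugate priors) or by sampling rather than over a grid; the grid is used here only to keep the sketch self-contained.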