Z Goodness-of-Fit Measure
A Z Goodness-of-Fit Measure is a goodness-of-fit test for which the distribution of the Z-test statistic (under the null hypothesis) can be approximated by a normal distribution.
- AKA: Normal Test.
- Context:
- It can be an input to a Z-test for Correlation.
- Example(s):
- …
- Counter-Example(s):
- See: z-Distribution, Cumulative Distribution Function, Statistical Hypothesis Testing, Test Statistic, Expected Value, Standard Deviation, Standard Score.
References
2016
- (Wikipedia, 2016) ⇒ http://wikipedia.org/wiki/Z-test Retrieved:2016-3-9.
- A Z-test is any statistical test for which the distribution of the test statistic under the null hypothesis can be approximated by a normal distribution. Because of the central limit theorem, many test statistics are approximately normally distributed for large samples. For each significance level, the Z-test has a single critical value (for example, 1.96 for 5% two-tailed), which makes it more convenient than the Student's t-test, which has separate critical values for each sample size. Therefore, many statistical tests can be conveniently performed as approximate Z-tests if the sample size is large or the population variance is known. If the population variance is unknown (and therefore has to be estimated from the sample itself) and the sample size is not large (n < 30), the Student's t-test may be more appropriate.
If T is a statistic that is approximately normally distributed under the null hypothesis, the next step in performing a Z-test is to estimate the expected value θ of T under the null hypothesis, and then obtain an estimate s of the standard deviation of T. After that the standard score Z = (T − θ) / s is calculated, from which one-tailed and two-tailed p-values can be calculated as Φ(−Z) (for upper-tailed tests), Φ(Z) (for lower-tailed tests) and 2Φ(−|Z|) (for two-tailed tests) where Φ is the standard normal cumulative distribution function.
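The following Python sketch illustrates these steps for the common case where T is a sample mean and the population standard deviation is known. The sample values, θ, and σ are invented for illustration, and scipy's normal CDF stands in for Φ; this is a minimal sketch, not a general implementation:
```python
import math
from scipy.stats import norm

# Hypothetical data: T is the sample mean of n observations drawn from a
# population whose standard deviation sigma is assumed known.
sample = [5.1, 4.8, 5.6, 5.0, 4.9, 5.3, 5.2, 4.7, 5.4, 5.0]
theta = 5.0    # expected value of T under the null hypothesis (assumed)
sigma = 0.3    # known population standard deviation (assumed)

n = len(sample)
T = sum(sample) / n           # the test statistic (here, the sample mean)
s = sigma / math.sqrt(n)      # standard deviation of T under H0
Z = (T - theta) / s           # standard score Z = (T - theta) / s

p_upper = norm.cdf(-Z)           # Phi(-Z): upper-tailed p-value
p_lower = norm.cdf(Z)            # Phi(Z): lower-tailed p-value
p_two = 2 * norm.cdf(-abs(Z))    # 2 * Phi(-|Z|): two-tailed p-value

print(f"Z = {Z:.3f}, two-tailed p = {p_two:.3f}")
```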
2008
- (Upton & Cook, 2008) ⇒ Graham Upton, and Ian Cook. (2008). “A Dictionary of Statistics, 2nd edition revised.” Oxford University Press. ISBN:0199541450
- QUOTE: Z-test (normal test): A test, using a sample of n independent observations, of the null hypothesis that a normal distribution with variance [math]\displaystyle{ \sigma^2 }[/math] has mean [math]\displaystyle{ \mu }[/math]. Writing the sample mean as [math]\displaystyle{ \bar{x} }[/math], the test statistic is z, given by [math]\displaystyle{ z = \frac{\bar{x} - \mu}{\sigma/\sqrt{n}} }[/math].
In the case where the alternative hypothesis is that the mean is not [math]\displaystyle{ \mu }[/math], then, if |z| > 1.96, there is evidence to reject the null hypothesis at the 5% significance level in favour of the alternative hypothesis. See HYPOTHESIS TEST.
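A minimal Python sketch of this decision rule; the sample values, μ, and σ below are invented for illustration, not taken from the dictionary:
```python
import math

def z_test_mean(sample, mu, sigma, critical=1.96):
    """Two-sided z-test of H0: the population mean equals mu,
    for a normal population with known variance sigma**2."""
    n = len(sample)
    x_bar = sum(sample) / n
    z = (x_bar - mu) / (sigma / math.sqrt(n))
    # Reject H0 at the 5% significance level when |z| > 1.96.
    return z, abs(z) > critical

# Invented sample and parameters, purely for illustration.
z, reject = z_test_mean([10.2, 9.8, 10.5, 10.1, 9.9], mu=10.0, sigma=0.4)
print(f"z = {z:.3f}, reject H0: {reject}")
```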
2004
- (Durtschi et al., 2004) ⇒ Cindy Durtschi, William Hillison, and Carl Pacini. (2004). “The Effective Use of Benford's Law to Assist in Detecting Fraud in Accounting Data.” In: Journal of forensic accounting, 5(1).
- QUOTE: ... A z-statistic can be used to determine whether a particular digit’s proportion from a set of data is suspect. In other words, does a digit appear more or less frequently in a particular position than a Benford distribution would predict? The z-statistic is calculated as follows (Nigrini 1996): :[math]\displaystyle{ z = \frac{|p_o - p_e| - \frac{1}{2n}}{s_i} \qquad (3) }[/math] Where [math]\displaystyle{ p_o }[/math] is the observed proportion in the data set;
[math]\displaystyle{ p_e }[/math] is the expected proportion based on Benford’s law;
[math]\displaystyle{ s_i }[/math] is the standard deviation for a particular digit; and [math]\displaystyle{ n }[/math] is the number of observations (the term [math]\displaystyle{ 1/(2n) }[/math] is a continuity correction factor and is used only when it is smaller than the absolute value term). A z-statistic of 1.96 would indicate a p-value of .05 (95 percent confidence), while a z-statistic of 1.64 would suggest a p-value of .10 (90 percent confidence).
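A minimal Python sketch of Equation (3). Since the quote does not spell out [math]\displaystyle{ s_i }[/math], the sketch assumes the usual standard deviation of a sample proportion, [math]\displaystyle{ s_i = \sqrt{p_e(1 - p_e)/n} }[/math]; the inputs are invented for illustration:
```python
import math

def benford_z(p_obs, p_exp, n):
    """Z-statistic of Equation (3) for one digit's proportion.

    The continuity correction 1/(2n) is applied only when it is
    smaller than the absolute difference, as the quote specifies.
    s_i is assumed to be sqrt(p_exp * (1 - p_exp) / n).
    """
    s_i = math.sqrt(p_exp * (1 - p_exp) / n)  # assumed form of s_i
    diff = abs(p_obs - p_exp)
    cc = 1 / (2 * n)
    if cc < diff:
        diff -= cc
    return diff / s_i

# Benford's law for a leading digit 1: p_e = log10(1 + 1/1) ≈ 0.301.
z = benford_z(p_obs=0.35, p_exp=math.log10(2), n=1000)
print(f"z = {z:.3f}")  # compare with 1.96 (5% level) or 1.64 (10% level)
```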