Test Statistic
A Test Statistic is a statistic function, [math]\displaystyle{ t= g(s) }[/math], that ... can be used to determine the veracity of the statistical hypotheses.
- Context:
- It can be defined as the ratio of the systematic variance to the random variance, or equivalently as the ratio of the experimental effect to the variability.
- Parametric Statistical Tests: It is determined by the relationship between a point estimate and the corresponding population parameter, normalized by a standard deviation.
- It is generally defined as [math]\displaystyle{ t= f(\hat{\theta}(X),\sigma(X,\theta),\theta_0)=\frac{\hat{\theta}(X)-\theta_0}{\sigma (X,\theta)} }[/math]
- where [math]\displaystyle{ \hat{\theta}(X) }[/math] is a point estimate derived from sample data of the random variable [math]\displaystyle{ X }[/math], [math]\displaystyle{ \theta_0 }[/math] is the population parameter value stated under the null hypothesis (i.e. [math]\displaystyle{ H_0:\; \theta=\theta_0 }[/math]), and [math]\displaystyle{ \sigma(X,\theta) }[/math] is a standard deviation that depends on both the sampling distribution and the population distribution.
- Non-Parametric Statistical Tests: It does not depend on population parameters or on an assumed sampling distribution.
- It is generally defined as a sum of observed differences or ranks, [math]\displaystyle{ t= \sum f(R_i) }[/math]
- Example(s)
- Parametric Statistical Tests:
- [math]\displaystyle{ t=\frac{\overline{x}-\mu_0}{s/\sqrt{n}} }[/math], One-sample t-Statistic obtained from the sample mean value ([math]\displaystyle{ \overline{x} }[/math]), the population mean value stated by the null hypothesis ([math]\displaystyle{ \mu_0 }[/math]), sample standard deviation ([math]\displaystyle{ s }[/math]) and sample size ([math]\displaystyle{ n }[/math]).
- [math]\displaystyle{ t = \frac{\bar{d} - D}{s_d/\sqrt{n}} }[/math], Matched-Pair t-Statistic obtained from the mean difference between matched pairs in the sample ([math]\displaystyle{ \bar{d} }[/math]), hypothesized difference between population means (D) and the standard deviation of the differences for each matched pair ([math]\displaystyle{ s_d }[/math]).
- [math]\displaystyle{ t = \frac{(\overline{x}_1 - \overline{x}_2) - d_0}{s_p \sqrt{1/n_1+1/n_2}} }[/math], Independent Two-Sample t-Statistic obtained from the sample means of the samples drawn from populations 1 and 2 ([math]\displaystyle{ \overline{x}_1, \; \overline{x}_2 }[/math]) with sample sizes [math]\displaystyle{ n_1 }[/math] and [math]\displaystyle{ n_2 }[/math], the hypothesized difference between population means ([math]\displaystyle{ d_0 }[/math]), and the pooled standard deviation ([math]\displaystyle{ s_p }[/math]).
- Z-statistic.
- F-statistic.
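The parametric statistics above can be computed directly from their formulas. A minimal Python sketch (the helper names and sample values are illustrative, not from the source):

```python
import math

def one_sample_t(x, mu0):
    """One-sample t-statistic: t = (x_bar - mu0) / (s / sqrt(n))."""
    n = len(x)
    x_bar = sum(x) / n
    s = math.sqrt(sum((xi - x_bar) ** 2 for xi in x) / (n - 1))
    return (x_bar - mu0) / (s / math.sqrt(n))

def paired_t(x, y, hyp_diff=0.0):
    """Matched-pair t-statistic on the differences d_i = x_i - y_i."""
    d = [xi - yi for xi, yi in zip(x, y)]
    return one_sample_t(d, hyp_diff)

def two_sample_t(x1, x2, d0=0.0):
    """Independent two-sample t-statistic with pooled standard deviation."""
    n1, n2 = len(x1), len(x2)
    m1, m2 = sum(x1) / n1, sum(x2) / n2
    ss1 = sum((v - m1) ** 2 for v in x1)
    ss2 = sum((v - m2) ** 2 for v in x2)
    sp = math.sqrt((ss1 + ss2) / (n1 + n2 - 2))  # pooled std. deviation
    return (m1 - m2 - d0) / (sp * math.sqrt(1 / n1 + 1 / n2))
```

For instance, `one_sample_t([1, 2, 3, 4, 5], 3.0)` yields 0, since the sample mean equals the hypothesized mean.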
- Non-Parametric Statistical Tests:
- [math]\displaystyle{ W =\sum^n_{i=1} R^{(+)}_i }[/math], Wilcoxon Signed-Rank Test Statistic obtained as the sum of the positive ranks ([math]\displaystyle{ R^{(+)}_i }[/math]).
- [math]\displaystyle{ \chi^2=\sum^n_{i=1}\frac{(O_i-E_i)^2}{E_i} }[/math], Chi-Square Statistic obtained from the observed frequency count for the ith level of the categorical variable ([math]\displaystyle{ O_i }[/math]), and the expected frequency count for the ith level of the categorical variable ([math]\displaystyle{ E_i }[/math]).
- [math]\displaystyle{ U = n_1 n_2 + \frac{n_2(n_2+1)}{2} - \sum^{n_1+n_2}_{i=n_1+1}R_i }[/math], Mann-Whitney U Statistic obtained from the first sample size ([math]\displaystyle{ n_1 }[/math]), the second sample size ([math]\displaystyle{ n_2 }[/math]), and the sum of the ranks assigned to the second sample in the combined ranking ([math]\displaystyle{ R_i }[/math]).
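The rank-based statistics above can likewise be computed from their definitions. A minimal Python sketch, assuming average ranks for ties and the common convention of dropping zero differences in the Wilcoxon test (helper names are illustrative):

```python
def ranks(values):
    """Average ranks (1-based); tied values share the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def wilcoxon_w(d):
    """W: sum of ranks of |d_i| over the positive differences."""
    d = [di for di in d if di != 0]          # drop zero differences
    r = ranks([abs(di) for di in d])
    return sum(ri for ri, di in zip(r, d) if di > 0)

def chi_square(observed, expected):
    """Chi-square statistic: sum of (O_i - E_i)^2 / E_i."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def mann_whitney_u(x1, x2):
    """U from the rank sum of sample 2 in the combined ranking."""
    n1, n2 = len(x1), len(x2)
    r = ranks(list(x1) + list(x2))
    r2 = sum(r[n1:])                          # ranks of sample-2 values
    return n1 * n2 + n2 * (n2 + 1) / 2 - r2
```

For example, `chi_square([10, 10], [10, 10])` is 0, since the observed counts match the expected counts exactly.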
- …
- Counter-Example(s)
- See: Statistical Hypothesis Testing, Null Hypothesis, P-Value, Statistical Population, Sufficient Statistic, Order Statistic, Likelihood-ratio test, Sampling Distribution.
References
2016
- (Wikipedia, 2016) ⇒ http://en.wikipedia.org/wiki/test_statistic Retrieved 2016-09-11
- QUOTE: A test statistic is a statistic (a quantity derived from the sample) used in statistical hypothesis testing. A hypothesis test is typically specified in terms of a test statistic, considered as a numerical summary of a data-set that reduces the data to one value that can be used to perform the hypothesis test. In general, a test statistic is selected or defined in such a way as to quantify, within observed data, behaviours that would distinguish the null from the alternative hypothesis, where such an alternative is prescribed, or that would characterize the null hypothesis if there is no explicitly stated alternative hypothesis (...) For example, suppose the task is to test whether a coin is fair (i.e. has equal probabilities of producing a head or a tail). If the coin is flipped 100 times and the results are recorded, the raw data can be represented as a sequence of 100 heads and tails. If there is interest in the marginal probability of obtaining a head, only the number T out of the 100 flips that produced a head needs to be recorded. But T can also be used as a test statistic in one of two ways:
- the exact sampling distribution of T under the null hypothesis is the binomial distribution with parameters 0.5 and 100.
- the value of T can be compared with its expected value under the null hypothesis of 50, and since the sample size is large a normal distribution can be used as an approximation to the sampling distribution either for T or for the revised test statistic T−50.
- Using one of these sampling distributions, it is possible to compute either a one-tailed or two-tailed p-value for the null hypothesis that the coin is fair. Note that the test statistic in this case reduces a set of 100 numbers to a single numerical summary that can be used for testing.
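The coin-flip example can be sketched in Python using only the standard library: the exact p-value sums binomial probabilities for outcomes at least as far from 50 as the observed count T, and the normal approximation uses the z-score of T (function names here are illustrative):

```python
import math

def binom_pmf(k, n, p):
    """P(T = k) for T ~ Binomial(n, p)."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def exact_two_sided_p(t, n=100, p=0.5):
    """Exact two-tailed p-value: total probability of outcomes
    at least as far from the expected count n*p as t is."""
    dev = abs(t - n * p)
    return sum(binom_pmf(k, n, p) for k in range(n + 1)
               if abs(k - n * p) >= dev)

def normal_approx_two_sided_p(t, n=100, p=0.5):
    """Normal approximation via z = (t - n*p) / sqrt(n*p*(1-p));
    the two-sided tail is erfc(|z| / sqrt(2))."""
    z = (t - n * p) / math.sqrt(n * p * (1 - p))
    return math.erfc(abs(z) / math.sqrt(2))
```

Observing exactly 50 heads gives a p-value of 1 under both methods, while more extreme counts such as T = 60 yield small p-values from either the exact binomial distribution or its normal approximation.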
2016
- (Stat Trek, 2016) ⇒ http://stattrek.com/statistics/dictionary.aspx?definition=Statistic Retrieved: 10-02-2016
- QUOTE: In hypothesis testing, the test statistic is a value computed from sample data. The test statistic is used to assess the strength of evidence in support of a null hypothesis.
- Suppose the test statistic in a hypothesis test is equal to S. If the probability of observing a test statistic as extreme as S is less than the significance level, we reject the null hypothesis.
2016
- (Statistical Analysis Glossary, 2016) ⇒ http://www.quality-control-plan.com/StatGuide/sg_glos.htm#P_value Retrieved: 10-02-2016
- QUOTE: In a statistical hypothesis test, the P value is the probability of observing a test statistic at least as extreme as the value actually observed, assuming that the null hypothesis is true. This probability is then compared to the pre-selected significance level of the test. If the P value is smaller than the significance level, the null hypothesis is rejected, and the test result is termed significant. The P value depends on both the null hypothesis and the alternative hypothesis. In particular, a test with a one-sided alternative hypothesis will generally have a lower P value (and thus be more likely to be significant) than a test with a two-sided alternative hypothesis. However, one-sided tests require more stringent assumptions than two-sided tests. They should only be used when those assumptions apply.
2016
- (Vsevolozhskaya et al., 2016) ⇒ Olga A. Vsevolozhskaya, Chia-Ling Kuo, Gabriel Ruiz, Luda Diatchenko, and Dmitri V. Zaykin. (2016). “The More You Test, the More You Find: Smallest P-values Become Increasingly Enriched with Real Findings As More Tests Are Conducted." arXiv preprint arXiv:1609.01788
- QUOTE: We consider P-values derived from commonly used test statistics, such as chi-squared, F, normal z, and Student's t statistics. ...
1978
- (Rosenthal, 1978) ⇒ Robert Rosenthal. (1978). “Combining Results of Independent Studies." Psychological Bulletin, 85(1): 185.
- QUOTE: ... Not simply in connection with combining ps but at any time that test statistics such as t, F, or Z are reported, estimated effect sizes should routinely be reported. The particular effect size d seems to be the most useful one to employ when two groups are being compared. ...