t-Statistic
A t-Statistic is a test statistic used in a Student's t-Test.
- AKA: t-Score, t-test statistic.
- Context:
- It can be defined by [math]\displaystyle{ t= \frac{X_s - m}{S} }[/math], where [math]\displaystyle{ X_s }[/math] is a point estimate of the population parameter [math]\displaystyle{ m }[/math], and [math]\displaystyle{ S }[/math] is a scaling parameter. Note that [math]\displaystyle{ X_s }[/math] depends only on the sample data, [math]\displaystyle{ m }[/math] is the value specified by the null hypothesis, and [math]\displaystyle{ S }[/math] is a value that characterizes the sampling distribution, typically an estimated standard error that depends on the sample's variability and the sample size.
- It ranges from being a One-Sample t-Statistic to a Two-Sample t-Statistic.
- It ranges from being an Independent-Measures t-Statistic to a Matched-Pair t-Statistic.
- It can be used in Comparison of Means Tests, Correlational Hypothesis Tests, Group Differences Hypothesis Tests, and Regression Tests.
- It assumes that the sampled values are normally distributed and, when two samples are compared, that the two populations have equal variances.
- Example(s):
- One-Sample t-Statistic, with [math]\displaystyle{ X_s=\overline{x} }[/math] (sample mean), [math]\displaystyle{ m=\mu }[/math] (population mean), and [math]\displaystyle{ S = s/\sqrt{n} }[/math] (where [math]\displaystyle{ s }[/math] is the sample standard deviation and [math]\displaystyle{ n }[/math] the sample size).
- Two-Sample t-Statistic.
- Matched-Pair t-Statistic.
- Student's t-Test for Correlation.
- Regression t-Test.
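The one-sample and matched-pair cases above can be sketched directly from the definitions, using only the Python standard library (a minimal illustration; the data values are made up for demonstration):

```python
# Sketch of a one-sample and a matched-pair t-statistic, computed from the
# definitions t = (x_bar - mu) / (s / sqrt(n)). Illustrative data only.
from statistics import mean, stdev
from math import sqrt

def one_sample_t(sample, mu):
    """One-sample t-statistic against the hypothesized mean mu."""
    n = len(sample)
    return (mean(sample) - mu) / (stdev(sample) / sqrt(n))

def matched_pair_t(before, after):
    """Matched-pair t-statistic: a one-sample t on the pairwise
    differences, against a hypothesized mean difference of zero."""
    diffs = [b - a for b, a in zip(before, after)]
    return one_sample_t(diffs, 0.0)

sample = [5.1, 4.9, 5.3, 5.2, 4.8, 5.0]
print(round(one_sample_t(sample, 5.0), 4))   # → 0.6547

before = [12.0, 11.5, 13.2, 12.8]
after = [11.1, 11.0, 12.5, 12.0]
print(round(matched_pair_t(before, after), 4))  # → 8.4903
```

The matched-pair statistic is just the one-sample statistic applied to the per-pair differences, which is why the two cases share one formula.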
- Counter-Example(s):
- a Welch's t-Test, which is used when the sample variances are not assumed equal.
- a z-Statistic, which produces a z-score and is used when the population standard deviation is known.
- an F-Statistic, which produces an F-score.
- a Chi-Squared Statistic.
- See: t-Test, Bootstrapping (Statistics), Standard Error (Statistics), Statistical Hypothesis Testing, Augmented Dickey–Fuller Test, Sampling Distribution, t-Distribution.
References
2016
- (Wikipedia, 2016) ⇒ https://en.wikipedia.org/wiki/t-statistic Retrieved:2016-7-12.
- QUOTE: In statistics, the t-statistic is a ratio of the departure of an estimated parameter from its notional value and its standard error. It is used in hypothesis testing, for example in the Student’s t-test, in the augmented Dickey–Fuller test, and in bootstrapping.
- (Changing Minds, 2016) ⇒ http://changingminds.org/explanations/research/analysis/t-test.htm Retrieved 2016-10-16
- QUOTE: The t-test (or student's t-test) gives an indication of the separateness of two sets of measurements, and is thus used to check whether two sets of measures are essentially different (and usually that an experimental effect has been demonstrated). The typical way of doing this is with the null hypothesis that means of the two sets of measures are equal.
- The t-test assumes:
- A normal distribution (parametric data)
- Underlying variances are equal (if not, use Welch's test)
- It is used when there is random assignment and only two sets of measurement to compare.
- There are two main types of t-test:
- Independent-measures t-test: when samples are not matched.
- Matched-pair t-test: when samples appear in pairs (e.g. before-and-after).
- A single-sample t-test compares a sample against a known figure, for example where measures of a manufactured item are compared against the required standard.
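The independent-measures case described above can be sketched as the pooled-variance two-sample t-statistic (a minimal illustration, not from the quoted source; it assumes equal underlying variances, otherwise Welch's test applies):

```python
# Sketch of the independent-measures (pooled-variance) two-sample
# t-statistic. Illustrative data only.
from statistics import mean, variance
from math import sqrt

def independent_t(x, y):
    """Pooled two-sample t: (x_bar - y_bar) / sqrt(sp2 * (1/n + 1/m))."""
    n, m = len(x), len(y)
    # Pooled variance: a weighted average of the two sample variances.
    sp2 = ((n - 1) * variance(x) + (m - 1) * variance(y)) / (n + m - 2)
    return (mean(x) - mean(y)) / sqrt(sp2 * (1 / n + 1 / m))

group_a = [2.1, 2.5, 2.3, 2.7]
group_b = [1.9, 2.0, 1.8, 2.2]
print(round(independent_t(group_a, group_b), 4))  # → 2.7457
```

Pooling the variances is what encodes the equal-variances assumption; Welch's version instead keeps the two variances separate in the denominator.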
2014
- http://stattrek.com/statistics/dictionary.aspx?definition=t_statistic
- QUOTE: The t statistic is defined by: [math]\displaystyle{ t = \frac{x - \mu}{s/\sqrt{n}} }[/math]
- where x is the sample mean, μ is the population mean, s is the standard deviation of the sample, and n is the sample size.
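- As a worked instance of this formula, with illustrative values (not from the quoted source) [math]\displaystyle{ x = 105 }[/math], [math]\displaystyle{ \mu = 100 }[/math], [math]\displaystyle{ s = 10 }[/math], and [math]\displaystyle{ n = 25 }[/math]: [math]\displaystyle{ t = \frac{105 - 100}{10/\sqrt{25}} = \frac{5}{2} = 2.5 }[/math].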
2007
- (Goldsman, 2007) ⇒ David Goldsman (2007). Chapter 6 - Sampling Distributions, Course Notes: "ISyE 3770 - Probability and Statistics" [1], PDF file
- QUOTE: Let [math]\displaystyle{ X \sim N(\mu, \sigma^2) }[/math]. Then [math]\displaystyle{ \bar{X} \sim N(\mu, \sigma^2/n) }[/math] or, equivalently, [math]\displaystyle{ Z =(\bar{X} - \mu)/(\sigma/\sqrt{n}) \sim N(0, 1) }[/math]. In most cases, the value of [math]\displaystyle{ \sigma^2 }[/math] is not available. Thus, we will use [math]\displaystyle{ S^2 }[/math] to estimate [math]\displaystyle{ \sigma^2 }[/math]. The t-distribution deals with the distribution about the statistic T defined by
- [math]\displaystyle{ T =\frac{\bar{X}-\mu}{S/\sqrt{n}} }[/math]
- [...] Let [math]\displaystyle{ Z \sim N(0, 1) }[/math] and [math]\displaystyle{ W \sim \chi^2_\nu }[/math] be two independent random variables. The random variable [math]\displaystyle{ T =Z/\sqrt{W/\nu} }[/math] is said to possess a t-distribution with [math]\displaystyle{ \nu }[/math] degrees of freedom and is denoted by [math]\displaystyle{ T \sim t_\nu }[/math]
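- The two definitions connect via the standard construction (a supplementary derivation, not part of the quoted notes): for a normal sample, [math]\displaystyle{ Z = (\bar{X}-\mu)/(\sigma/\sqrt{n}) \sim N(0,1) }[/math] and [math]\displaystyle{ W = (n-1)S^2/\sigma^2 \sim \chi^2_{n-1} }[/math] are independent, so [math]\displaystyle{ \frac{Z}{\sqrt{W/(n-1)}} = \frac{(\bar{X}-\mu)/(\sigma/\sqrt{n})}{\sqrt{S^2/\sigma^2}} = \frac{\bar{X}-\mu}{S/\sqrt{n}} = T \sim t_{n-1} }[/math], which is why the one-sample t-statistic has [math]\displaystyle{ n-1 }[/math] degrees of freedom.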