One-sample t-Statistic
A One-sample t-Statistic is a t-statistic used in a one-sample t-test to measure the difference between the sample mean and a hypothesized population mean.
- AKA: Single-sample t-statistic.
- Context:
- It is defined as [math]\displaystyle{ t = \frac{\overline{x} - \mu_0}{SE} }[/math], where [math]\displaystyle{ \overline{x} }[/math] is the sample mean, [math]\displaystyle{ \mu_0 }[/math] is the hypothesized population mean, and [math]\displaystyle{ SE }[/math] is the standard error of the mean, computed from the sample standard deviation [math]\displaystyle{ s }[/math], the sample size [math]\displaystyle{ n }[/math], and the population size [math]\displaystyle{ N }[/math]. When the population is large relative to the sample, i.e. [math]\displaystyle{ \frac{N}{n} \gt 20 }[/math], the standard error is approximated by [math]\displaystyle{ SE = s/\sqrt{n} }[/math]; when the sample is a substantial fraction of the population, i.e. [math]\displaystyle{ \frac{N}{n} \leq 20 }[/math], the finite population correction gives [math]\displaystyle{ SE = s\sqrt{\frac{1}{n}\frac{N - n}{N - 1}} }[/math].
- Example(s):
- Consider a sample of size [math]\displaystyle{ n=50 }[/math] with mean [math]\displaystyle{ \overline{x}=395 }[/math] and standard deviation [math]\displaystyle{ s=30 }[/math]. In this case [math]\displaystyle{ SE=s/\sqrt{n}=30/\sqrt{50}=4.24 }[/math]. If the hypothesized population mean is [math]\displaystyle{ \mu_0=400 }[/math], then [math]\displaystyle{ t=\frac{395-400}{4.24}=-1.18 }[/math] (a computational sketch of this example follows the See list below).
- Counter-Example(s):
- See: Test Statistic, Comparison of Means Test, Parametric Statistical Test.
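The formula and the worked example above can be verified with a short computation. Below is a minimal Python sketch; the function name one_sample_t and its optional population-size argument N are illustrative assumptions, not part of any standard library.

```python
import math

def one_sample_t(x_bar, mu_0, s, n, N=None):
    """One-sample t-statistic from summary statistics (illustrative sketch).

    Uses SE = s / sqrt(n) when no population size is given or when N/n > 20;
    otherwise applies the finite population correction
    SE = s * sqrt((1/n) * (N - n) / (N - 1)).
    """
    if N is None or N / n > 20:
        se = s / math.sqrt(n)
    else:
        se = s * math.sqrt((1.0 / n) * (N - n) / (N - 1))
    return (x_bar - mu_0) / se

# Worked example from above: t = (395 - 400) / 4.24 ≈ -1.18
t = one_sample_t(x_bar=395, mu_0=400, s=30, n=50)
print(round(t, 2))  # prints -1.18
```

For raw data rather than summary statistics, routines such as scipy.stats.ttest_1samp compute the same statistic (and a p-value) directly.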
References
2016
- (Wikipedia, 2016) ⇒ https://en.wikipedia.org/wiki/t-statistic Retrieved:2016-7-12.
- QUOTE: In statistics, the t-statistic is a ratio of the departure of an estimated parameter from its notional value and its standard error. It is used in hypothesis testing, for example in the Student’s t-test, in the augmented Dickey–Fuller test, and in bootstrapping.
- (QCP, 2016) ⇒ https://www.quality-control-plan.com/StatGuide/ttest_one.htm Retrieved: 2016-10-16.
- QUOTE: The one-sample t test is used to test the null hypothesis that the mean of the population from which the data sample is drawn is equal to a hypothesized value.
- Assumptions:
- The sample values are independent.
- The sample values are all identically normally distributed (same mean and variance).
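The quoted test can be run directly on raw data. The sketch below is a minimal illustration, assuming NumPy and SciPy are available and using simulated stand-in data (not from the cited guide); it tests the null hypothesis that the population mean equals 400.

```python
import numpy as np
from scipy import stats

# Simulated stand-in for observed sample values (illustrative only).
rng = np.random.default_rng(42)
sample = rng.normal(loc=395, scale=30, size=50)

# Two-sided one-sample t-test of H0: population mean == 400.
result = stats.ttest_1samp(sample, popmean=400)
print(result.statistic, result.pvalue)
```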
2014
- (Stat Trek, 2014) ⇒ http://stattrek.com/statistics/dictionary.aspx?definition=t_statistic
- QUOTE: The t statistic is defined by: [math]\displaystyle{ t = \frac{x - \mu}{\frac{s}{\sqrt{n}}} }[/math]
- where x is the sample mean, μ is the population mean, s is the standard deviation of the sample, and n is the sample size.
2007
- (Goldsman, 2007) ⇒ David Goldsman (2007). Chapter 6 - Sampling Distributions, Course Notes: "ISyE 3770 - Probability and Statistics", PDF file
- QUOTE: Let [math]\displaystyle{ X_1, \ldots, X_n \sim N(\mu, \sigma^2) }[/math]. Then [math]\displaystyle{ \overline{X} \sim N(\mu, \sigma^2/n) }[/math] or, equivalently, [math]\displaystyle{ Z = (\overline{X} - \mu)/(\sigma/\sqrt{n}) \sim N(0, 1) }[/math]. In most cases, the value of [math]\displaystyle{ \sigma^2 }[/math] is not available. Thus, we will use [math]\displaystyle{ S^2 }[/math] to estimate [math]\displaystyle{ \sigma^2 }[/math]. The t-distribution deals with the distribution of the statistic T defined by
- [math]\displaystyle{ T =\frac{\overline{X}-\mu}{S/\sqrt{n}} }[/math]
- [...] Let [math]\displaystyle{ Z \sim N(0, 1) }[/math] and [math]\displaystyle{ W \sim \chi^2_\nu }[/math] be two independent random variables. The random variable [math]\displaystyle{ T = Z/\sqrt{W/\nu} }[/math] is said to possess a t-distribution with [math]\displaystyle{ \nu }[/math] degrees of freedom and is denoted by [math]\displaystyle{ T \sim t_\nu }[/math].
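As an illustration of the construction in the quote above, the following sketch (an added example assuming NumPy and SciPy, not part of the cited notes) simulates [math]\displaystyle{ T = Z/\sqrt{W/\nu} }[/math] from independent draws and compares the simulated values with SciPy's t distribution with [math]\displaystyle{ \nu }[/math] degrees of freedom.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
nu = 9            # degrees of freedom
size = 100_000

# Build T = Z / sqrt(W / nu) from independent Z ~ N(0, 1) and W ~ chi^2_nu.
z = rng.standard_normal(size)
w = rng.chisquare(df=nu, size=size)
t_samples = z / np.sqrt(w / nu)

# Kolmogorov-Smirnov comparison against the t distribution with nu degrees
# of freedom; a non-small p-value is consistent with T ~ t_nu.
ks = stats.kstest(t_samples, stats.t(df=nu).cdf)
print(ks.statistic, ks.pvalue)
```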