F-Test
An F-Test is an Omnibus statistical test whose test statistic (an F-statistic) follows an F-distribution under the null hypothesis.
- Context:
- It can be an Omnibus Test.
- …
- Example(s):
- an ANOVA F-Test, which tests the significance of differences among all factor means and/or the equality of their variances in an Analysis of Variance procedure.
- an omnibus multivariate F Test in ANOVA with repeated measures.
- an F test for equality/inequality of the regression coefficients in Multiple Regression.
- an F-test of Equality of Variances (see the sketch after this list).
- …
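A minimal sketch of the F-test of Equality of Variances named above, assuming two independent samples drawn from normal populations (the sample data and names are illustrative, not from the source):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample_a = rng.normal(loc=0.0, scale=1.0, size=30)  # illustrative data
sample_b = rng.normal(loc=0.0, scale=1.5, size=25)

# F-statistic: ratio of the two sample variances (larger over smaller),
# compared against an F(dfn, dfd) distribution.
var_a = np.var(sample_a, ddof=1)
var_b = np.var(sample_b, ddof=1)
if var_a >= var_b:
    f_stat, dfn, dfd = var_a / var_b, len(sample_a) - 1, len(sample_b) - 1
else:
    f_stat, dfn, dfd = var_b / var_a, len(sample_b) - 1, len(sample_a) - 1

# Two-sided p-value (doubling the upper tail, capped at 1).
p_value = min(2 * stats.f.sf(f_stat, dfn, dfd), 1.0)
print(f"F = {f_stat:.3f}, df = ({dfn}, {dfd}), p = {p_value:.3f}")
```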
- Counter-Example(s):
- a Chi-Squared Test, with a Chi-Squared statistic.
- a t-Test, with a t-statistic.
- See: Bartlett's Test, Brown–Forsythe Test, Population (Statistics), Least Squares, One-Way ANOVA.
References
2016
- (Wikipedia, 2016) ⇒ http://en.wikipedia.org/wiki/F-test Retrieved 2016-08-07
- An F-test is any statistical test in which the test statistic has an F-distribution under the null hypothesis.
It is most often used when comparing statistical models that have been fitted to a data set, in order to identify the model that best fits the population from which the data were sampled. Exact "F-tests" mainly arise when the models have been fitted to the data using least squares. The name was coined by George W. Snedecor, in honour of Sir Ronald A. Fisher. Fisher initially developed the statistic as the variance ratio in the 1920s.
- Common examples of F-tests
- Common examples of the use of F-tests include the study of the following cases:
- The hypothesis that the means of a given set of normally distributed populations, all having the same standard deviation, are equal. This is perhaps the best-known F-test, and plays an important role in the analysis of variance (ANOVA).
- The hypothesis that a proposed regression model fits the data well. See Lack-of-fit sum of squares.
- The hypothesis that a data set in a regression analysis follows the simpler of two proposed linear models that are nested within each other.
In addition, some statistical procedures, such as Scheffé's method for multiple comparisons adjustment in linear models, also use F-tests.
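The third case above, comparing two nested linear models fitted by least squares, can be illustrated with a minimal sketch; the F-statistic is the scaled drop in residual sum of squares when moving from the restricted model to the full model (the variable names and simulated data are illustrative assumptions, not from the source):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 100
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 + 0.5 * x2 + rng.normal(size=n)  # illustrative data

def rss(X, y):
    """Residual sum of squares of an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

X_restricted = np.column_stack([np.ones(n), x1])   # simpler nested model
X_full = np.column_stack([np.ones(n), x1, x2])     # full model adds x2
p1, p2 = X_restricted.shape[1], X_full.shape[1]

rss1, rss2 = rss(X_restricted, y), rss(X_full, y)
f_stat = ((rss1 - rss2) / (p2 - p1)) / (rss2 / (n - p2))
p_value = stats.f.sf(f_stat, p2 - p1, n - p2)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

A small p-value here would favour the full model over the simpler nested one.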
- (Wikipedia, 2016) ⇒ http://en.wikipedia.org/wiki/Omnibus_test Retrieved 2016-08-28
- Omnibus tests are a kind of statistical test. They test whether the explained variance in a set of data is significantly greater than the unexplained variance, overall. One example is the F-test in the analysis of variance. There can be legitimate significant effects within a model even if the omnibus test is not significant. For instance, in a model with two independent variables, if only one variable exerts a significant effect on the dependent variable and the other does not, then the omnibus test may be non-significant. This fact does not affect the conclusions that may be drawn from the one significant variable. In order to test effects within an omnibus test, researchers often use contrasts.
- In addition, "omnibus test" is used as a general name for an overall or global test; other names include the F-test and the chi-squared test.
- As a statistical test, an omnibus test is applied to an overall hypothesis that seeks general significance across the variance of parameters, while examining parameters of the same type [...]
- Omnibus tests commonly refer to one of the following statistical tests:
- the ANOVA F-test for the significance of differences among all factor means and/or the equality of their variances in an Analysis of Variance procedure;
- the omnibus multivariate F-test in ANOVA with repeated measures;
- the F-test for equality/inequality of the regression coefficients in Multiple Regression;
- the chi-squared test for exploring significant differences between blocks of independent explanatory variables or their coefficients in a logistic regression.
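A minimal sketch of the first test in this list, the omnibus one-way ANOVA F-test, assuming three independent groups with a common standard deviation (the group data are illustrative, not from the source); scipy.stats.f_oneway returns the F-statistic and its p-value:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Three illustrative groups with a common standard deviation.
group_a = rng.normal(loc=5.0, scale=1.0, size=20)
group_b = rng.normal(loc=5.5, scale=1.0, size=20)
group_c = rng.normal(loc=6.0, scale=1.0, size=20)

# Omnibus null hypothesis: all group means are equal.
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

A significant result says only that at least one group mean differs; as noted above, contrasts or post-hoc comparisons are needed to locate the difference.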