Correlational Hypothesis Testing Task
A Correlational Hypothesis Testing Task is a statistical hypothesis testing task for examining association or dependence between two random variables based on a calculated correlation coefficient.
- Context:
- It can range from being a Parametric Correlational Test to being a Non-Parametric Correlational Test.
- It can be solved by a Correlational Hypothesis Testing System (that implements a correlational hypothesis testing algorithm).
- It can be described and solved by the following procedure:
- Test Requirements:
- Task Input: a random sample of paired observations [math]\displaystyle{ (X_i, Y_i) }[/math], [math]\displaystyle{ i=1,\ldots,n }[/math], drawn from a bivariate or multivariate probability distribution, where [math]\displaystyle{ n }[/math] is the number of observations or measurements.
- Significance Level ([math]\displaystyle{ \alpha_0 }[/math]) is required for the decision-rule approach.
- Other requirements are defined by the specific correlational hypothesis test.
- Hypotheses to be tested:
- Null Hypothesis - There is no correlation present in the population, or simply, [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] are not correlated.
- Alternative Hypothesis - There is correlation present in the population, or simply, [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] are correlated.
- Test Method and Sample Data Analysis:
- Test Statistic: It is based on the calculation of the correlation coefficient.
- Decision Rule: It is based on the calculated correlation coefficient. P-value and region-of-acceptance approaches are used in certain correlational hypothesis tests (see the sketch after this procedure).
- Results and Interpretation:
- Task Output: Sample Correlation Coefficient; optional outputs: P-value, Region of Acceptance, Region of Rejection, Decision Errors (Type I or Type II errors) and their respective probabilities ([math]\displaystyle{ \alpha,\; \beta }[/math]).
- When the calculated correlation coefficient is 0, [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] are uncorrelated; this implies that they are independent random variables only under additional assumptions, such as bivariate normality, since uncorrelated variables need not be independent.
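The following is a minimal sketch of the procedure above for the Pearson case, assuming bivariate normal data; the function name correlational_test and the toy data are illustrative, not prescribed by the task definition.
<syntaxhighlight lang="python">
# Sketch of the generic task with Pearson's r as the test statistic.
# Under H0: rho = 0 (and bivariate normality), t = r*sqrt(n-2)/sqrt(1-r^2)
# follows a t-distribution with n-2 degrees of freedom.
import math
from scipy import stats

def correlational_test(x, y, alpha_0=0.05):
    """Return (r, p_value, reject_h0) for H0: rho = 0 vs HA: rho != 0."""
    n = len(x)
    r, _ = stats.pearsonr(x, y)                        # sample correlation
    t = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)   # test statistic
    p_value = 2 * stats.t.sf(abs(t), df=n - 2)         # two-sided P-value
    return r, p_value, p_value < alpha_0               # decision rule

x = [1.0, 2.1, 2.9, 4.2, 5.1, 6.0]
y = [1.2, 1.9, 3.2, 3.9, 5.3, 5.8]
r, p, reject = correlational_test(x, y)
print(f"r = {r:.3f}, P-value = {p:.4f}, reject H0: {reject}")
</syntaxhighlight>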
- Example(s):
- Parametric Correlational Tests, such as: a Pearson Correlation Test, a T-test for Correlation, a Z-test for Correlation, and an R-test for Correlation.
- Non-Parametric Correlational Tests, such as: a Spearman Correlation Test, Mantel Correlation Test, Kendall Tau Correlation Test, and a Gamma Correlation Test.
- Counter-Example(s):
- See: Correlation, Autocorrelation, Cointegration, Correlation Coefficient.
References
2017a
- (Wikipedia, 2016) ⇒ http://en.wikipedia.org/wiki/Correlation_and_dependence
- In statistics, dependence or association is any statistical relationship, whether causal or not, between two random variables or two sets of data. Correlation is any of a broad class of statistical relationships involving dependence, though in common usage it most often refers to the extent to which two variables have a linear relationship with each other.
- Familiar examples of dependent phenomena include the correlation between the physical statures of parents and their offspring, and the correlation between the demand for a product and its price.
- Correlations are useful because they can indicate a predictive relationship that can be exploited in practice. For example, an electrical utility may produce less power on a mild day based on the correlation between electricity demand and weather. In this example there is a causal relationship, because extreme weather causes people to use more electricity for heating or cooling; however, correlation is not sufficient to demonstrate the presence of such a causal relationship (i.e., correlation does not imply causation).
- Formally, dependence refers to any situation in which random variables do not satisfy a mathematical condition of probabilistic independence. In loose usage, correlation can refer to any departure of two or more random variables from independence, but technically it refers to any of several more specialized types of relationship between mean values. There are several correlation coefficients, often denoted ρ or r, measuring the degree of correlation. The most common of these is the Pearson correlation coefficient, which is sensitive only to a linear relationship between two variables (which may exist even if one is a nonlinear function of the other). Other correlation coefficients have been developed to be more robust than the Pearson correlation – that is, more sensitive to nonlinear relationships.[1][2][3] Mutual information can also be applied to measure dependence between two variables.
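As an illustration of the last point (a sketch with made-up data, not part of the quoted passage): a rank-based coefficient such as Spearman's detects a monotone nonlinear relationship that Pearson's r understates.
<syntaxhighlight lang="python">
# Pearson's r captures only linear association; Spearman's rho, being
# rank-based, equals 1 for any strictly monotone relationship.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.uniform(0, 4, size=200)
y = np.exp(x)                              # monotone but strongly nonlinear

r_pearson, _ = stats.pearsonr(x, y)
r_spearman, _ = stats.spearmanr(x, y)
print(f"Pearson r    = {r_pearson:.3f}")   # noticeably below 1
print(f"Spearman rho = {r_spearman:.3f}")  # exactly 1 for a monotone map
</syntaxhighlight>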
2017b
- (STAT 414/415, 2017) ⇒ Retrieved from [https://onlinecourses.science.psu.edu/stat414/node/226 Lesson 43: Tests Concerning Regression and Correlation] on 2017-01-28
- (...) Suppose instead that we had a situation in which we thought of the pair [math]\displaystyle{ (X_i, Y_i) }[/math] as being a random sample, [math]\displaystyle{ i = 1, 2, ..., n }[/math], from a bivariate normal distribution with parameters [math]\displaystyle{ \mu_X,\;\mu_Y,\;\sigma^2_X,\;\sigma_Y^2 }[/math] and [math]\displaystyle{ \rho }[/math]. Then, we might be interested in testing the null hypothesis:
- [math]\displaystyle{ H_0:\;\rho=0 }[/math]
- because we know that if the correlation coefficient is 0, then [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] are independent random variables.
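- The test statistic conventionally used for this test (a standard result, stated here for completeness rather than quoted from the lesson) is [math]\displaystyle{ T=\frac{R\sqrt{n-2}}{\sqrt{1-R^2}} }[/math], where [math]\displaystyle{ R }[/math] is the sample correlation coefficient; [math]\displaystyle{ T }[/math] follows a [math]\displaystyle{ t }[/math]-distribution with [math]\displaystyle{ n-2 }[/math] degrees of freedom when [math]\displaystyle{ H_0:\;\rho=0 }[/math] is true.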
2017c
- (Stat Guide, 2017) ⇒ https://www.quality-control-plan.com/StatGuide/sg_glos.htm#transformation
- Correlation is the linear association between two random variables X and Y. It is usually measured by a correlation coefficient, such as Pearson's r, such that the value of the coefficient ranges from -1 to 1. A positive value of r means that the association is positive; i.e., that if X increases, the value of Y tends to increase linearly, and if X decreases, the value of Y tends to decrease linearly. A negative value of r means that the association is negative; i.e., that if X increases, the value of Y tends to decrease linearly, and if X decreases, the value of Y tends to increase linearly. The larger r is in absolute value, the stronger the linear association between X and Y. If r is 0, X and Y are said to be uncorrelated, with no linear association between X and Y. Independent variables are always uncorrelated, but uncorrelated variables need not be independent.
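As a small sketch of the last sentence above (uncorrelated variables need not be independent), using made-up data: with [math]\displaystyle{ X }[/math] symmetric about zero and [math]\displaystyle{ Y = X^2 }[/math], the sample correlation is near 0 even though [math]\displaystyle{ Y }[/math] is a deterministic function of [math]\displaystyle{ X }[/math].
<syntaxhighlight lang="python">
# Uncorrelated but fully dependent: for X symmetric about 0 and Y = X^2,
# Cov(X, Y) = E[X^3] = 0, so the sample r is near 0 in a large sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=100_000)
y = x ** 2                     # deterministic function of x

r, _ = stats.pearsonr(x, y)
print(f"r = {r:.4f}")          # close to 0 despite total dependence
</syntaxhighlight>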
- ↑ Croxton, Frederick Emory; Cowden, Dudley Johnstone; Klein, Sidney (1968). Applied General Statistics. Pitman. ISBN 9780273403159 (p. 625).
- ↑ Dietrich, Cornelius Frank (1991). Uncertainty, Calibration and Probability: The Statistics of Scientific and Industrial Measurement, 2nd Edition. A. Hilger. ISBN 9780750300605 (p. 331).
- ↑ Aitken, Alexander Craig (1957). Statistical Mathematics, 8th Edition. Oliver & Boyd. ISBN 9780050013007 (p. 95).