Sargan Test
A Sargan Test is a statistical hypothesis test for assessing the validity of over-identifying restrictions in a statistical model, typically one estimated with instrumental variables from time series or cross-sectional data.
- AKA: Sargan–Hansen Test, Sargan's J Test.
- See: Panel Data, Statistical Test, Over-Identifying Restriction, Statistical Model, John Denis Sargan, Econometrica, Lars Peter Hansen, Generalized Method of Moments, Time Series, Errors And Residuals, Instrumental Variable, Chi-Squared Distribution, Estimator, Orthogonality, Economic Model, Covariance.
References
2016
- (Wikipedia, 2016) ⇒ https://en.wikipedia.org/wiki/Sargan–Hansen_test Retrieved:2016-12-17.
- The Sargan–Hansen test or Sargan's [math]\displaystyle{ J }[/math] test is a statistical test used for testing over-identifying restrictions in a statistical model. It was proposed by John Denis Sargan in 1958, and several variants were derived by him in 1975. Lars Peter Hansen re-worked through the derivations and showed that it can be extended to general non-linear GMM in a time series context.

 The Sargan test is based on the assumption that model parameters are identified via a priori restrictions on the coefficients, and tests the validity of over-identifying restrictions. The test statistic can be computed from residuals from instrumental variables regression by constructing a quadratic form based on the cross-product of the residuals and exogenous variables. Under the null hypothesis that the over-identifying restrictions are valid, the statistic is asymptotically distributed as a chi-square variable with [math]\displaystyle{ (m - k) }[/math] degrees of freedom (where [math]\displaystyle{ m }[/math] is the number of instruments and [math]\displaystyle{ k }[/math] is the number of endogenous variables).

 This version of the Sargan statistic was developed for models estimated using instrumental variables from ordinary time series or cross-sectional data. When longitudinal ("panel data") data are available, it is possible to extend such statistics for testing exogeneity hypotheses for subsets of explanatory variables. Testing of over-identifying assumptions is less important in longitudinal applications because realizations of time-varying explanatory variables in different time periods are potential instruments, i.e., over-identifying restrictions are automatically built into models estimated using longitudinal data.
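The statistic described above can be illustrated concretely. The following is a minimal sketch, assuming simulated data with one endogenous regressor and two excluded instruments (so the model is over-identified by one restriction); variable names and the simulated data-generating process are hypothetical choices for illustration, not part of the original sources:

```python
import numpy as np
from scipy.stats import chi2

# Illustrative sketch of the Sargan test: simulate an over-identified
# IV model, fit it by two-stage least squares (2SLS), and form the
# J statistic from the residuals.
rng = np.random.default_rng(0)
n = 500

z = rng.normal(size=(n, 2))                  # two excluded instruments
e = rng.normal(size=n)                       # structural error
x = z @ np.array([1.0, 0.5]) + 0.5 * e + rng.normal(size=n)  # endogenous regressor
y = 2.0 * x + e                              # outcome; x is correlated with e

Z = np.column_stack([np.ones(n), z])         # instrument matrix (m = 3 columns)
X = np.column_stack([np.ones(n), x])         # regressor matrix (k = 2 columns)

# 2SLS via the projection onto the space spanned by the instruments
Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)
beta = np.linalg.solve(X.T @ Pz @ X, X.T @ Pz @ y)
u = y - X @ beta                             # 2SLS residuals

# Sargan statistic: a quadratic form in the cross-product of residuals
# and instruments, equal to n times the uncentered R^2 from regressing
# the residuals on all instruments.
J = n * (u @ Pz @ u) / (u @ u)
df = Z.shape[1] - X.shape[1]                 # m - k over-identifying restrictions
p_value = chi2.sf(J, df)                     # under H0, J ~ chi-square(df)
print(J, df, p_value)
```

Since the simulated instruments here are valid by construction, the test should fail to reject the null in most draws; a small p-value would instead suggest that at least one instrument is correlated with the structural error.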
1982
- (Hansen, 1982) ⇒ Hansen, L. P. (1982). Large sample properties of generalized method of moments estimators. Econometrica: Journal of the Econometric Society, 1029-1054. doi:10.2307/1912775
- Abstract: This paper studies estimators that make sample analogues of population orthogonality conditions close to zero. Strong consistency and asymptotic normality of such estimators is established under the assumption that the observable variables are stationary and ergodic. Since many linear and nonlinear econometric estimators reside within the class of estimators studied in this paper, a convenient summary of the large sample properties of these estimators, including some whose large sample properties have not heretofore been discussed, is provided.
1958
- (Sargan, 1958) ⇒ Sargan, J. D. (1958). The estimation of economic relationships using instrumental variables. Econometrica: Journal of the Econometric Society, 393-415. doi:10.2307/1907619
- INTRODUCTION: The use of instrumental variables was first suggested by Reiersol [13, 14] for the case in which economic variables subject to exact relationships are affected by random disturbances or measurement errors. It has since been discussed for the same purpose by several authors, notably by Geary [9] and Durbin [7]. In this article the method is applied to a more general case in which the relationships are not exact, so that a set of ideal economic variables is assumed to be generated by a set of dynamic stochastic relationships, as in Koopmans [12], and the actual economic time series are assumed to differ from the ideal economic variables because of random disturbances or measurement errors. The asymptotic error variance matrix for the coefficients of one of the relationships is obtained in the case in which these relationships are estimated using instrumental variables. With this variance matrix we are able to discuss the problem of choice that arises when there are more instrumental variables available than the minimum number required to enable the method to be used. A method of estimation is derived which involves a characteristic equation already considered by Hotelling in defining the canonical correlation [10]. This method was previously suggested by Durbin [7]. The same estimates would be obtained by the maximum-likelihood limited-information method if all the predetermined variables which are assumed subject to disturbances or errors were treated as if they were jointly determined, and the instrumental variables treated as if they were predetermined variables. Such a procedure was suggested by Chernoff and Rubin [5]. It is possible to use the smallest roots of the characteristic equation for significance tests in exactly the same way as when using the maximum-likelihood method, and similar confidence regions can be defined. All the results listed so far depend on the use of asymptotic approximations.
A few calculations were made by the author on the order of magnitude of the errors involved in this approximation. They were found to be proportional to the number of instrumental variables, so that, if the asymptotic approximations are to be used, this number must be small.