Out-of-Sample Evaluation Task
An Out-of-Sample Evaluation Task is a retrospective evaluation task that evaluates a learned model on out-of-sample data (data not used in learning the model).
- Example(s):
- Holdout Evaluation (illustrated, along with cross-validation, in the sketch below the See: list).
- Cross-Validation.
- Prospective Evaluation.
- an Out-of-Sample Forecast Evaluation Task, which uses data from an Out-of-Sample Period (Hansen & Timmermann, 2012).
- …
- Counter-Example(s):
- an In-Sample Evaluation Task, which evaluates the learned model on the same data used to fit it.
- See: Sampling Task, Out-of-group Bias, Learning Performance, Holdout Data, Machine Learning Algorithm, Bootstrap Sampling, Recursive Estimation.
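The following is a minimal sketch of the two retrospective approaches listed above, holdout evaluation and cross-validation, using scikit-learn. The synthetic dataset, logistic-regression model, split size, and fold count are illustrative assumptions, not part of the cited sources.

```python
# Sketch: holdout evaluation and cross-validation as two forms of
# out-of-sample evaluation (all modeling choices are illustrative).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=1000, random_state=0)

# Holdout evaluation: fit on one subset, score on records the model never saw.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Cross-validation: each fold is scored on data held out from its own fit.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print("5-fold CV accuracy:", scores.mean())
```

In both cases the reported score comes only from records withheld from model fitting, which is what makes the estimate out-of-sample.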
References
2017
- (Sammut & Webb, 2017) ⇒ (2017). “Out-of-Sample Evaluation.” In: Claude Sammut, and Geoffrey I. Webb (eds.), Encyclopedia of Machine Learning and Data Mining. Springer, Boston, MA.
- QUOTE: Out-of-sample evaluation refers to algorithm evaluation whereby the learned model is evaluated on out-of-sample data, which are data that were not used in the process of learning the model. Out-of-sample evaluation provides a less biased estimate of learning performance than in-sample evaluation. Cross validation, holdout evaluation and prospective evaluation are three main approaches to out-of-sample evaluation. Cross validation and holdout evaluation run risks of overestimating performance relative to what should be expected on future data, especially if the data set used is not a true random sample of the distribution on which the learned models are to be applied in the future.
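The bias the quote describes can be made concrete with a short sketch: an unpruned decision tree on noisy synthetic data scores near-perfectly in-sample but noticeably worse out-of-sample. The model, dataset, and noise level below are illustrative assumptions.

```python
# Sketch: in-sample evaluation overstates learning performance relative to
# out-of-sample evaluation (illustrative model and data).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, flip_y=0.2, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

tree = DecisionTreeClassifier(random_state=1).fit(X_train, y_train)
print("in-sample accuracy:    ", tree.score(X_train, y_train))  # near 1.0
print("out-of-sample accuracy:", tree.score(X_test, y_test))    # noticeably lower
```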
2012a
- (Hansen & Timmermann, 2012) ⇒ Peter R. Hansen, and Allan Timmermann. (2012). “Choice of Sample Split in Out-of-sample Forecast Evaluation.”
- QUOTE: … Statistical tests of a model's forecast performance are commonly conducted by splitting a given data set into an in-sample period, used for initial parameter estimation and model selection, and an out-of-sample period, used to evaluate forecast performance. Empirical …
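A minimal sketch of the sample split the quote describes, assuming a simulated AR(1) series, a least-squares slope estimate, and a 70/30 split point (all illustrative choices): the in-sample period supplies parameter estimates, and one-step-ahead forecast errors are accumulated over the out-of-sample period.

```python
# Sketch: split a series into an in-sample period (estimation) and an
# out-of-sample period (forecast evaluation), re-estimating on an
# expanding window (illustrative data and split).
import numpy as np

rng = np.random.default_rng(0)
T = 400
y = np.zeros(T)
for t in range(1, T):                      # simulate an AR(1) series
    y[t] = 0.8 * y[t - 1] + rng.normal()

split = int(0.7 * T)                       # end of the in-sample period
errors = []
for t in range(split, T - 1):
    # Least-squares AR(1) slope from all observations up to time t.
    phi = np.dot(y[1:t], y[:t - 1]) / np.dot(y[:t - 1], y[:t - 1])
    forecast = phi * y[t]                  # one-step-ahead forecast
    errors.append(y[t + 1] - forecast)

print("out-of-sample MSE:", np.mean(np.square(errors)))
```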
2012b
- (Rossi & Inoue, 2012) ⇒ Barbara Rossi, and Atsushi Inoue. (2012). “Out-of-sample Forecast Tests Robust to the Choice of Window Size.” Journal of Business & Economic Statistics 30, no. 3.