Holdout Evaluation


A Holdout Evaluation is a simple predictive model evaluation task that uses a (random) holdout dataset.
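A minimal sketch of such a task, assuming scikit-learn; the synthetic dataset, the logistic-regression model, and the 80/20 split ratio are illustrative choices, not part of the definition:

```python
# Holdout evaluation sketch: randomly split the data once, train on the
# training set, and estimate predictive performance on the held-out test set.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for any labeled dataset (an assumption).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Randomly assign points to the training set d0 and the test set d1;
# the test set is typically the smaller of the two.
X_d0, X_d1, y_d0, y_d1 = train_test_split(X, y, test_size=0.2, random_state=0)

# Train on d0, then test on d1: a single run.
model = LogisticRegression(max_iter=1000).fit(X_d0, y_d0)
print("holdout accuracy:", accuracy_score(y_d1, model.predict(X_d1)))
```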



References

2018

  • (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/Cross-validation_(statistics)#Holdout_method Retrieved:2018-5-15.
    • In the holdout method, we randomly assign data points to two sets d0 and d1, usually called the training set and the test set, respectively. The size of each set is arbitrary, although typically the test set is smaller than the training set. We then train on d0 and test on d1.

      In typical cross-validation, multiple runs are aggregated; in contrast, the holdout method, in isolation, involves a single run. While the holdout method can be framed as "the simplest kind of cross-validation", many sources instead classify it as a form of simple validation rather than as a simple or degenerate form of cross-validation.[1][2]

  1. Kohavi, Ron. "A study of cross-validation and bootstrap for accuracy estimation and model selection." IJCAI, Vol. 14, No. 2, 1995.
  2. Arlot, Sylvain, and Alain Celisse. "A survey of cross-validation procedures for model selection." Statistics Surveys 4 (2010): 40-79. "In brief, CV consists in averaging several hold-out estimators of the risk corresponding to different data splits."
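To make the footnoted distinction concrete (cross-validation averages several holdout estimators, whereas the holdout method in isolation is a single run), the sketch below repeats the split from the example above, under the same illustrative assumptions, and averages the resulting estimates:

```python
# Averaging several independent holdout estimates; k-fold cross-validation
# performs an analogous aggregation over systematic (non-overlapping) splits.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

scores = []
for seed in range(5):  # five random splits; the count is arbitrary
    X_d0, X_d1, y_d0, y_d1 = train_test_split(X, y, test_size=0.2, random_state=seed)
    model = LogisticRegression(max_iter=1000).fit(X_d0, y_d0)
    scores.append(accuracy_score(y_d1, model.predict(X_d1)))

print("mean accuracy over 5 holdout runs:", np.mean(scores))
```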