Cross-Validation System
A Cross-Validation System is a model validation system that can solve a Cross-Validation Task by implementing a cross-validation algorithm.
- Context:
- It can produce a Stable Estimate of a Predictive Model's Model Performance (as illustrated in the sketch after this list).
- …
- Example(s):
- a system based on scikit-learn's cross-validation module (see the 2014 reference below).
- Counter-Example(s):
- See: 10-Fold Cross-Validation.
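The following is a minimal sketch (in Python, assuming the scikit-learn library) of how a cross-validation system can produce such a stable estimate: instead of scoring a model on a single held-out split, it averages the score over k folds (here 10-fold cross-validation, matching the See link above). The dataset, estimator, and fold count are illustrative choices, not part of the definition.

from sklearn import datasets, svm
from sklearn.model_selection import cross_val_score

iris = datasets.load_iris()
clf = svm.SVC(kernel='linear', C=1)

# 10-fold cross-validation: fit and score the model 10 times,
# each time holding out a different tenth of the data.
scores = cross_val_score(clf, iris.data, iris.target, cv=10)

# Report the mean score together with its spread across folds;
# averaging over folds is what makes the estimate more stable
# than a single train/test split.
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))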
References
2014
- http://scikit-learn.org/stable/modules/cross_validation.html
- QUOTE: Learning the parameters of a prediction function and testing it on the same data is a methodological mistake: a model that would just repeat the labels of the samples that it has just seen would have a perfect score but would fail to predict anything useful on yet-unseen data. This situation is called overfitting. To avoid it, it is common practice when performing a (supervised) machine learning experiment to hold out part of the available data as a test set X_test, y_test. Note that the word “experiment” is not intended to denote academic use only, because even in commercial settings machine learning usually starts out experimentally. In scikit-learn a random split into training and test sets can be quickly computed with the train_test_split helper function. Let’s load the iris data set to fit a linear support vector machine on it: …
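The listing elided at the end of the quote is not reproduced here; the following is only a rough sketch of what it describes, using the current sklearn.model_selection module (the 2014 documentation placed train_test_split in the older sklearn.cross_validation module): hold out part of the iris data as a test set, fit a linear SVM on the rest, and score it on the held-out portion.

from sklearn import datasets, svm
from sklearn.model_selection import train_test_split

# Load the iris data set (150 samples, 4 features, 3 classes).
iris = datasets.load_iris()

# Randomly hold out 40% of the samples as the test set (X_test, y_test);
# the 40% split and the fixed random_state are illustrative choices.
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.4, random_state=0)

# Fit a linear support vector machine on the training portion only.
clf = svm.SVC(kernel='linear', C=1).fit(X_train, y_train)

# Evaluate on data the model has never seen.
print(clf.score(X_test, y_test))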