Unstable Learner
An Unstable Learner is a Machine Learning System that produces large differences in generalization patterns when small changes are made to its initial conditions.
- Example(s):
- Neural Networks (assuming gradient descent learning),
- Decision Trees,
- …
- Counter-Example(s):
- a Stable Learner, such as a k-Nearest Neighbor Classifier.
- See: Artificial Neural Network, Ensemble Learning Task.
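A minimal sketch of this instability, using scikit-learn (an assumed dependency; the dataset is synthetic and chosen only for illustration): two decision trees are fit on slightly different bootstrap resamples of the same training set, and their test-set predictions are compared.

```python
# Sketch (assumes scikit-learn): decision trees are unstable learners, so
# trees fit on slightly different resamples of the training data can
# disagree noticeably on held-out points.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, y_train = X[:400], y[:400]
X_test = X[400:]

def fit_on_resample(seed):
    """Fit a tree on a bootstrap resample of the training set."""
    rng = np.random.RandomState(seed)
    idx = rng.choice(len(X_train), size=len(X_train), replace=True)
    return DecisionTreeClassifier(random_state=0).fit(X_train[idx], y_train[idx])

preds_a = fit_on_resample(1).predict(X_test)
preds_b = fit_on_resample(2).predict(X_test)

# Fraction of test points on which the two trees disagree.
disagreement = np.mean(preds_a != preds_b)
print(f"test-set disagreement between the two trees: {disagreement:.2%}")
```

A stable learner would yield near-zero disagreement under the same perturbation; an unstable one typically does not.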
References
2017
- (Sammut & Webb, 2017) ⇒ Claude Sammut, and Geoffrey I. Webb. (2017). "Unstable Learner". In: Encyclopedia of Machine Learning and Data Mining.
- QUOTE: An unstable learner produces large differences in generalization patterns when small changes are made to its initial conditions. The obvious initial condition is the set of training data used – for an unstable learner, sampling a slightly different training set produces a large difference in testing behavior. Some models can be unstable in additional ways, for example neural networks are unstable with respect to the initial weights. In general this is an undesirable property – high sensitivity to training conditions is also known as high variance, which results in higher overall mean squared error. The flexibility enabled by being sensitive to data can thus be a blessing or a curse. Unstable learners can however be used to an advantage in ensemble learning methods, where large variance is “averaged out” across multiple learners.
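The quote's closing point, that large variance can be "averaged out" across multiple learners, can be sketched with bagging (a hedged illustration using scikit-learn; the dataset is synthetic and the scores are only indicative):

```python
# Sketch (assumes scikit-learn): bagging averages an unstable learner
# (a deep decision tree) over many bootstrap resamples, reducing the
# variance component of its error.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Cross-validated accuracy of a single unstable tree ...
tree_scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5)

# ... versus an ensemble of 50 trees, each fit on a bootstrap resample.
bagged = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0)
bag_scores = cross_val_score(bagged, X, y, cv=5)

print(f"single tree: {tree_scores.mean():.3f}")
print(f"bagged trees: {bag_scores.mean():.3f}")
```

Because the base learner is unstable, the resampled trees differ enough for their errors to partially cancel when aggregated; bagging a stable learner would yield little benefit.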