Bias-Variance Tradeoff
A Bias-Variance Tradeoff is an Error Minimization Problem that consists of simultaneously minimizing an estimator's bias and its variance, so as to avoid both underfitting (high bias) and overfitting (high variance); a small simulation illustrating this tension is sketched after the list below.
- AKA: Bias-Variance Dilemma, Bias-Variance Problem.
- Example(s):
- Counter-Example(s):
- See: Model Validation, Cross-Validation, Regularization, Feature Selection, Hyperparameter Optimization, Errors And Residuals in Statistics, Supervised Learning, Estimator, Overfitting, Expected Value, Bias, Variance.
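The tradeoff in the definition above can be made concrete with a short simulation. The sketch below is purely illustrative (the sine target, polynomial degrees, and noise level are assumptions, not taken from any cited source): it repeatedly resamples a training set, fits polynomials of increasing degree, and estimates the squared bias and the variance of the fitted predictor, so that low-degree fits show high bias and high-degree fits show high variance.

```python
# Illustrative sketch: Monte Carlo estimate of bias^2 and variance for
# polynomial regression of increasing degree (hypothetical setup).
import numpy as np

rng = np.random.default_rng(0)

def true_fn(x):
    # Unknown target function the models try to recover
    return np.sin(2 * np.pi * x)

def fit_predict(degree, x_train, y_train, x_test):
    coeffs = np.polyfit(x_train, y_train, degree)
    return np.polyval(coeffs, x_test)

x_test = np.linspace(0, 1, 50)
noise_sd = 0.3
n_train, n_trials = 30, 200

for degree in (1, 3, 9):
    preds = np.empty((n_trials, x_test.size))
    for t in range(n_trials):
        # Fresh noisy training sample on every trial
        x_tr = rng.uniform(0, 1, n_train)
        y_tr = true_fn(x_tr) + rng.normal(0, noise_sd, n_train)
        preds[t] = fit_predict(degree, x_tr, y_tr, x_test)
    # Squared bias: gap between the average prediction and the truth
    bias2 = np.mean((preds.mean(axis=0) - true_fn(x_test)) ** 2)
    # Variance: spread of predictions across resampled training sets
    variance = np.mean(preds.var(axis=0))
    print(f"degree={degree}  bias^2={bias2:.3f}  variance={variance:.3f}  "
          f"bias^2+var+noise={bias2 + variance + noise_sd**2:.3f}")
```

Typically the degree-1 fit dominates the error with its bias term and the degree-9 fit with its variance term, while an intermediate degree balances the two.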
References
2019
- (Belkin et al., 2019) ⇒ Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal. (2019). “Reconciling Modern Machine-learning Practice and the Classical Bias-Variance Trade-off.” In: Proceedings of the National Academy of Sciences, 116(32).
- QUOTE: ... Breakthroughs in machine learning are rapidly changing science and society, yet our fundamental understanding of this technology has lagged far behind. Indeed, one of the central tenets of the field, the bias-variance trade-off, appears to be at odds with the observed behavior of methods used in modern machine-learning practice. The bias-variance trade-off implies that a model should balance underfitting and overfitting: Rich enough to express underlying structure in data and simple enough to avoid fitting spurious patterns. However, in modern practice, very rich models such as neural networks are trained to exactly fit (i.e., interpolate) the data. Classically, such models would be considered overfitted, and yet they often obtain high accuracy on test data. This apparent contradiction has raised questions about the mathematical foundations of machine learning and their relevance to practitioners. In this paper, we reconcile the classical understanding and the modern practice within a unified performance curve. This "double-descent" curve subsumes the textbook U-shaped bias-variance trade-off curve by showing how increasing model capacity beyond the point of interpolation results in improved performance. …
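A minimal sketch of a double-descent experiment, assuming a noisy sine target, random Fourier features, and minimum-norm least squares; these are illustrative choices, not the exact setup of Belkin et al., and how pronounced the error peak near the interpolation threshold is will depend on the seed and settings.

```python
# Illustrative sketch: test error of min-norm least squares on random Fourier
# features as the number of features crosses the interpolation threshold.
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    return np.sin(3 * x)

n_train, n_test, noise_sd = 40, 500, 0.2
x_tr = rng.uniform(-np.pi, np.pi, n_train)
y_tr = target(x_tr) + rng.normal(0, noise_sd, n_train)
x_te = np.linspace(-np.pi, np.pi, n_test)
y_te = target(x_te)

def random_fourier_features(x, w, b):
    # phi_j(x) = cos(w_j * x + b_j), one column per random feature
    return np.cos(np.outer(x, w) + b)

for n_feat in (5, 10, 20, 40, 80, 200, 1000):  # n_feat = 40 is the interpolation point
    w = rng.normal(0, 2.0, n_feat)
    b = rng.uniform(0, 2 * np.pi, n_feat)
    Phi_tr = random_fourier_features(x_tr, w, b)
    Phi_te = random_fourier_features(x_te, w, b)
    # lstsq returns the minimum-norm solution when the system is underdetermined
    coef, *_ = np.linalg.lstsq(Phi_tr, y_tr, rcond=None)
    test_mse = np.mean((Phi_te @ coef - y_te) ** 2)
    print(f"features={n_feat:5d}  test MSE={test_mse:.3f}")
```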
2017
- (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/Bias–variance_tradeoff Retrieved:2017-11-25.
- In statistics and machine learning, the bias–variance tradeoff (or dilemma) is the problem of simultaneously minimizing two sources of error that prevent supervised learning algorithms from generalizing beyond their training set:
- The bias is error from erroneous assumptions in the learning algorithm. High bias can cause an algorithm to miss the relevant relations between features and target outputs (underfitting).
- The variance is error from sensitivity to small fluctuations in the training set. High variance can cause an algorithm to model the random noise in the training data, rather than the intended outputs (overfitting).
- The bias–variance decomposition is a way of analyzing a learning algorithm's expected generalization error with respect to a particular problem as a sum of three terms, the bias, variance, and a quantity called the irreducible error, resulting from noise in the problem itself.
This tradeoff applies to all forms of supervised learning: classification, regression (function fitting), and structured output learning. It has also been invoked to explain the effectiveness of heuristics in human learning.
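For squared-error loss, the decomposition referred to above can be stated explicitly; the following is a standard textbook form, with notation chosen here rather than taken from the quoted article.

```latex
% Bias-variance decomposition of the expected squared error at a point x,
% for data y = f(x) + \varepsilon with E[\varepsilon] = 0, Var(\varepsilon) = \sigma^2,
% and an estimator \hat{f} trained on a random training set D.
\mathbb{E}_{D,\varepsilon}\!\left[\bigl(y - \hat{f}(x; D)\bigr)^2\right]
  = \underbrace{\bigl(\mathbb{E}_D[\hat{f}(x; D)] - f(x)\bigr)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}_D\!\left[\bigl(\hat{f}(x; D) - \mathbb{E}_D[\hat{f}(x; D)]\bigr)^2\right]}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{irreducible error}}
```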
2011
- (Rajnarayan & Wolpert, 2011) ⇒ Dev Rajnarayan, and David Wolpert. (2011). “Bias-Variance Trade-offs; Novel Applications.” In: (Sammut & Webb, 2011) p.101
2004
- (Bouchard & Triggs, 2004) ⇒ Guillaume Bouchard, and Bill Triggs. (2004). “The Trade-off Between Generative and Discriminative Classifiers.” In: Proceedings of COMPSTAT 2004.
- QUOTE: … The key argument is that the discriminative estimator converges to the conditional density that minimizes the negative log-likelihood classification loss against the true density p(x, y) [2]. For finite sample sizes, there is a bias-variance tradeoff and it is less obvious how to choose between generative and discriminative classifiers.
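A minimal sketch of this finite-sample comparison, assuming a synthetic dataset, Gaussian naive Bayes as the generative model, and logistic regression as the discriminative one; these are illustrative choices, not Bouchard & Triggs' experiments. At small training sizes the generative model's stronger assumptions often help, while the discriminative model tends to win as the sample grows.

```python
# Illustrative sketch: generative vs. discriminative accuracy as the
# training-set size grows (hypothetical data and models).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=5000, n_features=20, n_informative=10,
                           random_state=0)
X_test, y_test = X[2000:], y[2000:]   # hold out the last 3000 points for testing

for n in (20, 50, 100, 500, 2000):
    idx = rng.choice(2000, size=n, replace=False)   # training subsample
    nb_acc = GaussianNB().fit(X[idx], y[idx]).score(X_test, y_test)
    lr_acc = LogisticRegression(max_iter=1000).fit(X[idx], y[idx]).score(X_test, y_test)
    print(f"n_train={n:4d}  naive Bayes acc={nb_acc:.3f}  logistic acc={lr_acc:.3f}")
```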
1996
- (Kohavi & Wolpert, 1996) ⇒ Ron Kohavi, and David H. Wolpert. (1996). “Bias Plus Variance Decomposition for Zero-One Loss Functions.” In: Proceedings of the 13th International Conference on Machine Learning (ICML 1996).