Bias-Variance-Covariance Decomposition
A Bias-Variance-Covariance Decomposition is a theoretical result underlying ensemble learning algorithms. See: Bias-Variance Decomposition.
References
2011
- (Sammut & Webb, 2011) ⇒ Claude Sammut (editor), and Geoffrey I. Webb (editor). (2011). “Bias-Variance-Covariance Decomposition.” In: Encyclopedia of Machine Learning (Sammut & Webb, 2011), p. 111.
- QUOTE: The bias-variance-covariance decomposition is a theoretical result underlying ensemble learning algorithms. It is an extension of the bias-variance decomposition, for linear combinations of models. The expected squared error of the ensemble [math]\displaystyle{ \bar{f}(x) }[/math] from a target d is: [math]\displaystyle{ E_D\left\{\left(\bar{f}(x) - d\right)^2\right\} = \overline{\mathrm{bias}}^2 + \frac{1}{T}\,\overline{\mathrm{var}} + \left(1 - \frac{1}{T}\right)\overline{\mathrm{covar}}. }[/math] The error is composed of the average bias of the models, plus a term involving their average variance, and a final term involving their average pairwise covariance. This shows that while a single model has a two-way bias-variance tradeoff, an ensemble is controlled by a three-way tradeoff. This ensemble tradeoff is often referred to as the accuracy-diversity dilemma for an ensemble. See ensemble learning for more details.
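The decomposition can be checked numerically. Below is a minimal Python sketch (not taken from the quoted source) that estimates the three terms for a hypothetical ensemble of T bootstrap-trained cubic-polynomial fits at a single test point, and compares their combination with the ensemble's expected squared error; the sine target, noise level, polynomial degree, and ensemble size are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) setup: learn sin(x) from noisy samples with an
# ensemble of T cubic-polynomial fits, each trained on a bootstrap resample.
T = 5             # number of ensemble members
n_trials = 2000   # repeated training-set draws, approximating E_D{.}
n_train = 30
x0, d = 1.0, np.sin(1.0)         # fixed test input and its noise-free target

preds = np.empty((n_trials, T))  # preds[k, i] = f_i(x0) on the k-th dataset
for k in range(n_trials):
    x = rng.uniform(-3.0, 3.0, n_train)
    y = np.sin(x) + rng.normal(0.0, 0.3, n_train)
    for i in range(T):
        idx = rng.integers(0, n_train, n_train)       # bootstrap resample
        coef = np.polyfit(x[idx], y[idx], deg=3)
        preds[k, i] = np.polyval(coef, x0)

# Moments over datasets (population normalization so both sides match exactly).
cov = np.cov(preds, rowvar=False, bias=True)          # T x T covariance of members
avg_bias  = (preds.mean(axis=0) - d).mean()           # average bias of the members
avg_var   = np.trace(cov) / T                         # average variance of the members
avg_covar = (cov.sum() - np.trace(cov)) / (T * (T - 1))  # average pairwise covariance

lhs = ((preds.mean(axis=1) - d) ** 2).mean()          # E_D{(f_bar(x0) - d)^2}
rhs = avg_bias ** 2 + avg_var / T + (1.0 - 1.0 / T) * avg_covar

print(f"ensemble expected squared error: {lhs:.6f}")
print(f"bias^2 + var/T + (1-1/T)*covar : {rhs:.6f}")
```

With these sample moments the two printed quantities agree up to floating-point error, illustrating the three-way tradeoff: the ensemble's error can only fall below a single member's bias-variance sum by reducing the average pairwise covariance, i.e., by making the members more diverse.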