Bootstrap Aggregating Algorithm
A bootstrap aggregating algorithm is an ensemble meta-algorithm that repeatedly draws a random sample with replacement from the training set (a bootstrap sample), fits a base model to each such sample, and aggregates the fitted models' predictions (see the sketch after the context list below).
- AKA: Bagging.
- Context
- It can range from being a Bagging Classification Algorithm to being a Bagging Regression Algorithm.
- It can reduce variance over the base algorithm while only slightly increasing bias. (Domingos, 2012)
- It can be implemented by a Bootstrap Aggregating System (to solve a Bootstrap aggregating task).
- Example(s): a Random Forest Algorithm, which aggregates bagged decision trees (with additional feature randomization).
- Counter-Example(s): a Boosting Algorithm, such as AdaBoost, which fits base models sequentially on reweighted data rather than independently on bootstrap samples.
- See: Bootstrap Learning Algorithm, Wagging Algorithm, AdaBoost, Meta-Algorithm, Overfitting.
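The context above can be made concrete with a minimal Python sketch (the helper name `bag_fit_predict` and the convention that `fit_fn(X, y)` returns an object with a `.predict()` method are illustrative assumptions, not a standard API):

```python
import numpy as np

def bag_fit_predict(fit_fn, X, y, X_new, n_bags=25, seed=0):
    """Bagging sketch: fit `fit_fn` on `n_bags` bootstrap samples of
    (X, y), then aggregate the per-bag predictions on `X_new`.
    Assumes `fit_fn(X, y)` returns an object with a .predict(X) method."""
    rng = np.random.default_rng(seed)
    n = len(X)
    per_bag = []
    for _ in range(n_bags):
        idx = rng.integers(0, n, size=n)   # bootstrap: n draws with replacement
        model = fit_fn(X[idx], y[idx])     # fit the base procedure on this bag
        per_bag.append(model.predict(X_new))
    return np.mean(per_bag, axis=0)        # aggregate by averaging (regression)
```

Averaging many base models fit on perturbed copies of the training set is what reduces variance relative to a single fit; for classification, the mean would be replaced by a plurality vote over the bags.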
References
- http://dsg.harvard.edu/courses/hst951/ppt/Bagging.ppt
- http://www.umiacs.umd.edu/~shaohua/enee698a_f03/bagging.ppt
- http://faculty.washington.edu/fxia/courses/LING572/bagging.ppt
2014
- (Wikipedia, 2014) ⇒ http://en.wikipedia.org/wiki/Bootstrap_aggregating Retrieved:2014-10-28.
- QUOTE: Bootstrap aggregating, also called bagging, is a machine learning ensemble meta-algorithm designed to improve the stability and accuracy of machine learning algorithms used in statistical classification and regression. It also reduces variance and helps to avoid overfitting. Although it is usually applied to decision tree methods, it can be used with any type of method. Bagging is a special case of the model averaging approach.
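As a hedged illustration of the decision-tree case mentioned in the quote, the sketch below uses scikit-learn's `BaggingClassifier` (this assumes scikit-learn ≥ 1.2 is installed; in earlier versions the `estimator` keyword was called `base_estimator`):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# A single decision tree is an unstable, high-variance base procedure.
tree = DecisionTreeClassifier(random_state=0)
# Bag it: 50 trees, each fit on a bootstrap sample, aggregated by voting.
bag = BaggingClassifier(estimator=tree, n_estimators=50, random_state=0)

print("single tree:", cross_val_score(tree, X, y, cv=5).mean())
print("bagged trees:", cross_val_score(bag, X, y, cv=5).mean())
```

The bagged ensemble typically scores higher here, reflecting the variance reduction described above.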
2012
- (Domingos, 2012) ⇒ Pedro Domingos. (2012). “A Few Useful Things to Know About Machine Learning.” In: Communications of the ACM, 55(10). doi:10.1145/2347736.2347755
2011
- (Sammut & Webb, 2011) ⇒ Claude Sammut (editor), and Geoffrey I. Webb (editor). (2011). “Bagging.” In: Encyclopedia of Machine Learning, p.73. Springer.
2005
- (Bühlmann, 2005) ⇒ Peter Bühlmann. (2005). “16.2 Bagging and Related Methods.” website
- QUOTE: Bagging (Breiman, 1996), a sobriquet for bootstrap aggregating, is an ensemble method for improving unstable estimation or classification schemes. Breiman (Breiman, 1996) motivated bagging as a variance reduction technique for a given base procedure, such as decision trees or methods that do variable selection and fitting in a linear model. It has attracted much attention, probably due to its implementational simplicity and the popularity of the bootstrap methodology. At the time of its invention, only heuristic arguments were presented why bagging would work. Later, it has been shown in (Bühlmann & Yu, 2002) that bagging is a smoothing operation which turns out to be advantageous when aiming to improve the predictive performance of regression or classification trees. In case of decision trees, the theory in (Bühlmann & Yu, 2002) confirms Breiman's intuition that bagging is a variance reduction technique, reducing also the mean squared error (MSE). The same also holds for subagging (subsample aggregating), defined in Sect. 16.2.3, which is a computationally cheaper version than bagging. However, for other (even complex) base procedures, the variance and MSE reduction effect of bagging is not necessarily true; this has also been shown in (Buja & Stuetzle, 2002) for the simple case where the estimator is a $U$-statistic.
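To make the distinction concrete, a hypothetical `subag_fit_predict` helper (mirroring the earlier bagging sketch on this page) would draw each bag without replacement, at a fraction of the training-set size:

```python
import numpy as np

def subag_fit_predict(fit_fn, X, y, X_new, n_bags=25, frac=0.5, seed=0):
    """Subagging sketch: like bagging, but each bag is a subsample of
    size frac*n drawn *without* replacement, so each base fit is cheaper.
    Assumes `fit_fn(X, y)` returns an object with a .predict(X) method."""
    rng = np.random.default_rng(seed)
    n = len(X)
    m = max(1, int(frac * n))
    per_bag = []
    for _ in range(n_bags):
        idx = rng.choice(n, size=m, replace=False)  # subsample, no replacement
        model = fit_fn(X[idx], y[idx])
        per_bag.append(model.predict(X_new))
    return np.mean(per_bag, axis=0)
```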
2003
- (Chang et al., 2003) ⇒ E.Y. Chang, B. Li, G. Wu, and K. Goh. (2003). “Statistical Learning for Effective Visual Information Retrieval.” In: Proceedings of the 2003 IEEE International Conference on Image Processing (ICIP 2003).
- QUOTE: Bagging subsamples training data into a number of bags, trains each bag, and aggregates the decisions of the bags to make final class predictions.
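The "aggregates the decisions" step for classification is a plurality vote over the bags; a minimal sketch (the helper name `majority_vote` is illustrative):

```python
import numpy as np

def majority_vote(bag_predictions):
    """Reduce per-bag class predictions of shape (n_bags, n_samples)
    to one label per sample by plurality vote over the bags."""
    bag_predictions = np.asarray(bag_predictions)
    def plurality(column):
        labels, counts = np.unique(column, return_counts=True)
        return labels[np.argmax(counts)]
    return np.apply_along_axis(plurality, 0, bag_predictions)

# Three bags voting on three samples:
# majority_vote([[0, 1, 1], [0, 1, 0], [1, 1, 0]]) -> array([0, 1, 0])
```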
2002
- (Buja & Stuetzle, 2002) ⇒ Andreas Buja, and Werner Stuetzle. (2002). “Observations on Bagging.” Preprint. Available from http://ljsavage.wharton.upenn.edu/~buja. See: (Buja & Stuetzle, 2006).
2002
- (Bühlmann & Yu, 2002) ⇒ Peter Bühlmann, and Bin Yu. (2002). “Analyzing Bagging.” In: Annals of Statistics, 30.
1999
- (Bauer & Kohavi, 1999) ⇒ Eric Bauer, and Ron Kohavi. (1999). “An Empirical Comparison of Voting Classification Algorithms: Bagging, Boosting and Variants.” In: Machine Learning, 36(1-2).
1996
- (Breiman, 1996) ⇒ Leo Breiman. (1996). “Bagging Predictors.” In: Machine Learning, 24(2).