LogitBoost Algorithm
A LogitBoost Algorithm is a boosting algorithm that fits an additive logistic regression model by stagewise maximization of the Bernoulli (logistic) log-likelihood; it was obtained by recasting AdaBoost in a statistical framework as a generalized additive model.
References
2015
- (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/LogitBoost Retrieved:2015-2-9.
- In machine learning and computational learning theory, LogitBoost is a boosting algorithm formulated by Jerome Friedman, Trevor Hastie, and Robert Tibshirani. The original paper [1] casts the AdaBoost algorithm into a statistical framework. Specifically, if one considers AdaBoost as a generalized additive model and then applies the cost functional of logistic regression, one can derive the LogitBoost algorithm.
- ↑ Jerome Friedman, Trevor Hastie and Robert Tibshirani. Additive logistic regression: a statistical view of boosting. Annals of Statistics 28(2), 2000. 337–407. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.51.9525
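The derivation quoted above leads to a simple two-class procedure: at each round, compute a working response and observation weights from the current class probabilities, fit a weighted least-squares regressor to that response, and take a half Newton step. The following is a minimal sketch of that recipe, not the authors' reference code; the choice of scikit-learn's DecisionTreeRegressor as the base learner, the round count, and the clipping thresholds are illustrative assumptions.
```python
# Minimal two-class LogitBoost sketch (illustrative assumptions noted above).
import numpy as np
from sklearn.tree import DecisionTreeRegressor  # assumed base learner choice

def logitboost_fit(X, y, n_rounds=50, max_depth=1, eps=1e-5):
    """y is expected in {0, 1}; returns the list of fitted base regressors."""
    n = X.shape[0]
    F = np.zeros(n)          # additive model F(x), initialised to 0
    p = np.full(n, 0.5)      # class-1 probabilities, initialised to 1/2
    learners = []
    for _ in range(n_rounds):
        w = np.clip(p * (1.0 - p), eps, None)   # Newton weights p(1 - p)
        z = np.clip((y - p) / w, -4.0, 4.0)     # working response, clipped for stability
        h = DecisionTreeRegressor(max_depth=max_depth)
        h.fit(X, z, sample_weight=w)            # weighted least-squares fit
        F += 0.5 * h.predict(X)                 # half Newton step
        p = 1.0 / (1.0 + np.exp(-2.0 * F))      # p(x) = e^F / (e^F + e^{-F})
        learners.append(h)
    return learners

def logitboost_predict(learners, X):
    F = 0.5 * sum(h.predict(X) for h in learners)
    return (F > 0).astype(int)                  # sign(F) mapped to a {0, 1} label
```
In this sketch the depth-1 trees act as regression stumps, so each round adds a simple thresholded feature to the additive model, mirroring the stagewise fitting described in the source.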
2009
- (Freund, 2009) ⇒ Yoav Freund. (2009). “Statistical Machine Learning (Boosting).” Course: UC San Diego, CSE254, Winter 2009. http://seed.ucsd.edu/mediawiki/index.php/CSE254
2000
- (Friedman et al., 2000) ⇒ Jerome Friedman, Trevor Hastie, and Robert Tibshirani. (2000). “Additive Logistic Regression: A Statistical View of Boosting.” In: Annals of Statistics, 28(2).
- ABSTRACT: Boosting is one of the most important recent developments in classification methodology. Boosting works by sequentially applying a classification algorithm to reweighted versions of the training data and then taking a weighted majority vote of the sequence of classifiers thus produced. For many classification algorithms, this simple strategy results in dramatic improvements in performance. We show that this seemingly mysterious phenomenon can be understood in terms of well-known statistical principles, namely additive modeling and maximum likelihood. For the two-class problem, boosting can be viewed as an approximation to additive modeling on the logistic scale using maximum Bernoulli likelihood as a criterion. We develop more direct approximations and show that they exhibit nearly identical results to boosting. Direct multiclass generalizations based on multinomial likelihood are derived that exhibit performance comparable to other recently proposed multiclass generalizations of boosting in most situations, and far superior in some. We suggest a minor modification to boosting that can reduce computation, often by factors of 10 to 50. Finally, we apply these insights to produce an alternative formulation of boosting decision trees. This approach, based on best-first truncated tree induction, often leads to better performance, and can provide interpretable descriptions of the aggregate decision rule. It is also much faster computationally, making it more suitable to large-scale data mining applications.
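The two-class view mentioned in this abstract can be stated compactly; the following is a sketch in standard notation (with y in {0, 1}; the symbols F, f_m, and p follow the paper's convention).
```latex
% Additive model on the logistic scale and the Bernoulli log-likelihood
% criterion it is fitted under (notation per Friedman, Hastie & Tibshirani, 2000).
F(x) = \sum_{m=1}^{M} f_m(x), \qquad
p(x) = \Pr(y = 1 \mid x) = \frac{e^{F(x)}}{e^{F(x)} + e^{-F(x)}}
% LogitBoost maximizes the expected Bernoulli log-likelihood stagewise:
\mathbb{E}\!\left[\, y \log p(x) + (1 - y)\,\log\bigl(1 - p(x)\bigr) \,\right]
```
Each boosting round adds one term f_m to F, so the sequential reweighting described in the abstract corresponds to stagewise Newton steps on this likelihood.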