Boosted Decision Tree Algorithm
A Boosted Decision Tree Algorithm is a boosting algorithm that uses a decision tree learning algorithm to fit its base (weak) learners.
- Context:
- It can be implemented in a Boosted Decision Tree System (that can solve a Boosted Decision Tree Task); a minimal sketch appears after this list.
- Example(s):
- an AdaBoost Algorithm (for AdaBoost).
- an XGBoost Algorithm (for XGBoost).
- a MART Algorithm (for MART).
- Counter-Example(s):
- a Bagged Decision Tree Algorithm, such as a Random Forests Algorithm.
- See: Boosted Lasso.
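The following is a minimal sketch, in Python, of one such algorithm (a gradient-boosted decision tree ensemble). The use of scikit-learn, the synthetic dataset, and all hyperparameter values are illustrative assumptions, not something this page prescribes:

```python
# A minimal sketch of a boosted decision tree ensemble.
# Assumptions: scikit-learn is available; the synthetic dataset and
# all hyperparameter values are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic binary classification data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Gradient boosting: each stage fits a shallow regression tree to the
# negative gradient of the loss left by the previous stages, so the
# decision tree learner serves as the weak learner of the boosting loop.
model = GradientBoostingClassifier(
    n_estimators=100,   # number of boosting stages (trees)
    learning_rate=0.1,  # shrinkage applied to each tree's contribution
    max_depth=3,        # depth of each weak-learner tree
    random_state=0,
)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```

An AdaBoost-style variant can be sketched the same way by swapping in sklearn.ensemble.AdaBoostClassifier with shallow decision trees as the weak learner.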
References
2016
- (Nielsen, 2016) ⇒ Didrik Nielsen. (2016). “Tree Boosting With XGBoost: Why Does XGBoost Win 'Every' Machine Learning Competition?.” Master's Thesis, Norwegian University of Science and Technology (NTNU).
- QUOTE: Tree boosting has empirically proven to be a highly effective approach to predictive modeling. It has shown remarkable results for a vast array of problems.
2006
- (Caruana & Niculescu-Mizil, 2006) ⇒ Rich Caruana, and Alexandru Niculescu-Mizil. (2006). “An Empirical Comparison of Supervised Learning Algorithms.” In: Proceedings of the 23rd International Conference on Machine learning. ISBN:1-59593-383-2 doi:10.1145/1143844.1143865
- QUOTE: A number of supervised learning methods have been introduced in the last decade. Unfortunately, the last comprehensive empirical evaluation of supervised learning was the Statlog Project in the early 90's. We present a large-scale empirical comparison between ten supervised learning methods: SVMs, neural nets, logistic regression, naive bayes, memory-based learning, random forests, decision trees, bagged trees, boosted trees, and boosted stumps. We also examine the effect that calibrating the models via Platt Scaling and Isotonic Regression has on their performance.
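The calibration step mentioned in this quote can be sketched in Python with scikit-learn, whose CalibratedClassifierCV implements both Platt Scaling (method="sigmoid") and Isotonic Regression (method="isotonic"); the dataset and all parameter values here are illustrative assumptions, not the paper's experimental setup:

```python
# A hedged sketch of calibrating a boosted-tree model, in the spirit of
# Caruana & Niculescu-Mizil (2006). Dataset and parameters are illustrative.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

base = GradientBoostingClassifier(n_estimators=100, random_state=0)
# Platt Scaling: fit a sigmoid mapping raw scores to probabilities.
platt = CalibratedClassifierCV(base, method="sigmoid", cv=3).fit(X_train, y_train)
# Isotonic Regression: fit a non-decreasing step function instead.
iso = CalibratedClassifierCV(base, method="isotonic", cv=3).fit(X_train, y_train)
print(platt.predict_proba(X_test)[:3])
print(iso.predict_proba(X_test)[:3])
```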