Stacked Ensemble-based Learning Algorithm

From GM-RKB

A Stacked Ensemble-based Algorithm is an ensemble learning algorithm that can be implemented by a stacked learning system (to solve a stacked learning task). It produces a decision function whose final prediction is based on the outputs of several base models.
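As a minimal illustration of this definition, the sketch below trains two base models and then a combiner on their outputs. It is a hedged pure-Python toy, not a reference implementation: the threshold base classifiers, the logistic-regression combiner, and the synthetic data are all illustrative choices.

```python
import math
import random

random.seed(0)

# Synthetic toy data: two features; label is 1 when x0 + x1 > 1.
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if x0 + x1 > 1 else 0 for x0, x1 in X]
train_X, train_y = X[:100], y[:100]        # level-0 split: fit base models
stack_X, stack_y = X[100:150], y[100:150]  # level-1 split: fit the combiner
test_X, test_y = X[150:], y[150:]

def fit_threshold(Xs, ys, feature):
    """Base model (illustrative): best single-feature threshold classifier."""
    best_t, best_acc = 0.0, -1.0
    for t in (x[feature] for x in Xs):
        acc = sum(int(x[feature] > t) == lab for x, lab in zip(Xs, ys)) / len(ys)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return lambda x, t=best_t: 1.0 if x[feature] > t else 0.0

# Step 1: train the base models on the level-0 data.
base_models = [fit_threshold(train_X, train_y, 0),
               fit_threshold(train_X, train_y, 1)]

def meta_features(x):
    # The base models' predictions become the combiner's inputs.
    return [m(x) for m in base_models]

# Step 2: train a logistic-regression combiner, by gradient descent,
# on data the base models never saw (to avoid leaking their training fit).
w, b = [0.0, 0.0], 0.0
for _ in range(500):
    for x, lab in zip(stack_X, stack_y):
        z = meta_features(x)
        p = 1 / (1 + math.exp(-(w[0] * z[0] + w[1] * z[1] + b)))
        g = p - lab  # gradient of the log loss w.r.t. the logit
        w[0] -= 0.1 * g * z[0]
        w[1] -= 0.1 * g * z[1]
        b -= 0.1 * g

def stacked_predict(x):
    z = meta_features(x)
    return int(w[0] * z[0] + w[1] * z[1] + b > 0)

acc = sum(stacked_predict(x) == lab for x, lab in zip(test_X, test_y)) / len(test_y)
print(f"stacked test accuracy: {acc:.2f}")
```

In practice the combiner is trained on out-of-fold base-model predictions (cross-validation) rather than a single held-out split; the split here keeps the sketch short.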



References

2018

  • (Wikipedia, 2012) ⇒ http://en.wikipedia.org/wiki/Ensemble_learning#Stacking
    • Stacking (sometimes called stacked generalization) involves training a learning algorithm to combine the predictions of several other learning algorithms. First, all of the other algorithms are trained using the available data, then a combiner algorithm is trained to make a final prediction using all the predictions of the other algorithms as additional inputs. If an arbitrary combiner algorithm is used, then stacking can theoretically represent any of the ensemble techniques described in this article, although, in practice, a logistic regression model is often used as the combiner.

      Stacking typically yields performance better than any single one of the trained models.[1] It has been successfully used on both supervised learning tasks (regression,[2] classification and distance learning [3]) and unsupervised learning (density estimation).[4] It has also been used to estimate bagging's error rate.[5][6] It has been reported to out-perform Bayesian model-averaging.[7] The two top-performers in the Netflix competition utilized blending, which may be considered to be a form of stacking.[8]


  1. Wolpert, D. H. (1992). "Stacked Generalization." Neural Networks, 5(2), 241-259.
  2. Breiman, L. (1996). "Stacked Regressions." Machine Learning, 24, 49-64.
  3. Ozay, M.; Yarman Vural, F. T. (2013). "A New Fuzzy Stacked Generalization Technique and Analysis of its Performance." arXiv:1204.0171.
  4. Smyth, P.; Wolpert, D. H. (1999). "Linearly Combining Density Estimators via Stacking." Machine Learning, 36, 59-83.
  5. Rokach, L. (2010). "Ensemble-based Classifiers." Artificial Intelligence Review, 33(1-2), 1-39.
  6. Wolpert, D. H.; Macready, W. G. (1999). "An Efficient Method to Estimate Bagging's Generalization Error." Machine Learning, 35, 41-55.
  7. Clarke, B. (2003). "Comparing Bayes Model Averaging and Stacking When Model Approximation Error Cannot be Ignored." Journal of Machine Learning Research, 4, 683-712.
  8. Sill, J.; Takács, G.; Mackey, L.; Lin, D. (2009). "Feature-Weighted Linear Stacking." arXiv:0911.0460.