All-vs-All Multiclass Classification Algorithm
An All-vs-All Multiclass Classification Algorithm is a binary-based multiclass supervised classification algorithm that trains one binary classifier for every pair of class labels and combines their predictions, typically by voting (see the sketch after the list below).
- AKA: Round Robin Classification, AVA Algorithm.
- …
- Counter-Example(s):
- See: Multiclass Supervised Classification Algorithm.
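Below is a minimal sketch of all-vs-all training and voting-based prediction. The helper names (train_ava, predict_ava) and the choice of scikit-learn's LogisticRegression as the underlying binary classifier are illustrative assumptions, not part of the algorithm's definition; any binary classifier could be substituted.

```python
# Minimal all-vs-all (one-vs-one) sketch: train one binary classifier per
# unordered pair of class labels, then predict by majority vote.
# train_ava / predict_ava are hypothetical helper names for illustration.
from itertools import combinations

import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression


def train_ava(X, y, base_classifier=LogisticRegression()):
    """Train one binary classifier for each pair of class labels."""
    classifiers = {}
    for a, b in combinations(np.unique(y), 2):
        mask = (y == a) | (y == b)      # keep only examples of the two classes
        clf = clone(base_classifier)
        clf.fit(X[mask], y[mask])       # binary sub-problem: class a vs. class b
        classifiers[(a, b)] = clf
    return classifiers


def predict_ava(classifiers, X):
    """Predict by majority vote over all pairwise classifiers."""
    classes = sorted({c for pair in classifiers for c in pair})
    index = {c: i for i, c in enumerate(classes)}
    votes = np.zeros((X.shape[0], len(classes)), dtype=int)
    for (a, b), clf in classifiers.items():
        pred = clf.predict(X)
        for c in (a, b):
            votes[:, index[c]] += (pred == c)   # each pairwise winner gets a vote
    return np.array([classes[i] for i in votes.argmax(axis=1)])
```

For a K-class problem this trains K(K-1)/2 binary classifiers, each on only the examples of its two classes, which is the main computational difference from one-vs-all (K classifiers, each trained on all of the data) noted in Rifkin (2009) below.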
References
2009
- (Rifkin, 2009) ⇒ Ryan Rifkin. (2009). “Multiclass Classification.” In: MIT Course, 9.520: Statistical Learning Theory and Applications, Spring 2009.
- QUOTE: OVA and AVA are so simple that many people invented them independently. It’s hard to write papers about them. So there’s a whole cottage industry in fancy, sophisticated methods for multiclass classification. To the best of my knowledge, choosing properly tuned regularization classifiers (RLSC, SVM) as your underlying binary classifiers and using one-vs-all (OVA) or all-vs-all (AVA) works as well as anything else you can do. If you actually have to solve a multiclass problem, I strongly urge you to simply use OVA or AVA, and not worry about anything else. The choice between OVA and AVA is largely computational.
2004
- (Wu et al., 2004) ⇒ Ting-Fan Wu, Chih-Jen Lin, and Ruby C. Weng. (2004). “Probability Estimates for Multi-class Classification by Pairwise Coupling.” In: The Journal of Machine Learning Research, 5.
- Cited by ~344 http://scholar.google.com/scholar?cites=17766139141171698492
- ABSTRACT: Pairwise coupling is a popular multi-class classification method that combines all comparisons for each pair of classes. This paper presents two approaches for obtaining class probabilities. Both methods can be reduced to linear systems and are easy to implement. We show conceptually and experimentally that the proposed approaches are more stable than the two existing popular methods: voting and the method by Hastie and Tibshirani (1998).
2002
- (Fürnkranz, 2002) ⇒ Johannes Fürnkranz. (2002). “Round Robin Classification.” In: The Journal of Machine Learning Research, 2. doi:10.1162/153244302320884605