2011 TradingRepresentabilityforScala
- (Wang et al., 2011) ⇒ Zhuang Wang, Nemanja Djuric, Koby Crammer, and Slobodan Vucetic. (2011). “Trading Representability for Scalability: Adaptive Multi-hyperplane Machine for Nonlinear Classification.” In: Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-2011). ISBN:978-1-4503-0813-7 doi:10.1145/2020408.2020420
Subject Headings:
Notes
Cited By
- http://scholar.google.com/scholar?q=%222011%22+Trading+Representability+for+Scalability%3A+Adaptive+Multi-hyperplane+Machine+for+Nonlinear+Classification
- http://dl.acm.org/citation.cfm?id=2020408.2020420&preflayout=flat#citedby
Quotes
Author Keywords
- Algorithms; classifier design and evaluation; design; experimentation; large-scale learning; nonlinear classification; performance; stochastic gradient descent; support vector machines
Abstract
Support Vector Machines (SVMs) are among the most popular and successful classification algorithms. Kernel SVMs often reach state-of-the-art accuracies, but suffer from the curse of kernelization due to linear model growth with data size on noisy data. Linear SVMs have the ability to efficiently learn from truly large data, but they are applicable to a limited number of domains due to low representational power. To fill the representability and scalability gap between linear and nonlinear SVMs, we propose the Adaptive Multi-hyperplane Machine (AMM) algorithm that accomplishes fast training and prediction and has the capability to solve nonlinear classification problems. The AMM model consists of a set of hyperplanes (weights), each assigned to one of the multiple classes, and predicts based on the associated class of the weight that provides the largest prediction. The number of weights is automatically determined through an iterative algorithm based on the stochastic gradient descent algorithm, which is guaranteed to converge to a local optimum. Since the generalization bound decreases with the number of weights, a weight pruning mechanism is proposed and analyzed. The experiments on several large data sets show that AMM is nearly as fast during training and prediction as the state-of-the-art linear SVM solver and that it can be orders of magnitude faster than kernel SVM. In accuracy, AMM is somewhere between linear and kernel SVMs. For example, on an OCR task with 8 million high-dimensional training examples, AMM trained in 300 seconds on a single-core processor had a 0.54% error rate, which was significantly lower than the 2.03% error rate of a linear SVM trained in the same time and comparable to the 0.43% error rate of a kernel SVM trained in 2 days on 512 processors. The results indicate that AMM could be an attractive option when solving large-scale classification problems. The software is available at http://www.dabi.temple.edu/~vucetic/AMM.html.
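The abstract describes the AMM prediction rule: each class owns one or more hyperplanes (weight vectors), and the predicted label is the class of the weight that attains the largest inner product with the input. A minimal illustrative sketch (not the authors' released code; the function name and data layout are assumptions):

```python
import numpy as np

def amm_predict(weights_per_class, x):
    """Multi-hyperplane prediction: return the class whose best-scoring
    hyperplane achieves the overall maximum score w.x.

    weights_per_class: dict mapping class label -> list of weight vectors.
    """
    best_class, best_score = None, -np.inf
    for label, weights in weights_per_class.items():
        for w in weights:
            score = float(np.dot(w, x))
            if score > best_score:
                best_class, best_score = label, score
    return best_class

# Toy model: class 0 has two hyperplanes, class 1 has one.
weights = {
    0: [np.array([1.0, 0.0]), np.array([-1.0, 0.5])],
    1: [np.array([0.0, 1.0])],
}
print(amm_predict(weights, np.array([0.2, 0.9])))  # prints 1
```

Because each class can hold several hyperplanes, the induced decision boundary is piecewise linear, which is how AMM captures nonlinear structure while keeping linear-SVM-style prediction cost.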
References
| | Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year |
|---|---|---|---|---|---|---|---|---|---|---|
| 2011 TradingRepresentabilityforScala | Koby Crammer; Zhuang Wang; Nemanja Djuric; Slobodan Vucetic | | | Trading Representability for Scalability: Adaptive Multi-hyperplane Machine for Nonlinear Classification | | | | 10.1145/2020408.2020420 | | 2011 |