Voted Perceptron Model
A Voted Perceptron Model is a Perceptron Model that is used for linear classification by combining Rosenblatt's perceptron algorithm with Helmbold and Warmuth's leave-one-out method.
- AKA: Generalized Perceptron Model, Voted Perceptron.
- Context:
- It can be produced by a Voted Perceptron Algorithm.
- It was initially developed by Freund & Schapire (1999).
- It can be used in very high dimensional spaces using kernel functions.
- It takes advantage of data that are linearly separable with large margins.
- Example(s):
- the voted perceptron model described in Sassano, 2008,
- the voted perceptron model described in Freund & Schapire (1999),
- ...
- Counter-Example(s):
- See: Perceptron Model, Boosted Model, Artificial Neural Network.
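The training and prediction scheme described in the context above can be sketched as follows. This is a minimal NumPy illustration (function names and parameters are illustrative, not from the cited papers): training keeps every "prediction vector" together with a vote count equal to the number of examples it survived, and prediction takes the vote-weighted majority of those vectors' signed outputs.

```python
import numpy as np

def train_voted_perceptron(X, y, epochs=10):
    """Train on X (n_samples, n_features) with labels y in {-1, +1}.
    Returns a list of (weight_vector, vote_count) pairs."""
    v = np.zeros(X.shape[1])
    c = 1
    survivors = []
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            if y_i * (v @ x_i) <= 0:
                # Mistake: retire the current vector with its vote count,
                # then apply the standard perceptron update.
                survivors.append((v.copy(), c))
                v = v + y_i * x_i
                c = 1
            else:
                # Correct prediction: the current vector earns one more vote.
                c += 1
    survivors.append((v, c))
    return survivors

def predict_voted(survivors, x):
    """Vote-weighted majority over all stored prediction vectors."""
    s = sum(c * np.sign(v @ x) for v, c in survivors)
    return 1 if s >= 0 else -1
```

On linearly separable data with a large margin, most vectors accumulate many votes quickly, which is the property the algorithm exploits.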
References
2015
- (Curtis, 2015) ⇒ http://curtis.ml.cmu.edu/w/courses/index.php/Voted_Perceptron
- QUOTE: The voted perceptron method is based on the perceptron algorithm of Frank Rosenblatt. The algorithm takes advantage of data that are linearly separable with large margins. This method is simpler to implement, and much more efficient in terms of computation time as compared to Vapnik's SVM. The algorithm can also be used in very high dimensional spaces using kernel functions.
Manabu Sassano (IJCNLP 2008) performed an experimental comparison of the voted perceptron with SVM on classification tasks in NLP and observed that the voted perceptron is comparable to SVM in terms of accuracy, while its learning time and prediction speed are considerably better than SVM's.
2008
- (Sassano, 2008) ⇒ Manabu Sassano (2008). "An Experimental Comparison of the Voted Perceptron and Support Vector Machines in Japanese Analysis Tasks". In: Proceedings Third International Joint Conference on Natural Language Processing (IJCNLP 2008).
- QUOTE: Following (Freund and Schapire, 1999), we show the training and prediction algorithm of the voted perceptron in Figure 1. The voted perceptron as well as SVM can use a kernel function. We show in Figure 2 the algorithm of the voted perceptron with a kernel function. This algorithm seems to require $O(k^2)$ kernel calculations. However, we can avoid them by taking advantage of the recurrence $\mathbf{v}_{j+1}\cdot \mathbf{x} = \mathbf{v}_j\cdot \mathbf{x} + y_{u_j} K\left(\mathbf{x}_{u_j},\mathbf{x}\right)$.[1]
- ↑ Herbrich describes an optimized version of the algorithm of the kernel perceptron (Herbrich, 2002, page 322). We can use the same technique in training of the kernel version of the voted perceptron.
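The recurrence in the quote above can be made concrete with a short sketch (names here are illustrative, not from Sassano's figures). In the kernel setting, the prediction vector $\mathbf{v}_{j+1}$ is implicitly the sum $\sum_{i \le j} y_{u_i}\,\phi(\mathbf{x}_{u_i})$ over the mistake examples. Computing each score $\mathbf{v}_j \cdot \mathbf{x}$ from scratch would cost $O(k^2)$ kernel evaluations in total; accumulating one kernel term per mistake yields all $k$ scores with only $k$ evaluations:

```python
import numpy as np

def kernel_scores(mistakes, x, K):
    """Scores v_1·x, ..., v_{k+1}·x for the kernel voted perceptron.

    `mistakes` is the list of (x_u, y_u) pairs on which updates occurred,
    so v_{j+1} is implicitly sum_{i<=j} y_{u_i} * phi(x_{u_i}).
    The recurrence v_{j+1}·x = v_j·x + y_u * K(x_u, x) gives each score
    from the previous one with a single kernel evaluation.
    """
    scores = [0.0]  # v_1 = 0, so its score is 0
    s = 0.0
    for x_u, y_u in mistakes:
        s += y_u * K(x_u, x)
        scores.append(s)
    return scores
```

A voted prediction then combines these scores as $\operatorname{sign}\big(\sum_j c_j \operatorname{sign}(\mathbf{v}_j \cdot \mathbf{x})\big)$, e.g. with a linear kernel `K = lambda a, b: float(a @ b)`.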
1999
- (Freund & Schapire, 1999) ⇒ Yoav Freund and Robert E. Schapire (1999). "Large Margin Classification Using the Perceptron Algorithm". In: Machine Learning 37, 277–296 (1999). DOI:10.1023/A:1007662407062.
- QUOTE: We introduce and analyze a new algorithm for linear classification which combines Rosenblatt's perceptron algorithm with Helmbold and Warmuth's leave-one-out method. Like Vapnik's maximal-margin classifier, our algorithm takes advantage of data that are linearly separable with large margins. Compared to Vapnik's algorithm, however, ours is much simpler to implement, and much more efficient in terms of computation time.