Non-Linear Kernel-based Support Vector Machine Algorithm
A Non-Linear Kernel-based Support Vector Machine Algorithm is an SVM training algorithm that can be implemented by a non-linear SVM training system to solve a non-linear SVM training task (to produce a non-linear SVM based on a non-linear kernel).
- Context:
- It can be interpreted as maximizing the margin in the feature space induced by a Non-Linear Kernel (so that the distance from the decision boundary to the closest correctly classified training entities, the support vectors, is as wide as possible); a minimal training sketch appears after the See list below.
- Example(s):
- …
- Counter-Example(s):
- a Linear Kernel-based SVM Training Algorithm.
- See: Kernel-based SVM Algorithm, Discriminative Non-Linear Classifier Learning Algorithm, Linearly Separable, Quadratic Programming, Lagrange Multiplier, Karush–Kuhn–Tucker Conditions, Bias of an Estimator, Dual Problem, Maximum-Margin Hyperplane.
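The following is a minimal sketch of training such an algorithm, assuming scikit-learn is available; the dataset and hyperparameter values are illustrative, not prescribed by this concept.

```python
# Minimal sketch of a non-linear kernel-based SVM training run (RBF kernel).
# Assumes scikit-learn; the dataset and hyperparameter values are illustrative.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# A dataset that is not linearly separable in the input space.
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# SVC with a non-linear (RBF) kernel solves the dual quadratic program and
# maximizes the margin in the kernel-induced feature space.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)

print("number of support vectors:", clf.n_support_.sum())
print("test accuracy:            ", clf.score(X_test, y_test))
```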
References
2016
- https://www.quora.com/Why-is-kernelized-SVM-much-slower-than-linear-SVM
- QUOTE: Basically, a kernel-based SVM requires on the order of n^2 computation for training and on the order of n·d computation for classification, where n is the number of training examples and d the input dimension (and assuming that the number of support vectors ends up being a fraction of n, which is shown to be expected in theory and in practice). Instead, a 2-class linear SVM requires on the order of n·d computation for training (times the number of training iterations, which remains small even for large n) and on the order of d computations for classification.
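A rough illustration of this scaling contrast is sketched below, assuming scikit-learn; the sample sizes are illustrative and absolute timings are machine-dependent.

```python
# Rough illustration of the scaling contrast in the quote above: kernel-SVM
# training cost grows roughly with n^2 kernel evaluations, while a linear
# SVM's cost grows roughly with n*d. Assumes scikit-learn; sizes and timings
# are illustrative only.
import time
from sklearn.datasets import make_classification
from sklearn.svm import SVC, LinearSVC

for n in (1000, 2000, 4000):
    X, y = make_classification(n_samples=n, n_features=50, random_state=0)

    t0 = time.perf_counter()
    SVC(kernel="rbf").fit(X, y)       # dual solver over an n x n kernel matrix
    t1 = time.perf_counter()
    LinearSVC(dual=False).fit(X, y)   # primal solver, roughly linear in n*d
    t2 = time.perf_counter()

    print(f"n={n:5d}  rbf SVC: {t1 - t0:.2f}s  LinearSVC: {t2 - t1:.2f}s")
```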
2010
- (Chang et al., 2010) ⇒ Yin-Wen Chang, Cho-Jui Hsieh, Kai-Wei Chang, Michael Ringgaard, and Chih-Jen Lin. (2010). “Training and Testing Low-degree Polynomial Data Mappings via Linear SVM.” In: The Journal of Machine Learning Research, 11.
- QUOTE: … In this work, we apply fast linear-SVM methods to the explicit form of polynomially mapped data and investigate implementation issues. The approach enjoys fast training and testing, but may sometimes achieve accuracy close to that of using highly nonlinear kernels.
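The idea described in this quote can be sketched as below; this is not the authors' LIBLINEAR-based implementation, and the scikit-learn components, dataset, and parameter values are stand-in assumptions for illustration.

```python
# Sketch of the idea in Chang et al. (2010): explicitly expand the data with a
# low-degree polynomial mapping and train a fast linear SVM on the expanded
# features, instead of using an implicit polynomial kernel. Not the authors'
# implementation; all components and values here are illustrative.
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.svm import LinearSVC, SVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Explicit degree-2 mapping followed by a linear SVM.
mapped_linear = make_pipeline(PolynomialFeatures(degree=2), LinearSVC(dual=False))
mapped_linear.fit(X, y)

# A comparable model via the implicit degree-2 polynomial kernel.
poly_kernel = SVC(kernel="poly", degree=2)
poly_kernel.fit(X, y)

print("explicit mapping + linear SVM accuracy:", mapped_linear.score(X, y))
print("polynomial-kernel SVM accuracy:        ", poly_kernel.score(X, y))
```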