Latest revision as of 23:02, 26 November 2023
A Support Vector Machine (SVM) Classification System is an SVM System that is a Supervised Classification System.
- AKA: Support Vector Classification System.
- Example(s):
  - SVMlight.
  - TinySVM.
  - LIBSVM System.
  - A sklearn.svm Classification System, such as:
    - sklearn.svm.LinearSVC.
    - sklearn.svm.NuSVC.
    - sklearn.svm.SVC.
  - …
- Counter-Example(s):
  - an SVM Ranking System.
  - an SVM Regression System.
- See: Neural Network System.
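As a minimal sketch of how the sklearn.svm classifiers listed above are used (assuming scikit-learn is installed; the toy data is an illustrative assumption, not from the source), all three expose the same fit/predict interface:

```python
# Minimal sketch: sklearn.svm classifiers share a common fit/predict API.
# scikit-learn is assumed to be installed; the data is illustrative only.
from sklearn.svm import SVC, LinearSVC, NuSVC

# Two linearly separable clusters of 2-D points.
X = [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]]
y = [0, 0, 0, 1, 1, 1]

for Model in (SVC, LinearSVC, NuSVC):
    clf = Model().fit(X, y)  # train a support vector classifier
    print(Model.__name__, clf.predict([[0, 0], [6, 6]]))
```

The three classes differ mainly in formulation: LinearSVC is restricted to a linear decision boundary, SVC supports arbitrary kernels, and NuSVC reparameterizes the regularization via a `nu` parameter.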
References
2018
- (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/Support_vector_machine Retrieved:2018-4-8.
- In machine learning, support vector machines (SVMs, also support vector networks[1]) are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis. Given a set of training examples, each marked as belonging to one or the other of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier (although methods such as Platt scaling exist to use SVM in a probabilistic classification setting). An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall.
In addition to performing linear classification, SVMs can efficiently perform a non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces.
When data are not labeled, supervised learning is not possible, and an unsupervised learning approach is required, which attempts to find natural clustering of the data to groups, and then map new data to these formed groups. The support vector clustering[2] algorithm created by Hava Siegelmann and Vladimir Vapnik, applies the statistics of support vectors, developed in the support vector machines algorithm, to categorize unlabeled data, and is one of the most widely used clustering algorithms in industrial applications.
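The kernel trick mentioned above can be illustrated with a small sketch (assuming scikit-learn; the XOR data and the gamma/C values are illustrative assumptions): no linear classifier can separate the XOR pattern, but an RBF-kernel SVM does so by implicitly mapping the inputs into a higher-dimensional feature space.

```python
# Sketch: the XOR pattern is not linearly separable, but an RBF-kernel SVM
# separates it via the kernel trick (implicit high-dimensional mapping).
# scikit-learn is assumed; gamma and C values are illustrative choices.
from sklearn.svm import SVC

X = [[0, 0], [1, 1], [0, 1], [1, 0]]  # XOR inputs
y = [0, 0, 1, 1]                      # XOR labels

linear = SVC(kernel="linear").fit(X, y)
rbf = SVC(kernel="rbf", gamma=1.0, C=10.0).fit(X, y)

print("linear train accuracy:", linear.score(X, y))  # below 1.0: XOR is not linearly separable
print("rbf train accuracy:", rbf.score(X, y))
```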
2017
- (Zhang, 2017) ⇒ Zhang X. (2017) Support Vector Machines. In: Sammut, C., Webb, G.I. (eds) Encyclopedia of Machine Learning and Data Mining. Springer, Boston, MA
- QUOTE: Support vector machines (SVMs) are a class of linear algorithms which can be used for classification, regression, density estimation, novelty detection, etc. In the simplest case of two-class classification, SVMs find a hyperplane that separates the two classes of data with as wide a margin as possible. This leads to good generalization accuracy on unseen data and supports specialized optimization methods that allow SVM to learn from a large amount of data.
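The "as wide a margin as possible" criterion in the quote above can be made concrete with a stdlib-only sketch (the data and the two candidate hyperplanes are illustrative assumptions): the geometric margin of a hyperplane w·x + b = 0 is the smallest distance from any training point to it, and SVM training selects the separating hyperplane that maximizes this quantity.

```python
# Sketch (stdlib only): computing the geometric margin of a separating
# hyperplane w.x + b = 0. SVM training maximizes this margin; the data
# and candidate hyperplanes below are illustrative assumptions.
import math

X = [(0.0, 0.0), (0.0, 1.0), (3.0, 0.0), (3.0, 1.0)]
y = [-1, -1, 1, 1]

def margin(w, b):
    norm = math.hypot(*w)
    # signed distance of each point to the hyperplane, by its label;
    # the minimum is negative if any point is misclassified
    dists = [yi * (w[0] * x1 + w[1] * x2 + b) / norm
             for (x1, x2), yi in zip(X, y)]
    return min(dists)

print(margin((1.0, 0.0), -1.0))  # line x1 = 1.0: separates with margin 1.0
print(margin((1.0, 0.0), -1.5))  # line x1 = 1.5: the maximum-margin separator here, margin 1.5
```

The second hyperplane sits midway between the two classes, so its margin (1.5) is the widest achievable for this data; the wide-margin choice is what the quote credits for good generalization on unseen data.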
- ↑ Cortes, Corinna; Vapnik, Vladimir N. (1995). "Support-vector networks". Machine Learning. 20 (3): 273–297. doi:10.1007/BF00994018.
- ↑ Ben-Hur, Asa; Horn, David; Siegelmann, Hava; Vapnik, Vladimir N. (2001). "Support vector clustering". Journal of Machine Learning Research. 2: 125–137.