Margin Classifier
See: Margin, Classifier, Linear Classifier, Maximum-Margin Classifier, Support Vector Machine Classifier.
References
2009
- http://en.wikipedia.org/wiki/Margin_classifier
- In machine learning, a margin classifier is a classifier which is able to give an associated distance from the decision boundary for each example. For instance, if a linear classifier (e.g. perceptron or linear discriminant analysis) is used, the distance (typically Euclidean distance, though others may be used) of an example from the separating hyperplane is the margin of that example.
- The notion of margin is important in several machine learning classification algorithms, as it can be used to bound the generalization error of the classifier. These bounds are frequently shown using the VC dimension. Of particular prominence is the generalization error bound on boosting algorithms and support vector machines.
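The geometric margin described in the excerpt above can be computed directly from a linear classifier's weight vector and bias. The sketch below is an illustration, not part of the cited sources; the hyperplane parameters `w` and `b` are assumed to be given (e.g. from a trained perceptron), and the margin of each example is its signed Euclidean distance from the separating hyperplane.

```python
import numpy as np

def geometric_margins(X, y, w, b):
    """Signed Euclidean distance of each example from the hyperplane w.x + b = 0.

    X : (n, p) array of examples, y : (n,) labels in {-1, +1},
    w : (p,) weight vector, b : scalar bias.
    A positive value means the example lies on the correct side of the boundary.
    """
    return y * (X @ w + b) / np.linalg.norm(w)

# Toy usage with an assumed hyperplane x1 + x2 - 1 = 0
X = np.array([[2.0, 2.0], [0.0, 0.0], [1.5, 0.0]])
y = np.array([+1, -1, +1])
w = np.array([1.0, 1.0])
b = -1.0
print(geometric_margins(X, y, w, b))  # per-example margins
```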
- http://en.wikipedia.org/wiki/Support_vector_machine#Motivation
- Classifying data is a common task in machine learning. Suppose some given data points each belong to one of two classes, and the goal is to decide which class a new data point will be in. In the case of support vector machines, a data point is viewed as a [math]p[/math]-dimensional vector (a list of [math]p[/math] numbers), and we want to know whether we can separate such points with a [math]p-1[/math]-dimensional hyperplane. This is called a linear classifier. There are many hyperplanes that might classify the data. One reasonable choice as the best hyperplane is the one that represents the largest separation, or margin, between the two classes. So we choose the hyperplane so that the distance from it to the nearest data point on each side is maximized. If such a hyperplane exists, it is known as the maximum-margin hyperplane and the linear classifier it defines is known as a maximum margin classifier.
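As a concrete illustration of the maximum-margin hyperplane described in the excerpt above, the sketch below fits a linear support vector machine on a small linearly separable toy set. The library (scikit-learn), the toy data, and the parameter choices are illustrative assumptions, not part of the cited sources. With the data scaled to the SVM's canonical form, the nearest points satisfy |w.x + b| = 1, so the distance from the hyperplane to the nearest point on each side is 1/||w||.

```python
import numpy as np
from sklearn.svm import SVC

# Small linearly separable toy data (illustrative only)
X = np.array([[2.0, 2.0], [3.0, 3.0], [2.5, 3.5],
              [0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
y = np.array([+1, +1, +1, -1, -1, -1])

# A large C approximates a hard-margin SVM on separable data
clf = SVC(kernel="linear", C=1e6)
clf.fit(X, y)

w = clf.coef_[0]
b = clf.intercept_[0]

# Distance from the maximum-margin hyperplane to the nearest point on each side
margin = 1.0 / np.linalg.norm(w)
print("w =", w, "b =", b)
print("distance to nearest point (margin):", margin)

# Signed distance of every training point from the hyperplane
print("per-example distances:", clf.decision_function(X) / np.linalg.norm(w))
```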