Linear Decision Boundary
A Linear Decision Boundary is a decision boundary that can be expressed as a linear function of the input features, i.e. a hyperplane of the form [math]\displaystyle{ w^T x + b = 0 }[/math].
- Example(s):
- the separating hyperplane learned by a Linear Classifier, such as a Linear SVM or a Logistic Regression classifier.
- Counter-Example(s):
- a Non-Linear Decision Boundary, such as one produced by an SVM with an RBF kernel.
- See: Linear Kernel, Linear Classifier.
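The following is a minimal sketch (not from the quoted source) of how a linear decision boundary classifies points: each point is assigned a class by the sign of a linear function of its features. The weight vector w and bias b here are hypothetical values chosen only for illustration.

```python
import numpy as np

# Hypothetical weights and bias defining the hyperplane w^T x + b = 0.
w = np.array([2.0, -1.0])
b = 0.5

def predict(x):
    """Classify a point by which side of the linear decision boundary it lies on."""
    return 1 if np.dot(w, x) + b >= 0 else -1

print(predict(np.array([1.0, 1.0])))   # 1  (w^T x + b = 1.5 > 0)
print(predict(np.array([-1.0, 1.0])))  # -1 (w^T x + b = -2.5 < 0)
```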
References
2015
- http://spark.apache.org/docs/latest/mllib-linear-methods.html
- QUOTE: Many standard machine learning methods can be formulated as a convex optimization problem, i.e. the task of finding a minimizer of a convex function f that depends on a variable vector w (called weights in the code), which has d entries. Formally, we can write this as the optimization problem [math]\displaystyle{ \min_{w \in \mathbb{R}^d} f(w) }[/math], where the objective function is of the form: [math]\displaystyle{ f(w) := \lambda\, R(w) + \frac{1}{n} \sum_{i=1}^n L(w; x_i, y_i) \qquad (1) }[/math] Here the vectors [math]\displaystyle{ x_i \in \mathbb{R}^d }[/math] are the training data examples, for [math]\displaystyle{ 1 \le i \le n }[/math], and [math]\displaystyle{ y_i \in \mathbb{R} }[/math] are their corresponding labels, which we want to predict. We call the method linear if [math]\displaystyle{ L(w; x, y) }[/math] can be expressed as a function of [math]\displaystyle{ w^T x }[/math] and [math]\displaystyle{ y }[/math]. Several of MLlib’s classification and regression algorithms fall into this category, and are discussed here.
The objective function f has two parts: the regularizer that controls the complexity of the model, and the loss that measures the error of the model on the training data. The loss function [math]\displaystyle{ L(w; \cdot) }[/math] is typically a convex function in w. The fixed regularization parameter [math]\displaystyle{ \lambda \ge 0 }[/math] (regParam in the code) defines the trade-off between the two goals of minimizing the loss (i.e., training error) and minimizing model complexity (i.e., to avoid overfitting).
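As a sketch of equation (1), the snippet below evaluates the regularized objective for one particular pairing of loss and regularizer: logistic loss with labels in {-1, +1} and an L2 regularizer. These specific choices are assumptions for illustration; the quoted text leaves L and R general.

```python
import numpy as np

def objective(w, X, y, lam):
    """f(w) = lam * R(w) + (1/n) * sum_i L(w; x_i, y_i), with assumed choices
    R(w) = 0.5 * ||w||^2 (L2 regularizer) and logistic loss
    L(w; x, y) = log(1 + exp(-y * w^T x)) for labels y in {-1, +1}."""
    margins = y * (X @ w)                       # y_i * w^T x_i for each example
    loss = np.mean(np.log1p(np.exp(-margins)))  # average logistic loss
    reg = 0.5 * np.dot(w, w)                    # L2 regularizer R(w)
    return lam * reg + loss

# Toy data: n = 4 examples in d = 2 dimensions, labels in {-1, +1}.
X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = np.array([0.5, 0.5])

print(objective(w, X, y, lam=0.1))  # value of f(w) at this weight vector
```

Because both the logistic loss and the L2 regularizer are convex in w, the resulting f is convex, which is exactly the property the quoted formulation relies on.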