Linear Decision Function
A Linear Decision Function is a Decision Function that is a Linear Function.
- Example(s): the linear decision function [math]\displaystyle{ f(x) = \beta_0 + \beta^T x }[/math] estimated by a Linear Classifier, such as a linear SVM (see Hastie et al., 2004 below).
- Counter-Example(s): a non-linear Decision Function, such as a generalized decision function with non-linear [math]\displaystyle{ f_i(x) }[/math] (see Vijayarekha, 2018 below).
- See: Linear Decision Boundary, Classification Task, Linear Regression, Linear Classifier, Decision Tree Learning.
References
2018
- (Vijayarekha, 2018) ⇒ K. Vijayarekha. "Generalized Decision Functions." School of Electrical and Electronics Engineering, SASTRA University, Thanjavur. Retrieved: 2018-10-12.
- QUOTE: If pattern classes with more than one pattern are present, each class can still be separated from the rest of the pattern classes. The separation is achieved with decision functions, which may be either linear or non-linear. In some cases the decision boundary is simple, whereas in others it is complex; a complex decision boundary can lead to a complicated non-linear system. Hence a generalized method for finding the decision function is useful. One way to generalize the concept of a linear decision function is the generalized decision function
[math]\displaystyle{ d(x)=w_1f_1(x)+w_2f_2(x)+\cdots+w_nf_n(x)+w_{n+1} }[/math]
where [math]\displaystyle{ f_i(x) }[/math], [math]\displaystyle{ i = 1, \ldots, n }[/math], are scalar functions of the pattern sample [math]\displaystyle{ x }[/math], [math]\displaystyle{ x \in \mathbb{R}^n }[/math].
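As a minimal sketch (the feature functions, weights, and names below are illustrative assumptions, not from the source), such a generalized decision function can be evaluated in Python as:
```python
import numpy as np

# Hypothetical scalar functions f_i(x) of a 2-D pattern x = (x1, x2).
# With f_i(x) = x_i this reduces to an ordinary linear decision function;
# a non-linear choice such as x1*x2 gives a generalized decision function
# that is still linear in the weights w.
def feature_functions(x):
    x1, x2 = x
    return np.array([x1, x2, x1 * x2])  # f_1, f_2, f_3

def generalized_decision(x, w):
    """d(x) = w_1 f_1(x) + ... + w_n f_n(x) + w_{n+1}."""
    return w[:-1] @ feature_functions(x) + w[-1]

w = np.array([1.0, -2.0, 0.5, 0.3])  # illustrative weights; w[-1] is w_{n+1}
x = np.array([2.0, 1.0])
print(generalized_decision(x, w))    # scalar value; its sign gives the class
```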
2004
- (Hastie et al., 2004) ⇒ Trevor Hastie, Saharon Rosset, Robert Tibshirani, and Ji Zhu. (2004). “The Entire Regularization Path for the Support Vector Machine.” In: The Journal of Machine Learning Research, 5.
- QUOTE: We have a set of [math]\displaystyle{ n }[/math] training pairs [math]\displaystyle{ x_i,\; y_i }[/math], where [math]\displaystyle{ x_i \in \mathbb{R}^p }[/math] is a p-vector of real-valued predictors (attributes) for the ith observation, and [math]\displaystyle{ y_i \in \{-1, +1\} }[/math] codes its binary response. We start off with the simple case of a linear classifier, where our goal is to estimate a linear decision function
[math]\displaystyle{ f(x) = \beta_0 +\beta^T x,\quad }[/math](1)
and its associated classifier
[math]\displaystyle{ C(x) = \operatorname{sign}[f(x)]\quad }[/math](2)
There are many ways to fit such a linear classifier, including linear regression, Fisher’s linear discriminant analysis, and logistic regression [Hastie et al., 2001, Chapter 4].
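As a minimal sketch of Equations (1) and (2) (the synthetic data and function names are illustrative assumptions, not from the paper), a linear decision function can be fit by least-squares regression on the ±1-coded response, one of the fitting methods the authors list:
```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n training pairs (x_i, y_i), with x_i in R^p and y_i in {-1, +1}.
n, p = 100, 2
X = rng.normal(size=(n, p))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)

# Least-squares linear regression on the +/-1 coded response. Augment X
# with a column of ones so beta_0 is estimated jointly with beta.
A = np.hstack([np.ones((n, 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
beta0, beta = coef[0], coef[1:]

def f(x):
    """Linear decision function f(x) = beta_0 + beta^T x (Eq. 1)."""
    return beta0 + beta @ x

def C(x):
    """Associated classifier C(x) = sign(f(x)) (Eq. 2)."""
    return np.sign(f(x))

print(C(np.array([1.0, 0.5])))  # -> 1.0 on this toy example
```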