Linear Classifier
Latest revision as of 18:17, 24 January 2023
== References ==

=== 2004 ===
- (Hastie et al., 2004) ⇒ Trevor Hastie, Saharon Rosset, Robert Tibshirani, and Ji Zhu. (2004). “[http://www.jmlr.org/papers/volume5/hastie04a/hastie04a.pdf The Entire Regularization Path for the Support Vector Machine].” In: The Journal of Machine Learning Research, 5.
- … We start off with the simple case of a linear classifier, where our goal is to estimate a linear decision function
- ƒ(x) = β₀ + βᵀx,
- and its associated classifier
- Class(x) = sign[ƒ(x)].
- There are many ways to fit such a linear classifier, including linear regression, Fisher’s linear discriminant analysis, and logistic regression.
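The decision function and classifier quoted above can be sketched in a few lines. The following is a minimal illustration (not from the paper): it fits ƒ(x) = β₀ + βᵀx by ordinary least-squares regression on ±1 labels — one of the fitting methods the quote mentions — and classifies with sign[ƒ(x)]. The synthetic two-Gaussian data and the helper names `f` and `classify` are assumptions for the example.

```python
# Hypothetical sketch: fit a linear decision function f(x) = beta_0 + beta^T x
# by least-squares regression on +/-1 class labels, then classify via sign(f(x)).
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated Gaussian classes in 2-D, labeled -1 and +1 (made-up data).
X_neg = rng.normal(loc=-1.0, scale=0.5, size=(50, 2))
X_pos = rng.normal(loc=+1.0, scale=0.5, size=(50, 2))
X = np.vstack([X_neg, X_pos])
y = np.concatenate([-np.ones(50), np.ones(50)])

# Augment with a column of ones so the intercept beta_0 is the first coefficient.
A = np.hstack([np.ones((len(X), 1)), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
beta_0, beta_rest = beta[0], beta[1:]

def f(x):
    # The linear decision function f(x) = beta_0 + beta^T x.
    return beta_0 + beta_rest @ x

def classify(x):
    # The associated classifier Class(x) = sign[f(x)].
    return np.sign(f(x))

accuracy = np.mean([classify(x) == yi for x, yi in zip(X, y)])
```

Logistic regression or Fisher’s LDA would produce a different β but the same two-step structure: a linear score followed by a sign threshold.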