Multinomial Logistic Regression Algorithm
A Multinomial Logistic Regression Algorithm is a logistic regression algorithm that is a supervised multinomial classification algorithm and that fits a multinomial logistic function (see the formula after the list below).
- AKA: Softmax Regression, Multinomial Logit, MNL.
- Context:
- It can be applied by a Multinomial Logistic Regression System (to solve a Multinomial Logistic Regression Task).
- …
- Counter-Example(s):
- See: Logistic Loss Function, Multinomial Distribution, Statistical Independence, Maximum A Posteriori, Softmax Activation Function, Multinomial Probit Algorithm, Linear Predictor Function, Regression Coefficient.
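Concretely, the multinomial logistic (softmax) function mentioned in the definition above gives, in one standard formulation, the probability that observation i has outcome k (the notation anticipates the linear-predictor section below):
: [math]\displaystyle{ \Pr(Y_i = k) = \frac{e^{\boldsymbol\beta_k \cdot \mathbf{x}_i}}{\sum_{j=1}^{K} e^{\boldsymbol\beta_j \cdot \mathbf{x}_i}} }[/math]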
References
2017
- (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/Multinomial_logistic_regression#Setup Retrieved:2017-7-24.
- The basic setup is the same as in logistic regression, the only difference being that the dependent variables are categorical rather than binary, i.e. there are K possible outcomes rather than just two. The following description is somewhat shortened; for more details, consult the logistic regression article.
2017
- (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/Multinomial_logistic_regression#Linear_predictor Retrieved:2017-7-24.
- As in other forms of linear regression, multinomial logistic regression uses a linear predictor function [math]\displaystyle{ f(k,i) }[/math] to predict the probability that observation i has outcome k, of the following form:
: [math]\displaystyle{ f(k,i) = \beta_{0,k} + \beta_{1,k} x_{1,i} + \beta_{2,k} x_{2,i} + \cdots + \beta_{M,k} x_{M,i}, }[/math]
where [math]\displaystyle{ \beta_{m,k} }[/math] is a regression coefficient associated with the mth explanatory variable and the kth outcome. As explained in the logistic regression article, the regression coefficients and explanatory variables are normally grouped into vectors of size M+1, so that the predictor function can be written more compactly:
: [math]\displaystyle{ f(k,i) = \boldsymbol\beta_k \cdot \mathbf{x}_i, }[/math]
where [math]\displaystyle{ \boldsymbol\beta_k }[/math] is the set of regression coefficients associated with outcome k, and [math]\displaystyle{ \mathbf{x}_i }[/math] (a row vector) is the set of explanatory variables associated with observation i.
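For concreteness, here is a minimal sketch (in Python with NumPy; all coefficient and feature values are hypothetical) of computing the K linear predictors [math]\displaystyle{ f(k,i) = \boldsymbol\beta_k \cdot \mathbf{x}_i }[/math] for one observation and mapping them to outcome probabilities with the softmax function given earlier:

```python
import numpy as np

def linear_predictors(B, x):
    """Compute f(k, i) = beta_k . x_i for every outcome k.

    B : (K, M+1) array, one row vector of coefficients beta_k per outcome.
    x : (M+1,) array, explanatory variables for observation i,
        with a leading 1 so that beta_{0,k} acts as the intercept.
    """
    return B @ x  # shape (K,): one score per outcome

def softmax(scores):
    """Map K real-valued scores to probabilities that sum to 1."""
    z = scores - scores.max()  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical example: K = 3 outcomes, M = 2 explanatory variables.
B = np.array([[ 0.1,  0.5, -0.2],
              [ 0.0, -0.3,  0.8],
              [-0.4,  0.2,  0.1]])
x = np.array([1.0, 2.0, -1.0])  # leading 1.0 is the intercept term

p = softmax(linear_predictors(B, x))
print(p, p.sum())  # three outcome probabilities, summing to 1.0
```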
2012
- (Mehmet Süzen, 2012) ⇒ http://science-memo.blogspot.ca/2012/10/hands-on-crash-tutorial-for-supervised.html
- If you are a computer scientist, you would probably call supervised learning what others might call classification based on data, where data is anything that has a well-defined structure and associated classes. We will work through a hands-on example of multi-class prediction with logistic regression, i.e. multinomial logistic regression (MNL). By multi-class I mean that we have k ∈ Z distinct classes for each observation, with k > 1. If you think in terms of link functions from Generalized Linear Models (GLMs), the support of the response distribution tells you the nature of the classes. In the statistics literature, classes can manifest as factors (or categories).
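As a minimal sketch of such a multi-class fit (assuming scikit-learn is available; the original tutorial may use different tools, and the toy data here is randomly generated purely for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical toy data: 100 observations, 4 explanatory variables,
# and k = 3 distinct classes (k > 1, as required for the multi-class case).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = rng.integers(0, 3, size=100)

# multi_class="multinomial" fits one coefficient vector per class and
# normalizes the linear predictors with the softmax function.
model = LogisticRegression(multi_class="multinomial", solver="lbfgs")
model.fit(X, y)

print(model.coef_.shape)           # (3, 4): one coefficient row per class
print(model.predict_proba(X[:2]))  # class probabilities for two observations
```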