2009 LinearProbLogitModels
- (Ratner, 2009) ⇒ Bruce Ratner. (2009). “Linear Probability, Logit, and Probit Models: How Do They Differ?” http://www.geniq.net Webpage
Subject Headings: Logistic Regression Algorithm, Linear Regression Algorithm, Ordinary Least Squares Algorithm.
Notes
Quotes
- At the beginning of every day for the regression modeler, whose tasks are to predict a continuous dependent variable (e.g., profit) and a binary dependent variable (e.g., yes-no response), the ordinary least squares (OLS) regression model and the logistic regression model, respectively, are likely to be put to use, giving promise of another workday of successful models. The essence of any prediction model is the fitness function, which quantifies the optimality (goodness or accuracy) of a solution (predictions). The fitness function of the OLS regression model is mean squared error (MSE), which is minimized by calculus. … The OLS regression model is celebrating 204 years of popularity, as the invention of the method of least squares was on March 6, 1805.
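The MSE-minimizing solution described above can be sketched in a few lines. This is not code from Ratner's article; it is a minimal NumPy illustration (with simulated data) of the point that minimizing MSE by calculus yields the closed-form normal equations b = (X'X)⁻¹X'y.

```python
# Sketch (not from the source): OLS picks coefficients that minimize
# mean squared error; setting the MSE gradient to zero gives the
# normal equations, solved here directly.
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.normal(size=100)])  # intercept + one predictor
y = X @ np.array([2.0, 3.0]) + rng.normal(scale=0.1, size=100)  # simulated continuous dependent

b = np.linalg.solve(X.T @ X, X.T @ y)   # MSE-minimizing coefficients
mse = np.mean((y - X @ b) ** 2)         # the fitness function, evaluated at the optimum
print(b, mse)
```

Any other coefficient vector would produce a strictly larger MSE on this data, which is what "minimized by calculus" amounts to in practice.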
- The first use of OLS regression with a binary dependent variable has an intractable past: who, when, and why are not known. The pent-up need for a binary dependent-variable linear regression model was quite apparent, as once it was employed there was no turning back the clock. The general passion of the users of the new probability regression model resulted in renaming it the Linear Probability Model. The problems of the linear probability model are well known today. But its usage came to a quick halt when the probit model was invented.
- The fitness function of the logistic regression model (LRM) is the likelihood function, which is maximized by calculus (i.e., the method of maximum likelihood). [The likelihood function represents the joint probability of observing the data that have been collected. The term "joint probability" means a probability that combines the contributions of all the individuals in the study.] The logistic function has its roots in the 19th century, when the Belgian mathematician Verhulst invented the function, which he named logistic, to describe population growth. The rediscovery of the function in 1920 is due to Pearl and Reed, the survival of the term logistic to Yule, and the introduction of the function into statistics to Berkson. Berkson used the logistic function in his regression model as an alternative to the normal-probability probit model, usually credited to Bliss in 1934, and sometimes to Gaddum in 1933. (However, the probit can first be traced to Fechner in 1860.) As of 1944, Berkson’s LRM was not accepted as a viable alternative to Bliss’ probit. After the ideological debate about the logistic and probit had abated in the 1960s, Berkson’s logistic gained wide acceptance. Berkson was much derided for coining the term “logit” by analogy to the probit of Bliss, who coined the term probit for “probability unit.”
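The likelihood-as-fitness-function idea above can also be sketched concretely. The following is an illustrative NumPy example, not code from the source: it writes out the log-likelihood (the log of the joint probability of the observed 0/1 outcomes) for a logistic model and maximizes it by plain gradient ascent on simulated data; real software typically uses Newton-type iterations instead.

```python
# Sketch (assumptions, not Ratner's method): the LRM's fitness function
# is the likelihood; maximizing its log by gradient ascent recovers the
# coefficients of the simulated data-generating process.
import numpy as np

def log_likelihood(b, X, y):
    p = 1.0 / (1.0 + np.exp(-X @ b))    # Verhulst's logistic function
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
true_b = np.array([-1.0, 2.0])                       # hypothetical true coefficients
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X @ true_b))).astype(float)

b = np.zeros(2)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ b))
    b += 0.1 * X.T @ (y - p) / len(y)   # ascend the log-likelihood gradient
print(b, log_likelihood(b, X, y))
```

The fitted coefficients land near the true values, and the log-likelihood at the optimum exceeds that of any naive constant-probability model, which is exactly the "maximized by calculus" criterion in action.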
Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year
---|---|---|---|---|---|---|---|---|---
Bruce Ratner | | | Linear Probability, Logit, and Probit Models: How Do They Differ? | | | http://www.geniq.net/res/Linear-Prob-Logit-Probit-Models.html | | | 2009