Discriminative Statistical Model Family
A Discriminative Statistical Model Family is a statistical model class that estimates the conditional probability distribution [math]\displaystyle{ p(\bf{Y}|\bf{X}) }[/math] of a dependent variable [math]\displaystyle{ \bf{Y} }[/math] given independent variables [math]\displaystyle{ \bf{X} }[/math] (in order to predict [math]\displaystyle{ p(y|\hat{x}) }[/math] for a previously unseen [math]\displaystyle{ \hat{x} }[/math]; see the sketch after the bullet points below).
- AKA: Discriminative Method, Discriminative Probabilistic Model, Conditional Model.
- Context:
- It can (typically) be in a Generative-Discriminative Relation with a Generative Statistical Model Family.
- It can be used by a Discriminative Learning System (that applies a discriminative learning algorithm to a discriminative model learning task to produce a discriminative model instance).
- It cannot, unlike a Generative Statistical Model, represent the Joint Distribution [math]\displaystyle{ p(\bf{X},\bf{Y}) }[/math].
- …
- Example(s):
- a Logistic Regression Model.
- a Conditional Random Fields Model.
- Counter-Example(s):
- a Generative Statistical Model Family, such as a Naive Bayes Model or Linear Discriminant Analysis.
- See: Discriminant Learning Algorithm, Inductive Logic System.
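To make the definition above concrete, the following is a minimal sketch assuming scikit-learn's LogisticRegression (a standard discriminative model instance); the toy data and the query point are hypothetical.

```python
# A minimal sketch, assuming scikit-learn's LogisticRegression (a
# standard discriminative model instance): the model estimates the
# conditional distribution p(y | x) directly, without modeling p(x).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical toy training data: two features, binary label.
X = np.array([[0.2, 1.1], [1.5, 0.3], [2.9, 2.2], [3.1, 0.9]])
y = np.array([0, 0, 1, 1])

model = LogisticRegression().fit(X, y)

# For a previously unseen x_hat, predict_proba returns the estimated
# conditional distribution p(y | x_hat): one probability per class.
x_hat = np.array([[1.0, 1.0]])
print(model.predict_proba(x_hat))  # two class probabilities summing to 1
```

Nothing in the fitted model describes the distribution of the inputs themselves, which is exactly what separates a discriminative model instance from a generative one.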
References
2013
- (Wikipedia, 2013) ⇒ http://en.wikipedia.org/wiki/Discriminative_model
- Discriminative models, also called conditional models, are a class of models used in machine learning for modeling the dependence of an unobserved variable [math]\displaystyle{ y }[/math] on an observed variable [math]\displaystyle{ x }[/math]. Within a probabilistic framework, this is done by modeling the conditional probability distribution [math]\displaystyle{ P(y|x) }[/math], which can be used for predicting [math]\displaystyle{ y }[/math] from [math]\displaystyle{ x }[/math].
Discriminative models differ from generative models in that they do not allow one to generate samples from the joint distribution of [math]\displaystyle{ x }[/math] and [math]\displaystyle{ y }[/math]. However, for tasks such as classification and regression that do not require the joint distribution, discriminative models can yield superior performance.[1][2] On the other hand, generative models are typically more flexible than discriminative models in expressing dependencies in complex learning tasks. In addition, most discriminative models are inherently supervised and cannot easily be extended to unsupervised learning. Application specific details ultimately dictate the suitability of selecting a discriminative versus generative model.
- ↑ P. Singla and P. Domingos. Discriminative training of Markov logic networks. In AAAI, 2005.
- ↑ J. Lafferty, A. McCallum, and F. Pereira. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In ICML, 2001.
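To make the quoted generative/discriminative contrast concrete, here is a hedged sketch assuming scikit-learn: a Gaussian naive Bayes model fits a prior [math]\displaystyle{ p(y) }[/math] and class-conditional densities [math]\displaystyle{ p(x|y) }[/math], so the joint distribution can be sampled from, while logistic regression exposes only [math]\displaystyle{ P(y|x) }[/math]. The attribute names used (class_prior_, theta_, var_) follow recent scikit-learn releases.

```python
# A hedged sketch of the contrast above, assuming scikit-learn:
# GaussianNB is generative (it fits a prior p(y) and per-class Gaussian
# densities p(x|y), so the joint p(x, y) can be sampled from), while
# LogisticRegression is discriminative (it exposes only p(y|x)).
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0.0, 0.0], 1.0, (50, 2)),
               rng.normal([2.0, 2.0], 1.0, (50, 2))])
y = np.repeat([0, 1], 50)

gen = GaussianNB().fit(X, y)
# Generative sampling: draw y from p(y), then x from the fitted p(x|y).
label = rng.choice(2, p=gen.class_prior_)
x_sample = rng.normal(gen.theta_[label], np.sqrt(gen.var_[label]))
print("sampled (x, y):", x_sample, label)

disc = LogisticRegression().fit(X, y)
# Discriminative: only p(y|x) is available; there is no fitted p(x)
# from which inputs could be sampled.
print("p(y | x_hat):", disc.predict_proba([[1.0, 1.0]]))
```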
2004
- (Bouchard & Triggs, 2004) ⇒ Guillaume Bouchard, and Bill Triggs. (2004). “The Trade-off Between Generative and Discriminative Classifiers.” In: Proceedings of COMPSTAT 2004.
- QUOTE: In supervised classification, inputs [math]\displaystyle{ x }[/math] and their labels [math]\displaystyle{ y }[/math] arise from an unknown joint probability [math]\displaystyle{ p(x,y) }[/math]. If we can approximate [math]\displaystyle{ p(x,y) }[/math] using a parametric family of models [math]\displaystyle{ G = \{p_\theta(x,y), \theta \in T\} }[/math], then a natural classifier is obtained by first estimating the class-conditional densities, then classifying each new data point to the class with highest posterior probability. This approach is called generative classification.
However, if the overall goal is to find the classification rule with the smallest error rate, this depends only on the conditional density [math]\displaystyle{ p(y \vert x) }[/math]. Discriminative methods directly model the conditional distribution, without assuming anything about the input distribution [math]\displaystyle{ p(x) }[/math]. Well-known generative-discriminative pairs include Linear Discriminant Analysis (LDA) vs. linear logistic regression and naive Bayes vs. Generalized Additive Models (GAM). Many authors have already studied these models, e.g. [5,6]. Under the assumption that the underlying distributions are Gaussian with equal covariances, it is known that LDA requires less data than its discriminative counterpart, linear logistic regression [3]. More generally, it is known that generative classifiers have a smaller variance than their discriminative counterparts.
Conversely, the generative approach converges to the best model for the joint distribution [math]\displaystyle{ p(x,y) }[/math], but the resulting conditional density is usually a biased classifier unless its [math]\displaystyle{ p_\theta(x) }[/math] part is an accurate model for [math]\displaystyle{ p(x) }[/math]. In real-world problems the assumed generative model is rarely exact, and asymptotically a discriminative classifier should typically be preferred [9, 5]. The key argument is that the discriminative estimator converges to the conditional density that minimizes the negative log-likelihood classification loss against the true density [math]\displaystyle{ p(x,y) }[/math] [2]. For finite sample sizes, there is a bias-variance tradeoff and it is less obvious how to choose between generative and discriminative classifiers.
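A minimal sketch of the generative-discriminative pair named in the quote (LDA vs. linear logistic regression), assuming scikit-learn; the synthetic data is hypothetical and is drawn to match LDA's Gaussian equal-covariance assumption, the regime in which the quote notes LDA needs less data.

```python
# A minimal sketch of the generative-discriminative pair named above
# (LDA vs. linear logistic regression), assuming scikit-learn. The
# hypothetical data matches LDA's modeling assumption: Gaussian classes
# with a shared covariance.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0.0, 0.0], 1.0, (200, 2)),
               rng.normal([2.0, 2.0], 1.0, (200, 2))])
y = np.repeat([0, 1], 200)

# Generative route: LDA estimates p(x|y) and p(y), then derives p(y|x).
lda = LinearDiscriminantAnalysis().fit(X, y)
# Discriminative route: logistic regression fits p(y|x) directly.
logit = LogisticRegression().fit(X, y)

x_hat = [[1.0, 1.0]]
print("LDA   p(y|x):", lda.predict_proba(x_hat))
print("logit p(y|x):", logit.predict_proba(x_hat))
```

Both models learn linear decision boundaries here, but they estimate them by different routes, which is the finite-sample bias-variance tradeoff the quote describes.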