Posterior Probability Function
A posterior probability function is a probability function that assigns posterior probability values, i.e., the conditional probabilities of events after relevant evidence from observed events has been taken into account.
- Context:
- output: a Posterior Probability.
- It reports the probability of an event after an observation has been made.
- It can be derived from a Prior Probability Distribution and a Likelihood Function via Bayes' Theorem (see the sketch after this list).
- It can range from being a Posterior Probability Mass Function to being a Posterior Probability Density Function.
- Example(s):
- [math]\displaystyle{ P(A_i \mid B) }[/math], the Posterior Probability of Event [math]\displaystyle{ A_i }[/math] given Event [math]\displaystyle{ B }[/math].
- …
- Counter-Example(s):
- a Prior Probability Function, which reports the probability before the evidence is observed.
- See: Joint Probability Distribution, Bayes Rule, Discriminative Model.
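As a minimal sketch of the derivation noted in the Context above (the hypotheses and numbers below are illustrative, not from the original text), Bayes' Rule turns a prior probability mass function and a likelihood into a posterior probability mass function:

```python
import numpy as np

# Illustrative discrete example: posterior over three mutually exclusive
# hypotheses A_1, A_2, A_3 after observing evidence B.
prior = np.array([0.5, 0.3, 0.2])        # P(A_i), the prior probability mass function
likelihood = np.array([0.9, 0.5, 0.1])   # P(B | A_i), the likelihood of the observation

unnormalized = prior * likelihood              # P(B | A_i) * P(A_i)
posterior = unnormalized / unnormalized.sum()  # divide by P(B), the normalizing constant

print(posterior)   # P(A_i | B); the entries sum to 1
```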
References
2009
- (Wikipedia, 2009) ⇒ http://en.wikipedia.org/wiki/Posterior_probability
- The posterior probability of a Random Event or an uncertain proposition is the Conditional Probability that is assigned after the relevant evidence is taken into account.
- The posterior probability distribution of one Random Variable given the value of another can be calculated with Bayes' Theorem by multiplying the Prior Probability Distribution by the Likelihood Function, and then dividing by the Normalizing Constant, as follows:
- [math]\displaystyle{ f_{X\mid Y=y}(x)={f_X(x) L_{X\mid Y=y}(x) \over {\int_{-\infty}^\infty f_X(x) L_{X\mid Y=y}(x)\,dx}} }[/math]
- gives the posterior Probability Density Function for a random variable [math]\displaystyle{ X }[/math] given the data [math]\displaystyle{ Y }[/math] = [math]\displaystyle{ y }[/math], where
- [math]\displaystyle{ f_X(x) }[/math] is the prior density of X,
- [math]\displaystyle{ L_{X\mid Y=y}(x) = f_{Y\mid X=x}(y) }[/math] is the likelihood function as a function of x,
- [math]\displaystyle{ \int_{-\infty}^\infty f_X(x) L_{X\mid Y=y}(x)\,dx }[/math] is the normalizing constant, and
- [math]\displaystyle{ f_{X\mid Y=y}(x) }[/math] is the posterior density of [math]\displaystyle{ X }[/math] given the data [math]\displaystyle{ Y }[/math] = y.
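As a minimal numerical sketch of the formula above, the posterior density can be approximated on a grid; the standard-normal prior on [math]\displaystyle{ X }[/math] and the Gaussian noise model for [math]\displaystyle{ Y }[/math] below are illustrative assumptions, not part of the quoted text.

```python
import numpy as np

# Numerical sketch of f_{X|Y=y}(x) = f_X(x) L_{X|Y=y}(x) / (normalizing integral).
# Assumed for illustration: X ~ Normal(0, 1) prior, Y | X=x ~ Normal(x, 0.5^2).
x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]

prior = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)       # f_X(x)
y = 1.2                                              # observed data Y = y
sigma = 0.5
likelihood = np.exp(-(y - x)**2 / (2 * sigma**2))    # L_{X|Y=y}(x) = f_{Y|X=x}(y), up to a constant

unnormalized = prior * likelihood
posterior = unnormalized / np.trapz(unnormalized, dx=dx)  # divide by the normalizing constant

print(np.trapz(posterior, dx=dx))   # ~1.0: a proper posterior density
```

The constant factor dropped from the likelihood cancels in the normalization, which is why only the shape of [math]\displaystyle{ L_{X\mid Y=y}(x) }[/math] matters.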
- (Wikipedia, 2009) ⇒ http://en.wikipedia.org/wiki/Fair_coin
- In probability theory and statistics, a sequence of independent Bernoulli trials with probability 1/2 of success on each trial is metaphorically called a fair coin. One for which the probability is not 1/2 is called a biased or unfair coin. In theoretical studies, the assumption that a coin is fair is often made by referring to an ideal coin.
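The fair-coin setting gives a classic worked posterior example. As an illustrative sketch (the uniform prior, the observed counts, and the conjugate update below are assumptions, not part of the quoted passage): with a Beta prior on the probability of heads, the posterior after a sequence of independent Bernoulli trials is again a Beta distribution, by conjugacy.

```python
from scipy.stats import beta

# Illustrative conjugate update: Beta(1, 1) (uniform) prior over the coin's
# probability of heads, then observe 7 heads and 3 tails in 10 tosses.
a0, b0 = 1, 1
heads, tails = 7, 3

posterior = beta(a0 + heads, b0 + tails)   # Beta(8, 4) posterior density

# Posterior probability that the coin is biased toward heads (p > 1/2):
print(1 - posterior.cdf(0.5))
```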
2008
- (Xiang et al., 2008) ⇒ Shiming Xiang, Feiping Nie, and Changshui Zhang. (2008). “Learning a Mahalanobis Distance Metric for Data Clustering and Classification.” In: Pattern Recognition, 41. doi:10.1016/j.patcog.2008.05.018
- Yang et al. presented a Bayesian framework in which a posterior distribution for the distance metric is estimated from the labeled pairwise constraints [40].
2007
- (Yang et al., 2007) ⇒ Liu Yang, Rong Jin, and Rahul Sukthankar. (2007). “Bayesian Active Distance Metric Learning.” In: Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI 2007).
- QUOTE: This paper presents a Bayesian framework for distance metric learning that estimates a posterior distribution for the distance metric from labeled pairwise constraints...
Furthermore, the proposed framework estimates not only the most likely distance metric, but also the uncertainty (i.e., the posterior distribution) for the estimated distance metric, which is further used for Active Distance Metric Learning.
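A heavily simplified, hypothetical sketch of the idea in this quote (not Yang et al.'s actual algorithm): reduce the metric to a single scale parameter, maintain a grid posterior over it, and update with a sigmoid likelihood of the labeled pairwise constraints; the posterior spread then quantifies the uncertainty used for active learning.

```python
import numpy as np

# Toy illustration (NOT the paper's method): the "metric" is one scale s,
# with d_s(x1, x2) = s * ||x1 - x2||. Similar pairs should have scaled
# distance below an assumed threshold mu.
s_grid = np.linspace(0.1, 5.0, 500)          # grid over the scale parameter
log_post = np.zeros_like(s_grid)             # uniform prior over the grid

mu = 1.0                                     # assumed similarity threshold
pairs = [(0.4, 1), (0.3, 1), (2.0, 0)]       # toy (Euclidean distance, similar?) constraints

for dist, similar in pairs:
    p_similar = 1.0 / (1.0 + np.exp(s_grid * dist - mu))   # P(similar | s)
    log_post += np.log(p_similar if similar else 1.0 - p_similar)

post = np.exp(log_post - log_post.max())
post /= post.sum()                           # posterior mass over the grid

mean = np.sum(post * s_grid)                 # posterior mean of the scale
std = np.sqrt(np.sum(post * (s_grid - mean) ** 2))
print(mean, std)                             # the spread is the estimated uncertainty
```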
2006
- (Cox, 2006) ⇒ David R. Cox. (2006). “Principles of Statistical Inference.” Cambridge University Press. ISBN:9780521685672