Bayesian Learning Algorithm
A Bayesian Learning Algorithm is a posterior probability-based inference algorithm that can be implemented by a Bayesian learning system to solve a Bayesian learning task (a task that requires the application of Bayes rule).
- Context:
- It can range from being a Parametric Bayesian Learning Algorithm to being a Nonparametric Bayesian Learning Algorithm.
- It can range from being an Exact Bayesian Learning Algorithm to being an Approximate Bayesian Learning Algorithm (such as Variational Bayes); a minimal exact-update sketch appears after the See list below.
- Example(s):
- Counter-Example(s):
- See: Model Training Algorithm, Bayesian Network Training Algorithm, Bayesian Nonparametric Metamodel.
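As a concrete illustration of an Exact Bayesian Learning Algorithm, the following is a minimal sketch of a Beta-Bernoulli conjugate update, where the posterior over a Bernoulli parameter is available in closed form. The Beta(1, 1) prior, the coin-flip data, and the `beta_bernoulli_update` helper are illustrative assumptions, not from the sources cited below.

```python
# Minimal sketch of an exact Bayesian learning update: a Beta-Bernoulli
# conjugate model, where the posterior is available in closed form.
# The Beta(1, 1) prior and the 0/1 data below are illustrative assumptions.

def beta_bernoulli_update(alpha, beta, observations):
    """Return Beta posterior parameters after observing 0/1 outcomes."""
    heads = sum(observations)
    tails = len(observations) - heads
    return alpha + heads, beta + tails

# Start from a uniform Beta(1, 1) prior over the success probability.
alpha, beta = 1.0, 1.0
data = [1, 0, 1, 1, 0, 1]  # hypothetical Bernoulli observations

alpha, beta = beta_bernoulli_update(alpha, beta, data)
posterior_mean = alpha / (alpha + beta)
print(f"Posterior: Beta({alpha}, {beta}), mean = {posterior_mean:.3f}")
```

When no such conjugate closed form exists, an Approximate Bayesian Learning Algorithm (e.g., Variational Bayes) is used to approximate the posterior instead.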
References
2011
- (Ghahramani, 2011) ⇒ Zoubin Ghahramani. (2011). “Why Bayesian Nonparametrics?” NIPS 2011 Workshop. http://videolectures.net/nipsworkshops2011_ghahramani_nonparametrics/
- (Buntine, 2011) ⇒ Wray Buntine. (2011). “Bayesian Methods.” In: (Sammut & Webb, 2011) p.75
- QUOTE: The two most important concepts used in Bayesian modeling are probability and utility. Probabilities are used to model our belief about the state of the world, and utilities are used to model the value to us of different outcomes, thus to model costs and benefits. Probabilities are represented in the form p(x|C), where C is the current known context and x is some event(s) of interest from a space χ. The left and right arguments of the probability function are in general propositions (in the logical sense). Probabilities are updated based on new evidence or outcomes y using Bayes rule, which takes the form: [math]\displaystyle{ p(x|C,y) = \frac{p(x|C)\,p(y|x,C)}{\sum_{x \in \chi} p(x|C)\,p(y|x,C)}, }[/math] where χ is the discrete domain of x. More generally, any measurable set can be used for the domain χ. An integral or mixed sum and integral can replace the sum. For a utility function u(x) of some event x, for instance the benefit of a particular outcome, the expected value of u(·) is …
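The quoted Bayes rule over a discrete domain χ can be sketched directly in code. The two-hypothesis domain, the prior, and the likelihood values below are illustrative assumptions, not part of the Buntine quote.

```python
# A minimal sketch of the quoted Bayes rule over a discrete domain chi:
# p(x|C,y) = p(x|C) p(y|x,C) / sum_{x in chi} p(x|C) p(y|x,C).
# The two hypotheses and their likelihoods are illustrative assumptions.

def bayes_update(prior, likelihood):
    """prior: {x: p(x|C)}; likelihood: {x: p(y|x,C)} for one observed y."""
    unnormalized = {x: prior[x] * likelihood[x] for x in prior}
    evidence = sum(unnormalized.values())  # p(y|C), the sum over chi
    return {x: p / evidence for x, p in unnormalized.items()}

prior = {"fair_coin": 0.5, "biased_coin": 0.5}
likelihood_heads = {"fair_coin": 0.5, "biased_coin": 0.9}  # p(heads|x, C)

posterior = bayes_update(prior, likelihood_heads)
print(posterior)  # fair_coin ~0.357, biased_coin ~0.643
```

Replacing the sum over χ with an integral gives the continuous-domain version mentioned in the quote.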
1997
- (Mitchell, 1997) ⇒ Tom M. Mitchell. (1997). “Machine Learning.” McGraw-Hill.