Bayesian Probability Theory


A Bayesian probability theory is a probability theory in which probability is a measure of subjective belief.



References

2014

  • (Wikipedia, 2014) ⇒ http://en.wikipedia.org/wiki/Bayesian_probability
    • Bayesian probability is one of the different interpretations of the concept of probability. The Bayesian interpretation of probability can be seen as an extension of propositional logic that enables reasoning with hypotheses, i.e., propositions whose truth or falsity is uncertain.

      Bayesian probability belongs to the category of evidential probabilities; to evaluate the probability of a hypothesis, the Bayesian probabilist specifies some prior probability, which is then updated in the light of new, relevant data (evidence).[1] The Bayesian interpretation provides a standard set of procedures and formulae to perform this calculation.

      In contrast to interpreting probability as the "frequency" or "propensity" of some phenomenon, Bayesian probability is a quantity that we assign for the purpose of representing a state of knowledge,[2] or a state of belief. In the Bayesian view, a probability is assigned to a hypothesis, whereas under the frequentist view, a hypothesis is typically tested without being assigned a probability.

      The term "Bayesian" refers to the 18th century mathematician and theologian Thomas Bayes, who provided the first mathematical treatment of a non-trivial problem of Bayesian inference.[3] Mathematician Pierre-Simon Laplace pioneered and popularised what is now called Bayesian probability.[4]

      Broadly speaking, there are two views on Bayesian probability that interpret the probability concept in different ways. According to the objectivist view, the rules of Bayesian statistics can be justified by requirements of rationality and consistency and interpreted as an extension of logic.[2][5] According to the subjectivist view, probability quantifies a "personal belief".[6] Many modern machine learning methods are based on objectivist Bayesian principles.[7]

  1. Paulos, John Allen. "The Mathematics of Changing Your Mind." The New York Times (US), August 5, 2011; retrieved 2011-08-06.
  2. Jaynes, E. T. "Bayesian Methods: General Background." In Maximum-Entropy and Bayesian Methods in Applied Statistics, J. H. Justice (ed.). Cambridge: Cambridge University Press, 1986.
  3. Stigler, Stephen M. (1986). The History of Statistics. Harvard University Press. p. 131.
  4. Stigler, Stephen M. (1986). The History of Statistics. Harvard University Press. pp. 97-98, 131.
  5. Cox, Richard T. Algebra of Probable Inference. The Johns Hopkins University Press, 2001.
  6. de Finetti, B. (1974). Theory of Probability (2 vols.). J. Wiley & Sons, Inc., New York.
  7. Bishop, C. M. Pattern Recognition and Machine Learning. Springer, 2007.
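
  The prior-to-posterior updating described in the excerpt above amounts to applying Bayes' theorem, P(H | E) = P(E | H) · P(H) / P(E). The following Python sketch is only an illustration of that calculation, not something taken from the cited sources; the hypothesis, prior, and likelihood values in it are assumed for the example.

  # Minimal sketch of Bayesian prior-to-posterior updating.
  # Bayes' theorem:  P(H | E) = P(E | H) * P(H) / P(E)
  # All numbers below are illustrative assumptions, not values from the sources.

  def posterior(prior_h, p_e_given_h, p_e_given_not_h):
      """Return P(H | E) from the prior P(H) and the two likelihoods."""
      # Total probability of the evidence: P(E) = P(E|H)P(H) + P(E|~H)P(~H)
      evidence = p_e_given_h * prior_h + p_e_given_not_h * (1.0 - prior_h)
      return p_e_given_h * prior_h / evidence

  # Example update with made-up numbers for a hypothetical diagnostic test.
  prior = 0.01            # P(H): prior degree of belief in the hypothesis
  p_e_given_h = 0.95      # P(E | H): probability of the evidence if H is true
  p_e_given_not_h = 0.05  # P(E | ~H): probability of the evidence if H is false

  print(posterior(prior, p_e_given_h, p_e_given_not_h))  # ~0.161

  In this sketch the weak prior belief of 0.01 is revised upward to roughly 0.16 after seeing one piece of supporting evidence, which is the "updating in the light of new, relevant data" that the excerpt describes.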

2003

  • (Korb & Nicholson, 2003) ⇒ Kevin B. Korb, and Ann E. Nicholson. (2003). “Bayesian Artificial Intelligence." Chapman & Hall/CRC.
    • Bayesianism is the philosophy that asserts that in order to understand human opinion as it ought to be, constrained by ignorance and uncertainty, the probability calculus is the single most important tool for representing appropriate strengths of belief.

2000