2015 WhatAboutStatisticalRelationalL
- (Domingos et al., 2015) ⇒ Pedro Domingos, Kristian Kersting, Raymond Mooney, and Jude Shavlik. (2015). “What About Statistical Relational Learning?” In: Communications of the ACM, 58(12). doi:10.1145/2841423
Subject Headings: Statistical Relational Learning, Markov Logic Networks.
Notes
Cited By
- http://scholar.google.com/scholar?q=%222015%22+What+About+Statistical+Relational+Learning%3F
- http://dl.acm.org/citation.cfm?id=2847579.2841423&preflayout=flat#citedby
Quotes
Body
While Stuart Russell's review article "Unifying Logic and Probability" (July 2015) provided an excellent summary of a number of attempts to unify these two representations, it also gave an incomplete picture of the state of the art. The entire field of statistical relational learning (SRL), which was never mentioned in the article, is devoted to learning logical probabilistic models. Although the article said little is known about computationally feasible algorithms for learning the structure of these models, SRL researchers have developed a wide variety of them. Likewise, contrary to the article's statement that generic inference for logical probabilistic models remains too slow, many efficient algorithms for this purpose have been developed.
The article mentioned Markov logic networks (MLNs), arguably the leading approach to unifying logic and probability, but did not accurately describe them. The article conflated MLNs with Nilsson's probabilistic logic, but the two are quite different in a number of crucial respects. For Nilsson, logical formulas are indivisible constraints; in contrast, MLNs are log-linear models that use first-order formulas as feature templates, with one feature per grounding of the formula. This novel use of first-order formulas allows MLNs to compactly represent most graphical models, something previous probabilistic logics could not do. This capability contributes significantly to the popularity of MLNs. And since MLNs subsume first-order Bayesian networks, the article's claim that MLNs have problems with variable numbers of objects and irrelevant objects that Bayes-net approaches avoid is incorrect. MLNs and their variants can handle not only object uncertainty but relation uncertainty as well. Further, the article said MLNs perform inference by applying MCMC to a ground network, but several lifted inference algorithms for them exist.
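To make the feature-template reading concrete, the standard MLN definition (Richardson and Domingos, 2006; not quoted in the letter itself) can be stated as a log-linear model over possible worlds:

```latex
% Markov logic network as a log-linear model:
% F_i ranges over the first-order formulas, w_i is the weight attached
% to F_i, n_i(x) is the number of true groundings of F_i in world x,
% and Z is the partition function normalizing over all possible worlds.
P(X = x) = \frac{1}{Z} \exp\Big( \sum_i w_i \, n_i(x) \Big),
\qquad
Z = \sum_{x'} \exp\Big( \sum_i w_i \, n_i(x') \Big)
```

In this formulation, Nilsson-style indivisible constraints correspond to the limiting case of infinite weights, which is one way to see how MLNs generalize hard logical constraints rather than coincide with them.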
Another major strand of research in this area that the article did not portray accurately is probabilistic logic programming. The article said the "... first significant probabilistic programming language was Pfeffer's IBAL." While IBAL is definitely significant, Poole's ICL and Sato's PRISM were developed much earlier and have had a significant impact on the field. ICL and PRISM essentially extend the Prolog programming language by labeling facts with probabilities. They then use these probabilistic facts in the same way probabilistic databases use labeled tuples: to define a probability distribution over possible worlds. They can represent Bayesian networks, as well as cope with infinite possible worlds and an unknown number of objects. Sato received the test-of-time award from the International Conference on Logic Programming in 2015 for his seminal 1995 paper on PRISM.
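The possible-worlds reading can be sketched via Sato's distribution semantics, which underlies PRISM (the notation below, with facts f_i labeled by probabilities p_i, is illustrative rather than taken from the letter): each probabilistic fact holds independently with its given probability, a chosen subset of facts together with the program's rules fixes a possible world, and a query's probability is the total probability of the worlds that entail it:

```latex
% Distribution semantics (sketch): each fact f_i is labeled with
% probability p_i and included independently. A subset S of the facts,
% closed under the program's rules, yields the possible world w_S.
P(w_S) = \prod_{i \in S} p_i \prod_{i \notin S} (1 - p_i),
\qquad
P(q) = \sum_{S \,:\, w_S \,\models\, q} P(w_S)
```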
The article concluded with, "... these are early days in the process of unifying logic and probability." On the contrary: with developments like MLNs, probabilistic logic programming, lifted inference, statistical relational learning, and, more generally, statistical relational AI, we are well on our way to solving this longstanding problem.
References
Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year
---|---|---|---|---|---|---|---|---|---
Pedro Domingos, Kristian Kersting, Raymond J. Mooney, Jude Shavlik | 58(12) | 2015 | What About Statistical Relational Learning? | | Communications of the ACM | | 10.1145/2841423 | | 2015