Lifted Inference Algorithm
A Lifted Inference Algorithm is a first-order statistical inference algorithm that performs probabilistic inference over groups of interchangeable objects at the first-order level, grounding out to individual random variables only when necessary.
- Context:
- It can deal with groups of random variables at a first-order level
- It can reason over large domains in time independent of the number of objects
- It can carry out probabilistic inference without reasoning about each individual separately
- It can exploit shared factors, since the same uncertainties and correlations repeatedly occur in the data (see the sketch after this list)
- It can perform inference at the first-order logic level and ground out only when necessary (Getoor, 2012).
- See: Statistical Relational Learning.
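To make the context above concrete, here is a minimal Python sketch of the lifting idea (an illustration, not any published algorithm): when n random variables are interchangeable and governed by the same factor, a query such as "at least k of the n are true" can be answered by grouping the 2^n ground states into n+1 counting classes, so the cost grows with n rather than with 2^n. The domain size n, threshold k, probability p, and function names are invented for this example.

```python
from math import comb

def grounded_prob_at_least_k(n, k, p):
    """Grounded inference: enumerate all 2**n joint states of n Boolean variables."""
    total = 0.0
    for state in range(2 ** n):
        ones = bin(state).count("1")              # how many variables are true in this state
        if ones >= k:
            total += (p ** ones) * ((1 - p) ** (n - ones))
    return total

def lifted_prob_at_least_k(n, k, p):
    """Lifted inference: reason over counts, collapsing 2**n states into n+1 classes."""
    return sum(comb(n, m) * (p ** m) * ((1 - p) ** (n - m)) for m in range(k, n + 1))

if __name__ == "__main__":
    n, k, p = 20, 12, 0.3
    print(grounded_prob_at_least_k(n, k, p))      # cost exponential in n
    print(lifted_prob_at_least_k(n, k, p))        # same answer, cost linear in n
```

Published lifted inference algorithms generalize this counting argument to interacting parameterized factors rather than independent, identically distributed variables.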
References
2011
- http://www.biostat.wisc.edu/~natarasr/tutorials/lifted.htm
- Much has been achieved in the field of AI, yet much remains to be done if we are to reach the goals we all imagine. One of the key challenges in moving ahead is closing the gap between logical and statistical AI. Recent years have seen an explosion of successes in combining probability with (subsets of) first-order logic, programming languages, and databases in several subfields of AI: Reasoning, Learning, Knowledge Representation, Planning, Databases, NLP, Robotics, Vision, etc. Nowadays, we can learn probabilistic relational models automatically from millions of inter-related objects. We can generate optimal plans and learn to act optimally in uncertain environments involving millions of objects and relations among them. Exploiting shared factors can speed up message-passing algorithms not only for relational inference but also for classical propositional inference, such as solving SAT problems. We can even perform exact lifted probabilistic inference that avoids explicit state enumeration by manipulating first-order state representations directly.
Lifted inference algorithms can be classified into three major categories: exact inference, approximate inference, and preprocessing for lifted inference. The first lifted inference algorithm was proposed by David Poole in 2003. This is an exact lifted inference algorithm that combines variable elimination with resolution. It was later extended by Braz et al., Milch et al., Sen et al., and then by Kisynski and Poole. The approximate lifted inference methods can be further classified into deterministic approximate methods, sampling-based methods, and interval-based methods. The deterministic approximate methods are based on message passing and were introduced by Singla and Domingos; this work was later generalized by Kersting et al. and further extended by Nath and Domingos. Sen et al., meanwhile, extended their bisimulation-based algorithm to approximate inference. The first sampling-based lifted inference method was based on MCMC sampling and was developed by Milch and Russell. Zettlemoyer et al. and Natarajan et al. developed sampling-based algorithms for dynamic models. Sampling-based methods were also developed for Markov logic networks, based on a combination of satisfiability and MCMC techniques, by Poon and Domingos. Braz et al. proposed an interval-based message-passing technique in 2009 that propagates bounds as messages. Shavlik and Natarajan, in a distinct yet related line of work, proposed a preprocessing method for inference that yields smaller networks to make inference tractable.
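As a hedged illustration of the exact-inference idea (a simplified sketch, not Poole's actual first-order variable elimination): if a parameterized factor phi(X, Y_p) is shared by all n groundings Y_1, ..., Y_n of a logical variable, then summing all of them out reduces to summing out Y once and raising the resulting factor to the n-th power, instead of performing n separate ground-level eliminations. The factor values, group size, and function names below are invented for the example.

```python
import numpy as np

def eliminate_grounded(phi, n):
    """Ground-level elimination: sum out each of the n copies of Y, one at a time."""
    result = np.ones(phi.shape[0])
    for _ in range(n):                         # n identical eliminations
        result = result * phi.sum(axis=1)      # sum_y phi(x, y)
    return result

def eliminate_lifted(phi, n):
    """Lifted elimination: sum out Y once, then exponentiate by the group size n."""
    return phi.sum(axis=1) ** n

if __name__ == "__main__":
    phi = np.array([[2.0, 1.0],                # a toy shared factor phi(x, y)
                    [0.5, 3.0]])
    n = 20                                     # 20 interchangeable ground variables
    print(eliminate_grounded(phi, n))
    print(eliminate_lifted(phi, n))            # identical result, without looping over groundings
```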
This one-day tutorial on lifted inference is the first of its kind, with the goal of providing an "unbiased" view of lifted inference algorithms. To this end, the researchers who developed the original lifted inference algorithms will present their own work. These algorithms range from exact inference to message-passing and sampling algorithms to pre-processing for lifted inference. The presenters will use suitable examples and introduce the necessary background for understanding their algorithms. In doing so, this tutorial will provide an excellent and unique opportunity for students and researchers who wish to pursue research in probabilistic logical models, lifted inference, and statistical relational AI.