1999 LearningProbabilisticNetworks
- (Krause, 1999) ⇒ Paul J. Krause. (1999). “Learning Probabilistic Networks.” In: The Knowledge Engineering Review, 13(4). doi:10.1017/S0269888998004019
Subject Headings:
Notes
Cited By
~75 http://scholar.google.com/scholar?cites=11049291384216845408
Quotes
Abstract
- A probabilistic network is a graphical model that encodes probabilistic relationships between variables of interest. Such a model records qualitative influences between variables in addition to the numerical parameters of the probability distribution. As such it provides an ideal form for combining prior knowledge, which might be limited solely to experience of the influences between some of the variables of interest, and data. In this paper, we first show how data can be used to revise initial estimates of the parameters of a model. We then progress to showing how the structure of the model can be revised as data is obtained. Techniques for learning with incomplete data are also covered. In order to make the paper as self-contained as possible, we start with an introduction to probability theory and probabilistic graphical models. The paper concludes with a short discussion on how these techniques can be applied to the problem of learning causal relationships between variables in a domain of interest.
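As a concrete illustration of the first step described in the abstract, the sketch below (not taken from the paper; the variable names, prior counts and cases are illustrative assumptions) shows how case data can revise an initial estimate of one conditional probability table of a discrete network, using Dirichlet pseudo-counts.

```python
# A minimal sketch of Bayesian parameter revision for one conditional
# probability table (CPT) of a discrete probabilistic network.
# Variable names and counts are illustrative assumptions, not from the paper.
from collections import Counter

def update_cpt(prior_counts, cases):
    """Revise Dirichlet pseudo-counts for P(child | parent) with observed cases.

    prior_counts: dict mapping (parent_state, child_state) -> pseudo-count,
                  encoding the expert's initial estimate of the CPT.
    cases:        iterable of (parent_state, child_state) observations.
    Returns the posterior-mean CPT as a dict (parent_state, child_state) -> probability.
    """
    counts = Counter(prior_counts)
    counts.update(cases)

    cpt = {}
    for p in {parent for parent, _ in counts}:
        total = sum(n for (pp, _), n in counts.items() if pp == p)
        for (pp, c), n in counts.items():
            if pp == p:
                cpt[(p, c)] = n / total
    return cpt

# Expert prior: P(fever | flu) about 0.75, held with the weight of a few virtual cases.
prior = {("flu", "fever"): 3, ("flu", "no_fever"): 1,
         ("no_flu", "fever"): 1, ("no_flu", "no_fever"): 3}
data = [("flu", "fever")] * 8 + [("flu", "no_fever")] * 2
print(update_cpt(prior, data))  # the prior estimate is pulled towards the observed frequency 0.8
```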
4. Graph theory
- As mentioned in the opening to this section, the important point about a graphical representation of a set of variables is that the edges can be used to indicate relevance or influences between variables. Absence of an edge between two variables, on the other hand, provides some form of independence statement; nothing about the state of one variable can be inferred from the state of the other.
- There is a direct relationship between the independence relationships that can be expressed graphically and the independence relationships that can be defined in terms of probability distributions.
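This correspondence can be seen on a toy example. The sketch below (not from the paper; the chain A → B → C and its numbers are illustrative assumptions) builds the joint distribution from the factorisation encoded by the graph and checks that the missing edge between A and C corresponds to the statement that C is independent of A given B.

```python
# A minimal sketch of the link between a missing edge and an independence
# statement. For the chain A -> B -> C the joint factorises as
# P(A) P(B|A) P(C|B); the absent edge A -- C then implies A and C are
# independent given B. All numbers below are illustrative assumptions.
from itertools import product

p_a = {0: 0.3, 1: 0.7}
p_b_given_a = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.4, (1, 1): 0.6}   # key: (b, a)
p_c_given_b = {(0, 0): 0.8, (1, 0): 0.2, (0, 1): 0.25, (1, 1): 0.75}  # key: (c, b)

# Joint distribution assembled from the factorisation encoded by the graph.
joint = {(a, b, c): p_a[a] * p_b_given_a[(b, a)] * p_c_given_b[(c, b)]
         for a, b, c in product((0, 1), repeat=3)}

def conditional(c, given):
    """P(C = c | the variables fixed in `given`), computed by summing the joint."""
    match = lambda k: all(k["abc".index(v)] == s for v, s in given.items())
    num = sum(p for k, p in joint.items() if match(k) and k[2] == c)
    den = sum(p for k, p in joint.items() if match(k))
    return num / den

# P(C | B=1) equals P(C | A=a, B=1) for either value of a: C is independent of A given B.
print(conditional(1, {"b": 1}),
      conditional(1, {"a": 0, "b": 1}),
      conditional(1, {"a": 1, "b": 1}))
```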
7.5 Some further reading on learning parameters
- Dawid and Lauritzen [22] introduced the notion of a hyper Markov law. This is a probability distribution over a set of probability measures on a multivariate space. Their philosophy is similar to [88], in that the role of the expert is seen as a provider of a prior distribution expressing their uncertainty about the numerical parameters of a graphical model. These parameters may then be revised and eventually superseded, using case data. A major distinct contribution of [22] is to explore the details of this Bayesian approach to learning parameters in the context of undirected rather than directed graphs.
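The sketch below (not based on [22]; the prior strength and case data are illustrative assumptions) illustrates the role described for the expert: a prior distribution over a single parameter that is revised by case data and eventually superseded as the number of cases grows.

```python
# A minimal sketch of an expert-supplied prior being revised and eventually
# superseded by case data. Here the parameter is a single probability with a
# Beta prior; the prior strength and the data frequency are illustrative.

def posterior_mean(alpha, beta, successes, failures):
    """Mean of the Beta posterior after observing the given case counts."""
    return (alpha + successes) / (alpha + beta + successes + failures)

# Expert prior: parameter believed to be about 0.6, held with the weight of 10 cases.
alpha, beta = 6.0, 4.0

# Suppose the observed case frequency is 0.2; the prior is gradually overridden.
for n in (0, 10, 100, 1000):
    successes = int(0.2 * n)
    print(n, round(posterior_mean(alpha, beta, successes, n - successes), 3))
# 0 cases -> 0.6 (the prior); 1000 cases -> about 0.204 (the data)
```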
References
| Author | title | titleUrl | doi | year |
|---|---|---|---|---|
| Paul J. Krause | Learning Probabilistic Networks | http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.45.9078 | 10.1017/S0269888998004019 | 1999 |