Directed Conditional Graphical Model Family
A Directed Conditional Graphical Model Family is a conditional graphical model family whose underlying graph is directed (i.e., it is also a directed graphical model family).
- AKA: Bayesian Network (BN), Bayesian Inference Network, Belief Network.
- Context:
- It can be associated with a Directed Conditional Statistical Network.
- It can be associated with a Directed Probabilistic Metamodel.
- It can be defined by a Directed Acyclic Graph [math]\displaystyle{ \mathcal{G}=(V,E) }[/math] over [math]\displaystyle{ n }[/math] discrete random variables, together with a CPD Vector [math]\displaystyle{ \Theta }[/math] for each node (see the factorization sketch after the See list below).
- It can be an input to a Bayesian Network Training Algorithm (that infers network structure and trains conditional probabilities)
- It can range from being a Shallow Directed Conditional Probability Network to being a Deep Directed Conditional Probability Network.
- It can range from being a Linear Bayesian Metamodel to being a Hierarchical Bayesian Metamodel to being a General Bayesian Metamodel.
- It can range from being a Bayesian Parametric Model to being a Bayesian Nonparametric Model.
- It can be described with the use of Graphical Metamodel Plate Notation.
- It can be used for: Bayesian Inference (Bayesian Reasoning).
- It can range from being a Static Bayesian Metamodel to being a Dynamic Bayesian Metamodel.
- It can be a Generative Statistical Model.
- It can allow for Efficient Encoding of Probability Distributions over high-dimensional datasets.
- Example(s): a Dynamic Bayesian Network; a Naive Bayes Network.
- Counter-Example(s): a Markov Network (an undirected graphical model family).
- See: Bayes Theorem, Inference Rule, Neighboring Clique, Outward Neighbor, Root Clique, Independency Model, Inward Neighbor, Sigmoid Belief Network, Random Subvector, Posterior Partition, Probabilistic Soundness, Junction Tree Algorithm, Generative Algorithm, Graphical Models, Random Field Network, Bayesian Inference.
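The efficient encoding noted in the Context list follows from the factorization that every such network induces. As a minimal sketch (a standard identity, stated in this page's notation), each joint assignment factors as
[math]\displaystyle{ P(X_1,\dots,X_n)=\prod_{i=1}^{n} P\big(X_i \mid \mathrm{Pa}_{\mathcal{G}}(X_i)\big), }[/math]
where [math]\displaystyle{ \mathrm{Pa}_{\mathcal{G}}(X_i) }[/math] denotes the parents of [math]\displaystyle{ X_i }[/math] in [math]\displaystyle{ \mathcal{G} }[/math] and each factor is given by that node's CPD from [math]\displaystyle{ \Theta }[/math]. For [math]\displaystyle{ n }[/math] Boolean variables, the full joint table needs [math]\displaystyle{ 2^n-1 }[/math] parameters, while a network whose nodes each have at most [math]\displaystyle{ k }[/math] parents needs at most [math]\displaystyle{ n \cdot 2^k }[/math].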
References
2017
- (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/Bayesian_network Retrieved:2017-6-23.
- A Bayesian network, Bayes network, belief network, Bayes(ian) model or probabilistic directed acyclic graphical model is a probabilistic graphical model (a type of statistical model) that represents a set of random variables and their conditional dependencies via a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.
Formally, Bayesian networks are DAGs whose nodes represent random variables in the Bayesian sense: they may be observable quantities, latent variables, unknown parameters or hypotheses. Edges represent conditional dependencies; nodes that are not connected (there is no path from one of the variables to the other in the Bayesian network) represent variables that are conditionally independent of each other. Each node is associated with a probability function that takes, as input, a particular set of values for the node's parent variables, and gives (as output) the probability (or probability distribution, if applicable) of the variable represented by the node. For example, if [math]\displaystyle{ m }[/math] parent nodes represent [math]\displaystyle{ m }[/math] Boolean variables then the probability function could be represented by a table of [math]\displaystyle{ 2^m }[/math] entries, one entry for each of the [math]\displaystyle{ 2^m }[/math] possible combinations of its parents being true or false. Similar ideas may be applied to undirected, and possibly cyclic, graphs, such as Markov networks.
Efficient algorithms exist that perform inference and learning in Bayesian networks. Bayesian networks that model sequences of variables (e.g. speech signals or protein sequences) are called dynamic Bayesian networks. Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams.
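As a rough, self-contained illustration of the tabular probability function described in the quote above (the node and parent names here are hypothetical, not drawn from any cited source), a Boolean node with [math]\displaystyle{ m }[/math] Boolean parents can store its conditional distribution as a map with [math]\displaystyle{ 2^m }[/math] entries:

```python
from itertools import product

def make_cpt(parent_names, probs):
    """Build a CPT mapping each of the 2^m parent assignments
    to P(node=True | parents). `probs` lists the probabilities
    in the order produced by itertools.product."""
    assignments = list(product([False, True], repeat=len(parent_names)))
    assert len(probs) == len(assignments)  # exactly 2^m entries
    return dict(zip(assignments, probs))

# Hypothetical node "WetGrass" with parents ("Sprinkler", "Rain"):
# 2^2 = 4 entries, one per combination of parent truth values.
wet_grass_cpt = make_cpt(
    ("Sprinkler", "Rain"),
    [0.0,    # P(WetGrass | ~Sprinkler, ~Rain)
     0.9,    # P(WetGrass | ~Sprinkler,  Rain)
     0.9,    # P(WetGrass |  Sprinkler, ~Rain)
     0.99])  # P(WetGrass |  Sprinkler,  Rain)

print(wet_grass_cpt[(True, False)])  # 0.9
```

A dict keyed by parent-value tuples keeps the [math]\displaystyle{ 2^m }[/math]-entry structure explicit; libraries typically store the same information as a dense array indexed by parent states.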
2011
- (Sammut & Webb, 2011) ⇒ Claude Sammut (editor), and Geoffrey I. Webb (editor). (2011). “Bayesian Network.” In: “Encyclopedia of Machine Learning.” Springer, p.81.
- QUOTE: A Bayesian network is a form of directed graphical model for representing multivariate probability distributions.
The nodes of the network represent a set of random variables, and the directed arcs represent causal relationships between variables. The Markov property is usually required: every direct dependency between a possible cause and a possible effect has to be shown with an arc. Bayesian networks with the Markov property are called I-maps (independence maps). If all arcs in the network correspond to a direct dependence on the system being modeled, then the network is said to be a D-map (dependence-map). Each node is associated with a conditional probability distribution that quantifies the effects the parents of the node, if any, have on it. Bayesian networks support various forms of reasoning: diagnosis, to derive causes from symptoms; prediction, to derive effects from causes; and intercausal reasoning, to discover the mutual causes of …
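A minimal sketch of the diagnostic reasoning the quote describes, reusing the hypothetical sprinkler network from the previous snippet: enumerate the full joint via the network factorization and condition on the observed evidence (exact inference by enumeration, feasible only for small networks):

```python
from itertools import product

# Hypothetical three-node network: Sprinkler and Rain are root nodes,
# WetGrass depends on both (CPT values as in the previous snippet).
P_sprinkler, P_rain = 0.3, 0.2
P_wet = {(False, False): 0.0, (False, True): 0.9,
         (True, False): 0.9, (True, True): 0.99}

def joint(s, r, w):
    """P(Sprinkler=s, Rain=r, WetGrass=w) via the BN factorization."""
    return ((P_sprinkler if s else 1 - P_sprinkler)
            * (P_rain if r else 1 - P_rain)
            * (P_wet[(s, r)] if w else 1 - P_wet[(s, r)]))

# Diagnostic query: P(Rain=True | WetGrass=True), summing out Sprinkler.
num = sum(joint(s, True, True) for s in (False, True))
den = sum(joint(s, r, True) for s, r in product((False, True), repeat=2))
print(num / den)  # posterior belief in Rain, given wet grass
```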
2007
- (Domingos, 2007) ⇒ Pedro Domingos. (2007). “Practical Statistical Relational AI." Tutorial at the AAAI 2007 Conference.
2003a
- (Davison, 2003) ⇒ Anthony C. Davison. (2003). “Statistical Models." Cambridge University Press. ISBN:0521773393
2003b
- (Korb & Nicholson, 2003) ⇒ Kevin B. Korb, and Ann E. Nicholson. (2003). “Bayesian Artificial Intelligence." CRC Press.
- QUOTE: Bayesian networks (BNs) are graphical models for reasoning under uncertainty, where the nodes represent variables (discrete or continuous) and arcs represent direct connections between them. These direct connections are often causal connections. In addition, BNs model the quantitative strength of the connections between variables, allowing probabilistic beliefs about them to be updated automatically as new information becomes available. … The only constraint on the arcs allowed in a BN is that there must not be any directed cycles.
2001
- (Ng & Jordan, 2001) ⇒ Andrew Y. Ng, and Michael I. Jordan. (2001). “On Discriminative vs. Generative Classifiers: A comparison of logistic regression and naive Bayes.” In: Proceeding of NIPS Conference, 14 (NIPS 2001).
1999
- (Krause, 1999) ⇒ Paul J. Krause. (1999). “Learning Probabilistic Networks." The Knowledge Engineering Review, 13(4). doi:10.1017/S0269888998004019
1998
- (Murphy, 1998) ⇒ Kevin P. Murphy. (1998). “A Brief Introduction to Graphical Models and Bayesian Networks." Web tutorial.
- QUOTE: Hence Bayes nets are often called "generative" models, because they specify how causes generate effects.
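A minimal sketch of that generative, causes-to-effects reading (again using the hypothetical sprinkler network from the snippets above): ancestral sampling draws each node after its parents, so forward simulation follows the direction of the arcs:

```python
import random

def sample_network():
    """Draw one joint sample by sampling each node after its parents."""
    s = random.random() < 0.3                 # Sprinkler ~ Bernoulli(0.3)
    r = random.random() < 0.2                 # Rain ~ Bernoulli(0.2)
    p_w = {(False, False): 0.0, (False, True): 0.9,
           (True, False): 0.9, (True, True): 0.99}[(s, r)]
    w = random.random() < p_w                 # WetGrass | Sprinkler, Rain
    return s, r, w

# Causes generate effects: forward-simulate many possible worlds.
samples = [sample_network() for _ in range(10_000)]
print(sum(w for _, _, w in samples) / len(samples))  # ~= P(WetGrass)
```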
1997
- (Koller & Pfeffer, 1997) ⇒ Daphne Koller, and Avi Pfeffer. (1997). “Object-Oriented Bayesian Networks.” In: Proceedings of UAI (UAI 1997).
1996a
- (Buntine, 1996) ⇒ Wray Buntine. (1996). “A Guide to the Literature on Learning Probabilistic Networks from Data.” In: IEEE Transactions on Knowledge and Data Engineering, 8.
1996b
- (Heckerman, 1996) ⇒ David Heckerman. (1996). “A Tutorial on Learning with Bayesian Networks.” Technical Report MSR-TR-95-06, Microsoft Corporation.
1993
- (Tzeras & Hartmann, 1993) ⇒ Kostas Tzeras, and Stephan Hartmann. (1993). “Automatic Indexing Based on Bayesian Inference Networks.” In: Proceedings of the ACM SIGIR 1993 Conference. doi:10.1145/160688.160691
1992
- (Cooper & Herskovits, 1992) ⇒ Gregory F. Cooper, and Edward Herskovits. (1992). “A Bayesian method for the induction of probabilistic networks from data.” In: Machine Learning, 9(4). doi:10.1007/BF00994110
1988
- (Pearl, 1988) ⇒ Judea Pearl. (1988). “Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference." Morgan Kaufmann. ISBN:1558604790