Dynamic Bayesian Model
A Dynamic Bayesian Model is a directed conditional statistical metamodel that relates random variables to each other over adjacent time steps and can represent a Markov chain.
- AKA: DBN, Dynamic Bayesian Network, Dynamic Probabilistic Network, Temporal Bayesian Network.
- Context:
- It can be used to model a Stochastic Process (a minimal sketch follows this definition block).
- It can be created by a Dynamic Bayesian Modeling System (that solves a dynamic Bayesian modeling task).
- Example(s):
- a Hidden Markov Model (a special case).
- a Kalman Filter / Linear Dynamical System (a special case).
- See: Undirected Graphical Model.
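The following is a minimal sketch of the two-slice structure such a model unrolls over time: a single hidden variable X with a prior, a transition model, and an observation model. All variable names and probability tables are illustrative assumptions, not taken from any cited source.

```python
# Minimal sketch of a two-slice Dynamic Bayesian Model with one hidden
# state X and one observation Y. All numbers here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

prior = np.array([0.6, 0.4])            # P(X_0): prior over the hidden state
transition = np.array([[0.9, 0.1],      # P(X_t = j | X_{t-1} = i)
                       [0.2, 0.8]])
emission = np.array([[0.7, 0.3],        # P(Y_t = k | X_t = i)
                     [0.1, 0.9]])

def sample_trajectory(T):
    """Unroll the two-slice model for T steps and sample (X_t, Y_t) pairs."""
    xs, ys = [], []
    x = rng.choice(2, p=prior)
    for _ in range(T):
        xs.append(int(x))
        ys.append(int(rng.choice(2, p=emission[x])))
        x = rng.choice(2, p=transition[x])   # X_t depends only on X_{t-1}
    return xs, ys

print(sample_trajectory(10))
```

Because the same two-slice pieces are reused at every step, the parameterization stays compact and time-invariant however far the model is unrolled.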
References
2015
- (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/Dynamic_Bayesian_network Retrieved:2015-10-1.
- A Dynamic Bayesian Network (DBN) is a Bayesian Network which relates variables to each other over adjacent time steps. This is often called a Two-Timeslice BN (2TBN) because it says that at any point in time T, the value of a variable can be calculated from the internal regressors and the immediate prior value (time T-1). DBNs were developed by Paul Dagum in the early 1990s when he led research funded by two National Science Foundation grants at Stanford University's Section on Medical Informatics. Dagum developed DBNs to unify and extend traditional linear state-space models such as Kalman filters, linear and normal forecasting models such as ARMA, and simple dependency models such as hidden Markov models into a general probabilistic representation and inference mechanism for arbitrary nonlinear and non-normal time-dependent domains. Today, DBNs are common in robotics, and have shown potential for a wide range of data mining applications. For example, they have been used in speech recognition, digital forensics, protein sequencing, and bioinformatics. DBN is a generalization of hidden Markov models and Kalman filters.
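As a concrete reading of the 2TBN claim above, here is a hedged sketch of exact filtering in the same illustrative model: the posterior over the hidden state at time T is computed from the T-1 posterior and the current observation alone. This is the standard HMM forward pass, one of the special cases the quote names; all numbers remain illustrative.

```python
# Hedged sketch of exact filtering under the 2TBN factorization: the belief
# over the hidden state at time T depends only on the T-1 belief and the
# new evidence. Equivalent to the HMM forward algorithm (normalized).
import numpy as np

prior = np.array([0.6, 0.4])            # P(X_0)
transition = np.array([[0.9, 0.1],      # P(X_t | X_{t-1})
                       [0.2, 0.8]])
emission = np.array([[0.7, 0.3],        # P(Y_t | X_t)
                     [0.1, 0.9]])

def forward_filter(ys):
    """Return the filtered posterior P(X_t | y_0..t) for each time step t."""
    belief, beliefs = prior.copy(), []
    for t, y in enumerate(ys):
        if t > 0:
            belief = belief @ transition   # predict from the T-1 belief only
        belief = belief * emission[:, y]   # condition on the new observation
        belief /= belief.sum()             # renormalize to a distribution
        beliefs.append(belief.copy())
    return beliefs

print(forward_filter([0, 0, 1, 1, 1]))
```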
2011
- (Sammut & Webb, 2011) ⇒ Claude Sammut (editor), and Geoffrey I. Webb (editor). (2011). “DBN.” In: (Sammut & Webb, 2011) p.261
2002
- (Xing et al., 2002) ⇒ Eric P. Xing, Michael I. Jordan, Richard M. Karp and Stuart J. Russell. (2002). “A Hierarchical Bayesian Markovian Model for Motifs in Biopolymer Sequences.” In: Proceedings of Advances in Neural Information Processing Systems (NIPS 2002).
- QUOTE: We propose a dynamic Bayesian model for motifs in biopolymer sequences which captures rich biological prior knowledge and positional dependencies in motif structure in a principled way. Our model posits that the position-specific multinomial parameters for monomer distribution are distributed as a latent Dirichlet-mixture random variable, and the position-specific Dirichlet component is determined by a hidden Markov process.
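A loose generative reading of the quoted model, sketched only to make its structure concrete: a hidden Markov process walks over Dirichlet components; each motif position draws its monomer distribution from the selected Dirichlet component and then emits a monomer. Every dimension and parameter value below is invented for illustration and is not taken from the paper.

```python
# Loose generative sketch of the quoted motif model: a hidden Markov process
# selects a Dirichlet component per position; the position-specific
# multinomial over monomers is drawn from that Dirichlet. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)

K, A, L = 3, 4, 8   # Dirichlet components, monomer alphabet size, motif length
hmm_init = np.full(K, 1.0 / K)           # initial distribution over components
hmm_trans = np.full((K, K), 1.0 / K)     # uniform transitions, purely illustrative
dirichlet_alphas = rng.uniform(0.5, 2.0, size=(K, A))

z = rng.choice(K, p=hmm_init)            # hidden Dirichlet component at position 0
motif = []
for _ in range(L):
    theta = rng.dirichlet(dirichlet_alphas[z])   # position-specific multinomial
    motif.append(int(rng.choice(A, p=theta)))    # emit one monomer at this position
    z = rng.choice(K, p=hmm_trans[z])            # hidden Markov process advances
print(motif)
```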
2001
- (Murphy & Paskin, 2001) ⇒ Kevin P. Murphy, and Mark A. Paskin. (2001). “Linear Time Inference in Hierarchical HMMs.” In: Proceedings of NIPS 2001.
1998
- (Murphy, 1998) ⇒ Kevin P. Murphy. (1998). “A Brief Introduction to Graphical Models and Bayesian Networks.” Web tutorial.
- QUOTE: Dynamic Bayesian Networks (DBNs) are directed graphical models of stochastic processes. They generalise hidden Markov models (HMMs) and linear dynamical systems (LDSs) by representing the hidden (and observed) state in terms of state variables, which can have complex interdependencies. The graphical structure provides an easy way to specify these conditional independencies, and hence to provide a compact parameterization of the model.
Note that “temporal Bayesian network” would be a better name than “dynamic Bayesian network”, since it is assumed that the model structure does not change, but the term DBN has become entrenched. We also normally assume that the parameters do not change, i.e., the model is time-invariant. However, we can always add extra hidden nodes to represent the current “regime”, thereby creating mixtures of models to capture periodic non-stationarities. There are some cases where the size of the state space can change over time, e.g., tracking a variable, but unknown, number of objects. In this case, we need to change the model structure over time.
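A hedged sketch of the “regime” device mentioned above: an extra hidden node R evolves as its own Markov chain and selects which transition matrix governs X, yielding a mixture of time-invariant models that can capture regime-dependent non-stationarity. All matrices and probabilities are illustrative assumptions.

```python
# Sketch of a regime-switching DBN: hidden node R picks which transition
# matrix governs X at each step. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(2)

regime_trans = np.array([[0.95, 0.05],   # P(R_t | R_{t-1}): sticky regimes
                         [0.05, 0.95]])
x_trans = np.array([                      # P(X_t | X_{t-1}, R_t): one matrix per regime
    [[0.9, 0.1], [0.2, 0.8]],             # regime 0: persistent dynamics
    [[0.5, 0.5], [0.5, 0.5]],             # regime 1: near-random dynamics
])

r, x = 0, 0
path = []
for _ in range(20):
    r = rng.choice(2, p=regime_trans[r])  # regime evolves as its own hidden chain
    x = rng.choice(2, p=x_trans[r][x])    # X's transition matrix is regime-dependent
    path.append((int(r), int(x)))
print(path)
```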
1997
- (Ghahramani, 1997) ⇒ Zoubin Ghahramani. (1997). “Learning Dynamic Bayesian Networks.” In: Lecture Notes In Computer Science, 1387.
1995
- (Kanazawa et al., 1995) ⇒ Keiji Kanazawa, Daphne Koller, and Stuart Russell. (1995). “Stochastic Simulation Algorithms for Dynamic Probabilistic Networks.” In: Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence.