Variational Expectation Maximization (VEM) Algorithm
A Variational Expectation Maximization (VEM) Algorithm is an EM algorithm in which the exact E-step is replaced by a variational approximation: rather than computing the posterior over the latent variables exactly, it maximizes a lower bound on the log-likelihood (the ELBO) over a restricted, tractable family of distributions (a sketch of the objective follows this list).
- Counter-Example(s): an exact EM Algorithm (whose E-step computes the latent posterior in closed form), a Monte Carlo EM Algorithm.
- See: Maximum Likelihood-based Algorithm, Variational Inference Algorithm.
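As a sketch of the shared objective behind the references below (standard material from the variational inference literature, not drawn from any single source cited here): VEM performs coordinate ascent on the evidence lower bound (ELBO), alternating a variational E-step over a tractable family Q with an M-step over the model parameters θ.

```latex
% Evidence lower bound (ELBO): for any q(z) in the tractable family Q,
% with z the latent variables, x the observed data, theta the model parameters:
\log p(x \mid \theta) \;\ge\; \mathcal{L}(q, \theta)
  = \mathbb{E}_{q(z)}\!\big[\log p(x, z \mid \theta)\big]
  - \mathbb{E}_{q(z)}\!\big[\log q(z)\big]

% Variational E-step: fit the approximate posterior within Q
q^{(t+1)} = \arg\max_{q \in \mathcal{Q}} \; \mathcal{L}\big(q, \theta^{(t)}\big)

% M-step: update the model parameters against the fitted q
\theta^{(t+1)} = \arg\max_{\theta} \; \mathcal{L}\big(q^{(t+1)}, \theta\big)
```

When the family Q contains the exact posterior p(z | x, θ), the variational E-step recovers standard EM; restricting Q (e.g., to fully factorized mean-field distributions) is what keeps the E-step tractable, at the cost of a gap between the bound and the true log-likelihood.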
References
2018
- (Blei, 2018) ⇒ David M. Blei. (2018). “Expressive Probabilistic Models and Scalable Method of Moments: Technical Perspective.” In: Communications of the ACM, 61(4). doi:10.1145/3186260
- QUOTE: ... Such guarantees have not been proved for likelihood-based methods, like Markov chain Monte Carlo (MCMC), variational Bayes, or variational expectation maximization. More generally, the paper represents an elegant blend of theoretical computer science and probabilistic machine learning.
2006
- (Nasios & Bors, 2006) ⇒ Nikolaos Nasios and Adrian G. Bors. (2006). “Variational Learning for Gaussian Mixture Models.” In: IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 36(4). doi:10.1109/TSMCB.2006.872273
- ABSTRACT: This paper proposes a joint maximum likelihood and Bayesian methodology for estimating Gaussian mixture models. In Bayesian inference, the distributions of parameters are modeled, characterized by hyperparameters. In the case of Gaussian mixtures, the distributions of parameters are considered as Gaussian for the mean, Wishart for the covariance, and Dirichlet for the mixing probability. The learning task consists of estimating the hyperparameters characterizing these distributions. The integration in the parameter space is decoupled using an unsupervised variational methodology entitled variational expectation-maximization (VEM). This paper introduces a hyperparameter initialization procedure for the training algorithm. In the first stage, distributions of parameters resulting from successive runs of the expectation-maximization algorithm are formed. Afterward, maximum-likelihood estimators are applied to find appropriate initial values for the hyperparameters. The proposed initialization provides faster convergence, more accurate hyperparameter estimates, and better generalization for the VEM training algorithm. The proposed methodology is applied in blind signal detection and in color image segmentation.
- CITED BY: ~35 http://scholar.google.com/scholar?q=%22Variational+learning+for+Gaussian+mixture+models%22+2006
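- NOTE: the following is a minimal, hypothetical Python sketch of the two-stage scheme this abstract describes, built on scikit-learn's GaussianMixture (EM) and BayesianGaussianMixture (variational) estimators rather than the authors' implementation; the pooling of estimates across EM runs is a simplification that ignores component label switching.

```python
# Hypothetical sketch (NOT the authors' code): pool parameter estimates from
# successive EM runs, then use their maximum-likelihood summaries to seed the
# hyperparameters of a variational (VEM-style) Gaussian mixture.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import BayesianGaussianMixture, GaussianMixture

K = 3
X, _ = make_blobs(n_samples=600, centers=K, random_state=0)

# Stage 1: successive EM runs yield an empirical distribution of parameters.
means, covs = [], []
for seed in range(10):
    em = GaussianMixture(n_components=K, random_state=seed).fit(X)
    means.append(em.means_)       # (K, d) component means
    covs.append(em.covariances_)  # (K, d, d) component covariances
means = np.concatenate(means)     # pooled over runs and components
covs = np.concatenate(covs)

# Stage 2: ML summaries of the pooled estimates seed the hyperparameters of
# the Gaussian prior (means), Wishart prior (covariances), and Dirichlet
# prior (mixing probabilities) before variational training.
vem = BayesianGaussianMixture(
    n_components=K,
    weight_concentration_prior_type="dirichlet_distribution",
    weight_concentration_prior=1.0 / K,   # Dirichlet concentration
    mean_prior=means.mean(axis=0),        # center of the Gaussian prior on means
    covariance_prior=covs.mean(axis=0),   # scale matrix of the Wishart prior
    random_state=0,
).fit(X)
print(vem.weights_.round(3))
```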
1998
- (Jaakkola & Jordan, 1998) ⇒ Tommi S. Jaakkola, and Michael I. Jordan. (1998). “Improving the Mean Field Approximation Via the Use of Mixture Distributions.” In: (Jordan, 1998) "Learning in Graphical Models." MIT Press. ISBN:0-262-60032-3