Zoubin Ghahramani
Zoubin Ghahramani is a machine learning researcher known for work on Bayesian and non-parametric machine learning methods.
- See: Approximate Bayesian Inference Algorithm, MCMC Algorithm, Non-Parametric Statistical Modeling Algorithm, Gaussian Process Algorithm, Variational Inference Algorithm.
References
- Personal Homepage: http://learning.eng.cam.ac.uk/zoubin/
- DBLP: http://www.informatik.uni-trier.de/~ley/db/indices/a-tree/g/Ghahramani:Zoubin.html
- Google Scholar Author Page: http://scholar.google.com/citations?user=0uTu7fYAAAAJ
2016
- (Gal & Ghahramani, 2016) ⇒ Yarin Gal, and Zoubin Ghahramani. (2016). “Dropout As a Bayesian Approximation: Representing Model Uncertainty in Deep Learning.” In: Proceedings of the 33rd International Conference on Machine Learning (ICML-2016).
2012
- (Mohamed et al., 2012) ⇒ Shakir Mohamed, Katherine Heller, and Zoubin Ghahramani. (2012). “Bayesian and L1 Approaches to Sparse Unsupervised Learning.” In: Proceedings of the 29th International Conference on Machine Learning (ICML-12).
- QUOTE: The use of L1 regularisation for sparse learning has generated immense research interest, with successful application in such diverse areas as signal acquisition, image coding, genomics and collaborative filtering.
2010
- (Leskovec et al., 2010) ⇒ Jure Leskovec, Deepayan Chakrabarti, Jon Kleinberg, Christos Faloutsos, and Zoubin Ghahramani. (2010). “Kronecker Graphs: An Approach to Modeling Networks.” In: The Journal of Machine Learning Research, 11.
2009
- (Ghahramani, 2009a) ⇒ Zoubin Ghahramani. (2009). http://learning.eng.cam.ac.uk/zoubin/approx.html
- Approximate Inference: For all but the simplest statistical models, exact learning and inference are computationally intractable. Approximate inference methods make it possible to learn realistic models from large data sets. Generally, approximate inference methods trade off computation time for accuracy. Some of the major classes of approximate inference methods include Markov chain Monte Carlo methods, variational methods and related algorithms such as Expectation Propagation.
- (Ghahramani, 2009b) ⇒ Zoubin Ghahramani. (2009). http://learning.eng.cam.ac.uk/zoubin/mcmc.html
- Markov chain Monte Carlo (MCMC) methods use sampling to approximate high dimensional integrals and intractable sums. MCMC methods are widely used in many areas of science, applied mathematics and engineering. They are an indispensable approximate inference tool for Bayesian statistics and machine learning.
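- The following is a minimal illustrative sketch (not from the cited page) of random-walk Metropolis-Hastings, one of the simplest MCMC methods, which approximates expectations under an unnormalized density by sampling; all function and variable names are illustrative.
```python
import numpy as np

def metropolis_hastings(log_p, x0, n_samples=10000, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings for a 1-D unnormalized log-density log_p."""
    rng = np.random.default_rng(seed)
    samples = np.empty(n_samples)
    x = x0
    for i in range(n_samples):
        x_prop = x + step * rng.standard_normal()   # symmetric Gaussian proposal
        log_alpha = log_p(x_prop) - log_p(x)        # acceptance log-ratio
        if np.log(rng.random()) < log_alpha:        # accept with probability min(1, alpha)
            x = x_prop
        samples[i] = x
    return samples

# Example: sample from a standard normal via its unnormalized log-density.
draws = metropolis_hastings(lambda x: -0.5 * x**2, x0=0.0)
print(draws.mean(), draws.std())   # should be roughly 0 and 1
```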
- (Ghahramani, 2009c) ⇒ Zoubin Ghahramani. (2009). http://learning.eng.cam.ac.uk/zoubin/nonparam.html
- Non-parametric models are very flexible statistical models in which the complexity of the model grows with the amount of observed data. While traditional parametric models make strong assumptions about how the data was generated, non-parametric models try to make weaker assumptions and let the data "speak for itself". Many non-parametric models can be seen as infinite limits of finite parametric models, and an important family of non-parametric models are derived from Dirichlet processes. See also Gaussian Processes.
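- As an illustrative sketch (not from the cited page) of how a Dirichlet-process model lets complexity grow with the amount of data, the Chinese restaurant process below samples a partition whose number of clusters increases with the number of observations; the names and parameter values are illustrative.
```python
import numpy as np

def chinese_restaurant_process(n_customers, alpha=1.0, seed=0):
    """Sample a partition of n_customers from a Dirichlet process via the CRP."""
    rng = np.random.default_rng(seed)
    table_counts = []                  # number of customers at each existing table (cluster)
    assignments = []
    for n in range(n_customers):
        # P(existing table k) = count_k / (n + alpha); P(new table) = alpha / (n + alpha)
        probs = np.array(table_counts + [alpha], dtype=float)
        probs /= n + alpha
        table = rng.choice(len(probs), p=probs)
        if table == len(table_counts):  # customer opens a new table (new cluster)
            table_counts.append(1)
        else:
            table_counts[table] += 1
        assignments.append(table)
    return assignments, table_counts

# The number of occupied tables grows with the data, roughly as alpha * log(n).
_, counts = chinese_restaurant_process(1000, alpha=2.0)
print(len(counts), "clusters for 1000 observations")
```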
- (Ghahramani, 2009d) ⇒ Zoubin Ghahramani. (2009). http://learning.eng.cam.ac.uk/zoubin/gp.html
- Gaussian processes are a non-parametric method for doing Bayesian inference and learning on unknown functions. They can be used for non-linear regression, time-series modelling, classification, and many other problems.
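- A minimal sketch (not from the cited page) of Gaussian-process regression with a squared-exponential kernel, computing the posterior mean and variance of the unknown function at test inputs; function names and hyperparameter values are illustrative.
```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X_train, y_train, X_test, noise=0.1):
    """Posterior mean and variance of a GP regressor at the test inputs."""
    K = rbf_kernel(X_train, X_train) + noise**2 * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)                   # stable solve via Cholesky factorization
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha                        # predictive means
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v**2, axis=0)  # predictive variances
    return mean, var

# Toy non-linear regression example.
X = np.linspace(0, 5, 20)
y = np.sin(X) + 0.1 * np.random.default_rng(0).standard_normal(20)
mean, var = gp_posterior(X, y, np.linspace(0, 5, 100))
```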
2007
- (Perez-Cruz et al., 2007) ⇒ Fernando Perez-Cruz, Massimiliano Pontil, and Zoubin Ghahramani. (2007). “Conditional Graphical Models.” In: (Bakir et al., 2007)
2006
- (Ghahramani & Heller, 2006) ⇒ Zoubin Ghahramani, and Katherine A. Heller. (2006). “Bayesian Sets.” In: Advances in Neural Information Processing Systems (NIPS 2006).
2005
- (Heller & Ghahramani, 2005) ⇒ Katherine A. Heller, and Zoubin Ghahramani. (2005). “Bayesian Hierarchical Clustering.” In: Proceedings of the 22nd International Conference on Machine Learning (ICML 2005). doi:10.1145/1102351.1102389
- (Zhang, Ghahramani & Yang, 2005) ⇒ J. Zhang, Zoubin Ghahramani, and Y. Yang. (2005). “Learning Multiple Related Tasks using Latent Independent Component Analysis.” In: Advances in Neural Information Processing Systems, 18 (NIPS 2005).
2004
- (Ghahramani, 2004) ⇒ Zoubin Ghahramani. (2004). “Bayesian Methods in Machine Learning." Seminar Talk, October 18, 2004, at the University of Birmingham.
- Bayesian methods can be applied to a wide range of probabilistic models commonly used in machine learning and pattern recognition. The challenge is to discover approximate inference methods that can deal with complex models and large scale data sets in reasonable time. In the past few years, Variational Bayesian (VB) approximations have emerged as an alternative to MCMC methods. I will review VB methods and demonstrate applications to clustering, dimensionality reduction, time series modelling with hidden Markov and state-space models, independent component analysis (ICA), and learning the structure of probabilistic graphical models. Time permitting, I will discuss current and future directions in the machine learning community, including non-parametric Bayesian methods (e.g. Gaussian processes, Dirichlet processes, and extensions).
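- As an illustrative sketch of the variational Bayesian idea mentioned in the talk (not from the talk itself), the code below applies the standard mean-field approximation to the mean and precision of a one-dimensional Gaussian, alternating coordinate-ascent updates between q(mu) and q(tau); the model, priors, and variable names are illustrative.
```python
import numpy as np

def vb_gaussian(x, mu0=0.0, lam0=1.0, a0=1.0, b0=1.0, n_iter=50):
    """Mean-field VB for the mean and precision of a 1-D Gaussian.

    Model: x_i ~ N(mu, 1/tau), mu ~ N(mu0, 1/(lam0*tau)), tau ~ Gamma(a0, b0).
    The posterior is approximated by q(mu) q(tau) and refined by coordinate ascent.
    """
    N, xbar = len(x), np.mean(x)
    a_N = a0 + (N + 1) / 2.0               # shape of q(tau) is fixed; only b_N is updated
    mu_N = (lam0 * mu0 + N * xbar) / (lam0 + N)
    E_tau = a0 / b0                        # initial guess for E[tau]
    for _ in range(n_iter):
        lam_N = (lam0 + N) * E_tau         # update q(mu) = N(mu_N, 1/lam_N)
        E_mu2 = mu_N**2 + 1.0 / lam_N      # E[mu^2] under q(mu)
        # update q(tau) = Gamma(a_N, b_N) using expectations under q(mu)
        b_N = b0 + 0.5 * (np.sum(x**2) - 2 * mu_N * np.sum(x) + N * E_mu2
                          + lam0 * (E_mu2 - 2 * mu0 * mu_N + mu0**2))
        E_tau = a_N / b_N
    return mu_N, lam_N, a_N, b_N

data = np.random.default_rng(0).normal(2.0, 0.5, size=200)
mu_N, lam_N, a_N, b_N = vb_gaussian(data)
print("posterior mean of mu:", mu_N, " E[tau]:", a_N / b_N)
```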
2003
- (Zhu, Ghahramani & Lafferty, 2003) ⇒ Xiaojin Zhu, Zoubin Ghahramani, and John D. Lafferty. (2003). “Semi-Supervised Learning Using Gaussian Fields and Harmonic Functions.” In: Proceedings of the 20th International Conference on Machine Learning (ICML 2003).
- CITED BY: http://scholar.google.com/scholar?q=%22Semi-supervised+learning+using+gaussian+fields+and+harmonic+functions%22+2003
- NOTES: A graph-based semi-supervised learning algorithm that creates a graph over labeled and unlabeled examples. More similar examples are connected by edges with higher weights. The intuition is for the labels to propagate on the graph to unlabeled data. The solution can be found with simple matrix operations, and has strong connections to spectral graph theory.
- ABSTRACT: An approach to semi-supervised learning is proposed that is based on a Gaussian random field model. Labeled and unlabeled data are represented as vertices in a weighted graph, with edge weights encoding the similarity between instances. The learning problem is then formulated in terms of a Gaussian random field on this graph, where the mean of the field is characterized in terms of harmonic functions, and is efficiently obtained using matrix methods or belief propagation. The resulting learning algorithms have intimate connections with random walks, electric networks, and spectral graph theory. We discuss methods to incorporate class priors and the predictions of classifiers obtained by supervised learning. We also propose a method of parameter learning by entropy minimization, and show the algorithm’s ability to perform feature selection. Promising experimental results are presented for synthetic data, digit classification, and text classification tasks.
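- A minimal sketch of the harmonic-function solution described in the abstract, computed with simple matrix operations on a toy chain graph; the weight matrix and labels are illustrative, assuming binary labels with unlabeled nodes marked as NaN.
```python
import numpy as np

def harmonic_label_propagation(W, labels):
    """Harmonic-function solution on a weighted graph.

    W:      (n, n) symmetric non-negative weight matrix over all examples.
    labels: length-n array with 0/1 labels for labeled nodes and np.nan for unlabeled ones.
    Returns soft labels in [0, 1] for the unlabeled nodes.
    """
    labeled = ~np.isnan(labels)
    unlabeled = ~labeled
    D = np.diag(W.sum(axis=1))                 # degree matrix
    L = D - W                                  # combinatorial graph Laplacian
    # Harmonic solution: f_u = L_uu^{-1} W_ul f_l
    L_uu = L[np.ix_(unlabeled, unlabeled)]
    W_ul = W[np.ix_(unlabeled, labeled)]
    f_u = np.linalg.solve(L_uu, W_ul @ labels[labeled])
    return f_u

# Tiny chain graph 0-1-2-3: node 0 labeled 0, node 3 labeled 1, nodes 1-2 unlabeled.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
labels = np.array([0.0, np.nan, np.nan, 1.0])
print(harmonic_label_propagation(W, labels))   # roughly [1/3, 2/3]
```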
1999
- (Jordan et al., 1999) ⇒ Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. (1999). “An Introduction to Variational Methods for Graphical Models.” In: Machine Learning, 37(2). doi:10.1023/A:1007665907178
1995
- (Ghahramani & Jordan, 1995) ⇒ Zoubin Ghahramani, and Michael I. Jordan. (1995). “Factorial Hidden Markov Models.” In: Advances in Neural Information Processing Systems (NIPS 8).
1994
- (Ghahramani & Jordan, 1994) ⇒ Zoubin Ghahramani, and Michael I. Jordan. (1994). “Supervised Learning from Incomplete Data Via an EM Approach.” In: Advances in Neural Information Processing Systems (NIPS 6).