Diederik P. Kingma
Diederik P. Kingma is a machine learning researcher, best known as a co-developer of the Variational Autoencoder (VAE) and of the Adam optimization algorithm.
- See: Variational Autoencoding Algorithm, MaskGAN Benchmark Task, MaskGAN System, Sparse Autoencoder Network.
References
2020
- (Song et al., 2020) ⇒ Y. Song, J. Sohl-Dickstein, Diederik P. Kingma, A. Kumar, S. Ermon, and B. Poole. (2020). "Score-based Generative Modeling through Stochastic Differential Equations." In: arXiv preprint arXiv:2011.13456.
- NOTE: Proposes a new generative modeling framework using stochastic differential equations.
- NOTE: Demonstrates how this approach improves the generation process in complex, high-dimensional spaces.
- NOTE: Unifies score-based generative modeling and denoising diffusion models under a single SDE framework (a minimal reverse-SDE sampler is sketched below).
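The core idea is that sampling amounts to integrating a reverse-time SDE whose drift uses the score of the noised data distribution. The following is a minimal sketch, not the paper's method: it uses a toy Gaussian data distribution so the score is available in closed form, a constant-rate forward SDE, and a plain Euler-Maruyama discretization; all names and parameter values are illustrative, and in practice the score would be a learned network.

```python
# Toy sketch of reverse-time SDE sampling with a closed-form score.
import numpy as np

rng = np.random.default_rng(0)

beta = 1.0          # constant rate of the forward SDE dx = -0.5*beta*x dt + sqrt(beta) dw
s0 = 2.0            # std of the toy "data" distribution N(0, s0^2)
T, n_steps = 5.0, 1000
dt = T / n_steps

def marginal_var(t):
    # Variance of x_t under the forward SDE when x_0 ~ N(0, s0^2).
    return s0**2 * np.exp(-beta * t) + 1.0 - np.exp(-beta * t)

def score(x, t):
    # grad_x log p_t(x) for the Gaussian marginal above; a learned model in general.
    return -x / marginal_var(t)

# Start from the (approximate) prior at t = T and integrate the reverse SDE to t = 0:
#   dx = [f(x,t) - g(t)^2 * score(x,t)] dt + g(t) dw_bar,  with f = -0.5*beta*x, g = sqrt(beta)
x = rng.normal(0.0, np.sqrt(marginal_var(T)), size=10000)
for i in range(n_steps):
    t = T - i * dt
    drift = -0.5 * beta * x - beta * score(x, t)
    x = x - drift * dt + np.sqrt(beta * dt) * rng.normal(size=x.shape)

print("sample std:", x.std())   # should be close to s0 = 2.0
```

Because the toy data distribution is Gaussian, the final empirical standard deviation is an easy check that the reverse integration recovers the data distribution.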
2019
- (Kingma & Welling, 2019) ⇒ Diederik P. Kingma and M. Welling. (2019). "An Introduction to Variational Autoencoders." In: Foundations and Trends® in Machine Learning, 12(4), 307-392.
- NOTE: A comprehensive review of Variational Autoencoders (VAEs), covering their theoretical foundations (the central bound is restated below).
- NOTE: Highlights practical applications of VAEs across a range of machine learning tasks.
- NOTE: Discusses the limitations of VAE-based models and directions for future improvement.
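The review is organized around the evidence lower bound (ELBO). As a reminder, in standard VAE notation, with encoder q_φ(z|x), decoder p_θ(x|z), and prior p(z), the bound is:

```latex
\log p_\theta(x)\;\ge\;\mathcal{L}_{\theta,\phi}(x)
=\mathbb{E}_{q_\phi(z\mid x)}\!\bigl[\log p_\theta(x\mid z)\bigr]
-D_{\mathrm{KL}}\!\bigl(q_\phi(z\mid x)\,\big\|\,p(z)\bigr)
```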
2018
- (Kingma & Dhariwal, 2018) ⇒ Diederik P. Kingma and P. Dhariwal. (2018). "Glow: Generative Flow with Invertible 1x1 Convolutions." In: Advances in Neural Information Processing Systems, 10215-10224.
- NOTE: Glow is a normalizing-flow generative model designed for efficient training, exact likelihood computation, and fast sampling.
- NOTE: Utilizes invertible 1x1 convolutions as a learned generalization of channel permutations, keeping both the transform and its inverse cheap to compute (a minimal sketch follows below).
- NOTE: Glow's architecture simplifies the inversion process and improves the generation of realistic samples.
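An invertible 1x1 convolution mixes channels at every spatial position with a single square matrix W, so its inverse is a multiplication by W⁻¹ and its log-determinant contribution is H·W·log|det W|. The sketch below is illustrative (names, shapes, and the orthogonal initialization are assumptions, not the paper's reference code):

```python
# Minimal sketch of an invertible 1x1 convolution, the Glow building block.
import numpy as np

rng = np.random.default_rng(0)

def invertible_1x1_forward(x, W):
    """x: (batch, height, width, channels); W: (channels, channels), invertible."""
    b, h, w, c = x.shape
    y = x @ W.T                                   # mix channels at every spatial location
    logdet = h * w * np.linalg.slogdet(W)[1]      # per-sample log|det| contribution
    return y, logdet

def invertible_1x1_inverse(y, W):
    return y @ np.linalg.inv(W).T

# Usage: a random orthogonal matrix is a convenient invertible initialization.
W = np.linalg.qr(rng.normal(size=(8, 8)))[0]
x = rng.normal(size=(2, 4, 4, 8))
y, logdet = invertible_1x1_forward(x, W)
x_rec = invertible_1x1_inverse(y, W)
print(np.allclose(x, x_rec), logdet)              # True, ~0.0 for orthogonal W (|det| = 1)
```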
2016
- (Salimans & Kingma, 2016) ⇒ T. Salimans and Diederik P. Kingma. (2016). "Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks." In: Advances in Neural Information Processing Systems, 901-909.
- NOTE: Introduces weight normalization as a technique to accelerate the convergence of deep learning models.
- NOTE: Weight normalization simplifies optimization by decoupling the length of weight vectors from their direction.
- NOTE: It significantly speeds up training when used with stochastic gradient descent and related first-order optimizers (a minimal sketch follows below).
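The reparameterization itself is a one-liner: each weight vector w is expressed as w = g · v / ||v||, so the scalar g carries the length and v the direction, and optimization is performed over (g, v). A minimal sketch under these assumptions (deep learning frameworks ship their own implementations, e.g. torch.nn.utils.weight_norm):

```python
# Minimal sketch of weight normalization for the rows of a linear layer's weight matrix.
import numpy as np

def weight_norm(v, g):
    """v: (out_features, in_features) direction parameters; g: (out_features,) scales."""
    norms = np.linalg.norm(v, axis=1, keepdims=True)
    return g[:, None] * v / norms                 # each row has length g[i]

# Usage: a linear layer y = W x with the normalized parameterization.
rng = np.random.default_rng(0)
v = rng.normal(size=(4, 3))
g = np.ones(4)                                    # often initialized so rows of W have unit norm
x = rng.normal(size=3)
y = weight_norm(v, g) @ x
print(np.linalg.norm(weight_norm(v, g), axis=1))  # row norms equal g = [1, 1, 1, 1]
```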
- (Kingma et al., 2016) ⇒ Diederik P. Kingma, T. Salimans, R. Jozefowicz, X. Chen, I. Sutskever, and M. Welling. (2016). "Improved Variational Inference with Inverse Autoregressive Flow." In: Advances in Neural Information Processing Systems, 4743-4751.
- NOTE: Proposes inverse autoregressive flows to improve the flexibility of variational inference.
- NOTE: This method enhances the capacity of variational distributions, improving inference accuracy.
- NOTE: Extends the applicability of VAEs by allowing more expressive posterior approximations (one IAF step is sketched below).
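In one IAF step, the shift and scale for dimension i are computed from z with index less than i only, so the Jacobian is triangular and the log-determinant is just the sum of log scales. The sketch below stands in for the paper's autoregressive network with a strictly lower-triangular masked linear map; all parameter names are illustrative assumptions:

```python
# Minimal sketch of one inverse autoregressive flow (IAF) step.
import numpy as np

rng = np.random.default_rng(0)
d = 5

# Strictly lower-triangular weights: outputs for dim i depend only on z_{<i}.
mask = np.tril(np.ones((d, d)), k=-1)
W_m, W_s = rng.normal(size=(d, d)) * mask, rng.normal(size=(d, d)) * mask
b_m, b_s = rng.normal(size=d), rng.normal(size=d)

def iaf_step(z):
    m = W_m @ z + b_m                    # shift, autoregressive in z
    s = W_s @ z + b_s                    # pre-activation scale
    sigma = 1.0 / (1.0 + np.exp(-s))     # keep scales in (0, 1) via a sigmoid
    z_new = sigma * z + (1.0 - sigma) * m
    log_det = np.sum(np.log(sigma))      # triangular Jacobian with diagonal sigma
    return z_new, log_det

z0 = rng.normal(size=d)                  # sample from the base posterior
z1, log_det = iaf_step(z0)
# Change of variables: log q(z1) = log q(z0) - log_det, so the richer posterior stays tractable.
print(z1, log_det)
```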
2015
- (Kingma & Ba, 2015) ⇒ Diederik P. Kingma and J. Ba. (2015). "Adam: A Method for Stochastic Optimization." In: Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015).
- NOTE: Introduces the Adam optimizer, combining the advantages of AdaGrad and RMSProp.
- NOTE: Adam is computationally efficient, with low memory requirements, making it suitable for large datasets.
- NOTE: Relies only on first-order gradient information, keeping per-update cost low and making it broadly applicable across machine learning tasks (the update rule is sketched below).
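The update rule maintains exponential moving averages of the gradient and its elementwise square, applies bias correction, and scales the step by the ratio of the two. A minimal sketch with the default hyperparameters recommended in the paper (the toy objective is an illustrative assumption):

```python
# Minimal sketch of the Adam update rule.
import numpy as np

def adam_step(theta, grad, m, v, t, alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad            # biased first-moment estimate
    v = beta2 * v + (1 - beta2) * grad**2         # biased second-moment estimate
    m_hat = m / (1 - beta1**t)                    # bias corrections for the zero init
    v_hat = v / (1 - beta2**t)
    theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Usage: minimize f(theta) = ||theta||^2 from a random start.
rng = np.random.default_rng(0)
theta = rng.normal(size=3)
m, v = np.zeros(3), np.zeros(3)
for t in range(1, 2001):
    grad = 2 * theta                              # gradient of the toy objective
    theta, m, v = adam_step(theta, grad, m, v, t)
print(theta)                                      # approaches the minimizer at 0
```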
2014
- (Kingma et al., 2014) ⇒ Diederik P. Kingma, S. Mohamed, D.J. Rezende, and M. Welling. (2014). "Semi-Supervised Learning with Deep Generative Models." In: Advances in Neural Information Processing Systems, 3581-3589.
- NOTE: Introduces a method for combining generative modeling with semi-supervised learning.
- NOTE: Allows model training with limited labeled data by leveraging a deep generative model.
- NOTE: Demonstrates improvement in classification tasks by incorporating generative modeling principles (the combined objective is sketched below).
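Schematically, and up to notational differences from the paper, the training objective combines a labeled-data bound L(x, y), an unlabeled-data bound U(x) in which the inferred label is marginalized out, and an explicit classification term for the inference network q_φ(y|x) weighted by a hyperparameter α:

```latex
\mathcal{J}^{\alpha}
=\sum_{(x,y)\,\text{labeled}}\mathcal{L}(x,y)
+\sum_{x\,\text{unlabeled}}\mathcal{U}(x)
+\alpha\sum_{(x,y)\,\text{labeled}}\bigl[-\log q_\phi(y\mid x)\bigr],
\qquad
\mathcal{U}(x)=\sum_{y}q_\phi(y\mid x)\,\mathcal{L}(x,y)-\mathcal{H}\bigl(q_\phi(y\mid x)\bigr)
```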
2013
- (Kingma & Welling, 2013) ⇒ Diederik P. Kingma and M. Welling. (2013). "Auto-Encoding Variational Bayes." In: arXiv preprint arXiv:1312.6114.
- NOTE: Introduces Variational Autoencoders (VAEs), a generative model based on variational inference.
- NOTE: Uses the "reparameterization trick" to allow backpropagation through stochastic sampling layers (a minimal sketch follows below).
- NOTE: VAEs have become foundational in unsupervised learning and generative modeling.
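The trick writes a sample z ~ N(μ, σ²) as z = μ + σ · ε with ε ~ N(0, I), so the sample is a deterministic, differentiable function of (μ, σ) and gradients can flow through it in an autodiff framework. The NumPy sketch below only illustrates the sampling path and the analytic KL term against a standard normal prior; the decoder and reconstruction term are omitted:

```python
# Minimal sketch of the reparameterization trick and the Gaussian KL term of a VAE.
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, logvar):
    eps = rng.normal(size=mu.shape)          # noise independent of the parameters
    return mu + np.exp(0.5 * logvar) * eps   # differentiable in mu and logvar

def kl_to_standard_normal(mu, logvar):
    # D_KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

mu, logvar = np.array([0.5, -0.2]), np.array([0.0, -1.0])
z = reparameterize(mu, logvar)
print(z, kl_to_standard_normal(mu, logvar))
```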