Variational Auto-Encoder (VAE) Network


A Variational Auto-Encoder (VAE) Network is a neural auto-encoder that introduces a probabilistic framework, encoding each input as a distribution over a latent variable space rather than as a single point.

  • Context:
    • It can (typically) use a VAE Encoder to map each input to a probability distribution over the latent space, commonly a Gaussian parameterized by a predicted mean and variance.
    • It can (typically) use the reparameterization trick, which rewrites a latent sample as a deterministic function of the distribution parameters and an auxiliary noise variable (z = μ + σ ⊙ ε with ε ~ N(0, I)), so that gradients can be backpropagated through the stochastic layer and the model can be trained with gradient-based optimization (see the sketch after this list).
    • It can (often) be trained by a VAE Training System (that implements a VAE algorithm).
    • ...
  • Example(s):
    • A Vanilla VAE where both the encoder and decoder are simple fully connected neural networks, and the latent space is modeled as a Gaussian distribution.
    • A Conditional Variational Auto-Encoder (CVAE) where the model generates data conditioned on some auxiliary variable (e.g., generating images conditioned on labels).
    • A Variational Graph Auto-Encoder (VGAE) which operates on graph-structured data using graph convolutional networks as encoders.
    • A Beta-VAE that introduces a hyperparameter β weighting the KL divergence term relative to the reconstruction loss; setting β > 1 encourages more disentangled latent representations.
    • A Vector Quantized VAE (VQ-VAE) where the latent variables are discrete rather than continuous, making it easier to model certain types of data like discrete sequences.
    • ...
  • Counter-Example(s):
    • A Generative Adversarial Network (GAN): while both GANs and VAEs are generative models, a GAN relies on adversarial training between a generator and a discriminator, whereas a VAE relies on variational probabilistic inference with an explicit reconstruction objective.
    • A Standard Autoencoder: a regular autoencoder imposes no probabilistic structure on its latent space and cannot generate new data by sampling from it.
    • A PCA (Principal Components Analysis), which is a linear dimensionality reduction method without probabilistic components or data generation capabilities.
  • See: VAE Algorithm, VAE System, Conditional Variational Auto-Encoder (CVAE), Variational Graph Auto-Encoder (VGAE).
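
A minimal PyTorch sketch of the mechanics described in the Context above (an illustrative example, not reference code from any cited paper; the MNIST-like 784-dimensional input, layer sizes, and Bernoulli reconstruction term are assumptions): the encoder maps an input to the mean and log-variance of a Gaussian latent distribution, the reparameterization trick draws a differentiable sample, and the loss is the negative ELBO, i.e., a reconstruction term plus a β-weighted KL divergence (β = 1 recovers the vanilla VAE; β > 1 gives a Beta-VAE).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Vanilla VAE with fully connected encoder/decoder (illustrative sizes)."""

    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=20):
        super().__init__()
        # Encoder: input -> hidden -> (mu, log_var) of the Gaussian latent.
        self.enc = nn.Linear(input_dim, hidden_dim)
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.log_var = nn.Linear(hidden_dim, latent_dim)
        # Decoder: latent sample -> hidden -> reconstructed input.
        self.dec1 = nn.Linear(latent_dim, hidden_dim)
        self.dec2 = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.log_var(h)

    def reparameterize(self, mu, log_var):
        # z = mu + sigma * eps: sampling becomes differentiable in mu and sigma.
        std = torch.exp(0.5 * log_var)
        eps = torch.randn_like(std)
        return mu + std * eps

    def decode(self, z):
        return torch.sigmoid(self.dec2(F.relu(self.dec1(z))))

    def forward(self, x):
        mu, log_var = self.encode(x)
        z = self.reparameterize(mu, log_var)
        return self.decode(z), mu, log_var

def vae_loss(recon_x, x, mu, log_var, beta=1.0):
    """Negative ELBO: reconstruction term + beta * KL(q(z|x) || N(0, I))."""
    recon = F.binary_cross_entropy(recon_x, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + beta * kl
```

For generation, one samples z ~ N(0, I) and calls decode(z); a CVAE would additionally concatenate the conditioning variable (e.g., a class label) to both the encoder and decoder inputs.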


References

2016

  • (Kipf & Welling, 2016) ⇒ Thomas N. Kipf, and Max Welling. (2016). “Variational Graph Auto-Encoders.” Bayesian Deep Learning (NIPS Workshops 2016)
    • ABSTRACT: We introduce the variational graph auto-encoder (VGAE), a framework for unsupervised learning on graph-structured data based on the variational auto-encoder (VAE). This model makes use of latent variables and is capable of learning interpretable latent representations for undirected graphs. We demonstrate this model using a graph convolutional network (GCN) encoder and a simple inner product decoder. Our model achieves competitive results on a link prediction task in citation networks. In contrast to most existing models for unsupervised learning on graph-structured data and link prediction, our model can naturally incorporate node features, which significantly improves predictive performance on a number of benchmark datasets.
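
A minimal sketch of the VGAE design described in this abstract, in the same illustrative PyTorch style (assumptions: a small dense adjacency matrix and illustrative layer sizes; the authors' implementation uses sparse graph operations): a shared first GCN layer feeds two GCN output heads producing per-node Gaussian parameters, a latent matrix Z is drawn via the reparameterization trick, and an inner-product decoder σ(ZZᵀ) scores edge probabilities.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def normalize_adj(adj):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    a_hat = adj + torch.eye(adj.size(0), device=adj.device)
    d_inv_sqrt = torch.pow(a_hat.sum(dim=1), -0.5)
    return d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)

class VGAE(nn.Module):
    """VGAE sketch: GCN encoder to per-node Gaussians, inner-product decoder."""

    def __init__(self, feat_dim, hidden_dim=32, latent_dim=16):
        super().__init__()
        self.w0 = nn.Linear(feat_dim, hidden_dim, bias=False)  # shared GCN layer
        self.w_mu = nn.Linear(hidden_dim, latent_dim, bias=False)
        self.w_log_var = nn.Linear(hidden_dim, latent_dim, bias=False)

    def encode(self, x, a_norm):
        # Each GCN layer computes A_norm @ (H @ W); ReLU only on the first.
        h = F.relu(a_norm @ self.w0(x))
        return a_norm @ self.w_mu(h), a_norm @ self.w_log_var(h)

    def forward(self, x, adj):
        a_norm = normalize_adj(adj)
        mu, log_var = self.encode(x, a_norm)
        # Reparameterization trick, applied independently to each node.
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        # Inner-product decoder: edge probability sigmoid(z_i . z_j).
        return torch.sigmoid(z @ z.t()), mu, log_var
```

Training would combine a binary cross-entropy between the reconstructed and observed adjacency (typically reweighted for edge sparsity) with the same per-node KL term as in the VAE sketch above.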