2017 WassersteinAutoEncoders
- (Tolstikhin et al., 2017) ⇒ Ilya Tolstikhin, Olivier Bousquet, Sylvain Gelly, and Bernhard Schölkopf. (2017). “Wasserstein Auto-Encoders.” In: Proceedings of the 6th International Conference on Learning Representations (ICLR-2018).
Subject Headings: Wasserstein Auto-Encoder, Variational Auto-Encoder, Wasserstein GAN.
Notes
Cited By
Quotes
Abstract
We propose the Wasserstein Auto-Encoder (WAE), a new algorithm for building a generative model of the data distribution. WAE minimizes a penalized form of the Wasserstein distance between the model distribution and the target distribution, which leads to a different regularizer than the one used by the Variational Auto-Encoder (VAE). This regularizer encourages the encoded training distribution to match the prior. We compare our algorithm with several other techniques and show that it is a generalization of adversarial auto-encoders (AAE). Our experiments show that WAE shares many of the properties of VAEs (stable training, encoder-decoder architecture, nice latent manifold structure) while generating samples of better quality, as measured by the FID score.
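The penalized objective described above (reconstruction cost plus a regularizer pushing the aggregated encoded distribution toward the prior) can be sketched numerically. The sketch below is a minimal, hedged illustration of the WAE-MMD variant of the paper, using the inverse-multiquadratic kernel the authors discuss; the helper names (`mmd_penalty`, `wae_mmd_objective`), the toy linear `encode`/`decode` callables, and the kernel-bandwidth heuristic are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def imq_kernel(a, b, scale=1.0):
    """Inverse multiquadratic kernel k(x, y) = C / (C + ||x - y||^2).
    The bandwidth heuristic C = 2 * dim * scale is an assumption here."""
    C = 2.0 * a.shape[1] * scale
    sq = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return C / (C + sq)

def mmd_penalty(z_enc, z_prior):
    """Unbiased estimate of MMD^2 between encoded codes Q_Z and prior samples P_Z."""
    n = z_enc.shape[0]
    k_zz = imq_kernel(z_enc, z_enc)
    k_pp = imq_kernel(z_prior, z_prior)
    k_zp = imq_kernel(z_enc, z_prior)
    off = 1.0 / (n * (n - 1))
    return (off * (k_zz.sum() - np.trace(k_zz))
            + off * (k_pp.sum() - np.trace(k_pp))
            - 2.0 * k_zp.mean())

def wae_mmd_objective(x, encode, decode, lam=10.0, rng=None):
    """Penalized WAE objective: reconstruction cost + lambda * MMD^2(Q_Z, P_Z),
    with P_Z taken to be a standard Gaussian prior (an assumption)."""
    rng = rng or np.random.default_rng(0)
    z = encode(x)                                  # deterministic encoder
    recon = np.mean(np.sum((x - decode(z)) ** 2, axis=1))  # squared-error cost
    z_prior = rng.standard_normal(z.shape)         # samples from the prior
    return recon + lam * mmd_penalty(z, z_prior)
```

As a usage check, codes drawn from the prior itself yield a near-zero penalty, while codes shifted away from the prior incur a larger one, which is exactly the behavior the regularizer relies on to match the encoded training distribution to the prior.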
References
Author | title | year
---|---|---
Bernhard Schölkopf, Olivier Bousquet, Sylvain Gelly, Ilya Tolstikhin | Wasserstein Auto-Encoders | 2017