Generative Adversarial Network (GAN) Model


A Generative Adversarial Network (GAN) is a Modular Neural Network that is composed of a Generator Neural Network and a Discriminator Neural Network trained simultaneously by a GAN Training System.
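
The interplay between the two networks can be illustrated with a minimal training-loop sketch, assuming PyTorch; the toy one-dimensional data, network sizes, and learning rates below are illustrative choices, not taken from any cited reference.

    import torch
    import torch.nn as nn

    # Generator maps 8-D noise to a 1-D sample; discriminator scores samples in [0, 1].
    generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(1000):
        real = 0.5 * torch.randn(64, 1) + 2.0   # toy "real" data drawn from N(2, 0.25)
        fake = generator(torch.randn(64, 8))    # generated samples from noise

        # Discriminator step: push real samples toward label 1, fakes toward 0.
        d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
                 bce(discriminator(fake.detach()), torch.zeros(64, 1))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # Generator step: adjust the generator so its fakes are scored as real.
        g_loss = bce(discriminator(fake), torch.ones(64, 1))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()

Note that the generator's loss is defined solely through the discriminator's output: the generator is never compared against a specific real sample, which is the "indirect" training described in the references below.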



References

2020a

  • (Wikipedia, 2020) ⇒ https://en.wikipedia.org/wiki/Generative_adversarial_network Retrieved:2020-11-29.
    • A generative adversarial network (GAN) is a class of machine learning frameworks designed by Ian Goodfellow and his colleagues in 2014.[1] Two neural networks contest with each other in a game (in the form of a zero-sum game, where one agent's gain is another agent's loss).

      Given a training set, this technique learns to generate new data with the same statistics as the training set. For example, a GAN trained on photographs can generate new photographs that look at least superficially authentic to human observers, having many realistic characteristics. Though originally proposed as a form of generative model for unsupervised learning, GANs have also proven useful for semi-supervised learning,[2] fully supervised learning, and reinforcement learning. The core idea of a GAN is based on the "indirect" training through the discriminator, which itself is also being updated dynamically. This basically means that the generator is not trained to minimize the distance to a specific image, but rather to fool the discriminator. This enables the model to learn in an unsupervised manner.

  1. Goodfellow, Ian; Pouget-Abadie, Jean; Mirza, Mehdi; Xu, Bing; Warde-Farley, David; Ozair, Sherjil; Courville, Aaron; Bengio, Yoshua (2014). "Generative Adversarial Networks". Proceedings of the International Conference on Neural Information Processing Systems (NIPS 2014). pp. 2672–2680.
  2. Salimans, Tim; Goodfellow, Ian; Zaremba, Wojciech; Cheung, Vicki; Radford, Alec; Chen, Xi (2016). "Improved Techniques for Training GANs". arXiv:1606.03498.
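
In the notation of the 2014 paper cited above, this adversarial game is the minimax optimization of a single value function, where $G$ maps noise $z \sim p_z$ to samples and $D(x)$ estimates the probability that $x$ came from the data distribution $p_{\mathrm{data}}$ rather than from $G$:

$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$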

2018a

[Image: Figure 1 from the 2018 MaskGAN paper]
Figure 1: seq2seq generator architecture. Blue boxes represent known tokens and purple boxes are imputed tokens. We demonstrate a sampling operation via the dotted line. The encoder reads in a masked sequence, where masked tokens are denoted by an underscore, and then the decoder imputes the missing tokens by using the encoder hidden states. In this example, the generator should fill in the alphabetical ordering, (a, b, c, d, e).
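
As a minimal sketch of the masked-input format the caption describes (the masking scheme here is an illustrative stand-in, not the MaskGAN implementation):

    import random

    tokens = ["a", "b", "c", "d", "e"]            # the alphabetical example above
    masked = sorted(random.sample(range(5), 2))   # positions to hide, e.g. [1, 3]

    encoder_input = ["_" if i in masked else t for i, t in enumerate(tokens)]
    decoder_targets = [tokens[i] for i in masked]

    print(encoder_input)    # e.g. ['a', '_', 'c', '_', 'e'] -- read by the encoder
    print(decoder_targets)  # e.g. ['b', 'd'] -- tokens the decoder must impute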

2018b

[Image: Figure 1 from the 2018 LeakGAN paper]
Figure 1: An overview of our LeakGAN text generation framework. While the generator is responsible for generating the next word, the discriminator adversarially judges the generated sentence once it is complete. The chief novelty is that, unlike in conventional adversarial training, the discriminator reveals its internal state (feature $f_t$) during the process in order to guide the generator more informatively and frequently. (See the Methodology Section for more details.)
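
As a minimal sketch of the "leaked" signal the caption describes, assuming PyTorch (the encoder and dimensions are illustrative stand-ins, not the LeakGAN architecture):

    import torch
    import torch.nn as nn

    class LeakyDiscriminator(nn.Module):
        """A discriminator that exposes its internal feature f_t alongside its score."""
        def __init__(self, vocab_size=100, feat_dim=32):
            super().__init__()
            self.encode = nn.EmbeddingBag(vocab_size, feat_dim)  # crude sentence encoder
            self.score = nn.Linear(feat_dim, 1)

        def forward(self, token_ids):
            f_t = self.encode(token_ids)  # internal state, "leaked" to the generator
            return torch.sigmoid(self.score(f_t)), f_t

    disc = LeakyDiscriminator()
    sentence = torch.randint(0, 100, (1, 10))  # one generated sentence of 10 token ids
    prob_real, f_t = disc(sentence)
    # Conventional adversarial training would return only prob_real; here f_t is
    # also handed back so the generator's next-word policy gets richer guidance.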
