Link to the paper
Contribution
- This paper introduces Generative Adversarial Networks (GANs), a new generative framework that sidesteps the intractable probabilistic computations faced by earlier models such as Deep Belief Networks and Variational Autoencoders.
Background
- Multilayer Perceptron Networks: A multilayer perceptron is a class of feedforward artificial neural network composed of fully connected layers with nonlinear activations.
- Backpropagation: Backpropagation is a method used in artificial neural networks to compute the gradient of the loss with respect to the network's weights, which is then used to update those weights.
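The two background ideas above can be sketched together: a tiny one-hidden-layer perceptron, with backpropagation applying the chain rule to get the gradient of the loss with respect to each weight matrix. This is a minimal illustrative sketch (the sizes, sigmoid activation, and squared loss are assumptions, not details from the paper):

```python
import numpy as np

# One-hidden-layer MLP: sigmoid hidden activation, linear output,
# squared loss. Shapes: x (3,), W1 (4, 3), W2 (1, 4), target y (1,).
rng = np.random.default_rng(0)
x = rng.standard_normal(3)
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((1, 4))
y = np.array([1.0])

sig = lambda a: 1.0 / (1.0 + np.exp(-a))

# Forward pass
h = sig(W1 @ x)                     # hidden activations
y_hat = W2 @ h                      # network output
loss = 0.5 * np.sum((y_hat - y) ** 2)

# Backward pass: propagate the error from the output back to each weight
d_yhat = y_hat - y                  # dL/dy_hat
dW2 = np.outer(d_yhat, h)           # dL/dW2
d_h = W2.T @ d_yhat                 # dL/dh
dW1 = np.outer(d_h * h * (1 - h), x)  # dL/dW1 (sigmoid derivative h*(1-h))
```

A gradient-descent step would then update each matrix as `W -= lr * dW`.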
Description
- This paper proposes the idea of adversarial training, in which a generative network competes with a discriminative network, driving both networks to become better at their respective tasks.
- The generative network tries to learn the data distribution, and the discriminative network tries to distinguish real data samples from generated ones.
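The adversarial game described above is formalized in the paper as a two-player minimax game over a value function V(D, G):

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

Here p_data is the data distribution and p_z is the noise prior the generator samples from; D maximizes V while G minimizes it.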
Methodology
- Two multilayer perceptron networks are used as Generator and Discriminator.
- The generator takes a noise vector z sampled from a prior and outputs a sample x = G(z).
- The discriminator takes a sample x and outputs a scalar probability D(x).
- D(x) is the probability that x came from the original data rather than from the generator.
- The generator tries to maximize the probability of the discriminator making a mistake.
- The discriminator tries to maximize the probability of assigning the correct label to each sample.
- Gradients are obtained using backpropagation alone; no Markov chains or approximate inference networks are needed.
- After enough training iterations, the distribution of generated samples converges to the distribution of the original data.
- At this point, the discriminator can no longer distinguish the two, outputting D(x) = 1/2 everywhere.
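The equilibrium in the last two bullets can be checked numerically: for a fixed generator, the paper derives the optimal discriminator as D*(x) = p_data(x) / (p_data(x) + p_g(x)), which equals 1/2 everywhere exactly when the generator's distribution matches the data distribution. A minimal sketch with toy 1-D densities on a grid (the Gaussian shapes are illustrative assumptions, not the paper's experiments):

```python
import numpy as np

# Toy 1-D densities on a grid. The optimal discriminator for a fixed
# generator is D*(x) = p_data(x) / (p_data(x) + p_g(x)).
xs = np.linspace(-3.0, 3.0, 101)
p_data = np.exp(-0.5 * xs ** 2)            # "data" density (Gaussian shape)
p_data /= p_data.sum()

p_g_early = np.exp(-0.5 * (xs - 1.5) ** 2) # generator early in training: shifted
p_g_early /= p_g_early.sum()
p_g_final = p_data.copy()                  # generator at convergence: p_g = p_data

d_early = p_data / (p_data + p_g_early)    # varies with x: D can still discriminate
d_final = p_data / (p_data + p_g_final)    # exactly 1/2 everywhere

print(d_early.min(), d_early.max())        # spread away from 0.5
print(d_final.min(), d_final.max())        # both 0.5
```

When the two densities coincide, the numerator is always half the denominator, so the discriminator's best response conveys no information.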
Experiments
- The networks were trained on the MNIST, Toronto Face Database, and CIFAR-10 datasets.
- The authors believe that the results obtained are at least competitive with other models in the literature.
Areas of Application
- Interactive Image Generation
- Text to Image Generation
- Image Editing
- Domain Transfer
Related Papers