VQ-GAN | Paper Explanation
Outlier

Published on Feb 16, 2022

The Vector Quantized Generative Adversarial Network (VQGAN) is a generative model for image modeling, introduced in "Taming Transformers for High-Resolution Image Synthesis". The approach is built on two stages. The first stage trains in an autoencoder-like fashion: images are encoded into a low-dimensional latent space and then vector-quantized using a learned codebook. Afterwards, a decoder projects the quantized latent vectors back to the original image space. Both the encoder and decoder are fully convolutional. The second stage trains a transformer on the latent space; over the course of training it learns which codebook vectors occur together and which do not. This can then be used autoregressively to generate previously unseen images from the data distribution.

#deeplearning #gan #generative #vqgan

0:00 Introduction
0:42 Idea & Theory
9:20 Implementation Details
13:37 Outro

Further Reading:
• VAE: https://towardsdatascience.com/unders...
• VQVAE: https://arxiv.org/pdf/1711.00937.pdf
• Why CNNs are invariant to input sizes: https://www.quora.com/How-are-variabl...
• Non-local NN: https://arxiv.org/pdf/1711.07971.pdf
• PatchGAN: https://arxiv.org/pdf/1611.07004.pdf

PyTorch Code: https://github.com/dome272/VQGAN

Follow me on Instagram lol: / dome271
