Understanding Variational Autoencoders (VAEs) | Deep Learning
DeepBean

Published on Apr 9, 2024

Here we delve into the core concepts behind the Variational Autoencoder (VAE), a widely used representation learning technique that uncovers the hidden factors of variation underlying a dataset.

Timestamps
--------------------
Introduction 00:00
Latent variables 01:53
Intractability of the marginal likelihood 05:08
Bayes' rule 06:35
Variational inference 09:01
KL divergence and ELBO 10:14
ELBO via Jensen's inequality 12:06
Maximizing the ELBO 12:57
Analyzing the ELBO gradient 14:34
Reparameterization trick 15:55
KL divergence of Gaussians 17:40
Estimating the log-likelihood 19:04
Computing the log-likelihood 19:58
The Gaussian case 20:17
The Bernoulli case 21:56
VAE architecture 23:33
Regularizing the latent space 25:37
Balance of losses 28:00
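The middle stretch of the video (the reparameterization trick through the KL divergence of Gaussians) can be sketched in a few lines of plain Python. This is a minimal illustration for a single latent dimension with a standard-normal prior, not the video's or the Keras tutorial's code; the function names are my own:

```python
import math
import random

def reparameterize(mu, log_var, eps=None):
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, 1),
    # so the randomness sits in eps and gradients can flow through
    # mu and log_var.
    if eps is None:
        eps = random.gauss(0.0, 1.0)
    sigma = math.exp(0.5 * log_var)
    return mu + sigma * eps

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL( N(mu, sigma^2) || N(0, 1) ) for one latent
    # dimension: 0.5 * (sigma^2 + mu^2 - 1 - log sigma^2).
    return 0.5 * (math.exp(log_var) + mu * mu - 1.0 - log_var)

# A posterior that already matches the prior incurs zero KL penalty.
print(kl_to_standard_normal(0.0, 0.0))  # → 0.0
```

In a full VAE this KL term is summed over latent dimensions and added to the reconstruction log-likelihood, which is where the "balance of losses" discussion at 28:00 comes in.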

Useful links
------------------------
Original VAE paper: https://arxiv.org/abs/1312.6114
More detailed explanation: https://arxiv.org/abs/1906.02691
Nice discussion of the reparameterization trick: https://gregorygundersen.com/blog/201...
Intro to variational inference and the ELBO: https://www.cs.cmu.edu/~epxing/Class/...
On the problem of learnt variance in the decoder: https://arxiv.org/abs/2006.13202
VAE tutorial in Keras: https://keras.io/examples/generative/...
MIT lecture on deep generative modelling: MIT 6.S191 (2023): Deep Generative Mo...
Deriving the KL divergence for Gaussians: https://leenashekhar.github.io/2019-0...
Article with a nice discussion of regularized latent spaces: https://towardsdatascience.com/unders...
