# VAE Derivation: From KL Divergence to ELBO

Showing that minimizing the KL divergence is equivalent to maximizing the ELBO.

$$\text{(1)} \quad \text{KL}(q_\phi(z|x) \,\|\, p(z|x)) = \mathbb{E}_{q_\phi(z|x)}\left[\log \frac{q_\phi(z|x)}{p(z|x)}\right] \quad \text{[definition of KL]}$$

$$\text{(2)} \quad = \mathbb{E}_{q_\phi(z|x)}[\log q_\phi(z|x)] - \mathbb{E}_{q_\phi(z|x)}[\log p(z|x)] \quad \text{[split logarithm]}$$

$$\text{(3)} \quad = \mathbb{E}_{q_\phi(z|x)}[\log q_\phi(z|x)] - \mathbb{E}_{q_\phi(z|x)}\left[\log \frac{p(x,z)}{p(x)}\right] \quad \text{[definition of conditional probability]}$$

$$\text{(4)} \quad = \mathbb{E}_{q_\phi(z|x)}[\log q_\phi(z|x)] - \mathbb{E}_{q_\phi(z|x)}[\log p(x,z)] + \mathbb{E}_{q_\phi(z|x)}[\log p(x)] \quad \text{[split logarithm]}$$

$$\text{(5)} \quad = \mathbb{E}_{q_\phi(z|x)}[\log q_\phi(z|x)] - \mathbb{E}_{q_\phi(z|x)}[\log p(x,z)] + \log p(x) \quad \text{[}p(x) \text{ constant w.r.t. } z\text{]}$$

$$\text{(6)} \quad = -\left(\mathbb{E}_{q_\phi(z|x)}[\log p(x,z)] - \mathbb{E}_{q_\phi(z|x)}[\log q_\phi(z|x)]\right) + \log p(x) \quad \text{[rearrange]}$$

$$\text{(7)} \quad = -\text{ELBO} + \log p(x) \quad \text{[definition of ELBO]}$$

$$\text{(8)} \quad \therefore \quad \min_\phi \text{KL}(q_\phi(z|x) \,\|\, p(z|x)) \equiv \max_\phi \text{ELBO} \quad \text{[}\log p(x) \text{ constant w.r.t. } \phi\text{]}$$
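
As a side note (a standard identity, not part of the derivation above): factoring $p(x,z) = p(x|z)\,p(z)$ and splitting the expectation rewrites the ELBO as a reconstruction term minus a KL term against the prior, which is the form typically optimized when training a VAE:

$$\text{ELBO} = \mathbb{E}_{q_\phi(z|x)}[\log p(x|z)] - \text{KL}(q_\phi(z|x) \,\|\, p(z))$$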

## Conclusion

Since $\log p(x)$ does not depend on the encoder parameters $\phi$, minimizing the KL divergence between the approximate posterior $q_\phi(z|x)$ and the true posterior $p(z|x)$ is equivalent to maximizing the Evidence Lower Bound (ELBO). Moreover, because the KL divergence is non-negative, rearranging (7) gives $\text{ELBO} \le \log p(x)$, which is why the ELBO is a lower bound on the log evidence.
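
The identity in (7) can also be checked numerically. Below is a minimal sketch (a toy example of my own, not from the gist) using a conjugate Gaussian model in which $p(z|x)$ and $p(x)$ are available in closed form, so a Monte Carlo estimate of the ELBO can be compared against $\log p(x) - \text{KL}$:

```python
# Numerical sanity check of  KL(q || p(z|x)) = log p(x) - ELBO
# for a toy conjugate Gaussian model (all choices below are illustrative):
#   prior z ~ N(0, 1), likelihood x|z ~ N(z, sigma_x^2),
#   arbitrary Gaussian "encoder" q(z|x) = N(mu_q, s_q^2).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

x, sigma_x = 1.3, 0.5          # observed data point and likelihood std
mu_q, s_q = 0.7, 0.6           # arbitrary approximate posterior q(z|x)

# Exact posterior p(z|x) and evidence p(x) (conjugate Gaussian algebra)
post_var = 1.0 / (1.0 + 1.0 / sigma_x**2)
post_mu = post_var * (x / sigma_x**2)
log_px = norm.logpdf(x, loc=0.0, scale=np.sqrt(1.0 + sigma_x**2))

# Monte Carlo ELBO: E_q[log p(x,z) - log q(z|x)]
z = rng.normal(mu_q, s_q, size=200_000)
log_joint = norm.logpdf(x, loc=z, scale=sigma_x) + norm.logpdf(z, 0.0, 1.0)
log_q = norm.logpdf(z, mu_q, s_q)
elbo = np.mean(log_joint - log_q)

# Closed-form KL between two univariate Gaussians: KL(q || p(z|x))
kl = (np.log(np.sqrt(post_var) / s_q)
      + (s_q**2 + (mu_q - post_mu)**2) / (2.0 * post_var) - 0.5)

print(f"KL(q || p(z|x))  = {kl:.4f}")
print(f"log p(x) - ELBO  = {log_px - elbo:.4f}   (should match up to MC noise)")
```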
