Improving Variational Inference with Inverse Autoregressive Flow
The framework of normalizing flows provides a general strategy for flexible variational inference of posteriors over latent variables. We propose a new type of normalizing flow, inverse autoregressive flow (IAF), that, in contrast to earlier published flows, scales well to high-dimensional latent spaces. The proposed flow consists of a chain of invertible transformations, where each transformation is based on an autoregressive neural network. In experiments, we show that IAF significantly improves upon diagonal Gaussian approximate posteriors. In addition, we demonstrate that a novel type of variational autoencoder, coupled with IAF, is competitive with neural autoregressive models in terms of attained log-likelihood on natural images, while allowing significantly faster synthesis.
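The abstract describes each IAF transformation as an invertible map driven by an autoregressive network: the outputs for dimension i depend only on dimensions before i, so the Jacobian is triangular and its log-determinant is cheap. A minimal NumPy sketch of one such step is below, using the sigmoid-gated parameterization z' = sigma * z + (1 - sigma) * m from the paper; the strictly lower-triangular masked linear layers stand in for a full autoregressive network (e.g. MADE) and are an illustrative simplification, not the authors' architecture.

```python
import numpy as np

def iaf_step(z, W_m, b_m, W_s, b_s):
    """One inverse autoregressive flow step on a vector z.

    The shift m and gate pre-activation s are computed by masked
    (strictly lower-triangular) linear maps, so output i depends
    only on z[:i] -- the autoregressive property that makes the
    Jacobian triangular.
    """
    D = z.shape[0]
    mask = np.tril(np.ones((D, D)), k=-1)   # strictly lower triangular
    m = (W_m * mask) @ z + b_m              # autoregressive shift
    s = (W_s * mask) @ z + b_s              # autoregressive gate logits
    sigma = 1.0 / (1.0 + np.exp(-s))        # sigmoid gate, as in the paper
    z_new = sigma * z + (1.0 - sigma) * m
    # Jacobian dz_new/dz is lower triangular with diagonal sigma,
    # so the log-determinant is just a sum of log-gates.
    log_det = np.sum(np.log(sigma))
    return z_new, log_det
```

Because all dimensions of `z_new` can be computed in parallel from `z`, chaining such steps on top of a diagonal-Gaussian sample is what lets IAF scale to high-dimensional latent spaces.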
This paper has not been read by Pith yet.
Forward citations
Cited by 3 Pith papers
- Density estimation using Real NVP
  Real NVP uses affine coupling layers to create invertible transformations that support exact density estimation, sampling, and latent inference without approximations.
- Dartmouth Stellar Evolution Emulator (DSEE) 1: Generative Stellar Evolution Model Database
  DSEE is a flow-based emulator that generates stellar evolution tracks and isochrones as probabilistic outputs from a single model trained on millions of simulations, enabling fast interpolation and uncertainty-aware analyses.
- Machine Learning for neutron source distributions
  Generative models including VAEs, normalizing flows, GANs, and diffusion models can learn neutron source distributions from Monte Carlo lists for fast, memory-free sampling after training.
discussion (0)