pith · machine review for the scientific record

arXiv: 1401.0118 · v1 · submitted 2013-12-31 · stat.ML · cs.LG · stat.CO · stat.ME

Recognition: unknown

Black Box Variational Inference

Authors on Pith: no claims yet
classification: stat.ML · cs.LG · stat.CO · stat.ME
keywords: variational · inference · models · black · method · methods · quickly · algorithm
Original abstract

Variational inference has become a widely used method to approximate posteriors in complex latent variable models. However, deriving a variational inference algorithm generally requires significant model-specific analysis, and these efforts can hinder and deter us from quickly developing and exploring a variety of models for a problem at hand. In this paper, we present a "black box" variational inference algorithm, one that can be quickly applied to many models with little additional derivation. Our method is based on a stochastic optimization of the variational objective where the noisy gradient is computed from Monte Carlo samples from the variational distribution. We develop a number of methods to reduce the variance of the gradient, always maintaining the criterion that we want to avoid difficult model-based derivations. We evaluate our method against the corresponding black box sampling based methods. We find that our method reaches better predictive likelihoods much faster than sampling methods. Finally, we demonstrate that Black Box Variational Inference lets us easily explore a wide space of models by quickly constructing and evaluating several models of longitudinal healthcare data.
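The estimator the abstract describes can be sketched in a few lines: sample from the variational distribution, score the samples with the model's log joint, and form the noisy ELBO gradient from the score function of q. The sketch below is not the authors' code; it is a minimal illustration under assumed choices — a one-dimensional Gaussian q(z) = N(m, s²), a known Gaussian standing in for the model's log joint, and the score-function control variate as one example of the paper's variance-reduction devices. All function names are hypothetical.

```python
import numpy as np

def log_joint(z):
    # Unnormalized log p(x, z); a Gaussian N(2, 0.5^2) stands in
    # for a real model's log joint (hypothetical target).
    return -0.5 * ((z - 2.0) / 0.5) ** 2 - np.log(0.5)

def cv_mean(g, f):
    # Monte Carlo gradient with a score-function control variate:
    # E_q[g] = 0, so subtracting a*g keeps the estimator (nearly)
    # unbiased while shrinking its variance; a is chosen to
    # minimize the estimated variance.
    gf = g * f
    a = np.cov(gf, g)[0, 1] / (np.var(g) + 1e-12)
    return np.mean(gf - a * g)

def bbvi(log_joint, steps=2000, n_samples=200, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    m, log_s = 0.0, 0.0                            # q(z) = N(m, s^2), s = exp(log_s)
    for t in range(steps):
        s = np.exp(log_s)
        z = m + s * rng.standard_normal(n_samples)  # samples from q
        log_q = -0.5 * ((z - m) / s) ** 2 - np.log(s)
        f = log_joint(z) - log_q                    # noisy ELBO integrand
        # Score functions grad_lambda log q(z | lambda):
        g_m = (z - m) / s ** 2
        g_log_s = ((z - m) / s) ** 2 - 1.0
        step = lr / (1.0 + 0.01 * t)                # Robbins-Monro step size
        m += step * cv_mean(g_m, f)
        log_s += step * cv_mean(g_log_s, f)
    return m, np.exp(log_s)

m_hat, s_hat = bbvi(log_joint)
```

Note the "black box" property: `bbvi` touches the model only through pointwise evaluations of `log_joint`, with no model-specific gradient derivations. Because q here lies in the same family as the target, log p − log q becomes constant at the optimum and the gradient noise vanishes there, so `(m_hat, s_hat)` settles near (2.0, 0.5).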

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. The Theorems of Dr. David Blackwell and Their Contributions to Artificial Intelligence

    cs.GL · 2026-04 · unverdicted · novelty 2.0

    Blackwell's Rao-Blackwell, Approachability, and Informativeness theorems provide frameworks for variance reduction, sequential decisions under uncertainty, and comparing information sources that remain relevant to AI.