Nearly d-linear convergence bounds for diffusion models via stochastic localization. CoRR, abs/2308.03686.
4 Pith papers cite this work.
Citing papers explorer
- Proximal-Based Generative Modeling for Bayesian Inverse Problems
  PGM replaces the intractable likelihood score in diffusion models with a closed-form Moreau score computed via proximal operators, enabling non-asymptotic sampling for inverse problems trained only on prior data.
- Potential Hessian Ascent III: Sampling the Sherrington–Kirkpatrick Model at β < 1/2
  A polynomial-time algorithm samples the SK model Gibbs measure with o(1) TVD error for β < 1/2 by combining potential Hessian ascent, stochastic localization, the Jarzynski equality, and Glauber dynamics.
- Energy Generative Modeling: A Lyapunov-based Energy Matching Perspective
  Training and sampling in static scalar energy generative models are two instances of the same Lyapunov-driven density transport dynamics on Wasserstein space, differing only by initial condition, which yields a finite stopping criterion for Langevin sampling and additive composition rules that keep
- Generating DDPM-based Samples from Tilted Distributions
  A plug-in estimator for tilted distributions is minimax-optimal, with Wasserstein closeness bounds to the true tilted distribution and TV-accuracy guarantees when running diffusion on the estimated samples.
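The "closed-form Moreau score" in the PGM summary above rests on a standard proximal identity: the gradient of the Moreau envelope f_λ satisfies ∇f_λ(x) = (x − prox_{λf}(x))/λ. A minimal sketch of that identity, assuming a Gaussian likelihood f(x) = ||y − Ax||²/(2σ²) as an illustrative choice (the specific model, function names, and parameters here are assumptions, not taken from the paper):

```python
import numpy as np

def prox_quadratic(x, y, A, sigma2, lam):
    """prox_{lam*f}(x) for f(z) = ||y - Az||^2 / (2*sigma2):
    argmin_z ||y - Az||^2/(2*sigma2) + ||z - x||^2/(2*lam),
    which reduces to a linear solve for this quadratic f."""
    d = x.shape[0]
    M = (lam / sigma2) * A.T @ A + np.eye(d)
    return np.linalg.solve(M, x + (lam / sigma2) * A.T @ y)

def moreau_score(x, y, A, sigma2, lam):
    """Gradient of the Moreau envelope f_lam at x, via the proximal
    identity grad f_lam(x) = (x - prox_{lam*f}(x)) / lam."""
    return (x - prox_quadratic(x, y, A, sigma2, lam)) / lam

# Toy usage: a small random linear inverse problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))
x_true = rng.standard_normal(2)
y = A @ x_true + 0.1 * rng.standard_normal(3)
g = moreau_score(np.zeros(2), y, A, sigma2=0.01, lam=0.5)
```

In a diffusion sampler this smoothed gradient would stand in for the intractable likelihood score; the sketch only demonstrates the envelope identity, not the full PGM sampling loop.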