Metropolis-Adjusted Diffusion Models

Metropolis-adjusted Langevin correctors using score-based acceptance probabilities, including an exact Bernoulli factory method and a Simpson's rule approximation, reduce sampling bias in diffusion models and improve FID scores.

Topic: Denoising diffusion probabilistic models

5 Pith papers cite this work; representative citing papers are listed below.

Citing papers explorer
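The corrector summarized above can be sketched compactly. Below is a minimal NumPy implementation of one Metropolis-adjusted Langevin step in which the intractable log-density difference in the acceptance ratio is replaced by a Simpson's-rule line integral of the score; all names here are illustrative assumptions, not the paper's API, and this is a sketch of the approximate (not the Bernoulli-factory) variant.

```python
import numpy as np

def mala_corrector_step(x, score, eps, rng):
    """One Metropolis-adjusted Langevin corrector step.

    Only the score s(x) = grad log p(x) is available, so the log-density
    difference in the acceptance ratio is approximated with Simpson's rule
    applied to the line integral of the score from x to the proposal.
    (The estimate is exact when the score is linear, e.g. Gaussian targets.)
    """
    z = rng.standard_normal(x.shape)
    prop = x + eps * score(x) + np.sqrt(2.0 * eps) * z

    d = prop - x
    # Simpson's rule estimate of log p(prop) - log p(x):
    # integral over t in [0, 1] of s(x + t*d) . d
    log_ratio = d @ (score(x) + 4.0 * score(x + 0.5 * d) + score(prop)) / 6.0

    # Asymmetric-proposal correction: log q(x | prop) - log q(prop | x)
    fwd = prop - x - eps * score(x)      # forward drift residual
    bwd = x - prop - eps * score(prop)   # reverse drift residual
    log_ratio += (fwd @ fwd - bwd @ bwd) / (4.0 * eps)

    if np.log(rng.uniform()) < log_ratio:
        return prop, True   # accept proposal
    return x, False         # reject, keep current state
```

For a standard Gaussian target (`score = lambda x: -x`) the Simpson estimate is exact, so the chain samples N(0, 1) without bias; in a diffusion model, `score` would be the learned score network at the current noise level.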
- Seconds-Aligned PCA-DAC Latent Diffusion for Symbolic-to-Audio Drum Rendering
  Sec2Drum-DAC renders drum audio from symbolic inputs via diffusion on PCA-reduced DAC latents, improving spectral and transient metrics over regression baselines on 1733 held-out windows.
- Improving Diffusion Posterior Samplers with Lagged Temporal Corrections for Image Restoration
  LAMP adds a lagged temporal correction derived from second-order discretization to diffusion posterior samplers, yielding consistent gains over DiffPIR and DDRM on imaging tasks via a bias-variance trade-off.
- Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think
  Aligning noisy hidden states in diffusion transformers to clean features from pretrained visual encoders speeds up training by more than 17x and reaches FID 1.42.
- Confidence-Guided Diffusion Augmentation for Enhanced Bangla Compound Character Recognition
  A confidence-guided diffusion framework generates synthetic Bangla compound characters that, when filtered and added to training data, raise classifier accuracy to 89.2% on the AIBangla dataset.
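The representation-alignment entry above hinges on a simple objective; as a hedged sketch (the learned projection head that maps hidden states into the encoder's feature space is omitted, and names are illustrative), the loss is the mean negative cosine similarity between the transformer's hidden states and frozen pretrained-encoder features:

```python
import numpy as np

def alignment_loss(hidden, target):
    """Mean negative cosine similarity, per token.

    hidden: diffusion-transformer hidden states, shape (tokens, dim)
    target: frozen pretrained-encoder features, shape (tokens, dim)
    Returns a scalar in [-1, 1]; -1 means perfectly aligned.
    """
    h = hidden / np.linalg.norm(hidden, axis=-1, keepdims=True)
    t = target / np.linalg.norm(target, axis=-1, keepdims=True)
    return -float(np.mean(np.sum(h * t, axis=-1)))
```

In training this term would be added to the usual denoising loss; minimizing it pulls each noisy hidden state toward the clean visual feature of the same patch.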
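The confidence-guided filtering step in the last entry can likewise be sketched: keep a synthetic sample only if a trained classifier assigns its intended label with high confidence. The function name and the 0.9 threshold below are assumptions for illustration, not values from the paper.

```python
def filter_by_confidence(samples, intended_labels, probs, threshold=0.9):
    """Filter synthetic samples by classifier confidence.

    samples:         generated inputs (any objects)
    intended_labels: the class each sample was generated for
    probs:           classifier class-probability vectors, one per sample
    threshold:       minimum probability on the intended class (assumed 0.9)
    Returns the (sample, label) pairs that pass, ready to be added
    to the training set.
    """
    kept = []
    for x, y, p in zip(samples, intended_labels, probs):
        if p[y] >= threshold:  # classifier agrees confidently
            kept.append((x, y))
    return kept
```

Samples the classifier is unsure about are discarded rather than risking label noise in the augmented training set.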