LightSBB-M: Bridging Schrödinger and Bass for Generative Diffusion Modeling
4 Pith papers cite this work.
Abstract
The Schrödinger Bridge and Bass (SBB) formulation, which jointly controls drift and volatility, is an established extension of the classical Schrödinger Bridge (SB). Building on this framework, we introduce LightSBB-M, an algorithm that computes the optimal SBB transport plan in only a few iterations. The method exploits a dual representation of the SBB objective to obtain analytic expressions for the optimal drift and volatility, and it incorporates a tunable parameter β > 0 that interpolates between pure drift (the Schrödinger Bridge) and pure volatility (Bass martingale transport). We show that LightSBB-M achieves the lowest 2-Wasserstein distance on synthetic datasets among state-of-the-art SB and diffusion baselines, with improvements of up to 32 percent. We also illustrate the generative capability of the framework on an unpaired image-to-image translation task (adult-to-child faces in FFHQ). Together, these results show that LightSBB-M is a scalable, high-fidelity SBB solver across both synthetic and real-world generative tasks. The code is available at https://github.com/alexouadi/LightSBB-M.
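The drift/volatility trade-off that β controls can be illustrated with a deliberately simple Gaussian toy, mapping N(0, 1) to N(0, s²) with s ≥ 1 (so the target dominates the source in convex order, as Bass martingale transport requires). This is a hedged sketch of the interpolation idea only; the function name, the one-step closed form, and the variance-splitting rule are all ours, not the LightSBB-M solver, which computes the optimal drift and volatility from the dual of the SBB objective.

```python
import numpy as np

def interpolated_transport(x0, beta, s=2.0, rng=None):
    """Toy one-step transport of N(0,1) samples to N(0, s^2), s >= 1.

    beta = 0: pure drift, the deterministic rescaling X1 = s * X0
              (an SB-like, Monge-map limit).
    beta = 1: pure volatility, X1 = X0 + sigma * Z, a martingale step
              in the spirit of Bass transport (the mean is unchanged,
              all added variance comes from noise).
    Intermediate beta splits the required variance gain s^2 - 1
    between the two channels so that Var(X1) = s^2 for every beta.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    var_gain = s**2 - 1.0
    sigma = np.sqrt(beta * var_gain)             # variance from noise
    c = np.sqrt(1.0 + (1.0 - beta) * var_gain)   # variance from drift
    z = rng.standard_normal(x0.shape)
    return c * x0 + sigma * z                    # Var = c^2 + sigma^2 = s^2
```

Because E[X1 | X0] = c·X0, the step is a martingale exactly when c = 1, i.e. β = 1; for β < 1 the drift channel does part of the work and the conditional mean moves, mirroring the paper's interpolation between SB and Bass endpoints.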
Citing papers (2026)
- One-Step Generative Modeling via Wasserstein Gradient Flows. W-Flow achieves state-of-the-art one-step ImageNet 256x256 generation at 1.29 FID by training a static neural network to follow a Wasserstein gradient flow that minimizes Sinkhorn divergence, delivering roughly 100x faster sampling than comparable multi-step models.
- SBBTS: A Unified Schrödinger-Bass Framework for Synthetic Financial Time Series. SBBTS creates a diffusion process that jointly models drift and stochastic volatility in financial time series via a tractable decomposition into conditional transport problems, recovering parameters missed by prior Schrödinger bridge methods and improving downstream ML performance on S&P 500 data.
- QDSB: Quantized Diffusion Schrödinger Bridges. QDSB computes Schrödinger bridge couplings on anchor-quantized endpoint distributions and lifts the plan back to original points, matching baseline quality with substantially lower training time.
- Learning Generative Dynamics with Soft Law Constraints: A McKean-Vlasov FBSDE Approach. A McKean-Vlasov FBSDE generative model learns stochastic path laws that match observed terminal and time-marginal distributions via soft energy constraints rather than hard interpolation.