pith. machine review for the scientific record.

arxiv: 2605.06538 · v1 · submitted 2026-05-07 · 💻 cs.LG

Recognition: unknown

Diffusion-Based Posterior Sampling: A Feynman-Kac Analysis of Bias and Stability

Advait Parulekar, Matias G. Delgadino, Sanjay Shakkottai, Sebastien Motsch, William Porteous

Pith reviewed 2026-05-08 12:29 UTC · model grok-4.3

classification 💻 cs.LG
keywords diffusion models · posterior sampling · Feynman-Kac representation · bias analysis · inverse problems · Radon-Nikodym correction · stability of discretizations

The pith

A Feynman-Kac path expectation quantifies the exact bias in diffusion posterior samplers even when prior scores are exact.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper shows that diffusion-based posterior samplers remain biased despite exact prior scores because their generated paths deviate from the true posterior in a measurable way. It introduces a surrogate path from the posterior back to a standard Gaussian and derives a parabolic PDE for the density ratio between this path and the sampler path. The solution of that PDE is expressed via Feynman-Kac as an explicit expectation over trajectories, which directly identifies regions that the sampler over- or under-represents. This same representation explains the behavior of existing correctives such as STSL and early guidance stopping, and it supplies concrete formulas for the bias in common algorithms like DPS.
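
For orientation, the classical Feynman-Kac representation invoked above can be stated in generic notation (this is the standard theorem, not the paper's exact formulation): if $h$ solves the terminal-value problem

    \partial_t h(t,x) + \mathcal{L} h(t,x) + V(t,x)\, h(t,x) = 0, \qquad h(T,x) = g(x),

then

    h(t,x) = \mathbb{E}\left[ \exp\left( \int_t^T V(s, X_s)\, ds \right) g(X_T) \;\Big|\; X_t = x \right],

where $X$ is the diffusion with generator $\mathcal{L}$. In the paper's construction, $h$ plays the role of the density ratio between the surrogate and sampler paths, and $V$ is the reaction term that accumulates the bias.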

Core claim

By comparing the sampler trajectory to a tractable surrogate path that connects the true posterior to a Gaussian, the density ratio satisfies a parabolic PDE whose reaction term measures the accumulated bias. A Feynman-Kac representation then writes the Radon-Nikodym correction as an explicit path expectation. For DPS this expectation couples the data-conditional covariance with the reward curvature; for STSL it corresponds to an auxiliary drift that flattens the spatially varying part of the reaction term.

What carries the argument

The surrogate path from posterior to standard Gaussian together with the Feynman-Kac representation of the solution to the governing parabolic PDE for the density ratio.

If this is right

  • For DPS the bias correction reduces to an Ornstein-Uhlenbeck path expectation that couples conditional covariance with reward curvature (a Monte Carlo sketch of such a path expectation follows this list).
  • STSL corresponds to an auxiliary drift that removes the spatially varying component of the reaction term.
  • Early stopping of guidance prevents forward-Euler instabilities that arise in low-temperature regimes.
  • The same framework supplies a systematic way to design new variants whose reaction term is closer to zero.
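
A minimal Monte Carlo sketch of an Ornstein-Uhlenbeck path expectation, as referenced in the first bullet. The OU parameters and the placeholder reaction term V are assumptions for illustration; the paper's specific V couples the data-conditional covariance with the reward curvature, which is not reproduced here.

    import numpy as np

    def ou_path_expectation(V, x0, theta=1.0, T=1.0, n_steps=200, n_paths=5000, seed=0):
        # Monte Carlo estimate of E[exp(int_0^T V(t, X_t) dt)] over OU paths
        # dX = -theta * X dt + sqrt(2 * theta) dW, started at x0.
        rng = np.random.default_rng(seed)
        dt = T / n_steps
        x = np.full(n_paths, float(x0))
        log_weight = np.zeros(n_paths)
        for i in range(n_steps):
            log_weight += V(i * dt, x) * dt          # accumulate the reaction term
            x += -theta * x * dt + np.sqrt(2 * theta * dt) * rng.standard_normal(n_paths)
        return np.exp(log_weight).mean()

    # Example with a hypothetical quadratic reaction term:
    print(ou_path_expectation(lambda t, x: -0.5 * x**2, x0=0.0))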

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The path-expectation formula could be used to derive a low-cost importance-weighting correction that is added after sampling (see the sketch after this list).
  • Similar surrogate-path constructions may apply to score-based samplers outside the diffusion setting.
  • The stability analysis suggests that higher-order integrators or implicit schemes would reduce the need for early stopping.
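
To make the first bullet concrete, a self-normalized importance-weighting correction would take the following shape. This is a hypothetical sketch: the hard part, computing log_w as the per-sample Feynman-Kac correction, is exactly what the bullet speculates could be made cheap, and nothing here comes from the paper.

    import numpy as np

    def snis_correct(samples, log_w, f):
        # Self-normalized importance sampling: reweight sampler outputs by the
        # (hypothetical) per-sample Radon-Nikodym correction exp(log_w).
        w = np.exp(log_w - log_w.max())              # shift for numerical stability
        w /= w.sum()
        estimate = np.sum(w * f(samples))            # corrected posterior expectation
        ess = 1.0 / np.sum(w ** 2)                   # effective sample size diagnostic
        return estimate, ess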

Load-bearing premise

The chosen surrogate path stays tractable and the associated parabolic PDE admits a well-behaved Feynman-Kac representation when the exact prior score is used.

What would settle it

Compute the explicit path expectation for a low-dimensional Gaussian posterior with known reward, run the DPS sampler on the same instance, and check whether the empirical frequency of samples in each region matches the predicted correction factor.
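
A minimal sketch of that check in one dimension, assuming a linear-Gaussian instance (prior N(0,1), y = x + noise) and a VP diffusion schedule; the schedule constants and the guidance form are illustrative choices, not the paper's. Even with the exact prior score, the DPS-style likelihood approximation p(y | x̂₀(x_t)) uses variance σ² instead of σ² + (1 − ᾱ_t), which is one concrete source of the bias the paper formalizes; comparing the printed moments against the closed-form posterior exposes it.

    import numpy as np

    rng = np.random.default_rng(0)

    # Linear-Gaussian instance: prior x0 ~ N(0, 1), measurement y = x0 + N(0, sigma2).
    sigma2, y = 0.25, 1.0
    post_mean = y / (1 + sigma2)                    # true posterior is Gaussian
    post_var = sigma2 / (1 + sigma2)

    # VP schedule (illustrative constants beta_min = 0.1, beta_max = 20).
    n_steps, T = 500, 1.0
    dt = T / n_steps
    beta = lambda t: 0.1 + 19.9 * t
    alpha_bar = lambda t: np.exp(-(0.1 * t + 19.9 * t ** 2 / 2))

    n = 20000
    x = rng.standard_normal(n)                      # p_1 is ~N(0, 1) for a N(0, 1) prior

    for i in range(n_steps, 0, -1):
        t = i * dt
        b, ab = beta(t), alpha_bar(t)
        prior_score = -x                            # exact: p_t = N(0, 1) here
        x0_hat = np.sqrt(ab) * x                    # Tweedie estimate of x0 from x_t
        # DPS-style guidance: grad_x log N(y; x0_hat, sigma2). The exact conditional
        # would use variance sigma2 + (1 - ab); the mismatch is the bias source.
        guidance = -(x0_hat - y) * np.sqrt(ab) / sigma2
        score = prior_score + guidance
        x += (0.5 * b * x + b * score) * dt + np.sqrt(b * dt) * rng.standard_normal(n)

    print(f"true posterior: mean {post_mean:.3f}, var {post_var:.3f}")
    print(f"DPS samples:    mean {x.mean():.3f}, var {x.var():.3f}")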

Figures

Figures reproduced from arXiv: 2605.06538 by Advait Parulekar, Matias G. Delgadino, Sanjay Shakkottai, Sebastien Motsch, William Porteous.

Figure 1
Figure 1. The blue dotted line illustrates the path taken by the standard forward OU process from $\rho_*$, $\vec{\rho}_t$, and its reversal $\overleftarrow{\rho}_t$. The violet line illustrates the OU process $\overleftarrow{\mu}^{\mathrm{OU}}_t$, whose reversal $\overleftarrow{\mu}_t$ we cannot track at inference time. The red line illustrates the surrogate path $\vec{\mu}^{\mathrm{DPS}}_t = e^{R_y(\hat{x}_t)} \rho_* / Z$ we construct, with the same beginning and end points as $\vec{\mu}_t$. The orange line denotes the algorithm path $\nu^{\mathrm{DP}}$… view at source ↗
Figure 2
Figure 2. True Posterior versus DPS Samples: dashed line is the measurement constraint. view at source ↗
Figure 3
Figure 3. (Top Left) A pictorial depiction of instability: as the trajectory approaches the data manifold, the large effective guidance schedule triggers oscillations in the trajectory. (Top Right) An exhibition of these oscillations on a posterior sampling task with an MNIST prior. (Bottom) A plot of the last four iterates of DPS, re-centered about their mean. The guidance tilted the distribution towards the digit … view at source ↗
Figure 4
Figure 4. We plot a projected discrepancy $(f(x_t) - \mathbf{1}_k) \cdot \mathbf{1}$ (Columns 1 and 3) and the reward $\| f(x_t) - \mathbf{1}_k \|$ (Columns 2 and 4) across $t$, where denoising proceeds from left (most noise) to right (least noise). The second row depicts a close-up plot of just the last 10 steps to highlight the oscillations. Columns 1 and 2 are run with a constant guidance schedule (Algorithm 1), while Columns 3 and 4 are run with early … view at source ↗
Figure 5
Figure 5. (Top) Plots associated with the standard DPS algorithm (Algorithm 1). (Top Left) We plot $\alpha_t = (P^{:k}_t \delta_t) \cdot (P^{:k}_t \delta_{t-1})$ along the DPS trajectories for a constant guidance schedule $\zeta = 0.1$. (Top Middle) A close-up of steps 525 → 500. (Top Right) A close-up of steps 25 → 0. Note that $\alpha_t$ is close to 0 at the intermediate noise levels, but drops to ≈ −1 towards the low noise levels. (Bottom) The same plots associa… view at source ↗
read the original abstract

Diffusion-based posterior samplers use pretrained diffusion priors to sample from measurement- or reward-conditioned posteriors, and are widely used for inverse problems. Yet their theoretical behavior remains poorly understood: even with exact prior scores, their outputs are biased, and in low-temperature regimes their discretizations can become unstable. We characterize this bias by introducing a tractable surrogate path connecting the true posterior to a standard Gaussian and comparing it to the sampler's path. Their density ratio satisfies a parabolic PDE whose reaction term measures the accumulated bias. A Feynman-Kac representation then expresses the Radon-Nikodym correction as an explicit path expectation, identifying which posterior regions are over- or under-sampled. We apply this framework to DPS and STSL, a related sampler. For DPS, the correction is an Ornstein-Uhlenbeck path expectation coupling the data-conditional covariance with the reward curvature, revealing where DPS over- or under-samples. Next, we reinterpret STSL as an auxiliary drift that steers trajectories toward low-uncertainty regions, flattening the spatially varying part of the DPS reaction term. Finally, we characterize early guidance-stopping, a common mitigation for low-temperature instabilities caused by forward-Euler integration of the vector field. Together, these results clarify sampler bias, explain existing correctives, and guide stable variant designs.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 2 minor

Summary. The paper introduces a surrogate path connecting the true posterior to a standard Gaussian to analyze bias in diffusion-based posterior samplers (e.g., DPS and STSL). It derives a parabolic PDE for the density ratio along this path, with a reaction term capturing accumulated bias, and applies a Feynman-Kac representation to express the Radon-Nikodym correction as an explicit path expectation. This is used to characterize over-/under-sampling in DPS via an Ornstein-Uhlenbeck expectation, reinterpret STSL as an auxiliary drift, and analyze early guidance-stopping for stability in low-temperature regimes.

Significance. If the central derivation holds with the required regularity, the framework offers a principled way to quantify and mitigate bias and instability in widely used diffusion posterior samplers for inverse problems, potentially informing more stable variants and correctives. The explicit path-expectation form for the correction and the reinterpretation of existing methods are notable strengths.

major comments (1)
  1. [§3–§4] Surrogate path construction (§3) and PDE derivation (§4): the central claim requires that the surrogate path remain tractable and that the reaction term (accumulated bias) satisfy growth conditions ensuring the Feynman-Kac path expectation is finite and the Radon-Nikodym correction is well-defined pointwise. The manuscript asserts tractability under the exact prior score but does not explicitly verify these conditions for the low-temperature or high-curvature regimes highlighted in the abstract and in the applications to DPS/STSL; without this, the representation may fail to hold globally.
minor comments (2)
  1. [Introduction] Notation for the surrogate path and reaction term could be clarified with an explicit definition or diagram early in the manuscript to aid readability.
  2. [Applications section] The abstract mentions applications to DPS and STSL but the main text would benefit from a brief comparison table of the resulting bias expressions.

Simulated Author's Rebuttal

1 responses · 0 unresolved

We thank the referee for their constructive and insightful comments, which help clarify the scope and assumptions of our analysis. We address the major comment point by point below.

read point-by-point responses
  1. Referee: [§3–§4] Surrogate path construction (§3) and PDE derivation (§4): the central claim requires that the surrogate path remain tractable and that the reaction term (accumulated bias) satisfy growth conditions ensuring the Feynman-Kac path expectation is finite and the Radon-Nikodym correction is well-defined pointwise. The manuscript asserts tractability under the exact prior score but does not explicitly verify these conditions for the low-temperature or high-curvature regimes highlighted in the abstract and in the applications to DPS/STSL; without this, the representation may fail to hold globally.

    Authors: We agree that an explicit discussion of the requisite growth conditions is valuable for rigor. The derivation in §§3–4 proceeds under the standing assumption that the prior score is exact and Lipschitz continuous (standard in the diffusion literature) and that the measurement model induces a reaction term with at most linear growth along the surrogate path. In the revised manuscript we will insert a short paragraph in §3 that invokes standard Feynman-Kac theory: when the potential (accumulated bias) satisfies |V(t,x)| ≤ C(1 + |x|) uniformly in t, the path expectation remains finite by Gronwall-type bounds. For the low-temperature and high-curvature regimes highlighted in the abstract, we note that the DPS correction reduces to an Ornstein–Uhlenbeck expectation whose moments are explicitly finite (Gaussian integrals), while STSL’s auxiliary drift only flattens the spatially varying component without introducing super-linear growth. These observations are already implicit in the explicit forms derived in §5, but we will state the growth condition and its verification explicitly. We therefore plan to revise the manuscript to include this clarification. revision: yes
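
One standard route to the finiteness invoked in this response (a sketch under the stated linear-growth assumption, not the paper's proof): if $|V(t,x)| \le C(1+|x|)$ uniformly in $t$, then

    \mathbb{E}\left[ \exp\left( \int_0^T V(s, X_s)\, ds \right) \right] \;\le\; e^{CT}\, \mathbb{E}\left[ \exp\left( CT \sup_{s \le T} |X_s| \right) \right] \;<\; \infty,

since the running supremum of an Ornstein-Uhlenbeck path over $[0,T]$ has Gaussian tails (Fernique's theorem), and hence finite exponential moments of every order.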

Circularity Check

0 steps flagged

No circularity: standard Feynman-Kac application to derived PDE

full rationale

The paper constructs a surrogate path from posterior to Gaussian, derives that the density ratio obeys a parabolic PDE whose reaction term encodes accumulated bias, and then applies the classical Feynman-Kac theorem to represent the Radon-Nikodym correction as an explicit path expectation. This chain relies on the external Feynman-Kac theorem rather than redefining the bias in terms of itself or fitting parameters to the target quantity. No load-bearing self-citation, ansatz smuggling, or uniqueness theorem imported from the authors' prior work appears in the derivation; the framework is applied to DPS and STSL by direct substitution into the general representation. The tractability assumption is stated explicitly rather than smuggled in as a tautology, so the central bias expression does not depend circularly on the quantity it is meant to measure.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 1 invented entity

The framework rests on standard stochastic analysis (Feynman-Kac for parabolic PDEs) and introduces one new construct, the surrogate path. No data-fitted parameters appear.

axioms (2)
  • domain assumption The density ratio between the sampler path and the surrogate path satisfies a parabolic PDE whose reaction term measures accumulated bias
    Invoked to connect the surrogate Gaussian path to the bias measurement.
  • standard math Feynman-Kac representation converts the PDE solution into an explicit path expectation
    Standard tool from stochastic processes used to obtain the Radon-Nikodym correction.
invented entities (1)
  • surrogate path connecting true posterior to standard Gaussian · no independent evidence
    purpose: To render the density ratio tractable and admit a parabolic PDE
    New object introduced so that bias can be expressed as a path expectation; no independent evidence outside the paper is provided.

pith-pipeline@v0.9.0 · 5552 in / 1496 out tokens · 60517 ms · 2026-05-08T12:29:45.486734+00:00 · methodology

discussion (0)

