pith. machine review for the scientific record.

arxiv: 2509.25630 · v3 · submitted 2025-09-30 · 📊 stat.ML · cs.LG · cs.NA · math.NA

Recognition: unknown

When Langevin Monte Carlo Meets Randomization: New Sampling Algorithms with Non-asymptotic Error Bounds beyond Log-Concavity and Gradient Lipschitzness

Bin Yang, Xiaojie Wang

Authors on Pith: no claims yet
classification 📊 stat.ML · cs.LG · cs.NA · math.NA
keywords sampling · algorithms · carlo · error · gradient · langevin · log-concavity · monte
Original abstract

Efficient sampling from complex and high dimensional target distributions is a fundamental task in diverse disciplines such as scientific computing, statistics and machine learning. In this paper, we propose a new kind of randomized splitting Langevin Monte Carlo (RSLMC) algorithm for sampling from high dimensional distributions without log-concavity. Compared with the existing randomized Langevin Monte Carlo (RLMC), the newly proposed RSLMC algorithm requires fewer gradient evaluations and is thus computationally cheaper. Under the gradient Lipschitz condition and the log-Sobolev inequality, we prove a uniform-in-time error bound in $\mathcal{W}_2$-distance of order $O(\sqrt{d}h)$ for both the RLMC and RSLMC sampling algorithms, which matches the best bound in the literature under the log-concavity condition. Moreover, when the gradient of the potential $U$ is non-globally Lipschitz with superlinear growth, new modified R(S)LMC algorithms are introduced and analyzed, with non-asymptotic error bounds established. Numerical examples are reported to corroborate the theoretical findings.
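The abstract does not spell out the RSLMC update rule, so as a rough illustration of the family of methods under discussion, here is a minimal sketch of the generic randomized-midpoint Langevin step (the RLMC baseline in this literature) applied to a hypothetical non-log-concave target with a globally Lipschitz gradient. The potential, the names grad_U and rlmc_step, and the step size h are illustrative assumptions, not the authors' construction.

```python
import numpy as np

def grad_U(x):
    # Hypothetical target: U(x) = ||x||^2/2 + sum(cos(2*x_i)).
    # Non-log-concave (the Hessian dips negative) but with a globally
    # Lipschitz gradient and a log-Sobolev inequality (bounded
    # perturbation of a Gaussian).
    return x - 2.0 * np.sin(2.0 * x)

def rlmc_step(x, h, rng):
    """One randomized-midpoint Langevin step (generic RLMC sketch).

    Draws a uniform time alpha in (0,1), takes an Euler half-step of
    length alpha*h, evaluates the gradient at that randomized point,
    and reuses the same Brownian path for the midpoint and full step.
    """
    d = x.shape[0]
    alpha = rng.uniform()
    # Brownian increments over [0, alpha*h] and [alpha*h, h] (shared path).
    dW1 = rng.normal(scale=np.sqrt(alpha * h), size=d)
    dW2 = rng.normal(scale=np.sqrt((1.0 - alpha) * h), size=d)
    # Randomized midpoint reached by an Euler step of length alpha*h.
    x_mid = x - alpha * h * grad_U(x) + np.sqrt(2.0) * dW1
    # Full step driven by the gradient at the randomized midpoint.
    return x - h * grad_U(x_mid) + np.sqrt(2.0) * (dW1 + dW2)

def sample(n_steps=10_000, d=10, h=0.01, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=d)
    for _ in range(n_steps):
        x = rlmc_step(x, h, rng)
    return x
```

Note that this baseline spends two gradient evaluations per step (at x and at the randomized midpoint); per the abstract, the proposed RSLMC splitting scheme is designed to reduce that gradient cost, and separate modified variants are introduced for potentials whose gradients grow superlinearly.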

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Accelerating Langevin Monte Carlo via Efficient Stochastic Runge-Kutta Methods beyond Log-Concavity

    math.ST · 2026-05 · unverdicted · novelty 6.0

    A Hessian-free stochastic Runge-Kutta LMC algorithm achieves strong order 1.5 with two gradient evaluations per step and uniform-in-time convergence O(d^{3/2} h^{3/2}) in non-log-concave settings.