Nonasymptotic convergence analysis for the tamed unadjusted stochastic Langevin algorithm

Abstract

In this work, we consider sampling from a target distribution $\pi_{\beta}$ with density $\pi_{\beta}(\theta) = e^{-\beta U(\theta)} / \int_{\mathbb{R}^d} e^{-\beta U(\theta)} \, \mathrm{d}\theta$, where $\beta > 0$. It is well known that the Euler–Maruyama discretization of the overdamped Langevin stochastic differential equation (SDE) becomes unstable when the potential $U$ grows superlinearly. Building on the approach proposed in \cite{brosse2019tamed} for taming superlinearly growing drift coefficients in SDEs, we propose a novel Langevin dynamics-based algorithm, termed the Tamed Unadjusted Stochastic Langevin Algorithm (TUSLA), to address this sampling problem, and we provide rigorous performance guarantees. Specifically, we establish a sharp non-asymptotic convergence guarantee in Kullback–Leibler (KL) divergence at the optimal rate of order one, by combining tools from the logarithmic Sobolev inequality (LSI) and the Fokker–Planck equation. As a direct consequence, we obtain an $O(\lambda^{1/2})$ convergence rate in both the Wasserstein-2 and total variation distances, thereby strengthening and generalizing the best-known results in the literature. Our theoretical findings are supported by comprehensive high-dimensional experiments.
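For intuition, the taming idea can be sketched as a single tamed stochastic Langevin step; the display below is an illustrative form in the spirit of \cite{brosse2019tamed}, not necessarily the exact taming function used in the paper:
$$
\theta_{n+1} \;=\; \theta_n \;-\; \lambda\,\frac{H(\theta_n, X_{n+1})}{1 + \lambda\,\|H(\theta_n, X_{n+1})\|} \;+\; \sqrt{\frac{2\lambda}{\beta}}\,\xi_{n+1},
$$
where $\lambda > 0$ is the step size, $H(\theta, x)$ is an unbiased stochastic estimator of $\nabla U(\theta)$, and $(\xi_n)_{n \ge 1}$ are i.i.d. standard $d$-dimensional Gaussian vectors. The taming factor in the denominator keeps each drift increment bounded by a constant even when $\nabla U$ grows superlinearly, which is what restores the stability lost by the plain Euler–Maruyama discretization.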
