Accelerating proximal Markov chain Monte Carlo by using an explicit stabilized method

Marcelo Pereyra, Luis Vargas Mieles, Konstantinos C. Zygalakis

Research output: Contribution to journal › Article › peer-review

20 Citations (Scopus)
295 Downloads (Pure)


We present a highly efficient proximal Markov chain Monte Carlo methodology to perform Bayesian computation in imaging problems. Similarly to previous proximal Monte Carlo approaches, the proposed method is derived from an approximation of the Langevin diffusion. However, instead of the conventional Euler–Maruyama approximation that underpins existing proximal Monte Carlo methods, here we use a state-of-the-art orthogonal Runge–Kutta–Chebyshev stochastic approximation [A. Abdulle, I. Almuslimani, and G. Vilmart, SIAM/ASA J. Uncertain. Quantif., 6 (2018), pp. 937–964] that combines several gradient evaluations to significantly accelerate its convergence speed, similarly to accelerated gradient optimization methods. The proposed methodology is demonstrated via a range of numerical experiments, including non-blind image deconvolution, hyperspectral unmixing, and tomographic reconstruction, with total-variation and ℓ1-type priors. Comparisons with Euler-type proximal Monte Carlo methods confirm that the Markov chains generated with our method exhibit significantly faster convergence speeds, achieve larger effective sample sizes, and produce lower mean-square estimation errors at equal computational budget.
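The abstract describes a Langevin sampler in which each step chains several gradient evaluations through a Chebyshev recurrence (the stabilized Runge–Kutta–Chebyshev construction of Abdulle, Almuslimani, and Vilmart), with non-smooth prior terms handled via their proximal operator. The sketch below illustrates the general idea under stated assumptions: the stage coefficients follow the standard SK-ROCK construction, but the function names, default parameters (`s`, `eta`, `delta`), and the Moreau–Yosida smoothing helper are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def skrock_step(x, grad_log_pi, delta, s=10, eta=0.05, rng=None):
    """One explicit stabilized (SK-ROCK-style) step for the Langevin
    diffusion dX = grad_log_pi(X) dt + sqrt(2) dW.

    Chains s gradient evaluations via a Chebyshev three-term recurrence,
    which enlarges the stability region roughly quadratically in s."""
    # Chebyshev values T_j(w0) and derivative T_s'(w0) by recurrence.
    w0 = 1.0 + eta / s**2                      # small damping shift
    T = np.zeros(s + 1)
    dT = np.zeros(s + 1)
    T[0], T[1], dT[0], dT[1] = 1.0, w0, 0.0, 1.0
    for j in range(2, s + 1):
        T[j] = 2.0 * w0 * T[j - 1] - T[j - 2]
        dT[j] = 2.0 * T[j - 1] + 2.0 * w0 * dT[j - 1] - dT[j - 2]
    w1 = T[s] / dT[s]

    noise = np.zeros_like(x)
    if rng is not None:
        noise = np.sqrt(2.0 * delta) * rng.standard_normal(x.shape)

    # First stage: the Brownian increment enters inside the gradient.
    k_prev2 = x
    k_prev = (x + (w1 / w0) * delta * grad_log_pi(x + (s * w1 / 2.0) * noise)
              + (s * w1 / w0) * noise)
    # Remaining stages: recurrence combining the gradient evaluations.
    for j in range(2, s + 1):
        mu = 2.0 * w1 * T[j - 1] / T[j]
        nu = 2.0 * w0 * T[j - 1] / T[j]
        k = nu * k_prev + (1.0 - nu) * k_prev2 + mu * delta * grad_log_pi(k_prev)
        k_prev2, k_prev = k_prev, k
    return k_prev

# Proximal usage (illustrative): for log pi = -f - g with g non-smooth,
# pass the gradient of the Moreau-Yosida smoothed density, here with
# g = beta * ||x||_1 whose prox is soft-thresholding.
def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def grad_log_pi_smoothed(x, grad_f, beta, lam):
    # -grad f(x) + (prox_{lam*g}(x) - x) / lam
    return -grad_f(x) + (soft_threshold(x, lam * beta) - x) / lam
```

With the noise switched off, one step reduces to a stabilized gradient-descent sweep, which is why much larger step sizes remain stable than for a single Euler–Maruyama gradient step.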

Original language: English
Pages (from-to): 905–935
Number of pages: 31
Journal: SIAM Journal on Imaging Sciences
Issue number: 2
Early online date: 26 May 2020
Publication status: Published - 2020


Keywords

  • Bayesian inference
  • Inverse problems
  • Markov chain Monte Carlo methods
  • Mathematical imaging
  • Proximal algorithms

ASJC Scopus subject areas

  • General Mathematics
  • Applied Mathematics


