TY - JOUR
T1 - Maximum likelihood estimation of regularization parameters in high-dimensional inverse problems
T2 - An empirical Bayesian approach. Part II: Theoretical analysis
AU - De Bortoli, Valentin
AU - Durmus, Alain
AU - Pereyra, Marcelo
AU - Fernandez Vidal, Ana
N1 - Funding Information:
∗Received by the editors May 26, 2020; accepted for publication (in revised form) July 20, 2020; published electronically November 18, 2020. https://doi.org/10.1137/20M1339842 Funding: The work of the second author was supported by the Polish National Science Center grant NCN UMO-2018/31/B/ST1/00253. The work of the third author was supported by the EPSRC grant EP/T007346/1. †CMLA - École normale supérieure Paris-Saclay, CNRS, Université Paris-Saclay, 94235 Cachan, France ([email protected], [email protected]). ‡Maxwell Institute for Mathematical Sciences and School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, EH14 4AS, UK ([email protected], [email protected]).
Publisher Copyright:
© 2020 Society for Industrial and Applied Mathematics.
PY - 2020/11/18
Y1 - 2020/11/18
N2 - This paper presents a detailed theoretical analysis of the three stochastic approximation proximal gradient algorithms proposed in our companion paper [A. F. Vidal et al., SIAM J. Imaging Sci., 13 (2020), pp. 1945–1989] to set regularization parameters by marginal maximum likelihood estimation. We prove the convergence of a more general stochastic approximation scheme that includes the three algorithms of [A. F. Vidal et al., SIAM J. Imaging Sci., 13 (2020), pp. 1945–1989] as special cases. This includes asymptotic and nonasymptotic convergence results with natural and easily verifiable conditions, as well as explicit bounds on the convergence rates. Importantly, the theory is also general in that it can be applied to other intractable optimization problems. A main novelty of the work is that the stochastic gradient estimates of our scheme are constructed from inexact proximal Markov chain Monte Carlo samplers. This allows the use of samplers that scale efficiently to large problems and for which we have precise theoretical guarantees.
KW - Empirical Bayes
KW - Image processing
KW - Inverse problems
KW - Markov chain Monte Carlo methods
KW - Proximal algorithms
KW - Statistical inference
KW - Stochastic optimization
UR - http://www.scopus.com/inward/record.url?scp=85099017049&partnerID=8YFLogxK
U2 - 10.1137/20M1339842
DO - 10.1137/20M1339842
M3 - Article
AN - SCOPUS:85099017049
SN - 1936-4954
VL - 13
SP - 1990
EP - 2028
JO - SIAM Journal on Imaging Sciences
JF - SIAM Journal on Imaging Sciences
IS - 4
ER -