Sharper Bounds for Proximal-Gradient Algorithms with Errors

Anis Hamadouche, Yun Wu, Andrew Michael Wallace, João F. C. Mota

Research output: Contribution to journal › Article › peer-review


Abstract

We analyze the convergence of the proximal gradient algorithm for convex composite problems in the presence of gradient and proximal computational inaccuracies. We generalize the deterministic analysis to the quasi-Fejér case and quantify the uncertainty incurred by approximate computing and early-termination errors. We propose new, tighter probabilistic bounds, which we use to verify a simulated Model Predictive Control (MPC) problem with sparse controls solved under early termination, reduced precision, and proximal errors. We also show that the probabilistic bounds are better suited than the deterministic ones for algorithm verification and more accurate for application performance guarantees. Under mild statistical assumptions, we further prove that some cumulative error terms follow a martingale property. Conforming to observations, e.g., in [M. Schmidt, N. L. Roux, and F. R. Bach, Convergence rates of inexact proximal-gradient methods for convex optimization, in Advances in Neural Information Processing Systems, 2011, pp. 1458–1466], we also show how accelerating the algorithm amplifies the gradient and proximal computational errors.
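The abstract concerns proximal-gradient iterations whose gradient and proximal steps are computed inexactly. The following is a minimal sketch, in Python, of such an inexact iteration applied to a hypothetical LASSO instance (0.5*||Ax - b||^2 + lam*||x||_1); the problem data, error magnitudes, and function names are illustrative assumptions, not the authors' experimental setup, which uses an MPC problem with sparse controls.

```python
# Sketch of an inexact proximal-gradient iteration:
#   x_{k+1} = prox_{gamma*g}( x_k - gamma*(grad f(x_k) + e_k) ) + eps_k,
# where e_k models gradient inaccuracy and eps_k models proximal inaccuracy
# (e.g., from reduced precision or early termination). Illustrative only.
import numpy as np

def soft_threshold(v, t):
    """Exact prox of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inexact_prox_grad(A, b, lam, n_iter=200, grad_err=1e-3, prox_err=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2      # step size 1/L, L = ||A||_2^2
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                 # exact gradient of the smooth term
        e_k = grad_err * rng.standard_normal(n)  # additive gradient error
        y = x - gamma * (grad + e_k)             # inexact gradient step
        x = soft_threshold(y, gamma * lam)       # exact prox of lam*||.||_1
        x += prox_err * rng.standard_normal(n)   # additive proximal error
    return x

# Usage on synthetic data (hypothetical instance)
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = inexact_prox_grad(A, b, lam=0.1)
```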
Original language: English
Pages (from-to): 278-305
Number of pages: 28
Journal: SIAM Journal on Optimization
Volume: 34
Issue number: 1
Early online date: 19 Jan 2024
DOIs
Publication status: Published - Mar 2024

Keywords

  • approximate algorithms
  • convex optimization
  • proximal gradient descent

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science
  • Applied Mathematics
