Abstract
We analyze the convergence of the proximal gradient algorithm for convex composite problems in the presence of gradient and proximal computational inaccuracies. We generalize the deterministic analysis to the quasi-Fejér case and quantify the uncertainty incurred by approximate computing and early-termination errors. We propose new, tighter probabilistic bounds and use them to verify a simulated Model Predictive Control (MPC) problem with sparse controls, solved with early termination, reduced precision, and proximal errors. We show that the probabilistic bounds are more suitable than the deterministic ones for algorithm verification and more accurate for application performance guarantees. Under mild statistical assumptions, we also prove that some cumulative error terms follow a martingale property. Conforming to observations reported, e.g., in [M. Schmidt, N. L. Roux, and F. R. Bach, Convergence rates of inexact proximal-gradient methods for convex optimization, in Advances in Neural Information Processing Systems, 2011, pp. 1458–1466], we further show how acceleration of the algorithm amplifies the gradient and proximal computational errors.
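To make the setting concrete, below is a minimal sketch of an inexact proximal gradient iteration for a LASSO-type composite problem, where bounded random perturbations stand in for the gradient and proximal inaccuracies. The error model, problem instance, and all function names here are illustrative assumptions, not the paper's exact construction or bounds:

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (exact, closed form).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def inexact_prox_grad(A, b, lam, step, n_iter, grad_err=0.0, prox_err=0.0, seed=0):
    """Proximal gradient for min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    with additive Gaussian perturbations modeling inexact gradient
    and inexact prox evaluations (illustrative error model)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)                          # exact gradient of the smooth part
        g += grad_err * rng.standard_normal(x.shape)   # gradient inaccuracy e_k
        x = soft_threshold(x - step * g, step * lam)   # (exact) prox step
        x += prox_err * rng.standard_normal(x.shape)   # proximal inaccuracy eps_k
    return x

# Small synthetic instance: exact vs. perturbed runs.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
step = 1.0 / np.linalg.norm(A, 2) ** 2                 # 1/L, L = ||A||_2^2
obj = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + 0.1 * np.sum(np.abs(x))
x_exact = inexact_prox_grad(A, b, 0.1, step, 500)
x_noisy = inexact_prox_grad(A, b, 0.1, step, 500, grad_err=1e-3, prox_err=1e-3)
```

With small, zero-mean perturbations the iterates still descend far below the starting objective, but the attainable accuracy is limited by the error level, which is the kind of behavior the paper's deterministic and probabilistic bounds quantify.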
Original language | English |
---|---|
Pages (from-to) | 278-305 |
Number of pages | 28 |
Journal | SIAM Journal on Optimization |
Volume | 34 |
Issue number | 1 |
Early online date | 19 Jan 2024 |
DOIs | |
Publication status | Published - Mar 2024 |
Keywords
- approximate algorithms
- convex optimization
- proximal gradient descent
ASJC Scopus subject areas
- Software
- Theoretical Computer Science
- Applied Mathematics