Data-Driven Nonsmooth Optimization

Sebastian Banert, Axel Ringh, Jonas Adler, Johan Karlsson, Ozan Öktem

Research output: Contribution to journal › Article › peer-review

7 Citations (Scopus)


In this work, we consider methods for solving large-scale optimization problems with a possibly nonsmooth objective function. The key idea is to first parametrize a class of optimization methods using a generic iterative scheme involving only linear operations and applications of proximal operators. This scheme contains some modern primal-dual first-order algorithms, such as the Douglas–Rachford and primal-dual hybrid gradient methods, as special cases. Moreover, we show weak convergence of the iterates to an optimal point for a new method which also belongs to this class. Next, we interpret the generic scheme as a neural network and use unsupervised training to learn the best set of parameters for a specific class of objective functions while imposing a fixed number of iterations. In contrast to other approaches to "learning to optimize," we present an approach which learns parameters only in the set of convergent schemes. Finally, we illustrate the approach on optimization problems arising in tomographic reconstruction and image deconvolution, and train optimization algorithms for optimal performance given a fixed number of iterations.
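To make the generic scheme concrete, the following is a minimal illustrative sketch (not the authors' exact parametrization) of one member of the class the abstract describes: a primal-dual hybrid gradient (PDHG) iteration built only from linear operations and proximal operators, run for a fixed number of iterations with per-iteration step sizes. Those step-size arrays play the role of the parameters that the unsupervised training would tune. The problem instance (min over x of lam*||x||_1 + 0.5*||Ax - b||^2), the function names, and the NumPy setup are all assumptions for this sketch.

```python
import numpy as np

def prox_l1(x, t):
    # Proximal operator of t * ||.||_1: elementwise soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def pdhg(A, b, lam, n_iter, tau, sigma, theta=1.0):
    """Illustrative PDHG for min_x lam*||x||_1 + 0.5*||Ax - b||^2.

    tau and sigma may be scalars or arrays of per-iteration step sizes:
    these are the kind of parameters the paper's learning approach would
    tune, subject to conditions that keep the scheme convergent
    (classically tau * sigma * ||A||^2 < 1 for fixed steps).
    """
    tau = np.broadcast_to(np.asarray(tau, dtype=float), (n_iter,))
    sigma = np.broadcast_to(np.asarray(sigma, dtype=float), (n_iter,))
    m, n = A.shape
    x = np.zeros(n)
    x_bar = x.copy()
    y = np.zeros(m)
    for k in range(n_iter):
        # Dual step: prox of sigma*g* with g(y) = 0.5*||y - b||^2,
        # which evaluates in closed form to (v - sigma*b) / (1 + sigma).
        y = (y + sigma[k] * (A @ x_bar) - sigma[k] * b) / (1.0 + sigma[k])
        # Primal step: prox of tau*lam*||.||_1 after a gradient-like move.
        x_new = prox_l1(x - tau[k] * (A.T @ y), tau[k] * lam)
        # Over-relaxation (extrapolation) step.
        x_bar = x_new + theta * (x_new - x)
        x = x_new
    return x
```

As a sanity check, with A equal to the identity the minimizer is exactly the soft-thresholding of b by lam, so `pdhg(np.eye(3), b, 0.5, ...)` should approach `prox_l1(b, 0.5)` as the fixed iteration budget grows.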
Original language: English
Pages (from-to): 102-131
Number of pages: 30
Journal: SIAM Journal on Optimization
Issue number: 1
Early online date: 7 Jan 2020
Publication status: Published - 2020


Keywords

  • Computerized tomography
  • Convex optimization
  • Inverse problems
  • Machine learning
  • Monotone operators
  • Proximal algorithms

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science


