Learning truly monotone operators with applications to nonlinear inverse problems

  • Younes Belkouchi
  • Jean-Christophe Pesquet
  • Audrey Repetti
  • Hugues Talbot

Research output: Contribution to journal › Article › peer-review


Abstract

This article introduces a novel approach to learning monotone neural networks (NNs) through a newly defined penalization loss. The proposed method is particularly effective for solving classes of variational problems, specifically monotone inclusion problems, which are commonly encountered in image processing tasks. The forward-backward-forward (FBF) algorithm is employed to address these problems, providing a solution even when the Lipschitz constant of the NN is unknown. Notably, the FBF algorithm offers convergence guarantees provided that the learned operator is monotone. Building on plug-and-play methodologies, our objective is to apply these newly learned operators to nonlinear inverse problems. To achieve this, we first formulate the problem as a variational inclusion problem. We then train a monotone NN to approximate an operator that may not inherently be monotone. Leveraging the FBF algorithm, we finally present simulation examples in which the nonlinear inverse problem is successfully solved.
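The penalization loss mentioned in the abstract enforces the defining inequality of a monotone operator, ⟨T(x) − T(y), x − y⟩ ≥ 0 for all x, y. As a rough illustration only (the paper's exact loss is not reproduced here), the following PyTorch sketch penalizes the positive part of the violation of that inequality on sampled pairs; `model`, `x`, and `y` are hypothetical names.

```python
import torch

def monotonicity_penalty(model, x, y):
    """Penalty on violations of <T(x) - T(y), x - y> >= 0 over sampled pairs.

    Generic sketch of a monotonicity penalization loss; the model, the
    sampling of (x, y), and the hinge form are illustrative assumptions,
    not the paper's precise construction.
    """
    # Batched inner product between the output and input differences
    inner = ((model(x) - model(y)) * (x - y)).flatten(1).sum(dim=1)
    # Average positive part of the violation: zero when monotone on the pair
    return torch.relu(-inner).mean()
```

In training, such a term would typically be added, with a weight, to a standard data-fidelity loss, with the pairs (x, y) drawn from the training data or generated as perturbations of each other.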
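The FBF algorithm referenced here is Tseng's forward-backward-forward splitting for the monotone inclusion 0 ∈ A(x) + B(x), where B is single-valued and monotone (here, e.g., the learned NN) and A is maximally monotone. The sketch below shows a standard variant with a backtracking step size, one common way to cope with an unknown Lipschitz constant; it is an illustrative implementation under those assumptions, not the paper's exact scheme, and `B`, `prox_gA`, and the parameter values are hypothetical.

```python
import numpy as np

def fbf_solve(B, prox_gA, x0, n_iter=200, gamma=1.0, theta=0.9, beta=0.5):
    """Tseng's forward-backward-forward iteration for 0 in A(x) + B(x).

    prox_gA(v, g) evaluates the resolvent J_{g A}(v) of the maximally
    monotone operator A. Backtracking on gamma avoids needing the
    Lipschitz constant of B. Generic sketch; all names are illustrative.
    """
    x = x0.copy()
    for _ in range(n_iter):
        Bx = B(x)
        for _ in range(50):  # backtracking line search on the step size
            p = prox_gA(x - gamma * Bx, gamma)   # forward step, then backward step
            Bp = B(p)
            # Accept gamma when the operator step is contractive enough
            if gamma * np.linalg.norm(Bx - Bp) <= theta * np.linalg.norm(x - p):
                break
            gamma *= beta                        # shrink gamma and retry
        x = p + gamma * (Bx - Bp)                # second forward (correction) step
    return x
```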

Original language: English
Pages (from-to): 735-764
Number of pages: 30
Journal: SIAM Journal on Imaging Sciences
Volume: 18
Issue number: 1
Early online date: 27 Mar 2025
DOIs
Publication status: Published - Mar 2025

Keywords

  • deep learning
  • forward-backward-forward
  • inverse problem
  • monotone operator
  • optimization
  • plug and play (PnP)

ASJC Scopus subject areas

  • General Mathematics
  • Applied Mathematics
