Abstract
This article introduces a novel approach to learning monotone neural networks (NNs) through a newly defined penalization loss. The proposed method is particularly effective for solving a class of variational problems, namely monotone inclusion problems, which are commonly encountered in image processing tasks. These problems are addressed with the forward-backward-forward (FBF) algorithm, which remains applicable even when the Lipschitz constant of the NN is unknown. Notably, the FBF algorithm provides convergence guarantees under the condition that the learned operator is monotone. Building on plug-and-play methodologies, our objective is to apply these newly learned operators to the solution of nonlinear inverse problems. To achieve this, we first formulate the problem as a variational inclusion problem. We then train a monotone NN to approximate an operator that may not be inherently monotone. Finally, leveraging the FBF algorithm, we present simulation examples in which the nonlinear inverse problem is successfully solved.
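The paper's penalization loss is not reproduced in this abstract. As an illustration of the general idea only, the following minimal PyTorch sketch penalizes violations of the monotonicity condition ⟨F(x) − F(y), x − y⟩ ≥ 0 on sampled pairs; the hinge form, the pair-sampling scheme, and the toy operator `F` are assumptions, not the authors' construction.

```python
import torch

def monotonicity_penalty(F, x, y):
    """Hinge penalty on <F(x) - F(y), x - y> over a batch of pairs.

    Illustrative surrogate only, NOT the loss defined in the paper: it is
    zero when the sampled pairs satisfy the monotonicity inequality and
    grows linearly with the size of each violation.
    """
    inner = ((F(x) - F(y)) * (x - y)).flatten(1).sum(dim=1)
    return torch.relu(-inner).mean()

# toy usage: a small MLP viewed as an operator on R^8 (hypothetical model)
d = 8
F = torch.nn.Sequential(torch.nn.Linear(d, 32), torch.nn.Tanh(), torch.nn.Linear(32, d))
x, y = torch.randn(16, d), torch.randn(16, d)
penalty = monotonicity_penalty(F, x, y)   # would be added to a data-fit term
penalty.backward()
```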
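For the monotone inclusion 0 ∈ A(x) + B(x), Tseng's FBF iteration alternates a forward-backward step with a forward correction, and a backtracking step size removes the need to know the Lipschitz constant of B. The sketch below is a generic textbook variant, not the paper's implementation: it assumes the resolvent J_{γA} is available in closed form, and the step-size parameters `theta` and `beta`, as well as the toy operators in the usage example, are illustrative.

```python
import numpy as np

def fbf(B, resolvent, x0, theta=0.9, beta=0.7, gamma=1.0, iters=500):
    """Tseng's forward-backward-forward method for 0 in A(x) + B(x).

    B is monotone and Lipschitz (e.g., a learned operator); its Lipschitz
    constant is NOT required: gamma is shrunk by backtracking until
    gamma * ||B(y) - B(x)|| <= theta * ||y - x||.  `resolvent(z, gamma)`
    must evaluate J_{gamma A}(z).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        Bx = B(x)
        while True:
            y = resolvent(x - gamma * Bx, gamma)   # forward-backward step
            By = B(y)
            if gamma * np.linalg.norm(By - Bx) <= theta * np.linalg.norm(y - x) + 1e-12:
                break
            gamma *= beta                          # backtrack the step size
        x = y + gamma * (Bx - By)                  # forward correction
    return x

# toy example: A is the normal cone of the box [-1, 1]^d (resolvent = projection),
# B(z) = z - 0.1*tanh(z) is monotone and Lipschitz; the unique solution is 0
proj = lambda z, gamma: np.clip(z, -1.0, 1.0)
B = lambda z: z - 0.1 * np.tanh(z)
print(fbf(B, proj, 0.5 * np.ones(4)))   # approaches the zero vector
```

In a plug-and-play setting of the kind the abstract describes, `B` would be the trained monotone NN; the backtracking loop is what makes the iteration usable when its Lipschitz constant is unknown.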
| Original language | English |
|---|---|
| Pages (from-to) | 735-764 |
| Number of pages | 30 |
| Journal | SIAM Journal on Imaging Sciences |
| Volume | 18 |
| Issue number | 1 |
| Early online date | 27 Mar 2025 |
| DOIs | |
| Publication status | Published - Mar 2025 |
Keywords
- deep learning
- forward-backward-forward
- inverse problem
- monotone operator
- optimization
- plug-and-play (PnP)
ASJC Scopus subject areas
- General Mathematics
- Applied Mathematics