Image restoration has long been one of the key research topics in image processing. Many mathematical approaches have been developed to solve this problem, e.g., variational methods, wavelet techniques, or Bayesian methods. With the widespread use of neural network (NN) models across all the subdomains of data science, the performance limits of these methods have been pushed further. One of the most successful strategies consists of plugging NNs into existing optimization algorithms. However, doing so raises several mathematical and practical challenges. One of the main issues is to secure the convergence of the resulting iterative scheme. Further questions concerning the characterization of the limit that is reached are also worth addressing. In this paper, we show that the theory of maximally monotone operators allows us to bring insightful answers to these problems and to design firmly nonexpansive NNs; combining these with postprocessing NNs leads to excellent global restoration quality.
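To make the "plug a NN into an optimization algorithm" idea concrete, the following is a minimal sketch of a plug-and-play forward-backward iteration on a toy linear inverse problem. The learned firmly nonexpansive denoiser is replaced here by soft-thresholding, which, being a proximity operator, is firmly nonexpansive; all variable names, the toy problem dimensions, and the choice of denoiser are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximity operator of tau * ||.||_1. Prox operators are firmly
    # nonexpansive, so this stands in for a learned firmly nonexpansive NN.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def pnp_forward_backward(A, y, gamma, tau, n_iter=200):
    # Plug-and-play forward-backward: a gradient step on the data
    # fidelity ||Ax - y||^2 / 2, followed by the firmly nonexpansive
    # "denoiser". Firm nonexpansiveness of the second step is what
    # secures convergence of the iterates.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - gamma * A.T @ (A @ x - y), gamma * tau)
    return x

# Toy sparse-recovery instance (illustrative only).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.0, -0.5]
y = A @ x_true

gamma = 1.0 / np.linalg.norm(A, 2) ** 2  # step size <= 1/L for convergence
x_hat = pnp_forward_backward(A, y, gamma, tau=0.01)
```

The step size is kept below the reciprocal of the Lipschitz constant of the data-fidelity gradient; together with the firm nonexpansiveness of the denoising step, this is the kind of condition under which convergence of such schemes can be established.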