TY - GEN
T1 - Deep Decomposition Learning for Inverse Imaging Problems
AU - Chen, Dongdong
AU - Davies, Mike E.
N1 - @inproceedings{chen2020deep,
title={Deep decomposition learning for inverse imaging problems},
author={Chen, Dongdong and Davies, Mike E.},
booktitle={Computer Vision--ECCV 2020: 16th European Conference, Glasgow, UK, August 23--28, 2020, Proceedings, Part XXVIII},
pages={510--526},
year={2020},
organization={Springer}
}
PY - 2020/11/3
Y1 - 2020/11/3
AB - Deep learning is emerging as a new paradigm for solving inverse imaging problems. However, deep learning methods often lack the guarantees of traditional physics-based methods because physical information is not taken into account during network training and deployment. Appropriate supervision and explicit calibration with information from the physics model can improve network learning and its practical performance. In this paper, inspired by the geometric fact that data can be decomposed into two components, one in the null-space of the forward operator and one in the range space of its pseudo-inverse, we train neural networks to learn the two components and thereby learn the decomposition, i.e. we explicitly reformulate the neural network layers as learning range-nullspace decomposition functions with reference to the layer inputs, instead of learning unreferenced functions. We empirically show that the proposed framework outperforms recent deep residual learning, unrolled learning and nullspace learning on tasks including compressive sensing medical imaging and natural image super-resolution. Our code is available at https://github.com/edongdongchen/DDN.
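N1 - Decomposition sketch (assuming a linear forward operator $A$ with pseudo-inverse $A^\dagger$; notation may differ from the paper): any image $x$ splits into a range-space and a null-space component,
$x = A^\dagger A x + (I - A^\dagger A)\, x$,
where $A^\dagger A$ projects onto the range of $A^\dagger$ and $I - A^\dagger A$ projects onto the null-space of $A$. The networks learn the two components, giving a reconstruction of the general form
$\hat{x} = A^\dagger y + A^\dagger A\, f_\theta(\cdot) + (I - A^\dagger A)\, g_\phi(\cdot)$,
where $f_\theta$ and $g_\phi$ are placeholder names for the learned range- and null-space functions; the exact network inputs and composition follow the paper.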
UR - https://www.scopus.com/pages/publications/85097069363
U2 - 10.1007/978-3-030-58604-1_31
DO - 10.1007/978-3-030-58604-1_31
M3 - Conference contribution
SN - 978-3-030-58603-4
VL - XXVIII
T3 - Lecture Notes in Computer Science
SP - 510
EP - 526
BT - Computer Vision – ECCV 2020
A2 - Vedaldi, Andrea
A2 - Bischof, Horst
A2 - Brox, Thomas
A2 - Frahm, Jan-Michael
T2 - 16th European Conference on Computer Vision 2020
Y2 - 23 August 2020 through 28 August 2020
ER -