TY - JOUR
T1 - Unsupervised Image Registration towards Enhancing Performance and Explainability in Cardiac and Brain Image Analysis
AU - Wang, Chengjia
AU - Yang, Guang
AU - Papanastasiou, Giorgos
N1 - Funding Information:
This study was supported in part by the British Heart Foundation (TG/18/5/34111, PG/16/78/32402), the European Research Council Innovative Medicines Initiative (101005122), the European Commission H2020 (952172), the Medical Research Council (MC/PC/21013) and the UKRI Future Leaders Fellowship (MR/V023799/1).
Publisher Copyright:
© 2022 by the authors. Licensee MDPI, Basel, Switzerland.
PY - 2022/3/9
Y1 - 2022/3/9
AB - Magnetic Resonance Imaging (MRI) typically recruits multiple sequences (defined here as “modalities”). As each modality is designed to offer different anatomical and functional clinical information, there are evident disparities in the imaging content across modalities. Inter- and intra-modality affine and non-rigid image registration is an essential medical image analysis process in clinical imaging, for example before imaging biomarkers can be derived and clinically evaluated across different MRI modalities, time phases and slices. Although commonly needed in real clinical scenarios, affine and non-rigid image registration has not been extensively investigated using a single unsupervised model architecture. In our work, we present an unsupervised deep learning registration methodology that can accurately model affine and non-rigid transformations simultaneously. Moreover, inverse-consistency is a fundamental inter-modality registration property that is not considered in deep learning registration algorithms. To address inverse consistency, our methodology performs bi-directional cross-modality image synthesis to learn modality-invariant latent representations, and involves two factorised transformation networks (one per encoder-decoder channel) and an inverse-consistency loss to learn topology-preserving anatomical transformations. Overall, our model (named “FIRE”) shows improved performance against the reference standard baseline method (i.e., Symmetric Normalization implemented using the ANTs toolbox) on multi-modality brain 2D and 3D MRI and intra-modality cardiac 4D MRI data experiments. We focus on explaining model-data components to enhance model explainability in medical image registration. In computational time experiments, we show that the FIRE model operates in a memory-saving mode, as it can inherently learn topology-preserving image registration directly in the training phase. We therefore demonstrate an efficient and versatile registration technique that can have merit in multi-modal image registration in the clinical setting.
KW - Deep learning
KW - Explainable deep learning
KW - Inverse-consistency
KW - Multi-modality image registration
KW - Unsupervised image registration
UR - http://www.scopus.com/inward/record.url?scp=85125934187&partnerID=8YFLogxK
U2 - 10.3390/s22062125
DO - 10.3390/s22062125
M3 - Article
C2 - 35336295
AN - SCOPUS:85125934187
SN - 1424-8220
VL - 22
JO - Sensors
JF - Sensors
IS - 6
M1 - 2125
ER -