Abstract
Immunofluorescence microscopy is routinely used to visualise the spatial distribution of proteins, which dictates their cellular function. However, non-specific antibody binding often produces high cytosolic background signals that decrease the image contrast of a target structure. Convolutional neural networks (CNNs) have recently been employed successfully for image restoration in immunofluorescence microscopy, but current methods cannot correct for these background signals. We report a new method, label2label (L2L), that trains a CNN to reduce non-specific signals in immunofluorescence images. In L2L, a CNN is trained with image pairs of two non-identical labels that target the same cellular structure. We show that, after L2L training, a network predicts images with significantly increased contrast of the target structure, which improves further when a multiscale structural similarity (MS-SSIM) loss function is implemented. Our results suggest that sample differences in the training data decrease the hallucination effects observed with other methods. We further assess the performance of a cycle-consistent generative adversarial network, and show that a CNN can be trained to separate structures in superposed immunofluorescence images of two targets.
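The abstract mentions a structural-similarity-based loss for training on image pairs of two labels. As a rough illustration only (not the authors' implementation), the sketch below computes a simplified, global (non-windowed) SSIM between two images and uses 1 − SSIM as a training loss; the paper's multiscale SSIM instead averages windowed SSIM over several image scales. All names here (`ssim_global`, `l2l_loss`) are hypothetical.

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Simplified global SSIM between two images in [0, data_range].

    Stand-in for the multiscale SSIM loss described in the abstract;
    real implementations compute SSIM over local Gaussian windows
    and combine the result across several downsampled scales.
    """
    c1 = (0.01 * data_range) ** 2  # standard SSIM stabilising constants
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

def l2l_loss(pred, target):
    """1 - SSIM, so identical images give a loss near zero."""
    return 1.0 - ssim_global(pred, target)

# Toy image pair standing in for two co-stained labels of the same
# structure: the second is the first plus mild independent noise.
rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = np.clip(a + 0.05 * rng.standard_normal((64, 64)), 0.0, 1.0)

print(l2l_loss(a, a))  # near zero for identical images
print(l2l_loss(a, b))  # small but positive for the noisy pair
```

In an actual L2L-style setup this scalar would be the training objective minimised between the network's prediction from label A and the paired image of label B.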
Original language | English |
---|---|
Article number | jcs258994 |
Journal | Journal of Cell Science |
Volume | 135 |
Issue number | 3 |
Early online date | 13 Jan 2022 |
DOIs | |
Publication status | Published - 10 Feb 2022 |
Keywords
- Antibody labelling
- Cellular structures
- Content-aware image restoration
- Convolutional neural networks
- Fluorescence microscopy
- Noise2noise
ASJC Scopus subject areas
- Cell Biology