Label2label: Training a Neural Network as Image Content Filter in Immunofluorescence Microscopy
- Abstract number
- 160
- Presentation Form
- Submitted Talk
- Corresponding Email
- [email protected]
- Session
- Stream 2: Machine Learning for Image Analysis
- Authors
- Lisa Sophie Kölln (3, 2, 4), Omar Salem (2, 4), Dr Jessica Valli (1), Dr Carsten Gram Hansen (2, 4), Prof Gail McConnell (3)
- Affiliations
-
1. Heriot-Watt University, Edinburgh Super-Resolution Imaging Consortium
2. University of Edinburgh, Centre for Inflammation Research
3. University of Strathclyde, Department of Physics
4. University of Edinburgh, Institute for Regeneration and Repair
- Keywords
fluorescence microscopy, image content filter, antibody-labelling, deep learning, convolutional neural networks, content-aware image restoration, cellular structures, focal adhesions, microtubules, actin
- Abstract text
We present a new deep learning-based method, named label2label (L2L), that removes non-structural background signal from fluorescence microscopy images of cellular structures.
Precise identification and spatio-temporal localisation of proteins are critical to obtain insights into their cellular functions. Lipid- and protein-specific stains and antibodies are most commonly used to visualise the location of cellular targets in vitro in fluorescence microscopy, but these markers often exhibit low specificity and therefore also label additional, unintended targets. As a consequence, images of cellular structures often suffer from low contrast and high background noise, making image quantification difficult. Here, we show that a neural network can be trained as a selective image content filter for specific cellular structures that enhances the contrast of the target and reduces image background. Notably, L2L does not require clean benchmark data.
In L2L, a fully convolutional neural network (CNN) is trained with fluorescence images of cells that are dual-labelled with two markers for the same structure of interest. We exploit the difference in performance between pairs of commercially available labels, which leads to quantifiably different images of the same structure, for instance due to different binding sites on the target protein or differing specificity. The underlying principle of L2L resembles the denoising method noise2noise (N2N), in which a network is trained with noisy image pairs of the same sample; in L2L, however, the differences between the input and benchmark data arise not only from image noise but also from systematic sample differences. For training, we use a CNN with a U-Net architecture that was previously developed for content-aware image restoration of cell images.
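To make the training scheme concrete, the sketch below shows one hypothetical L2L-style optimisation step in PyTorch: an image of the structure acquired with one marker is the network input, and the image of the same structure acquired with the second marker is the training target. The TinyUNet class, the loss choice and the random example tensors are illustrative placeholders, not the authors' CARE-style U-Net or data pipeline.
```python
# Minimal, hypothetical sketch of an L2L-style training step in PyTorch.
# Input: an image of a structure acquired with marker A; training target:
# an image of the same structure acquired with marker B.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """A very small encoder-decoder standing in for the U-Net used in L2L."""
    def __init__(self, ch=16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(ch, ch, 2, stride=2), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

def l2l_training_step(model, optimiser, marker_a, marker_b, loss_fn):
    """One optimisation step: predict the marker-B image from the marker-A image."""
    optimiser.zero_grad()
    prediction = model(marker_a)          # network output for the marker-A input
    loss = loss_fn(prediction, marker_b)  # compare against the second label of the same structure
    loss.backward()
    optimiser.step()
    return loss.item()

if __name__ == "__main__":
    model = TinyUNet()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Random tensors stand in for a dual-labelled image pair (batch, channel, H, W).
    marker_a = torch.rand(4, 1, 64, 64)
    marker_b = torch.rand(4, 1, 64, 64)
    print(l2l_training_step(model, optimiser, marker_a, marker_b, nn.L1Loss()))
```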
Choosing appropriate marker pairs, we show that the CNN learns to filter out undesired image signal during L2L training, while the contrast of cellular structures such as focal adhesions, microtubules or actin filaments is enhanced. In comparison, with N2N, sample artefacts that are present in both the input and ground-truth data persist in the predicted images. Using a multi-scale structural similarity index (MS-SSIM) loss function, we find that the network further enhances structural contrast, and we discuss how L2L training decreases the likelihood of hallucination effects for this loss function. We also show that a cycle-consistent generative adversarial network can be trained as a content filter in the absence of paired images.
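As an illustration of the loss mentioned above, the following is a simplified, single-scale SSIM loss in PyTorch; the work itself uses a multi-scale SSIM (MS-SSIM) loss, and the uniform averaging window and constants here are common defaults rather than the authors' settings.
```python
# Simplified single-scale SSIM loss, a stand-in for the MS-SSIM loss used in L2L.
import torch
import torch.nn.functional as F

def ssim_loss(pred, target, window_size=11, data_range=1.0):
    """Return 1 - mean local SSIM, computed with a uniform averaging window."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    channels = pred.shape[1]
    # Uniform window; a Gaussian weighting is more common but omitted for brevity.
    window = torch.ones(channels, 1, window_size, window_size,
                        device=pred.device) / window_size ** 2
    pad = window_size // 2

    # Local means, variances and covariance via depthwise convolution.
    mu_x = F.conv2d(pred, window, padding=pad, groups=channels)
    mu_y = F.conv2d(target, window, padding=pad, groups=channels)
    sigma_x = F.conv2d(pred * pred, window, padding=pad, groups=channels) - mu_x ** 2
    sigma_y = F.conv2d(target * target, window, padding=pad, groups=channels) - mu_y ** 2
    sigma_xy = F.conv2d(pred * target, window, padding=pad, groups=channels) - mu_x * mu_y

    ssim_map = ((2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2))
    return 1.0 - ssim_map.mean()

# Usage: loss = ssim_loss(model(marker_a), marker_b)
```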
To our knowledge, this is the first successful attempt to train a network as a content filter in fluorescence microscopy with image pairs that stem from different labels. The selective recovery of cellular structures by a network after L2L training could in future serve as a useful pre-processing step for quantifying cell images more easily, as well as a helpful tool for multiplex or live-cell imaging, where the choice of labels is limited.
- References
Kölln, L. S.; Salem, O.; Valli, J.; Hansen, C. G.; McConnell, G., 'Label2label: Using deep learning and dual-labelling to retrieve cellular structures in fluorescence images', bioRxiv preprint (2020), https://doi.org/10.1101/2020.12.21.423789