A deep learning-based workflow for high-throughput and high-quality widefield fluorescent imaging of 3D samples
- Abstract number
- 142
- Presentation Form
- Submitted Talk
- DOI
- 10.22443/rms.mmc2021.142
- Corresponding Email
- [email protected]
- Session
- Stream 3: 3D+ Image Analysis
- Authors
- Edvin Forsgren (3), Christoffer Edlund (2), Rickard Sjögren (2), Timothy Jackson (1), Kalpana Barnes (1), Minnie Oliver (1)
- Affiliations
-
1. Sartorius, BioAnalytics
2. Sartorius, Data Analytics
3. Umeå University - Department of Chemistry
- Keywords
Convolutional Neural Network, Fluorescent 3D cell imaging, Conditional Generative Adversarial Network, Fluorescent live-cell imaging, High-throughput, Z-sweep, OSA-blocks, Tumor Spheroids
- Abstract text
In this project, we present a novel workflow that enables high-throughput 3D widefield fluorescent imaging, which has previously been impractical, and apply it to fluorescent indicators of cell health within 3D tumor spheroids. The workflow combines deep learning with state-of-the-art live-cell imaging techniques to speed up the acquisition of a fluorescent image of a three-dimensional sample by up to a factor of one hundred.
3D widefield fluorescent imaging of tumor spheroids is challenging because, in order to visualize the entire sample, many images must be acquired at different focal depths to obtain a so-called Z-stack. Z-stack acquisitions have two major drawbacks: visualizing large samples can be very time-consuming, and the longer cells are exposed to fluorescent excitation light, the greater the risk of phototoxicity and photobleaching. These problems become more severe when samples are analyzed over time, since repeated Z-stack acquisitions are required, which limits the overall throughput of live-cell 3D widefield fluorescent imaging. We aim to solve this issue by developing a workflow that acquires high-quality images without sacrificing throughput.
We present a workflow combining fast Z-sweep acquisition with Convolutional Neural Network (CNN)-based image processing. A Z-sweep is acquired by opening the shutter and sweeping the focal plane through the 3D sample, capturing the specimen up to a hundred times faster than a Z-stack but producing a blurry image of fluorescence integrated over the z-dimension. We then train a CNN to predict a high-quality 2D projection, calculated from a Z-stack, from the corresponding blurry Z-sweep image by minimizing the L1 norm of the difference between the predicted projection and the true one. The CNN is a modified variant of U-Net (Ronneberger et al. 2015) that we call OSA-U-Net, which uses one-shot aggregation (OSA) and effective Squeeze-and-Excitation (eSE) blocks (Lee and Park 2019). Additionally, we train our CNN against a second discriminator CNN in a conditional generative adversarial network (cGAN) fashion to improve the appearance of the predicted projections.
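To make the training recipe concrete, below is a minimal PyTorch sketch of the two ingredients named above: an eSE channel-attention gate as used in OSA blocks (Lee and Park 2019) and a generator loss combining the L1 term with a pix2pix-style adversarial term from a PatchGAN discriminator (Isola et al. 2016). The framework, the `adv_weight` value, and the exact block layout are assumptions for illustration; the abstract does not specify the implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EffectiveSE(nn.Module):
    """Effective Squeeze-and-Excitation (eSE) channel attention
    (Lee and Park 2019): a single 1x1 convolution replaces the two
    reduction layers of the original SE block. (The paper uses a hard
    sigmoid; a plain sigmoid is used here for simplicity.)"""
    def __init__(self, channels: int):
        super().__init__()
        self.fc = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = F.adaptive_avg_pool2d(x, 1)   # global average pool -> (N, C, 1, 1)
        w = torch.sigmoid(self.fc(w))     # per-channel gate in [0, 1]
        return x * w                      # reweight the feature maps


def generator_loss(pred: torch.Tensor,
                   target: torch.Tensor,
                   disc_logits: torch.Tensor,
                   adv_weight: float = 0.01) -> torch.Tensor:
    """L1 between predicted and true Z-stack projection, plus an
    adversarial term rewarding predictions the discriminator scores
    as real. adv_weight is a hypothetical value, not from the study."""
    l1 = F.l1_loss(pred, target)
    adv = F.binary_cross_entropy_with_logits(
        disc_logits, torch.ones_like(disc_logits))
    return l1 + adv_weight * adv
```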
We find that our CNN reliably predicts high-quality projection images from the blurry Z-sweeps, providing a route to both high-throughput and high-quality 3D fluorescent imaging. Our OSA-U-Net cGAN with a PatchGAN discriminator (Isola et al. 2016) outperforms the baselines in visual quality, as evaluated by the peak signal-to-noise ratio (PSNR) and the Fréchet inception distance (FID) (Heusel et al. 2018), for both single- and multi-spheroid assays. We also validate fluorescent signal intensity trends from the Z-sweep/CNN-based workflow against traditional Z-stack projection measurements, showing that we can draw the same biological conclusions as with the much more time-consuming Z-stack acquisition. This is further exemplified in an experiment in which embedded tumor multi-spheroids are treated with cytotoxic compounds and the Z-sweep/CNN-based workflow quantifies reduced cell health over time in response to drug treatment.
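For reference, PSNR has the standard closed form 10·log10(MAX²/MSE); a small NumPy sketch of that metric is given below (the standard definition, not code from this study). FID additionally compares Inception-network feature statistics between the two image sets (Heusel et al. 2018) and is typically computed with an off-the-shelf implementation.

```python
import numpy as np

def psnr(pred: np.ndarray, target: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE).
    Higher is better; identical images yield infinity. max_val is the
    maximum possible pixel value (1.0 for normalized images)."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```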
To conclude, we show how fast Z-sweep acquisition combined with state-of-the-art deep learning methodologies provides a way to acquire high-quality fluorescent images of 3D samples with 10 to 100 times higher throughput than standard Z-stack acquisition.
- References
Ronneberger, O., Fischer, P., Brox, T., 2015. U-Net: Convolutional networks for biomedical image segmentation.
Lee, Y., Park, J., 2019. CenterMask: Real-time anchor-free instance segmentation.
Isola, P., Zhu, J., Zhou, T., Efros, A.A., 2016. Image-to-image translation with conditional adversarial networks.
Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S., 2018. GANs trained by a two time-scale update rule converge to a local Nash equilibrium.