Deep Learning in Confocal Fluorescence Microscopy for Synthetic Image Generation and Processing Pipeline
- Abstract number
- 274
- Presentation Form
- Poster & Flash Talk
- DOI
- 10.22443/rms.mmc2023.274
- Corresponding Email
- [email protected]
- Session
- Artificial Intelligence
- Authors
- Diego Silvera (1), María José Millán (1), Emiliano Merlo (1), PhD Federico Lecumberry (1), PhD Álvaro Gomez (1), PhD Patricia Cassina (2), MSc Erik Winiarski (2)
- Affiliations
-
1. Facultad de Ingeniería - UdelaR
2. Facultad de Medicina - UdelaR
- Keywords
Confocal microscopy, data generation, deconvolution, segmentation, morphological characteristics and classification
- Abstract text
Machine Learning has had a significant impact on microscopy, enabling faster and more accurate analysis of biological imaging data. In particular, Generative Adversarial Networks (GANs) and U-Net have emerged as powerful tools in this field.
GANs (I. Goodfellow et al. 2020) are a type of deep learning model that consists of two neural networks, a generator and a discriminator. The generator creates synthetic images while the discriminator attempts to differentiate between the synthetic images and real images. Through this adversarial process, the generator improves its ability to generate realistic images, while the discriminator improves its ability to differentiate between real and synthetic images. In microscopy, GANs can be used to generate synthetic microscopy images or to fill in missing or degraded image data as we show in this work.
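As a purely illustrative sketch of this adversarial objective (not the GAN trained in this work), the toy NumPy example below evaluates the two losses on 1-D stand-ins for real and generated samples, with a fixed, hypothetical logistic discriminator:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical fixed "discriminator": a logistic score on 1-D samples.
def discriminator(x):
    return sigmoid(2.0 * x)

real = rng.normal(1.0, 0.1, 100)    # stand-in for real-image statistics
fake = rng.normal(-1.0, 0.1, 100)   # stand-in for generator output

# Discriminator loss: binary cross-entropy pushing D(real) -> 1, D(fake) -> 0.
d_loss = -np.mean(np.log(discriminator(real)) + np.log(1.0 - discriminator(fake)))
# Non-saturating generator loss: pushes D(fake) toward 1.
g_loss = -np.mean(np.log(discriminator(fake)))
```

In training, the generator's parameters are updated to decrease `g_loss` while the discriminator's are updated to decrease `d_loss`, which is the alternating game described above.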
U-Net (O. Ronneberger et al. 2015) is a type of convolutional neural network that is specifically designed for image segmentation tasks. The architecture of U-Net consists of an encoder and a decoder, with skip connections between corresponding layers in the encoder and decoder. In microscopy, U-Net has been used to segment objects of interest, such as cells or subcellular structures, enabling more accurate analysis of the images.
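The encoder/decoder/skip-connection idea can be sketched shape-wise with plain NumPy on a toy 8x8 single-channel "image" (no learned convolutions, just the resolution bookkeeping):

```python
import numpy as np

def down(x):
    """2x2 max pooling: the encoder halves spatial resolution."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def up(x):
    """Nearest-neighbour upsampling: the decoder doubles resolution."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

x = np.arange(64, dtype=float).reshape(8, 8)  # toy single-channel image
e1 = x            # encoder, level 1 (8x8)
e2 = down(e1)     # encoder, level 2 (4x4)
b = down(e2)      # bottleneck (2x2)
d2 = up(b)        # decoder, level 2 (4x4)
# Skip connection: the decoder combines the encoder feature map
# at the same resolution (here stacked along a channel axis).
skip2 = np.stack([e2, d2])          # shape (2, 4, 4)
d1 = up(skip2.mean(axis=0))         # decoder, level 1 (8x8)
```

The skip connections are what let U-Net recover fine spatial detail lost during downsampling, which matters for delineating thin structures such as mitochondrial branches.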
Overall, the integration of machine learning techniques, particularly GANs and U-Net, into microscopy has enabled researchers to analyze imaging data more efficiently and effectively, leading to new insights and advances in the field of biology (K. Dunn 2019; F. Long 2020).
In this work, a GAN architecture is trained to generate synthetic confocal fluorescence microscopy images from blood monocyte stacks from control individuals and patients, in which nuclei and mitochondria were marked with different fluorescent probes. These images are then processed by a pipeline implemented by the authors, consisting of deconvolution, segmentation, and feature extraction for mitochondrial classification.
In the deconvolution stage, the methods implemented in the ImageJ plugin "DeconvolutionLab2" (D. Sage et al. 2017) are used; their performance is analyzed as a function of the parameters used and of their characteristics, such as whether they are regularized, and whether they are iterative or non-iterative.
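DeconvolutionLab2 itself runs inside ImageJ; as a rough 1-D NumPy illustration of one iterative algorithm in that family, Richardson-Lucy, the sketch below deblurs a point source with a hypothetical 3-tap kernel (the real pipeline operates on 3-D stacks with a measured or simulated PSF):

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=100):
    """1-D Richardson-Lucy iteration (toy sketch of the algorithm family)."""
    est = np.full_like(observed, observed.mean())  # flat initial estimate
    psf_mirror = psf[::-1]
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(conv, 1e-12)  # avoid divide-by-zero
        est = est * np.convolve(ratio, psf_mirror, mode="same")
    return est

psf = np.array([0.25, 0.5, 0.25])        # hypothetical symmetric blur kernel
truth = np.zeros(21); truth[10] = 1.0    # a point source
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf, n_iter=200)
```

The multiplicative update keeps the estimate non-negative, which is one reason Richardson-Lucy is popular for fluorescence data.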
For segmentation, different approaches are evaluated: traditional histogram-based thresholding methods (Otsu, Huang, and Li, among others), unsupervised clustering methods such as K-Means (Lloyd 1957; MacQueen 1967), and deep learning methods such as the U-Net neural network.
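As a minimal sketch of the histogram-based family, the NumPy-only Otsu implementation below (synthetic toy image, not the study's data) picks the threshold that maximizes between-class variance and segments one bright blob:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's method: choose the bin edge maximizing between-class variance."""
    hist, edges = np.histogram(img, bins=nbins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)              # class-0 probability up to each bin
    mu = np.cumsum(p * centers)    # class-0 cumulative mean mass
    mu_t = mu[-1]                  # global mean
    valid = (w0 > 0) & (w0 < 1)
    between = np.zeros(nbins)
    between[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (
        w0[valid] * (1 - w0[valid]))
    return edges[np.argmax(between) + 1]

rng = np.random.default_rng(1)
img = rng.normal(20, 5, (64, 64))   # dim background
img[20:40, 20:40] += 100            # one bright 20x20 "mitochondrion"
t = otsu_threshold(img)
mask = img > t
```

K-Means with two clusters on the intensity values would give a very similar binary mask on such bimodal data; U-Net becomes necessary when intensity alone no longer separates the structures.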
In feature extraction, morphological and connectivity features are obtained. The morphological features are the usual ones (volume, area, and sphericity, among others). The connectivity features are derived by skeletonization, pruning, and graph modeling (M. Zanin et al. 2020); the resulting parameters include the number of nodes, the link density, and the efficiency, among others.
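The connectivity parameters named above can be sketched in plain Python on a hypothetical 5-node skeleton graph (nodes as branch points/endpoints, edges as skeleton branches; global efficiency follows Latora & Marchiori):

```python
from itertools import combinations
from collections import deque

# Toy mitochondrial network as an adjacency list (hypothetical example).
graph = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1, 4], 4: [3]}

def shortest_path_len(g, s, t):
    """Breadth-first search: shortest path length in an unweighted graph."""
    seen, q = {s}, deque([(s, 0)])
    while q:
        node, d = q.popleft()
        if node == t:
            return d
        for nb in g[node]:
            if nb not in seen:
                seen.add(nb)
                q.append((nb, d + 1))
    return float("inf")

n = len(graph)                                   # number of nodes
n_links = sum(len(v) for v in graph.values()) // 2
link_density = 2 * n_links / (n * (n - 1))       # fraction of possible links
# Global efficiency: mean inverse shortest path length over node pairs.
efficiency = sum(1 / shortest_path_len(graph, s, t)
                 for s, t in combinations(graph, 2)) / (n * (n - 1) / 2)
```

A fragmented mitochondrial network yields many small components with low efficiency, while a fused network is fewer, larger, and more efficient, which is why these metrics discriminate between conditions.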
Finally, for the mitochondrial classification, classical approaches such as Decision Trees, Logistic Regression, and Support Vector Machines (SVM) were used.
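A minimal sketch of one such classifier, assuming two hypothetical morphological features (volume and sphericity) on synthetic data: logistic regression fit by plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical per-mitochondrion features: [volume, sphericity].
control = rng.normal([1.0, 0.8], 0.1, (50, 2))   # control: small, round
patient = rng.normal([2.0, 0.4], 0.1, (50, 2))   # patient: large, elongated
X = np.vstack([control, patient])
y = np.array([0] * 50 + [1] * 50)

# Logistic regression trained by full-batch gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    z = np.clip(X @ w + b, -30, 30)     # clip logits to avoid overflow
    p = 1 / (1 + np.exp(-z))
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

z = np.clip(X @ w + b, -30, 30)
accuracy = np.mean(((1 / (1 + np.exp(-z))) > 0.5) == y)
```

Decision Trees and SVMs plug into the same feature matrix `X`; on these toy, well-separated clusters any of the three reaches high accuracy, while the real features extracted in the pipeline overlap more, hence the reported >70% figure.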
The work was done in the Python programming language. We are currently working on making this framework publicly available.
The final result of the work is an end-to-end pipeline with different processing options in the deconvolution and segmentation stages, usable with different microscopy data; a synthetic data generator that performs well at simulating the effect of fluorescence on binary masks; and an application of both products to mitochondrial classification, with an accuracy above 70%.
It is concluded that neural networks play a fundamental role in the processing of medical and biological images, and can be used for data augmentation, segmentation, and classification.
- References
[1] B. R. Masters, J. G. Fujimoto, and D. L. Farkas, “Biomedical optical imaging,” Journal of Biomedical Optics, vol. 15, no. 5, pp. 3–28, 2010.
[2] A. López-Macay, J. Fernández-Torres, A. Zepeda, and U. Nacional, “Principios y aplicaciones de la microscopia láser confocal en la investigación biomédica / Principles and applications of laser confocal microscopy in biomedical research,” Investigación en Discapacidad, vol. 5, pp. 139–145, 2016. [Online]. Available: http://www.medigraphic.com/rid
[3] “Imagina.” [Online]. Available: https://www.imagina.ei.udelar.edu.uy/
[4] “Classic maximum likelihood estimation.” [Online]. Available: https://svi.nl/MaximumLikelihoodEstimation
[5] “Good roughness maximum likelihood estimation.” [Online]. Available: https://svi.nl/GoodRoughnessMaximumLikelihoodEstimation
[6] M. Boulakroune, D. Benatia, N. Slougui, and A. E. Oualkadi, “Tikhonov-Miller regularization with a denoisy and deconvolved signal as model of solution for improvement of depth resolution in SIMS analysis,” 2008 3rd International Conference on Information and Communication Technologies: From Theory to Applications, ICTTA, 2008.
[7] M. K. Khan, S. Morigi, L. Reichel, and F. Sgallari, “Iterative methods of Richardson-Lucy-type for image deblurring,” Numerical Mathematics, vol. 6, no. 1, pp. 262–275, 2013.
[8] “Richardson-Lucy algorithm with total variation regularization for 3D confocal microscope deconvolution,” Microscopy Research and Technique, vol. 69, no. 4, pp. 260–266, 2006.
[9] N. S. Punn and S. Agarwal, Modality specific U-Net variants for biomedical image segmentation: a survey. Springer Netherlands, 2022. [Online]. Available: https://doi.org/10.1007/s10462-022-10152-1
[10] J. Schindelin, I. Arganda-Carreras et al., “Fiji: an open-source platform for
biological-image analysis,” Nature Methods, vol. 9, no. 7, pp. 676–682, Jul
2012. [Online]. Available: https://doi.org/10.1038/nmeth.2019
[11] C. T. Rueden, J. Schindelin, M. C. Hiner, B. E. DeZonia, A. E. Walter,
E. T. Arena, and K. W. Eliceiri, “Imagej2: Imagej for the next generation of
scientific image data,” BMC Bioinformatics, vol. 18, no. 1, p. 529, Nov 2017.
[Online]. Available: https://doi.org/10.1186/s12859-017-1934-z
[12] D. Sage et al., “DeconvolutionLab2: An open-source software for
deconvolution microscopy,” Methods, vol. 115, pp. 28–41, 2017. [Online].
Available: http://dx.doi.org/10.1016/j.ymeth.2016.12.015
[13] K. W. Dunn, C. Fu, D. J. Ho, S. Lee, S. Han, P. Salama, and E. J. Delp,
“Deepsynth: Three-dimensional nuclear segmentation of biological images
using neural networks trained with synthetic data,” Scientific Reports, vol. 9,
no. 1, p. 18295, Dec 2019.
[14] M. Zanin, B. Santos et al., “Mitochondria interaction networks show altered topological patterns in Parkinson’s disease,” npj Systems Biology and Applications, vol. 6, no. 38, 2020. [Online]. Available: http://dx.doi.org/10.1038/s41540-020-00156-4
[15] “Microscopía de fluorescencia.” [Online]. Available: https://www.leica-microsystems.com/es/aplicaciones/ciencias-biologicas/fluorescencia/
[16] C. Greb, “Fluorescent dyes,” Jan 2022. [Online]. Available: https://www.leica-microsystems.com/science-lab/fluorescent-dyes/
[17] “Confocal microscopes.” [Online]. Available: https://www.britannica.com/technology/microscope/Confocal-microscopes
[18] J. Waters, “The point spread function,” Dec 2018. [Online]. Available: https://www.youtube.com/watch?v=Tkc_GOCjx7E
[19] D. B. Schmolze, C. Standley, K. E. Fogarty, and A. H. Fischer, “Advances in microscopy techniques,” Archives of Pathology and Laboratory Medicine, vol. 135, no. 2, pp. 255–263, 2011.
[20] M. Wilson, “Collecting light: The importance of numerical aperture in microscopy,” Aug 2017. [Online]. Available: https://www.leica-microsystems.com/science-lab/collecting-light-the-importance-of-numerical-aperture-in-microscopy/
[21] “Immersion objectives: Using oil, glycerol, or water to overcome some of the limits of resolution,” Nov 2021. [Online]. Available: https://www.leica-microsystems.com/science-lab/immersion-objectives-using-oil-glycerol-or-water-to-overcome-some-of-the-limits-of-resolution/
[22] “PSF Generator.” [Online]. Available: http://bigwww.epfl.ch/algorithms/psfgenerator/
[23] M. Born and E. Wolf, Principles of Optics, 7th ed. Cambridge University Press, 2003.
[24] H. W. Dan, W. C. Hung, Y. S. Tsai, and C. C. Lin, “3D image processing
for mitochondria morphology variation analysis,” 2014 IEEE International
Symposium on Bioelectronics and Bioinformatics, IEEE ISBB 2014, pp. 0–3,
2014.
[25] D. J. Ho, C. Fu, P. Salama, K. W. Dunn, and E. J. Delp, “Nuclei detection
and segmentation of fluorescence microscopy images using three dimensional
convolutional neural networks,” Proceedings - International Symposium on
Biomedical Imaging, vol. 2018-April, pp. 418–422, 2018.
[26] S. Nesmachnow and S. Iturriaga, “Cluster-uy: Collaborative scientific high
performance computing in uruguay,” in Supercomputing, M. Torres and
J. Klapp, Eds. Cham: Springer International Publishing, 2019, pp. 188–202.
[27] I. Goodfellow, J. Pouget-Abadie, M. Mirza et al., “Generative adversarial
networks.” Communications of the ACM, vol. 63, no. 11, pp. 139 – 144, 2020.
[28] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image
translation using cycle-consistent adversarial networks,” in Proceedings of the
IEEE International Conference on Computer Vision (ICCV), Oct 2017.
[29] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation
with conditional adversarial networks,” CVPR, 2017.
[30] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial nets.” [Online]. Available: https://phillipi.github.io/pix2pix/
[31] C. Fu, S. Lee, D. J. Ho, S. Han, P. Salama, K. W. Dunn, and E. J. Delp, “Three
dimensional fluorescence microscopy image synthesis and segmentation,” in
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Workshops (CVPRW), June 2018, pp. 2302–23 028.
[32] T. Falk, D. Mai, R. Bensch, Ö. Çiçek, A. Abdulkadir, Y. Marrakchi, A. Böhm, J. Deubner, Z. Jäckel, K. Seiwald, A. Dovzhenko, O. Tietz, C. Dal Bosco, S. Walsh, D. Saltukoglu, T. L. Tay, M. Prinz, K. Palme, M. Simons, I. Diester, T. Brox, and O. Ronneberger, “U-Net: deep learning for cell counting, detection, and morphometry,” Nature Methods, vol. 16, no. 1, pp. 67–70, 2019.
[33] J.-B. Sibarita, “Deconvolution microscopy,” Microscopy Techniques, pp. 201–
243, 2005.
[34] J. G. McNally, T. Karpova, J. Cooper, and J. A. Conchello, “Three-dimensional imaging by deconvolution microscopy,” Methods, vol. 19, no. 3, pp. 373–385, 1999.
[35] P. Sarder and A. Nehorai, “Deconvolution methods for 3-D fluorescence microscopy images,” IEEE Signal Processing Magazine, vol. 23, no. 3, pp. 32–45, 2006.
[36] N. Wiener, “Extrapolation, interpolation, and smoothing of stationary time
series, vol. 2,” 1949.
[37] S. Horii, “A gentle introduction to maximum likelihood estimation and maximum a posteriori estimation,” Oct 2019. [Online]. Available: https://towardsdatascience.com/a-gentle-introduction-to-maximum-likelihood-estimation-and-maximum-a-posteriori-estimation-d7c318f9d22d
[38] J. Pillow, “Lecture 22: Linear Shift-Invariant (LSI) Systems and Convolution,” Mathematical Tools for Neuroscience (NEU 314), 2016.
[39] R. M. Gray, “Toeplitz and circulant matrices: A review,” Foundations and
Trends in Communications and Information Theory, vol. 2, no. 3, pp. 155–
239, 2006.
[40] “Pyimagej.” [Online]. Available: https://pypi.org/project/pyimagej/
[41] T. M. Cover and J. A. Thomas, Elements of Information Theory, 2005.
[42] J. P. Pluim, J. A. Maintz, and M. A. Viergever, “Mutual-information-based
registration of medical images: a survey,” IEEE transactions on medical ima-
ging, vol. 22, no. 8, pp. 986–1004, 2003.
[43] A. Gron, Hands-On Machine Learning with Scikit-Learn and TensorFlow:
Concepts, Tools, and Techniques to Build Intelligent Systems, 2nd ed.
O’Reilly Media, Inc., 2019.
[44] K. Miura, “An Introduction to Maximum Likelihood Estimation and Information Geometry,” Interdisciplinary Information Sciences, vol. 17, no. 3, pp. 155–174, 2011.
[45] C. E. Shannon, “A mathematical theory of communication,” Bell System Technical Journal, vol. 27, pp. 379–423, 623–656, 1948.
[46] L.-K. Huang and M.-J. J. Wang, “Image thresholding by minimizing the measures of fuzziness,” Pattern Recognition, vol. 28, no. 1, pp. 41–51, 1995.
[47] C. H. Li and P. K. Tam, “An iterative algorithm for minimum cross entropy
thresholding,” Pattern Recognition Letters, vol. 19, no. 8, pp. 771–776, 1998.
[48] T. Ridler and S. Calvard, “Picture thresholding using an iterative selection method,” IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-8, no. 8, pp. 630–632, 1978.
[49] T. Pun, “A new method for grey-level picture thresholding using the entropy
of the histogram,” pp. 223–237, 1980.
[50] Y. C. Liang and J. R. Cuevas, “An automatic multilevel image thresholding
using relative entropy and meta-heuristic algorithms,” Entropy, vol. 15, no. 6,
pp. 2181–2209, 2013.
[51] “Análisis de redes mitocondriales.” [Online]. Available: https://prueba-timag.webnode.com.uy/
[52] “Algoritmo k-means: Clustering de forma sencilla,” May 2021. [Online]. Available: https://www.themachinelearners.com/k-means/
[53] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” arXiv, 2015. [Online]. Available: https://arxiv.org/abs/1505.04597
[54] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd ed., 2008.
[55] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recog-
nition,” Proceedings of the IEEE Computer Society Conference on Computer
Vision and Pattern Recognition, vol. 2016-December, pp. 770–778, 2016.
[56] “UNet with ResBlock for semantic segmentation.” [Online]. Available: https://medium.com/@nishanksingla/unet-with-resblock-for-semantic-segmentation-dd1766b4ff66c
[57] L. R. Dice, “Measures of the amount of ecologic association between species,” Ecology, vol. 26, no. 3, pp. 297–302, 1945.
[58] A. A. Taha and A. Hanbury, “Metrics for evaluating 3D medical image
segmentation: Analysis, selection, and tool,” BMC Medical Imaging, vol. 15,
no. 1, 2015. [Online]. Available: http://dx.doi.org/10.1186/s12880-015-0068-x
[59] “Skeletonize.” [Online]. Available: https://scikit-image.org/docs/stable/auto_examples/edges/plot_skeleton.html
[60] Z. Yaniv, B. C. Lowekamp, H. J. Johnson, and R. Beare, “SimpleITK
image-analysis notebooks: a collaborative environment for education and
reproducible research,” Journal of Digital Imaging, vol. 31, no. 3, pp. 290–303,
Nov. 2017. [Online]. Available: https://doi.org/10.1007/s10278-017-0037-8
[61] P. Kollmannsberger, M. Kerschnitzki, F. Repp, W. Wagermaier, R. Wein-
kamer, and P. Fratzl, “The small world of osteocytes: Connectomics of the
lacuno-canalicular network in bone,” New Journal of Physics, vol. 19, no. 7,
2017.
[62] G. Lehmann, “Label object representation and manipulation with itk,” Insight
J., 01 2008.
[63] V. Latora and M. Marchiori, “Efficient behavior of small-world networks,”
Physical Review Letters, vol. 87, no. 19, oct 2001. [Online]. Available:
https://doi.org/10.1103%2Fphysrevlett.87.198701
[64] V. D. Blondel, J.-L. Guillaume, R. Lambiotte, and E. Lefebvre, “Fast
unfolding of communities in large networks,” Journal of Statistical Mechanics:
Theory and Experiment, vol. 2008, no. 10, p. P10008, oct 2008. [Online].
Available: https://doi.org/10.1088%2F1742-5468%2F2008%2F10%2Fp10008
[65] M. E. J. Newman, “Assortative mixing in networks,” Physical Review Letters, vol. 89, no. 20, Oct 2002. [Online]. Available: https://doi.org/10.1103/physrevlett.89.208701
[66] A. Hagberg, P. Swart, and D. Chult, “Exploring network structure, dynamics,
and function using networkx,” 01 2008.
[67] M. Gustineli, “A survey on recently proposed activation functions for Deep
Learning,” pp. 1–4, 2022. [Online]. Available: http://arxiv.org/abs/2204.02921
[68] “A gentle introduction to mini-batch gradient descent and how to configure batch size.” [Online]. Available: https://machinelearningmastery.com/gentle-introduction-mini-batch-gradient-descent-configure-batch-size/
[69] P. Munro et al., “Backprop,” pp. 69–73, 2011.
[70] “Lecture 4: Backpropagation and neural networks.” [Online]. Available: http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture4.pdf
[71] K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification,” Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 1026–1034, 2015.
[72] K. M. A. Adweb, N. Cavus, and B. Sekeroglu, “Cervical Cancer Diagnosis
Using Very Deep Networks over Different Activation Functions,” IEEE Access,
vol. 9, pp. 46 612–46 625, 2021.
[73] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov, “Improving neural networks by preventing co-adaptation of feature detectors,” 2012. [Online]. Available: http://arxiv.org/abs/1207.0580
[74] B. Mele and G. Altarelli, “Lepton spectra as a measure of b quark polarization
at LEP,” Physics Letters B, vol. 299, no. 3-4, pp. 345–350, 1993.
[75] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network
training by reducing internal covariate shift,” 32nd International Conference
on Machine Learning, ICML 2015, vol. 1, pp. 448–456, 2015.
[76] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “ImageNet Large Scale Visual Recognition Challenge,” International Journal of Computer Vision (IJCV), vol. 115, no. 3, pp. 211–252, 2015.
[77] Universidad de Sevilla, “Procesamiento de imágenes con MATLAB,” 2014. [Online]. Available: http://asignatura.us.es/imagendigital/Matlab_PID_1314.pdf
[78] R. P. Grimaldi, Matemáticas discretas y combinatoria: una introducción con aplicaciones. Pearson Educación, 1998. [Online]. Available: https://books.google.com.uy/books?id=lHqqjoR0b1YC
[79] Y. S. Abu-Mostafa, M. Magdon-Ismail, and H. Lin, Learning from Data: A
Short Course. AMLBook, 2012.
[80] M. Banko and E. Brill, “Scaling to very very large corpora for natural language disambiguation,” in Proceedings of the 39th Annual Meeting on Association for Computational Linguistics, ser. ACL ’01. USA: Association for Computational Linguistics, 2001, pp. 26–33.
[81] S. Wold, K. Esbensen, and P. Geladi, “Principal component analysis,” Chemometrics and Intelligent Laboratory Systems, vol. 2, no. 1-3, pp. 37–52, 1987.
[82] H. B. Curry, “The method of steepest descent for non-linear minimization
problems,” Quarterly of Applied Mathematics, vol. 2, no. 3, pp. 258–261, 1944.
[83] S. Wright, J. Nocedal et al., “Numerical optimization,” Springer Science,
vol. 35, no. 67-68, p. 7, 1999.
[84] M. Hofmann, “Support vector machines-kernels and the kernel trick,” Notes,
vol. 26, no. 3, pp. 1–16, 2006.