SKOOTS: Skeleton oriented object segmentation for mitochondria

May 8, 2023

Christopher J. Buswinka (1) (2) (3), Hidetomi Nitta (1), Richard T. Osgood (1) (2), Artur A. Indzhykulian (1) (2)
bioRxiv, 8 May 2023. DOI: https://doi.org/10.1101/2023.05.05.539611


Abstract

The segmentation of individual instances of mitochondria from imaging datasets is informative, yet time-consuming to do by hand, sparking interest in developing automated algorithms using deep neural networks. Existing solutions for various segmentation tasks are largely optimized for one of two types of biomedical imaging: high-resolution three-dimensional (whole-neuron segmentation in volumetric electron microscopy datasets) or low-resolution two-dimensional (whole-cell segmentation of light microscopy images). The former requires consistently predictable boundaries to segment large structures, while the latter is boundary invariant but struggles to segment large 3D objects without downscaling. Mitochondria in whole-cell 3D EM datasets often occupy the challenging middle ground: large, with ambiguous borders, limiting the accuracy of existing tools. To rectify this, we have developed skeleton oriented object segmentation (SKOOTS), a new segmentation approach which efficiently handles large, densely packed mitochondria. We show that SKOOTS can accurately and efficiently segment 3D mitochondria in previously difficult situations. Furthermore, we will release a new, manually annotated, 3D mitochondria segmentation dataset. Finally, we show this approach can be extended to segment objects in 3D light microscopy datasets. These results bridge the gap between existing segmentation approaches and increase the accessibility of three-dimensional biomedical image analysis.


How Our Software Was Used

Dragonfly’s Segmentation Wizard was used to separate all voxels into foreground (mitochondria) and background. A distance map was computed from the resulting foreground mask, and a watershed algorithm was then used to fill the basins of the distance map and generate instance maps of individual mitochondria. Population morphology statistics were calculated in Dragonfly for both the ground-truth and predicted segmentation masks, and further analyzed in Python.
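Dragonfly's internal implementation is not public, but the foreground-to-instances step described above follows a standard distance-transform watershed. The sketch below is a minimal, generic version of that recipe using SciPy only; the function name `instance_labels_from_mask` and the `min_distance` seeding parameter are illustrative assumptions, not part of the published pipeline.

```python
import numpy as np
from scipy import ndimage as ndi


def instance_labels_from_mask(mask, min_distance=2):
    """Turn a binary foreground mask into per-instance labels via a
    distance-transform watershed (a generic sketch, not Dragonfly's code).

    mask: boolean array (2D or 3D), True for foreground voxels.
    Returns an integer label array of the same shape; 0 is background.
    """
    # Distance from each foreground voxel to the nearest background voxel.
    distance = ndi.distance_transform_edt(mask)

    # Seed one marker per local maximum of the distance map (object "cores").
    footprint = np.ones((2 * min_distance + 1,) * mask.ndim)
    local_max = (distance == ndi.maximum_filter(distance, footprint=footprint)) & mask
    markers, _ = ndi.label(local_max)

    # Watershed floods a cost image from the markers; invert the distance map
    # so basins sit at the object cores. watershed_ift needs uint8/uint16 input.
    cost = np.clip(distance.max() - distance, 0, 255).astype(np.uint8)
    labels = ndi.watershed_ift(cost, markers)

    # Restrict the flooded labels to the foreground mask.
    labels[~mask] = 0
    return labels
```

For touching mitochondria, the basins of the distance map meet at the narrow "neck" between objects, which is where the watershed draws the instance boundary; the `min_distance` seeding radius trades off over-segmentation (too small) against merged instances (too large).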


Author Affiliation

(1) Eaton Peabody Laboratories, Mass Eye and Ear, Boston, MA, USA
(2) Department of Otolaryngology, Head and Neck Surgery, Harvard Medical School, Boston, MA, USA
(3) Speech and Hearing Biosciences and Technology graduate program, Harvard University, Cambridge, MA, USA