Forget about cleaning up your micrographs: Deep learning segmentation is robust to image artifacts

July 30, 2020

Peng Dong (1), Benjamin Provencher (2), Nabil Basim (1), Nicolas Piché (2), Mike Marsh (3)
Microscopy and Microanalysis, 26, Supplement 2, July 2020: 1468-1469. DOI: 10.1017/S1431927620018231


Abstract

Quantitative image analysis almost always requires a thorough segmentation, i.e., a pixel-wise labeling of the different phases of the imaged material. With traditional tools, that segmentation can proceed only after complicating image artifacts have been filtered away or otherwise compensated for so they do not lead to spurious labeling. Identifying the right image processing operations to mitigate artifacts and restore images can be tedious, and executing those operations may introduce side effects in the form of image processing artifacts. We show here that deep learning segmentation, i.e., image processing by convolutional neural networks, produces high-quality labeling without any prior image cleanup, greatly simplifying the workflow that yields valuable quantitative analyses.
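To make the core idea concrete, the sketch below shows what pixel-wise labeling by a convolutional neural network looks like in code: a small fully convolutional model maps a single-channel micrograph to one class logit per pixel, and the segmentation is the per-pixel argmax over those logits. This is a minimal illustrative example in PyTorch, not the architecture or training setup used by the authors; the network size, class count, and synthetic data are assumptions.

```python
# Minimal sketch (not the authors' Dragonfly implementation) of CNN-based
# semantic segmentation: per-pixel class logits from a tiny fully
# convolutional network, trained with a cross-entropy loss.
import torch
import torch.nn as nn


class TinySegNet(nn.Module):
    """Two conv layers plus a 1x1 classifier producing one logit map per phase."""

    def __init__(self, in_channels: int = 1, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # A 1x1 convolution assigns a class score to every pixel.
        self.classifier = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


if __name__ == "__main__":
    model = TinySegNet(in_channels=1, num_classes=3)

    # Synthetic stand-ins for a noisy micrograph and its pixel-wise labels.
    image = torch.randn(1, 1, 128, 128)           # (batch, channel, H, W)
    labels = torch.randint(0, 3, (1, 128, 128))   # one class index per pixel

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # One illustrative optimization step on the raw (uncleaned) image.
    logits = model(image)                         # shape (1, 3, 128, 128)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()

    # Segmentation = argmax over class logits, one label per pixel.
    segmentation = logits.argmax(dim=1)
    print(segmentation.shape)                     # torch.Size([1, 128, 128])
```

The key point the paper makes is that the network input here is the raw image; no denoising, ring-artifact removal, or other cleanup step precedes inference.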


How Our Software Was Used

Dragonfly was used to perform the manual segmentation and to configure, train, and evaluate the neural network models.
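Dragonfly exposes this workflow through its graphical interface; as a rough analogue of the evaluation step, the hypothetical sketch below compares a model's pixel-wise prediction against a manual segmentation with a per-class Dice score. Function and variable names are illustrative and do not correspond to any Dragonfly API.

```python
# Generic evaluation sketch: per-class Dice overlap between a predicted
# label map and a manual (reference) segmentation. Names are hypothetical.
import numpy as np


def dice_score(prediction: np.ndarray, reference: np.ndarray, label: int) -> float:
    """Dice coefficient for one class label between two integer label maps."""
    pred_mask = prediction == label
    ref_mask = reference == label
    intersection = np.logical_and(pred_mask, ref_mask).sum()
    denominator = pred_mask.sum() + ref_mask.sum()
    return 2.0 * intersection / denominator if denominator else 1.0


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    manual = rng.integers(0, 3, size=(128, 128))          # manual segmentation
    predicted = manual.copy()
    predicted[:10] = rng.integers(0, 3, size=(10, 128))   # simulate disagreement

    for label in range(3):
        print(f"class {label}: Dice = {dice_score(predicted, manual, label):.3f}")
```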


Author Affiliation

(1) McMaster University, Hamilton, Ontario, Canada.
(2) Object Research Systems, Montreal, Quebec, Canada.
(3) Object Research Systems, Denver, Colorado, United States.