Abstract: Segmenting the overlapping cytoplasm of cells in cervical smear images is a clinically essential task: it enables quantitative measurement of cell-level features for diagnosing cervical cancer. The task, however, remains challenging, mainly due to the deficiency of intensity (or color) information in the overlapping region. Although shape prior-based models, which compensate for this intensity deficiency by introducing prior shape information (shape priors) about cytoplasm, are firmly established, they often yield visually implausible results, mainly because they model shape priors with only a limited set of shape hypotheses, exploit cytoplasm-level shape priors alone, and impose no shape constraint on the resulting segmentation. In this paper, we present a novel and effective shape prior-based approach, called constrained multi-shape evolution, that segments all overlapping cytoplasms in a clump simultaneously by jointly evolving each cytoplasm's shape under the guidance of the modeled shape priors. We model local shape priors (cytoplasm-level) by an infinitely large shape hypothesis set that contains all possible shapes of the cytoplasm. During shape evolution, we compensate for the intensity deficiency by introducing not only these local shape priors but also global shape priors (clump-level), modeled from the mutual shape constraints among cytoplasms in the clump. We further constrain the resulting shape in each evolution step to lie in the built shape hypothesis set, which reduces implausible segmentation results. We evaluated the proposed method on two typical cervical smear datasets, and extensive experimental results show that it segments overlapping cytoplasm effectively, consistently outperforming state-of-the-art methods.
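To make the joint-evolution idea concrete, below is a minimal, illustrative sketch in Python/NumPy, not the authors' implementation. It stands in a Chan-Vese-style region term for the locally guided intensity force, a crude overlap penalty for the clump-level mutual constraint, and a finite template set for the shape hypothesis set (which the paper models as infinitely large); all names and parameters (`evolve_clump`, `lam`, `n_iters`, etc.) are hypothetical.

import numpy as np
from scipy import ndimage

def dice(a, b):
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-8)

def project_to_hypothesis_set(mask, templates):
    # Constrain the evolved shape: snap to the most similar shape
    # in a (finite, illustrative) hypothesis set of binary templates.
    return max(templates, key=lambda t: dice(mask, t)).copy()

def evolve_clump(image, masks, templates, n_iters=50, lam=0.5):
    """Jointly evolve all cytoplasm masks in a clump.
    image: HxW float array; masks/templates: lists of HxW bool arrays."""
    masks = [m.copy() for m in masks]
    for _ in range(n_iters):
        total = np.sum(masks, axis=0)  # how many cytoplasms claim each pixel
        for i, m in enumerate(masks):
            inside_mean = image[m].mean()
            outside_mean = image[~m].mean()
            # Local force: region-fit term (surrogate for the intensity
            # force guided by cytoplasm-level shape priors).
            local = -(image - inside_mean) ** 2 + (image - outside_mean) ** 2
            # Global force: penalize pixels already claimed by other
            # cytoplasms (surrogate for the clump-level shape priors).
            global_pen = lam * (total - m.astype(int))
            score = local - global_pen
            # Update only the narrow band around the current boundary.
            band = ndimage.binary_dilation(m) ^ ndimage.binary_erosion(m)
            new_m = m.copy()
            new_m[band] = score[band] > 0
            # Constrain the result to the shape hypothesis set.
            masks[i] = project_to_hypothesis_set(new_m, templates)
    return masks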
Abstract: Exploiting more information from ground-truth (GT) images is an emerging research direction for further improving CNN performance in CT image segmentation. Previous methods focus on devising loss functions for this purpose; however, it is rather difficult to devise a loss function that is both general and optimization-friendly. Here we present a novel and practical method that exploits GT images beyond the loss function. Our insight is that the feature maps of two CNNs, trained respectively on GT and CT images, should be similar in some metric space, because both describe the same objects for the same purpose. We hence exploit GT images by enforcing the feature maps of these two CNNs to be consistent. We assess the proposed method on two datasets and compare its performance to several competitive methods. Extensive experimental results show that the proposed method is effective, outperforming all the compared methods.
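A minimal PyTorch sketch of the feature-consistency idea follows: the CT-branch network is trained with a supervised segmentation loss plus a term pulling its intermediate feature maps toward those of a twin network fed the GT masks. The network architecture, the choice of MSE as the metric, the weight `mu`, and the assumption that the GT branch is trained separately (hence detached) are all illustrative, not the paper's exact configuration.

import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Stand-in CNN; both branches share this architecture."""
    def __init__(self, in_ch):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, 1, 1)  # per-pixel logits

    def forward(self, x):
        f = self.features(x)  # feature maps to be aligned
        return self.head(f), f

ct_net = TinyEncoder(in_ch=1)  # trained on CT images
gt_net = TinyEncoder(in_ch=1)  # assumed pretrained on GT masks

seg_loss = nn.BCEWithLogitsLoss()
mu = 0.1  # weight of the consistency term (assumed)

def training_step(ct, gt):
    """ct: CT image batch; gt: float binary GT mask batch (N,1,H,W)."""
    logits_ct, feat_ct = ct_net(ct)
    _, feat_gt = gt_net(gt)
    # Supervised segmentation loss on the CT branch, plus a consistency
    # term enforcing the two branches' feature maps to be similar.
    return (seg_loss(logits_ct, gt)
            + mu * nn.functional.mse_loss(feat_ct, feat_gt.detach()))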