Active Organs Segmentation in Metastatic Breast Cancer Images combining Superpixels and Deep Learning Methods

Constance Fourcade, PhD student
Feb 18, 2020

This work was presented orally on Thursday, February 13, 2020, at the 4th Nuclear Technologies for Health Symposium (NTHS2020).

Constance Fourcade1,2, Gianmarco Santini2 PhD, Ludovic Ferrer3,4 PhD, Caroline Rousseau3,4 MD PhD, Mathilde Colombié4 MD, Mario Campone3,4 MD PhD, Mathieu Rubeaux2 PhD and Diana Mateus1 PhD

1 LS2N, Centrale Nantes, Nantes, France
2 Keosys, Saint Herblain, France
3 University of Nantes, CRCINA, INSERM UMR1232, CNRS-ERL6001, Nantes, France
4 ICO Gauducheau Cancer Center, Saint Herblain, France

Hypothesis
In the clinical follow-up of metastatic breast cancer patients, semi-automatic measurements are performed on 18FDG PET/CT images to monitor the evolution of the main metastatic sites. Apart from being time-consuming and prone to subjective approximations, semi-automatic tools cannot distinguish cancerous regions from active organs, which also present a high 18FDG uptake.
In the context of the Epicure project, we developed and compared fully automatic deep learning-based methods to segment the main active organs (brain, heart, bladder) from full-body PET images.

Methods
We combine deep learning-based approaches with superpixel segmentation methods. In particular, we integrate the SLIC superpixel segmentation algorithm [1] at different depths of a convolutional neural network [2], i.e., as an additional input (U-Net-SP-Input) and within the network loss (U-Net-SP-Loss).
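As a rough illustration of the U-Net-SP-Input idea, a superpixel label map can be turned into a superpixel-averaged intensity volume and stacked with the raw PET volume as an extra input channel. This is a hypothetical sketch; the function name and the exact way the superpixel information enters the network are assumptions, not the published implementation.

```python
import numpy as np

def superpixel_channel(pet, sp_labels):
    """Replace each voxel by the mean PET intensity of its superpixel.

    The resulting smoothed map can be concatenated with the raw PET
    volume as an extra input channel (illustrative sketch only).
    """
    n = sp_labels.max() + 1
    sums = np.zeros(n)
    np.add.at(sums, sp_labels.ravel(), pet.ravel())        # per-label intensity sums
    counts = np.bincount(sp_labels.ravel(), minlength=n)   # per-label voxel counts
    means = sums / np.maximum(counts, 1)
    return means[sp_labels]                                # broadcast back to the volume

# Usage: stack as channels for a 2-channel network input
# x = np.stack([pet, superpixel_channel(pet, sp_labels)], axis=0)
```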


3D U-Net, the deep learning network used for the three compared approaches

Superpixels reduce the resolution of the images while keeping the boundaries of the larger target organs sharp; the lesions, which are mostly smaller, are blurred. Results are compared with a deep learning segmentation network alone (U-Net).


PET image segmented into superpixels of (approximate) size 12 mm × 12 mm × 10 mm and compactness 5 using the SLIC algorithm.

The methods are cross-validated on full-body PET images from 36 acquisitions of the ongoing EPICUREseinmeta study. The similarity between the manually defined ground truth masks of the organs and the results is evaluated with the Dice score; the ground truth masks were delineated on the Keosys Viewer [3]. Moreover, since these methods are a preliminary step toward tumor segmentation, the precision of the networks is assessed by counting the segmented voxels labelled as "active organ" that actually belong to a lesion.
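The two evaluation quantities above are straightforward to compute on binary masks. A minimal sketch (the function names are ours, not from the study):

```python
import numpy as np

def dice_score(pred, gt):
    """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def organ_voxels_in_lesions(organ_pred, lesion_mask):
    """Count predicted 'active organ' voxels that fall inside a lesion."""
    return int(np.logical_and(organ_pred.astype(bool),
                              lesion_mask.astype(bool)).sum())
```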

Results
Although the methods present similarly high Dice scores (0.96 ± 0.006), the ones using superpixels achieve a higher precision (on average 6, 16 and 27 selected voxels belonging to a tumor for the networks integrating superpixels in the input, in the optimization, and not using them, respectively).
Moreover, as the images below show, the U-Net method, which does not integrate superpixel information, segmented tumor parts instead of active organs.

Conclusion
Combining deep learning with superpixels makes it possible to segment organs presenting a high 18FDG uptake on PET images without selecting cancerous lesions. This improves the precision of the semi-automatic tools monitoring the evolution of breast cancer metastases.

Bibliography
[1] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk, “SLIC superpixels compared to state-of-the-art superpixel methods,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 11, pp. 2274–2281, 2012.
[2] F. Isensee et al., “nnU-Net: Self-adapting Framework for U-Net-Based Medical Image Segmentation,” Inform. aktuell, p. 22, 2019.
[3] Keosys Medical Imaging, https://www.keosys.com/read-system.
