Noémie Moreau, PhD student
Feb 18, 2020
This work was presented orally on Thursday, February 13, 2020 at the 4th Nuclear Technologies for Health Symposium (NTHS2020).
Noémie Moreau1,2, Caroline Rousseau3,4 MD PhD, Ludovic Ferrer3,4 PhD, Mario Campone3,4 MD PhD, Mathilde Colombié4 MD, Nicolas Normand1 PhD and Mathieu Rubeaux2 PhD
1 LS2N, Centrale Nantes, Nantes, France
2 Keosys, Saint Herblain, France
3 University of Nantes, CRCINA, INSERM UMR1232, CNRS-ERL6001, Nantes, France
4 ICO Gauducheau Cancer Center, Saint Herblain, France
Breast cancer is the most common cancer in women and the second most frequent cancer overall. One in eight women will be diagnosed with invasive breast cancer in their lifetime. Breast cancer survival varies according to cancer staging at diagnosis: if detected early, the overall 5-year survival rate is 98%, but it drops to 27% with metastatic involvement. 18FDG positron emission tomography combined with computed tomography (18FDG PET/CT) whole-body imaging is widely used for diagnosis and follow-up. Based on these imaging techniques, lesion segmentation can provide information to assess treatment effect and to adapt the treatment over time. However, manual segmentation methods are time-consuming and subject to inter- and intra-observer variability.
As a first step towards automating segmentation, we compare the performance of semi-automatic traditional and deep learning-based segmentation methods.
Figure 1: Global workflow. The SUVmax is used to mimic the user click and compute the bounding box. The bounding box is then used to define a region of interest on which the thresholding methods and the deep learning approach are applied.
308 metastatic lesions from 16 patients of the EPICUREseinmeta study were manually delineated by an ICO nuclear medicine physician using the Keosys viewer. Only PET images converted to SUV were used.
The SUVmax seed point was extracted from the delineations to mimic a one-click user initialization, and was used to automatically build a bounding box around each lesion. This bounding box then defined a region of interest on which the 5 traditional region growing-based algorithms (Nestle, 41% SUVmax, Daisne, Black, and a STAPLE consensus of the 4 previous methods) and the deep learning approach were applied (Fig. 1).
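The bounding-box and fixed-threshold steps above can be sketched as follows. This is a minimal illustration, not the study's implementation: the volume, seed, and the `half_size` box radius are hypothetical, and only the 41% SUVmax fixed-threshold rule is shown.

```python
import numpy as np

def bbox_41pct(suv, seed, half_size=8):
    """Crop a cubic region of interest around the SUVmax seed voxel,
    then keep voxels whose SUV is at least 41% of the seed value."""
    # clip the bounding box to the volume boundaries
    lo = [max(c - half_size, 0) for c in seed]
    hi = [min(c + half_size + 1, s) for c, s in zip(seed, suv.shape)]
    roi = suv[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    threshold = 0.41 * suv[seed]        # 41% of SUVmax
    mask = roi >= threshold
    return roi, mask

# toy example: a 32^3 SUV volume with a bright synthetic "lesion"
vol = np.zeros((32, 32, 32))
vol[14:18, 14:18, 14:18] = 10.0                       # lesion core, SUV = 10
seed = np.unravel_index(np.argmax(vol), vol.shape)    # SUVmax voxel
roi, mask = bbox_41pct(vol, seed)
print(roi.shape, int(mask.sum()))
```

The other thresholding methods (Nestle, Daisne, Black) differ mainly in how the threshold is derived (e.g. from background activity or source-to-background ratio) rather than in this overall cropping-and-thresholding structure.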
A 3D U-Net implementation called nnU-Net was used for the deep learning approach. A 3-fold cross-validation was performed: 2/3 of the patients were used for training and 1/3 for validation. During training, the network was fed with the region of interest around each lesion and its ground truth mask. After a few hundred epochs, the network parameters and weights are fixed to segment the lesions. For validation, the same network with the same parameters and weights is used to segment new lesions (Fig. 2).
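One detail of the cross-validation worth making explicit is that the split must be done at the patient level, not the lesion level, so that lesions from one patient never appear in both training and validation. A minimal sketch under that assumption (nnU-Net manages its own fold files in practice; the helper below is purely illustrative):

```python
import random

def patient_level_folds(patient_ids, n_folds=3, seed=0):
    """Split patients (not lesions) into folds so that all lesions from
    a given patient stay in the same fold, avoiding data leakage."""
    ids = sorted(patient_ids)
    rng = random.Random(seed)
    rng.shuffle(ids)
    # round-robin assignment after shuffling
    return [ids[i::n_folds] for i in range(n_folds)]

patients = [f"P{i:02d}" for i in range(1, 17)]  # 16 patients, as in the study
folds = patient_level_folds(patients)
for k, val_fold in enumerate(folds):
    train = [p for f in folds if f is not val_fold for p in f]
    print(f"fold {k}: {len(train)} train / {len(val_fold)} validation patients")
```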
Segmentation performance was evaluated by calculating the Dice score between the semi-automatic segmentations and the ground-truth expert manual segmentations.
Figure 2 : During training, the network was fed with the region of interest around the lesion and it ground truth mask. After a couple hundred of epochs, the network parameters and weights are set to segment as well as possible the lesions. For validation, the same network with the same network parameters and weights is used to segment new lesions.
The 5 traditional methods give very heterogeneous results depending on the location and contrast of the lesions, with Dice scores of 0.33 ± 0.24, 0.12 ± 0.07, 0.20 ± 0.15, 0.19 ± 0.13 and 0.16 ± 0.14 for the Nestle, Daisne, Black, 41% SUVmax and STAPLE methods, respectively. In contrast, the deep learning approach obtained a Dice score of 0.48 ± 0.19. Fig. 3 shows visual results for all methods.