Abstract: Training segmentation networks requires large annotated datasets, which in medical imaging can be hard to obtain. Despite this, data augmentation has, in our opinion, not been fully explored for brain tumor segmentation; a possible explanation is that the BraTS 2020 dataset already contains a rather large number of training subjects (369). Here we apply different types of data augmentation (flipping, rotation, scaling, brightness adjustment, elastic deformation) when training a standard 3D U-Net, and demonstrate that augmentation significantly improves performance on the validation set (125 subjects) in many cases. Our conclusion is that brightness augmentation and elastic deformation work best, and that combinations of different augmentation techniques do not provide further improvement compared to using a single augmentation technique.
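The sketch below illustrates the five augmentation types named in the abstract, applied on the fly to a 3D MR volume stored as a NumPy array. It is a minimal illustration under assumed parameter ranges (rotation angle, scaling factor, brightness factor, elastic-field strength), not the authors' implementation.

```python
# Minimal sketch of the augmentation types from the abstract (illustrative
# parameter ranges, not the values used in the paper).
import numpy as np
from scipy.ndimage import rotate, zoom, gaussian_filter, map_coordinates

def augment(volume, rng):
    """Apply one randomly chosen augmentation to a 3D volume of shape (D, H, W)."""
    choice = rng.integers(5)
    if choice == 0:                                    # flipping along a random axis
        return np.flip(volume, axis=rng.integers(3)).copy()
    if choice == 1:                                    # small in-plane rotation
        angle = rng.uniform(-10, 10)
        return rotate(volume, angle, axes=(1, 2), reshape=False, order=1)
    if choice == 2:                                    # isotropic scaling
        scaled = zoom(volume, rng.uniform(0.9, 1.1), order=1)
        return _match_shape(scaled, volume.shape)
    if choice == 3:                                    # brightness adjustment
        return volume * rng.uniform(0.9, 1.1)
    # elastic deformation: smooth random displacement fields added to the grid
    alpha, sigma = 10.0, 4.0                           # assumed magnitude / smoothness
    coords = np.meshgrid(*[np.arange(s) for s in volume.shape], indexing="ij")
    displaced = [c + gaussian_filter(rng.uniform(-1, 1, volume.shape), sigma) * alpha
                 for c in coords]
    return map_coordinates(volume, displaced, order=1)

def _match_shape(arr, shape):
    """Center-crop or zero-pad arr back to the original shape after scaling."""
    out = np.zeros(shape, dtype=arr.dtype)
    src = [slice(max((a - s) // 2, 0), max((a - s) // 2, 0) + min(a, s))
           for a, s in zip(arr.shape, shape)]
    dst = [slice(max((s - a) // 2, 0), max((s - a) // 2, 0) + min(a, s))
           for a, s in zip(arr.shape, shape)]
    out[tuple(dst)] = arr[tuple(src)]
    return out
```

For segmentation, any spatial transform (flip, rotation, scaling, elastic deformation) must also be applied to the label volume, typically with nearest-neighbour interpolation; this is omitted here for brevity.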
Abstract: We propose a 3D volume-to-volume Generative Adversarial Network (GAN) for segmentation of brain tumours. The proposed model, called Vox2Vox, generates segmentations from multi-channel 3D MR images. The best results are obtained when the loss of the generator (a 3D U-Net) is weighted 5 times higher than the loss of the discriminator (a 3D GAN). For the BraTS 2018 training set we obtain (after ensembling 5 models) the following Dice scores and Hausdorff 95th percentile distances: 90.66%, 82.54%, 78.71%, and 4.04 mm, 6.07 mm, 5.00 mm, for whole tumour, core tumour, and enhancing tumour, respectively. The proposed model is shown to compare favorably to the winners of the BraTS 2018 challenge, although a direct comparison is not possible.
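To illustrate the weighting described above, the sketch below combines a Dice-based segmentation term with a binary cross-entropy adversarial term, weighting the former 5 times higher. The function names and the exact loss forms are assumptions for illustration, not the paper's code.

```python
# Sketch of a generator objective for a Vox2Vox-style model: the segmentation
# (generator) loss is weighted 5x higher than the adversarial (discriminator) loss.
import torch
import torch.nn.functional as F

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over a batch of (one-hot) segmentation volumes."""
    dims = tuple(range(1, pred.ndim))
    intersection = (pred * target).sum(dims)
    union = pred.sum(dims) + target.sum(dims)
    return 1.0 - ((2.0 * intersection + eps) / (union + eps)).mean()

def generator_objective(gen_out, target_seg, disc_fake_logits, alpha=5.0):
    """Weighted sum: alpha * segmentation loss + adversarial loss.

    alpha=5 reflects the abstract's statement that the generator loss is
    weighted 5 times higher than the discriminator loss.
    """
    seg = dice_loss(gen_out, target_seg)
    adv = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    return alpha * seg + adv
```

With alpha set much lower, the adversarial term dominates and segmentation accuracy typically suffers, which is consistent with the abstract's observation that a higher weight on the generator loss gives the best results.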