Abstract: Classifying subjects as healthy or diseased using neuroimaging data has gained a lot of attention during the last 10 years. Here we apply deep learning to derivatives from resting state fMRI data, and investigate how different 3D augmentation techniques affect the test accuracy. Specifically, we use resting state derivatives from 1,112 subjects in the ABIDE preprocessed dataset to train a 3D convolutional neural network (CNN) to perform the classification. Our results show that augmentation provides only minor improvements to the test accuracy.
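As a minimal illustration of the kind of 3D augmentation investigated here (not the authors' actual training code), random flipping of a volumetric input along each spatial axis might be sketched as follows; the volume shape is a hypothetical MNI-space example:

```python
import numpy as np

def random_flip_3d(volume, rng):
    """Randomly flip a 3D volume along each spatial axis with probability 0.5."""
    for axis in range(3):
        if rng.random() < 0.5:
            volume = np.flip(volume, axis=axis)
    return volume

rng = np.random.default_rng(0)
vol = rng.standard_normal((61, 73, 61))  # hypothetical derivative volume in MNI space
aug = random_flip_3d(vol, rng)
print(aug.shape)  # flipping preserves the original shape
```

In practice such a transform would be applied on the fly to each training volume, so the CNN sees a different variant in every epoch.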
Abstract: Training segmentation networks requires large annotated datasets, which in medical imaging can be hard to obtain. Despite this, data augmentation has in our opinion not been fully explored for brain tumor segmentation (a possible explanation is that the number of training subjects (369) is rather large in the BraTS 2020 dataset). Here we apply different types of data augmentation (flipping, rotation, scaling, brightness adjustment, elastic deformation) when training a standard 3D U-Net, and demonstrate that augmentation significantly improves performance on the validation set (125 subjects) in many cases. Our conclusion is that brightness augmentation and elastic deformation work best, and that combinations of different augmentation techniques do not provide further improvement compared to using a single augmentation technique.
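A brightness adjustment of the kind listed above can be sketched as a single random intensity scaling per volume; this is an illustrative simplification, and the scaling range below is a hypothetical choice, not the setting used in the paper:

```python
import numpy as np

def brightness_augment(volume, rng, low=0.9, high=1.1):
    """Scale all voxel intensities by one random factor (brightness jitter)."""
    factor = rng.uniform(low, high)
    return volume * factor

rng = np.random.default_rng(42)
vol = np.ones((8, 8, 8))          # toy volume; real MR volumes are much larger
aug = brightness_augment(vol, rng)
```

Because a single factor multiplies the whole volume, spatial structure (and hence the segmentation labels) is left unchanged, which is what makes intensity augmentations cheap to combine with spatial ones.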
Abstract: We propose a 3D volume-to-volume Generative Adversarial Network (GAN) for segmentation of brain tumours. The proposed model, called Vox2Vox, generates segmentations from multi-channel 3D MR images. The best results are obtained when the loss of the generator (a 3D U-Net) is weighted 5 times higher than that of the discriminator (a 3D GAN). For the BraTS 2018 training set we obtain (after ensembling 5 models) the following Dice scores and Hausdorff 95 percentile distances: 90.66%, 82.54%, 78.71%, and 4.04 mm, 6.07 mm, 5.00 mm, for whole tumour, tumour core, and enhancing tumour respectively. The proposed model is shown to compare favorably to the winners of the BraTS 2018 challenge, but a direct comparison is not possible.
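The 5:1 loss weighting described above can be sketched as a simple weighted sum of the two objective terms; the function name and the toy loss values are illustrative, not from the paper:

```python
def vox2vox_generator_objective(seg_loss, adv_loss, alpha=5.0):
    """Combine the generator (segmentation) loss and the adversarial
    (discriminator-derived) loss, weighting the former alpha times higher."""
    return alpha * seg_loss + adv_loss

# toy loss values for illustration only
total = vox2vox_generator_objective(0.2, 0.7)
print(total)  # 5*0.2 + 0.7 = 1.7
```

With alpha = 5 the generator is pushed primarily toward accurate segmentations, while the adversarial term acts as a weaker regularizer encouraging realistic label maps.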
Abstract: Registration between an fMRI volume and a T1-weighted volume is challenging, since fMRI volumes contain geometric distortions. Here we present preliminary results showing that a 3D CycleGAN can be used to synthesize fMRI volumes from T1-weighted volumes, and vice versa, which can facilitate registration.
Abstract: Anonymization of medical images is necessary for protecting the identity of the test subjects, and is therefore an essential step in data sharing. However, recent developments in deep learning may raise the bar on the amount of distortion that needs to be applied to guarantee anonymity. To test such possibilities, we have applied the novel CycleGAN unsupervised image-to-image translation framework to sagittal slices of T1 MR images, in order to reconstruct facial features from anonymized data. We applied the CycleGAN framework to both face-blurred and face-removed images. Our results show that face blurring may not provide adequate protection against malicious attempts at identifying the subjects, whereas face removal provides more robust anonymization, although it is still partially reversible.
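A core ingredient of the CycleGAN framework used in the two abstracts above is the cycle-consistency loss, which penalizes the difference between an input and its reconstruction after translating to the other domain and back. A minimal sketch of the L1 version of this term (toy arrays, not real image data) could look like:

```python
import numpy as np

def cycle_consistency_loss(x, x_reconstructed):
    """L1 cycle-consistency term: mean absolute error between the original
    input x and its round-trip reconstruction F(G(x))."""
    return float(np.mean(np.abs(x_reconstructed - x)))

x = np.zeros((4, 4, 4))             # toy "original" volume
x_rec = np.full((4, 4, 4), 0.5)    # toy "reconstruction" after the round trip
loss = cycle_consistency_loss(x, x_rec)
print(loss)  # 0.5
```

This term is what allows CycleGAN to be trained without paired examples: each generator is constrained so that translating to the other domain and back approximately recovers the input.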