Abstract:Purpose: To investigate the feasibility of accelerating prostate diffusion-weighted imaging (DWI) by reducing the number of acquired averages and denoising the resulting image using a proposed guided denoising convolutional neural network (DnCNN). Materials and Methods: Raw data from prostate DWI scans were retrospectively gathered (between July 2018 and July 2019) from six single-vendor MRI scanners. 118 data sets were used for training and validation (age: 64.3 ± 8 years) and 37 for testing (age: 65.1 ± 7.3 years). High b-value diffusion-weighted (hb-DW) data were reconstructed into noisy images using two averages and into reference images using all sixteen averages. A conventional DnCNN was modified into a guided DnCNN, which uses the low b-value DWI image as a guidance input. Quantitative and qualitative reader evaluations were performed on the denoised hb-DW images. A cumulative link mixed regression model was used to compare the readers' scores. The agreement between the apparent diffusion coefficient (ADC) maps (denoised vs. reference) was analyzed using Bland-Altman analysis. Results: Compared with the DnCNN, the guided DnCNN produced denoised hb-DW images with higher peak signal-to-noise ratio and structural similarity index and lower normalized mean square error (p < 0.001). Compared with the reference images, the denoised images received higher image quality scores (p < 0.0001). The ADC values based on the denoised hb-DW images were in good agreement with the reference ADC values. Conclusion: Accelerating prostate DWI by reducing the number of acquired averages and denoising the resulting image using the proposed guided DnCNN is technically feasible.
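The guided-denoising idea above can be illustrated with a minimal sketch: a DnCNN-style residual network that receives the noisy high b-value image together with the low b-value guidance image. The abstract does not specify how the guidance is injected; concatenating it as a second input channel, as well as the depth, width, and residual-learning formulation below, are assumptions borrowed from the original DnCNN design rather than the authors' implementation.

```python
# Minimal sketch of a guided DnCNN (assumed two-channel input and DnCNN-style residual learning).
import torch
import torch.nn as nn


class GuidedDnCNN(nn.Module):
    def __init__(self, depth: int = 17, width: int = 64):
        super().__init__()
        # Input has 2 channels: the noisy high b-value image plus the low b-value guidance image.
        layers = [nn.Conv2d(2, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1),
                       nn.BatchNorm2d(width),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, 1, 3, padding=1)]  # predicts the noise residual
        self.body = nn.Sequential(*layers)

    def forward(self, noisy_hb, guidance_lb):
        x = torch.cat([noisy_hb, guidance_lb], dim=1)
        residual = self.body(x)
        return noisy_hb - residual  # residual learning: denoised = noisy - predicted noise


if __name__ == "__main__":
    model = GuidedDnCNN()
    noisy = torch.randn(1, 1, 128, 128)   # stand-in 2-average high b-value image
    guide = torch.randn(1, 1, 128, 128)   # stand-in low b-value guidance image
    print(model(noisy, guide).shape)      # torch.Size([1, 1, 128, 128])
```

Predicting the noise map rather than the clean image is the standard DnCNN design choice; the guidance channel simply gives the network an anatomical reference with higher signal-to-noise ratio.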
Abstract:Accurate detection of pulmonary nodules with high sensitivity and specificity is essential for automatic lung cancer diagnosis from CT scans. Although many deep learning-based algorithms have made great progress in improving the accuracy of nodule detection, the high false positive rate remains a challenging problem that limits automatic diagnosis in routine clinical practice. Moreover, CT scans collected from scanners of multiple manufacturers may affect the robustness of computer-aided diagnosis (CAD) due to differences in intensity scales and machine noise. In this paper, we propose a novel self-supervised learning assisted pulmonary nodule detection framework based on a 3D Feature Pyramid Network (3DFPN) that improves the sensitivity of nodule detection by employing multi-scale features to increase the resolution of nodules, together with a parallel top-down path that transmits high-level semantic features to complement low-level general features. Furthermore, a High Sensitivity and Specificity (HS2) network is introduced to eliminate false positive nodule candidates by tracking the appearance changes of each candidate across continuous CT slices on Location History Images (LHI). In addition, to improve the performance consistency of the proposed framework across data captured by different CT scanners without using additional annotations, an effective self-supervised learning scheme is applied to learn spatiotemporal features of CT scans from large-scale unlabeled data. The performance and robustness of our method are evaluated on several publicly available datasets and show significant improvements. The proposed framework accurately detects pulmonary nodules with high sensitivity and specificity, achieving 90.6% sensitivity at 1/8 false positives per scan on the LUNA16 dataset, which outperforms the state-of-the-art result by 15.8%.
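As a rough sketch of the top-down, multi-scale fusion that the 3DFPN relies on, the fragment below upsamples higher-level (more semantic) 3D feature maps and merges them with lower-level maps through 1x1x1 lateral connections. The channel counts, number of pyramid levels, and backbone are hypothetical; this illustrates the generic feature pyramid mechanism, not the authors' exact 3DFPN or the HS2/LHI components.

```python
# Illustrative sketch (assumed channel counts and pyramid levels) of the top-down
# feature fusion used by a 3D Feature Pyramid Network, in PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopDownFPN3D(nn.Module):
    def __init__(self, in_channels=(64, 128, 256), out_channels=64):
        super().__init__()
        # 1x1x1 lateral convolutions bring every backbone level to a common width.
        self.lateral = nn.ModuleList([nn.Conv3d(c, out_channels, 1) for c in in_channels])
        # 3x3x3 smoothing convolutions reduce upsampling artifacts after fusion.
        self.smooth = nn.ModuleList([nn.Conv3d(out_channels, out_channels, 3, padding=1)
                                     for _ in in_channels])

    def forward(self, feats):
        # feats: backbone feature maps ordered fine (high resolution) -> coarse (semantic).
        laterals = [lat(f) for lat, f in zip(self.lateral, feats)]
        # Top-down path: upsample the coarser, more semantic level and add it to the finer one.
        for i in range(len(laterals) - 1, 0, -1):
            up = F.interpolate(laterals[i], size=laterals[i - 1].shape[2:],
                               mode="trilinear", align_corners=False)
            laterals[i - 1] = laterals[i - 1] + up
        return [s(l) for s, l in zip(self.smooth, laterals)]


if __name__ == "__main__":
    feats = [torch.randn(1, 64, 32, 64, 64),
             torch.randn(1, 128, 16, 32, 32),
             torch.randn(1, 256, 8, 16, 16)]
    for p in TopDownFPN3D()(feats):
        print(p.shape)  # every level ends up with 64 channels at its original resolution
```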
Abstract:Accurate detection of pulmonary nodules with high sensitivity and specificity is essential for automatic lung cancer diagnosis from CT scans. Although many deep learning-based algorithms have made great progress in improving the accuracy of nodule detection, the high false positive rate remains a challenging problem that limits automatic diagnosis in routine clinical practice. In this paper, we propose a novel pulmonary nodule detection framework based on a 3D Feature Pyramid Network (3DFPN) that improves the sensitivity of nodule detection by employing multi-scale features to increase the resolution of nodules, together with a parallel top-down path that transmits high-level semantic features to complement low-level general features. Furthermore, a High Sensitivity and Specificity (HS2) network is introduced to eliminate falsely detected nodule candidates by tracking the appearance changes of each candidate across continuous CT slices. The proposed framework is evaluated on the public Lung Nodule Analysis (LUNA16) challenge dataset. Our method accurately detects lung nodules with high sensitivity and specificity, achieving 90.4% sensitivity at 1/8 false positives per scan, which outperforms the state-of-the-art result by 15.6%.
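The false-positive-reduction step of tracking each candidate's appearance across neighboring slices can be sketched as follows: the same in-plane window is cropped from a few consecutive slices around a candidate, and a small 3D classifier scores the stacked patch. The patch size, slice range, and classifier are hypothetical stand-ins; the actual HS2 network and its slice-tracking mechanism are only summarized, not reproduced.

```python
# Hedged sketch of slice-wise candidate tracking for false positive reduction
# (hypothetical patch size, slice range, and classifier; candidates are assumed
# to lie away from the volume borders).
import torch
import torch.nn as nn


def stack_candidate_slices(volume, z, y, x, half=16, depth=3):
    """Crop the same in-plane window from `depth` slices above and below the candidate."""
    z0, z1 = max(z - depth, 0), min(z + depth + 1, volume.shape[0])
    patch = volume[z0:z1, y - half:y + half, x - half:x + half]
    return patch.unsqueeze(0).unsqueeze(0)  # shape (1, 1, n_slices, 2*half, 2*half)


class CandidateClassifier(nn.Module):
    """Tiny 3D CNN that scores a stacked candidate patch as nodule vs. false positive."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(16, 1)

    def forward(self, patch):
        return self.head(self.features(patch).flatten(1))  # logit: > 0 keeps the candidate


if __name__ == "__main__":
    ct = torch.randn(64, 256, 256)                         # stand-in CT volume (slices, H, W)
    patch = stack_candidate_slices(ct, z=30, y=120, x=140)
    print(CandidateClassifier()(patch).shape)              # torch.Size([1, 1])
```

The intuition is that true nodules keep a roughly consistent appearance across adjacent slices, whereas vessels and other false positives change shape quickly along the axial direction.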
Abstract:Lung segmentation in computed tomography (CT) images is an important procedure in the diagnosis of various lung diseases. Most current lung segmentation approaches are performed through a series of procedures with manual, empirical parameter adjustments at each step. Pursuing an automatic segmentation method with fewer steps, we propose in this paper a novel deep learning Generative Adversarial Network (GAN)-based lung segmentation schema, which we denote as LGAN. The proposed schema can be generalized to different kinds of neural networks for lung segmentation in CT images and is evaluated on a dataset containing 220 individual CT scans with two metrics: segmentation quality and shape similarity. We also compare our method with current state-of-the-art methods. The results of this study demonstrate that the proposed LGAN schema can be used as a promising tool for automatic lung segmentation due to its simplified procedure as well as its good performance.
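A minimal sketch of the adversarial segmentation idea behind such a schema, assuming a pixel-wise generator that predicts a lung mask and a discriminator that judges (CT slice, mask) pairs: the generator is trained with a supervised segmentation loss plus an adversarial term, while the discriminator learns to separate ground-truth masks from predicted ones. The toy networks and the combined loss below are illustrative assumptions, not the LGAN architecture itself.

```python
# Minimal sketch of adversarial mask refinement (assumed toy networks and losses).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskGenerator(nn.Module):
    """Predicts a per-pixel lung probability map from a single CT slice."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, ct):
        return self.net(ct)


class MaskDiscriminator(nn.Module):
    """Scores a (CT slice, mask) pair: high for ground-truth masks, low for predictions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
        )

    def forward(self, ct, mask):
        return self.net(torch.cat([ct, mask], dim=1))


if __name__ == "__main__":
    gen, disc = MaskGenerator(), MaskDiscriminator()
    adv = nn.BCEWithLogitsLoss()
    ct = torch.randn(2, 1, 128, 128)                 # stand-in CT slices
    gt = (torch.rand(2, 1, 128, 128) > 0.5).float()  # stand-in ground-truth masks
    pred = gen(ct)
    # Generator: supervised segmentation term plus an adversarial term (fool the discriminator).
    g_loss = F.binary_cross_entropy(pred, gt) + adv(disc(ct, pred), torch.ones(2, 1))
    # Discriminator: ground-truth masks scored as real (1), predicted masks as fake (0).
    d_loss = adv(disc(ct, gt), torch.ones(2, 1)) + adv(disc(ct, pred.detach()), torch.zeros(2, 1))
    print(float(g_loss), float(d_loss))
```

The adversarial term acts as a learned shape prior: implausible lung masks are easy for the discriminator to reject, which pushes the generator toward anatomically coherent segmentations.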