Abstract: The most common type of lung cancer, lung adenocarcinoma (LUAD), has been detected increasingly often since the advent of low-dose computed tomography screening. In clinical practice, pre-invasive LUAD (Pre-IAs) requires only regular follow-up care, whereas invasive LUAD (IAs) should receive immediate treatment with the appropriate lung cancer resection, chosen according to the cancer subtype. However, prior research on diagnosing LUAD has focused mainly on classifying Pre-IAs versus IAs, because techniques for distinguishing the different subtypes of IAs have been lacking. In this study, we proposed a multi-head attentional feature fusion (MHA-FF) model that distinguishes not only IAs from Pre-IAs but also the different subtypes of IAs. To predict the subtype of each nodule accurately, we leveraged both radiomics and deep features extracted from computed tomography images. Those features were then aggregated through an adaptive fusion module that learns attention-based discriminative features. The utility of the proposed method is demonstrated on real-world data collected from a multi-center cohort.
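To make the fusion idea concrete, the following is a minimal PyTorch sketch of an attention-based fusion head in the spirit of MHA-FF. The feature dimensions, the number of attention heads, the number of output classes, and the pooling rule are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: fuse a radiomics feature vector and a deep CT feature vector
# with multi-head attention, then classify the nodule subtype.
import torch
import torch.nn as nn

class AttentionFusionClassifier(nn.Module):
    def __init__(self, radiomics_dim=107, deep_dim=512, embed_dim=128,
                 num_heads=4, num_classes=3):  # dimensions are assumptions
        super().__init__()
        # Project radiomics and deep CT features into a shared embedding space.
        self.radiomics_proj = nn.Linear(radiomics_dim, embed_dim)
        self.deep_proj = nn.Linear(deep_dim, embed_dim)
        # Multi-head attention lets each feature stream attend to the other.
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.classifier = nn.Sequential(
            nn.LayerNorm(embed_dim),
            nn.Linear(embed_dim, num_classes),  # e.g. Pre-IA vs. IA subtypes
        )

    def forward(self, radiomics_feat, deep_feat):
        # Stack the two projected feature vectors as a two-token sequence.
        tokens = torch.stack(
            [self.radiomics_proj(radiomics_feat), self.deep_proj(deep_feat)],
            dim=1)                               # (B, 2, embed_dim)
        fused, _ = self.attn(tokens, tokens, tokens)
        # Pool the attended tokens and classify the nodule.
        return self.classifier(fused.mean(dim=1))

# Example: a batch of 8 nodules with 107 radiomics and 512 deep features.
model = AttentionFusionClassifier()
logits = model(torch.randn(8, 107), torch.randn(8, 512))
print(logits.shape)  # torch.Size([8, 3])
```

Treating the two feature streams as tokens of a short sequence is one simple way to let attention weight the radiomics and deep evidence adaptively; the paper's adaptive fusion module may differ in detail.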
Abstract: Lung cancer, particularly in its advanced stages, remains a leading cause of death globally. Although early detection via low-dose computed tomography (CT) is promising, identifying the high-risk factors crucial for selecting the surgical mode remains a challenge. To address this, our study introduces an Attention-Enhanced Graph Convolutional Network (AE-GCN) model that classifies whether high-risk factors are present in stage I lung cancer based on preoperative CT images. This aids surgeons in determining the optimal surgical method before the operation. Unlike previous studies that relied on 3D patch techniques to represent the spatial features of nodules, our method employs a GCN to capture the spatial characteristics of pulmonary nodules. Specifically, we regard each slice of the nodule as a graph vertex, and the inherent spatial relationships between slices form the edges. Then, to enhance the expression of nodule features, we integrate channel and spatial attention mechanisms with a pre-trained VGG model for adaptive feature extraction from pulmonary nodules. Finally, the effectiveness of the proposed method is demonstrated using real-world data collected from hospitals, emphasizing its potential utility in clinical practice.
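The slice-as-vertex construction can be sketched as follows in PyTorch. The CBAM-style attention block, the chain adjacency between neighbouring slices, the layer sizes, and the binary output head are illustrative assumptions under the abstract's description, not the authors' exact AE-GCN implementation.

```python
# Hedged sketch: pre-trained VGG features per CT slice, channel/spatial
# attention, and one GCN layer over a chain graph linking adjacent slices.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class ChannelSpatialAttention(nn.Module):
    """Lightweight channel + spatial attention over a (B, C, H, W) feature map."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(1, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        x = x * self.channel_mlp(x.mean(dim=(2, 3))).view(b, c, 1, 1)  # channel
        x = x * self.spatial_conv(x.mean(dim=1, keepdim=True))         # spatial
        return x

class SliceGCNClassifier(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=128, num_classes=2):
        super().__init__()
        self.backbone = vgg16(weights="IMAGENET1K_V1").features  # pre-trained VGG
        self.attn = ChannelSpatialAttention(feat_dim)
        self.gcn_weight = nn.Linear(feat_dim, hidden_dim)   # single GCN layer
        self.head = nn.Linear(hidden_dim, num_classes)      # high-risk: yes / no

    def forward(self, slices):                 # slices: (S, 3, H, W), one nodule
        # One feature vector per slice = one graph vertex.
        feats = self.attn(self.backbone(slices)).mean(dim=(2, 3))  # (S, 512)
        s = feats.size(0)
        # Chain adjacency: each slice connects to its neighbours, plus self-loops.
        adj = torch.eye(s, device=feats.device)
        idx = torch.arange(s - 1, device=feats.device)
        adj[idx, idx + 1] = 1.0
        adj[idx + 1, idx] = 1.0
        deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
        adj_norm = deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]
        nodes = torch.relu(self.gcn_weight(adj_norm @ feats))   # GCN propagation
        return self.head(nodes.mean(dim=0))                     # graph-level logits

# Example: a nodule represented by 12 slices (grayscale replicated to 3 channels).
model = SliceGCNClassifier()
logits = model(torch.randn(12, 3, 64, 64))
print(logits.shape)  # torch.Size([2])
```

The chain graph encodes the "inherent spatial relationships between slices" in the simplest possible way (adjacent slices are connected); richer edge definitions would slot into the same normalized-adjacency propagation step.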