Abstract: Malignant mesothelioma is classified into three histological subtypes (epithelioid, sarcomatoid, and biphasic) according to the relative proportions of epithelioid and sarcomatoid tumor cells present. Biphasic tumors display significant populations of both cell types. This subtyping is subjective, limited by current diagnostic guidelines, and can differ even between expert thoracic pathologists when characterising the continuum of relative proportions of epithelioid and sarcomatoid components using a three-class system. In this work, we develop a novel dual-task Graph Neural Network (GNN) architecture with a ranking loss to learn a model capable of scoring regions of tissue down to cellular resolution. This allows quantitative profiling of a tumor sample according to the aggregate sarcomatoid-association score of all the cells in the sample. The proposed approach uses only core-level labels and frames the prediction task as a dual multiple instance learning (MIL) problem. Tissue is represented by a cell graph with both cell-level morphological and regional features. We demonstrate the predictive performance of our model on an external multi-centric test set from Mesobank. We validate the model's predictions through an analysis of the typical morphological features of cells according to their predicted score, finding that some of the morphological differences identified by our model match known differences used by pathologists. We further show that the model score is predictive of patient survival, with a hazard ratio of 2.30. The code for the proposed approach, along with the dataset, is available at: https://github.com/measty/MesoGraph.
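As a rough, hedged illustration of the dual-task MIL idea described in this abstract (not the released MesoGraph implementation), a shared GNN backbone over the cell graph can feed two heads: one scoring each cell and one classifying the whole core, with a margin ranking term ordering cores by their aggregated cell scores. All module names, layer choices, and the loss weighting below are assumptions for the sketch.

```python
# Hypothetical sketch of a dual-task cell-graph GNN with a ranking loss (PyTorch Geometric).
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_mean_pool

class DualTaskCellGraphGNN(nn.Module):
    def __init__(self, in_dim, hidden_dim=64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, hidden_dim)
        # Two heads: one scores each cell (node), one classifies the whole core (graph).
        self.node_head = nn.Linear(hidden_dim, 1)
        self.graph_head = nn.Linear(hidden_dim, 1)

    def forward(self, x, edge_index, batch):
        h = torch.relu(self.conv1(x, edge_index))
        h = torch.relu(self.conv2(h, edge_index))
        node_scores = self.node_head(h).squeeze(-1)                      # per-cell association score
        graph_logit = self.graph_head(global_mean_pool(h, batch)).squeeze(-1)  # core-level prediction
        return node_scores, graph_logit

def dual_mil_loss(graph_logit, node_scores, batch, label, margin=1.0):
    """Core-level BCE plus a margin ranking term on MIL-aggregated cell scores."""
    bce = nn.functional.binary_cross_entropy_with_logits(graph_logit, label)
    agg = global_mean_pool(node_scores.unsqueeze(-1), batch).squeeze(-1)  # aggregate cell scores per core
    # Rank sarcomatoid-containing cores above purely epithelioid ones.
    pos, neg = agg[label > 0.5], agg[label <= 0.5]
    if len(pos) and len(neg):
        rank = nn.functional.margin_ranking_loss(
            pos.repeat_interleave(len(neg)), neg.repeat(len(pos)),
            torch.ones(len(pos) * len(neg)), margin=margin)
    else:
        rank = torch.tensor(0.0)
    return bce + rank
```

In such a setup, the per-node scores provide the cellular-resolution map referred to in the abstract, while only the core-level label supervises training, consistent with the MIL framing.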
Abstract: This paper responds to the MIA-COV19 challenge of classifying COVID from non-COVID cases based on CT lung images. The COVID-19 virus has devastated the world in the last eighteen months, infecting more than 182 million people and causing over 3.9 million deaths. The overarching aim is to predict the diagnosis of the COVID-19 virus from chest radiographs through the development of explainable vision transformer deep learning techniques, leading to population screening that is more rapid, accurate and transparent. In this competition, there are 5381 three-dimensional (3D) datasets in total, including 1552 for training, 374 for evaluation and 3455 for testing. While most of the data volumes are in axial view, a number of subjects' data are in coronal or sagittal views, with only 1 or 2 slices in the axial view. Hence, while 3D data-based classification is investigated, 2D images remain the main focus in this competition. Two deep learning methods are studied: a vision transformer (ViT) based on attention models, and DenseNet, which is built upon a conventional convolutional neural network (CNN). Initial evaluation results on the validation datasets, for which the ground truth is known, indicate that ViT performs better than DenseNet, with F1 scores of 0.76 and 0.72 respectively. Code is available on GitHub at <https://github/xiaohong1/COVID-ViT>.
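As a hedged sketch of the slice-level 2D classification pipeline this abstract describes (not the code in the linked repository), the two backbones being compared could be instantiated with standard torchvision models and slice predictions averaged into a per-volume decision; the builder function, preprocessing shape, and mean-probability aggregation are assumptions.

```python
# Hypothetical sketch: 2D slice classification with ViT or DenseNet backbones,
# aggregated to a per-volume COVID / non-COVID probability.
import torch
from torchvision.models import vit_b_16, densenet121

def build_model(arch="vit", num_classes=2):
    # Two candidate backbones, as compared in the abstract.
    if arch == "vit":
        return vit_b_16(weights=None, num_classes=num_classes)
    return densenet121(weights=None, num_classes=num_classes)

@torch.no_grad()
def classify_volume(model, slices):
    """slices: tensor [num_slices, 3, 224, 224] of preprocessed axial CT slices.
    Returns a volume-level COVID probability by averaging per-slice probabilities."""
    model.eval()
    probs = torch.softmax(model(slices), dim=1)[:, 1]  # per-slice probability of COVID
    return probs.mean().item()
```

A slice-wise scheme like this keeps 2D images as the primary input, as in the abstract, while still yielding a single prediction per 3D volume for evaluation against the competition labels.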