Abstract: Lung cancer is one of the leading causes of cancer-related deaths globally. Early detection and treatment improve the chances of survival. Traditionally, CT scans have been used to extract the most significant information about lung lesions and diagnose cancer, a process carried out manually by an expert radiologist. The skewed radiologist-to-population ratio in a country like India places significant work pressure on radiologists and thus raises the need to automate some of their responsibilities. The tendency of modern deep neural networks to make overconfident mistakes limits their usage in cancer detection. In this paper, we propose a new task-specific loss function to calibrate the neural network and reduce the risk of overconfident mistakes. We use the state-of-the-art Multi-class Difference in Confidence and Accuracy (MDCA) loss in conjunction with the proposed task-specific loss function to achieve this. We also integrate post-hoc calibration by performing temperature scaling on top of the train-time calibrated model. We demonstrate a 5.98% improvement in Expected Calibration Error (ECE) and a 17.9% improvement in Maximum Calibration Error (MCE) compared to the best-performing SOTA algorithm.
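To make the calibration pipeline concrete, the following is a minimal PyTorch sketch of the post-hoc stage described above: fitting a single temperature on held-out validation logits and measuring the binned Expected Calibration Error. The function names, the 15-bin setup, and the LBFGS optimizer choice are illustrative assumptions, not the paper's exact implementation (which additionally trains with the MDCA and task-specific losses before this step).

```python
# Hedged sketch: post-hoc temperature scaling + binned ECE measurement.
# All names (val_logits, val_labels) and hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def expected_calibration_error(logits, labels, n_bins=15):
    """Standard binned ECE: weighted mean |accuracy - confidence| over bins."""
    probs = F.softmax(logits, dim=1)
    conf, preds = probs.max(dim=1)
    acc = preds.eq(labels).float()
    ece = torch.zeros(1)
    edges = torch.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            # bin weight = fraction of samples falling into this bin
            ece += in_bin.float().mean() * (acc[in_bin].mean() - conf[in_bin].mean()).abs()
    return ece.item()

def fit_temperature(val_logits, val_labels):
    """Learn a single scalar T > 0 minimizing NLL on held-out logits."""
    log_t = torch.zeros(1, requires_grad=True)  # optimize log T so T stays positive
    opt = torch.optim.LBFGS([log_t], lr=0.1, max_iter=50)
    def closure():
        opt.zero_grad()
        loss = F.cross_entropy(val_logits / log_t.exp(), val_labels)
        loss.backward()
        return loss
    opt.step(closure)
    return log_t.exp().item()

# Usage (on logits from the train-time calibrated model):
#   T = fit_temperature(val_logits, val_labels)
#   ece_after = expected_calibration_error(val_logits / T, val_labels)
```

Temperature scaling leaves the argmax prediction, and hence accuracy, unchanged; it only softens or sharpens confidences, which is why it can be stacked on a train-time calibrated model without degrading its detection performance.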
Abstract: When utilizing machine learning models, one of the most crucial aspects is how bias and fairness affect model outcomes for diverse demographics. This is especially relevant for machine learning in medical imaging, as these models are increasingly used for diagnosis and treatment planning. In this paper, we study biases related to sex when developing a machine learning model based on brain magnetic resonance images (MRI). We investigate the effects of sex by performing brain age prediction under different experimental designs: a model trained using only female subjects, one trained using only male subjects, and one trained on a sex-balanced dataset. We also evaluate on multiple MRI datasets (Calgary-Campinas (CC359) and CamCAN) to assess the generalization capability of the proposed models. We found disparities in the performance of brain age prediction models when trained on distinct sex subgroups and datasets, both in the final predictions and in the decision making (assessed using interpretability models). Our results demonstrate variations in model generalizability across sex-specific subgroups, suggesting potential biases in models trained on unbalanced datasets. This underlines the critical role of careful experimental design in generating fair and reliable outcomes.
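As an illustration of this experimental design, the sketch below trains a brain-age regressor on female-only, male-only, and sex-balanced training subsets, then reports the mean absolute error separately for female and male test subjects. It is a hypothetical, simplified setup: the Ridge regressor stands in for the paper's model, and the feature matrix and 'F'/'M' sex labels are assumed preprocessing outputs, not the paper's actual pipeline.

```python
# Hedged sketch: sex-subgroup training designs for brain age prediction.
# X_* are assumed pre-extracted MRI features; y_* are chronological ages.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error

def balanced_mask(sex):
    """Downsample the majority sex so both groups contribute equal counts."""
    rng = np.random.default_rng(0)
    idx_f = np.flatnonzero(sex == "F")
    idx_m = np.flatnonzero(sex == "M")
    n = min(len(idx_f), len(idx_m))
    keep = np.concatenate([rng.choice(idx_f, n, replace=False),
                           rng.choice(idx_m, n, replace=False)])
    mask = np.zeros(len(sex), dtype=bool)
    mask[keep] = True
    return mask

def evaluate_designs(X_train, y_train, sex_train, X_test, y_test, sex_test):
    """Return per-sex test MAE for each of the three training designs."""
    designs = {
        "female_only": sex_train == "F",
        "male_only": sex_train == "M",
        "balanced": balanced_mask(sex_train),
    }
    results = {}
    for name, mask in designs.items():
        model = Ridge().fit(X_train[mask], y_train[mask])  # stand-in model
        pred = model.predict(X_test)
        results[name] = {
            s: mean_absolute_error(y_test[sex_test == s], pred[sex_test == s])
            for s in ("F", "M")
        }
    return results
```

Comparing the per-sex errors across the three designs (and repeating the evaluation on a second dataset such as CamCAN after training on CC359) is what surfaces the kind of subgroup and cross-dataset disparities the abstract reports.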