Abstract: Unsupervised anomaly detection in brain images is crucial for identifying injuries and pathologies without access to labels. However, the accurate localization of anomalies in medical images remains challenging due to the inherent complexity and variability of brain structures and the scarcity of annotated abnormal data. To address this challenge, we propose a novel approach that incorporates masking within diffusion models, leveraging their generative capabilities to learn robust representations of normal brain anatomy. During training, our model processes only normal brain MRI scans and performs a forward diffusion process in the latent space that adds noise to the features of randomly selected patches. Following a dual objective, the model learns to identify which patches are noisy and to recover their original features. This strategy ensures that the model captures intricate patterns of normal brain structures while isolating potential anomalies as noise in the latent space. At inference, the model identifies noisy patches corresponding to anomalies and generates a normal counterpart for these patches by applying a reverse diffusion process. Our method surpasses existing unsupervised anomaly detection techniques, demonstrating superior performance in generating accurate normal counterparts and localizing anomalies. The code is available at https://github.com/farzad-bz/MAD-AD.
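The training procedure described above (noise randomly selected latent patches, then jointly classify which patches are noisy and reconstruct them) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the function names, the mask ratio, and the use of plain Gaussian noise and a BCE-plus-MSE loss are assumptions for illustration only.

```python
import numpy as np

def mask_and_noise(latent, mask_ratio=0.3, noise_std=1.0, rng=None):
    """Forward-diffusion sketch: add Gaussian noise to the features of
    randomly selected patches.

    latent: (num_patches, dim) array of latent patch features.
    Returns (noised_latent, mask) with mask[i] == 1 for noised patches.
    Hypothetical helper; names and parameters are not the paper's API.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = latent.shape
    num_masked = max(1, int(round(mask_ratio * n)))
    idx = rng.choice(n, size=num_masked, replace=False)
    mask = np.zeros(n, dtype=np.int64)
    mask[idx] = 1
    noised = latent.copy()
    noised[idx] += rng.normal(0.0, noise_std, size=(num_masked, d))
    return noised, mask

def dual_objective(pred_mask_logits, pred_latent, mask, clean_latent):
    """Dual-objective sketch: binary cross-entropy on 'which patches are
    noisy' plus a reconstruction MSE restricted to the noised patches."""
    p = 1.0 / (1.0 + np.exp(-pred_mask_logits))  # per-patch noise probability
    bce = -np.mean(mask * np.log(p + 1e-8) + (1 - mask) * np.log(1 - p + 1e-8))
    sel = mask.astype(bool)  # reconstruct only where noise was injected
    mse = np.mean((pred_latent[sel] - clean_latent[sel]) ** 2)
    return bce + mse
```

At inference, the same noise-classification head would flag anomalous patches, which are then replaced by their denoised (normal) counterparts.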
Abstract: Semi-supervised learning has emerged as an appealing strategy for training deep models with limited supervision. Most prior literature under this learning paradigm resorts to dual architectures, typically composed of a teacher-student pair. To drive the learning of the student, many of these models leverage the aleatoric uncertainty derived from the entropy of the predictions. While this has been shown to work well in a binary scenario, we demonstrate in this work that this strategy leads to suboptimal results in a multi-class context, a more realistic and challenging setting. We argue, indeed, that these approaches underperform due to erroneous uncertainty approximations in the presence of inter-class overlap. Furthermore, we propose an alternative solution for computing the uncertainty in a multi-class setting, based on divergence distances, which accounts for inter-class overlap. We evaluate the proposed solution on a challenging multi-class segmentation dataset and in two well-known uncertainty-based segmentation methods. The reported results demonstrate that by simply replacing the mechanism used to compute the uncertainty, our proposed solution brings substantial improvements on the tested setups.
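The contrast the abstract draws (entropy-based aleatoric uncertainty versus a divergence-based alternative) can be illustrated with a short sketch. This is a plausible instantiation only: the abstract does not specify which divergence is used, so the symmetric KL between teacher and student class distributions below is an assumption, not the paper's exact formulation.

```python
import numpy as np

def entropy_uncertainty(probs, eps=1e-8):
    """Entropy of the predicted class distribution (per pixel/sample).

    probs: (..., num_classes) softmax outputs.
    This is the standard aleatoric-uncertainty proxy the abstract says
    underperforms with inter-class overlap in the multi-class setting.
    """
    return -np.sum(probs * np.log(probs + eps), axis=-1)

def divergence_uncertainty(teacher_probs, student_probs, eps=1e-8):
    """Divergence-based uncertainty: symmetric KL between teacher and
    student class distributions. An assumed stand-in for the paper's
    divergence-distance formulation, for illustration only."""
    kl_ts = np.sum(teacher_probs *
                   np.log((teacher_probs + eps) / (student_probs + eps)), axis=-1)
    kl_st = np.sum(student_probs *
                   np.log((student_probs + eps) / (teacher_probs + eps)), axis=-1)
    return 0.5 * (kl_ts + kl_st)
```

The entropy proxy depends on a single prediction and is maximal whenever the class distribution is flat, even if teacher and student agree; a divergence between the two predictions instead reflects their disagreement, which is one way to see why it can behave differently under inter-class overlap.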