Abstract: Mechanistic interpretability of deep learning models has emerged as a crucial research direction for understanding the inner workings of neural networks. While significant progress has been made in interpreting discriminative models such as transformers, understanding generative models such as Variational Autoencoders (VAEs) remains challenging. This paper introduces a comprehensive causal intervention framework for the mechanistic interpretability of VAEs. We develop techniques to identify and analyze "circuit motifs" in VAEs, examining how semantic factors are encoded, processed, and disentangled across the network layers. Our approach applies targeted interventions at several levels: input manipulations, latent-space perturbations, activation patching, and causal mediation analysis. We apply the framework to both synthetic datasets with known causal relationships and standard disentanglement benchmarks. Results show that our interventions can isolate functional circuits, map computational graphs to causal graphs of semantic factors, and distinguish polysemantic from monosemantic units. Furthermore, we introduce metrics for causal effect strength, intervention specificity, and circuit modularity that quantify the interpretability of VAE components. Experiments reveal clear differences between VAE variants, with FactorVAE achieving higher disentanglement scores (0.084) and effect strengths (mean 4.59) than the standard VAE (0.064, 3.99) and Beta-VAE (0.051, 3.43). Our framework advances the mechanistic understanding of generative models and provides tools for building more transparent and controllable VAE architectures.
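To illustrate the latent-space perturbation style of intervention mentioned above, the sketch below clamps a single latent coordinate of a trained VAE and averages the resulting output change into a simple effect-strength score. The `vae` object, its `encode`/`decode` interface, and the scoring formula are assumptions made for illustration, not the paper's implementation.

```python
# Minimal sketch of a do-style intervention on one latent dimension of a VAE.
# Assumes a hypothetical `vae` with encode(x) -> (mu, logvar) and decode(z) methods.
import torch

@torch.no_grad()
def intervene_on_latent(vae, x, dim, value):
    """Encode x, clamp one latent coordinate to `value`, and decode.

    Returns the baseline and intervened reconstructions so the effect of the
    intervention can be measured as the change in the output.
    """
    mu, logvar = vae.encode(x)       # posterior parameters q(z|x)
    z = mu.clone()                   # use the posterior mean as the baseline code
    z_do = z.clone()
    z_do[:, dim] = value             # do(z_dim = value) intervention
    return vae.decode(z), vae.decode(z_do)

@torch.no_grad()
def causal_effect_strength(vae, x, dim, values):
    """Average absolute output change induced by sweeping one latent dimension."""
    effects = []
    for v in values:
        x_base, x_do = intervene_on_latent(vae, x, dim, v)
        effects.append((x_do - x_base).abs().mean().item())
    return sum(effects) / len(effects)
```

Sweeping each latent dimension in turn with `causal_effect_strength` gives one coarse way to compare how strongly individual units influence the output; the paper's actual metrics may be defined differently.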
Abstract: In medical imaging, anomaly detection is a vital element of healthcare diagnostics, particularly for neurological conditions that can be life-threatening. Conventional deterministic methods often fall short of capturing the inherent uncertainty of anomaly detection tasks. This paper introduces a Bayesian Variational Autoencoder (VAE) equipped with multi-head attention mechanisms for detecting anomalies in brain magnetic resonance imaging (MRI). To improve anomaly detection performance, we incorporate both epistemic and aleatoric uncertainty estimation through Bayesian inference. Evaluated on the BraTS2020 dataset, the model achieves a ROC AUC of 0.83 and a PR AUC of 0.83. Our results suggest that uncertainty modeling is an essential component of anomaly detection, improving both performance and interpretability while providing clinicians with confidence estimates alongside anomaly predictions to support medical decision-making.
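A minimal sketch of how epistemic and aleatoric uncertainty could be separated from repeated stochastic passes of a Bayesian VAE and combined into an anomaly map. The `bayes_vae` module, its output format (per-pixel reconstruction mean and log-variance), and the anomaly-scoring heuristic are assumptions for illustration rather than the paper's implementation.

```python
# Minimal sketch of epistemic/aleatoric uncertainty estimation for anomaly scoring.
# Assumes a hypothetical `bayes_vae` whose forward pass is stochastic (e.g. MC dropout
# or weight sampling) and returns a per-pixel reconstruction mean and log-variance.
import torch

@torch.no_grad()
def uncertainty_decomposition(bayes_vae, x, n_samples=20):
    """Run several stochastic passes and split the predictive uncertainty.

    Epistemic: variance of the reconstruction means across passes.
    Aleatoric: average of the per-pixel variances predicted by the model.
    """
    means, variances = [], []
    for _ in range(n_samples):
        recon_mean, recon_logvar = bayes_vae(x)   # one stochastic forward pass
        means.append(recon_mean)
        variances.append(recon_logvar.exp())
    means = torch.stack(means)                    # (T, B, C, H, W)
    variances = torch.stack(variances)
    epistemic = means.var(dim=0)                  # disagreement between passes
    aleatoric = variances.mean(dim=0)             # data noise estimated by the model
    # Simple anomaly heuristic: reconstruction error plus epistemic standard deviation.
    anomaly_map = (x - means.mean(dim=0)).abs() + epistemic.sqrt()
    return epistemic, aleatoric, anomaly_map
```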