Abstract: Survival analysis, a foundational tool for modeling time-to-event data, has seen growing integration with machine learning (ML) approaches to handle the complexities of censored data and time-varying risks. Despite these advances, leveraging state-of-the-art survival models remains a challenge due to the fragmented nature of existing implementations, which lack standardized interfaces and require extensive preprocessing. We introduce SurvHive, a Python-based framework designed to unify survival analysis methods within a coherent and extensible interface modeled on scikit-learn. SurvHive integrates classical statistical models with cutting-edge deep learning approaches, including transformer-based architectures and parametric survival models. Through a consistent API, SurvHive simplifies model training, evaluation, and optimization, significantly reducing the barrier to entry for ML practitioners exploring survival analysis. The package includes enhanced support for hyper-parameter tuning, time-dependent risk evaluation metrics, and cross-validation strategies tailored to censored data. With its extensibility and focus on usability, SurvHive provides a bridge between survival analysis and the broader ML community, facilitating advancements in time-to-event modeling across domains. The SurvHive code and documentation are freely available at https://github.com/compbiomed-unito/survhive.
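The abstract does not show SurvHive's actual call signatures, so the sketch below illustrates the scikit-learn-style workflow it describes using scikit-survival, a package that follows the same estimator conventions (fit/predict on censored targets). SurvHive's own interface may differ; treat this purely as an illustration of the pattern, not the package's API.

```python
# Minimal sketch of a scikit-learn-style survival workflow on censored data.
# Uses scikit-survival as a stand-in; SurvHive's own names may differ.
import numpy as np
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.metrics import concordance_index_censored
from sksurv.util import Surv

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                  # covariates
time = rng.exponential(scale=10.0, size=200)   # observed times
event = rng.random(200) < 0.7                  # True = event, False = censored

y = Surv.from_arrays(event=event, time=time)   # structured censored target
model = CoxPHSurvivalAnalysis().fit(X, y)      # sklearn-style fit
risk = model.predict(X)                        # higher score = higher risk

# Time-to-event evaluation with a censoring-aware metric.
cindex = concordance_index_censored(event, time, risk)[0]
print(f"concordance index: {cindex:.3f}")
```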
Abstract: Saliency Maps (SMs) have been extensively used to interpret deep learning models' decisions by highlighting the features deemed relevant by the model. They are used on highly nonlinear problems, where linear feature selection (FS) methods fail to highlight relevant explanatory variables. However, the reliability of gradient-based feature attribution methods such as SMs has mostly been assessed only qualitatively (visually), and quantitative benchmarks are currently missing, partly due to the lack of a definitive ground truth on image data. Concerned about the apophenic biases introduced by visual assessment of these methods, in this paper we propose a synthetic quantitative benchmark for Neural Network (NN) interpretation methods. For this purpose, we built synthetic datasets with nonlinearly separable classes and an increasing number of decoy (random) features, illustrating the challenge of FS in high-dimensional settings. We also compare these methods to conventional approaches such as mRMR or Random Forests. Our results show that our simple synthetic datasets are sufficient to challenge most of the benchmarked methods. TreeShap, mRMR, and LassoNet are the best-performing FS methods. We also show that, when quantifying the relevance of a few nonlinearly entangled predictive features diluted in a large number of irrelevant noisy variables, neural network-based FS and interpretation methods are still far from being reliable.
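The abstract does not spell out the benchmark's construction, so the sketch below shows one plausible instance under assumed choices: an XOR-style pair of informative features (nonlinearly separable, so linear FS fails) diluted in decoy noise features, ranked here with a Random Forest as one of the conventional baselines mentioned. The sizes and the XOR construction are illustrative assumptions, not the paper's exact protocol.

```python
# Sketch of a synthetic FS benchmark: two informative features with an
# XOR class structure, diluted in decoy (pure noise) features. The XOR
# construction and dataset sizes are assumed for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n, n_decoys = 1000, 50

# Two informative features; class = XOR of their signs (not linearly separable).
x_inf = rng.normal(size=(n, 2))
y = (np.sign(x_inf[:, 0]) * np.sign(x_inf[:, 1]) > 0).astype(int)

# Append decoy features that carry no class information.
X = np.hstack([x_inf, rng.normal(size=(n, n_decoys))])

# Rank features with a conventional baseline; a reliable method should
# place the informative columns (indices 0 and 1) at the top.
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]
print("top-5 ranked features:", ranking[:5])
```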