Abstract: Link prediction algorithms for multilayer networks are, in principle, required to effectively account for the entire layered structure while capturing the unique context offered by each layer. However, many existing approaches excel at predicting specific links in certain layers but struggle with others, as they fail to effectively leverage the diverse information encoded across the network layers. In this paper, we present MoE-ML-LP, the first Mixture-of-Experts (MoE) framework specifically designed for multilayer link prediction. Building on top of multilayer heuristics for link prediction, MoE-ML-LP synthesizes the decisions made by diverse experts, resulting in significantly enhanced predictive capabilities. Our extensive experimental evaluation on real-world and synthetic networks demonstrates that MoE-ML-LP consistently outperforms several baselines and competing methods, achieving remarkable improvements of +60% in Mean Reciprocal Rank, +82% in Hits@1, +55% in Hits@5, and +41% in Hits@10. Furthermore, MoE-ML-LP features a modular architecture that enables the seamless integration of newly developed experts without requiring re-training of the entire framework, fostering efficiency and scalability, and paving the way for future advancements in link prediction.
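To make the Mixture-of-Experts idea concrete, the following is a minimal, hypothetical sketch of how a gate could combine the scores of several link-prediction heuristic experts into a single prediction for one candidate link. The function and variable names, the softmax gate, and the example values are illustrative assumptions, not the actual MoE-ML-LP architecture described in the paper.

```python
import numpy as np

def moe_link_score(expert_scores, gate_logits):
    """Combine per-expert link scores with a softmax gate (illustrative sketch).

    expert_scores: shape (n_experts,), the score each heuristic expert assigns
        to a candidate link (e.g., multilayer common neighbours, Adamic-Adar).
    gate_logits:   shape (n_experts,), unnormalised gate preferences, e.g.,
        produced by a small model from features of the node pair.
    """
    gates = np.exp(gate_logits - gate_logits.max())
    gates /= gates.sum()                         # softmax over experts
    return float(np.dot(gates, expert_scores))   # weighted mixture of expert decisions

# Example: three hypothetical experts scoring one candidate link
scores = np.array([0.8, 0.3, 0.6])   # per-expert predictions
logits = np.array([2.0, 0.1, 1.0])   # gate favours the first expert
print(moe_link_score(scores, logits))
```

Because new experts only contribute an extra score and gate entry, a design along these lines is compatible with the modular, no-full-retraining property the abstract highlights.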
Abstract: Explainable AI has received significant attention in recent years. Machine learning models often operate as black boxes, lacking explainability and transparency while supporting decision-making processes. Local post-hoc explainability queries attempt to answer why individual inputs are classified in a certain way by a given model. While there has been important work on counterfactual explanations, less attention has been devoted to semifactual ones. In this paper, we focus on local post-hoc explainability queries based on semifactual 'even-if' reasoning and study their computational complexity across different classes of models, showing that both linear and tree-based models are strictly more interpretable than neural networks. We then introduce a preference-based framework that enables users to personalize explanations according to their preferences, for both semifactuals and counterfactuals, enhancing interpretability and user-centricity. Finally, we explore the complexity of several interpretability problems in the proposed preference-based framework and provide algorithms for the polynomial cases.
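As a simple illustration of the semifactual 'even-if' notion for a linear model, the sketch below checks whether a statement of the form "even if the input were changed from x to x_alt, the prediction would stay the same" holds. This is an assumed toy formulation for intuition only; the names, the linear-classifier setting, and the check itself are not taken from the paper's formal definitions or complexity results.

```python
import numpy as np

def is_semifactual(w, b, x, x_alt):
    """Check an 'even-if' statement for a linear classifier (illustrative sketch).

    The statement holds iff x and x_alt fall on the same side of the
    decision hyperplane w.x + b = 0, i.e., the change does not flip the label.
    """
    same_class = (np.dot(w, x) + b >= 0) == (np.dot(w, x_alt) + b >= 0)
    return bool(same_class)

# Example: increasing the second feature does not flip the decision
w, b = np.array([1.0, -0.5]), 0.1
x, x_alt = np.array([0.8, 0.2]), np.array([0.8, 0.6])
print(is_semifactual(w, b, x, x_alt))  # True -> a valid 'even-if' instance
```

For linear and tree-based models such checks are computationally easy, which is in line with the abstract's claim that these model classes are more interpretable than neural networks under such queries.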