The field of explainable artificial intelligence (XAI) attempts to develop methods that provide insight into how complex machine learning models make predictions. Many explanation methods have focused on the concept of feature attribution, a decomposition of the model's prediction into individual contributions corresponding to each input feature. In this work, we explore the problem of feature attribution in the context of Gaussian process regression (GPR). We take a principled approach to defining attributions under model uncertainty, extending the existing literature. We show that although GPR is a highly flexible and non-parametric approach, we can derive interpretable, closed-form expressions for the feature attributions. When using integrated gradients as an attribution method, we show that the attributions of a GPR model also follow a Gaussian process distribution, which quantifies the uncertainty in attribution arising from uncertainty in the model. We demonstrate, both through theory and experimentation, the versatility and robustness of this approach. We also show that, when applicable, the exact expressions for GPR attributions are both more accurate and less computationally expensive than the approximations currently used in practice. The source code for this project is freely available under the MIT license at https://github.com/KurtButler/2024_attributions_paper.
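To make the setting concrete, the sketch below computes integrated-gradients attributions for the posterior mean of a GPR model. It is a generic Riemann-sum approximation of the path integral using finite-difference gradients, not the paper's closed-form expressions; the scikit-learn model, the toy data, and all function names are illustrative assumptions.

```python
# Illustrative sketch only: numerical integrated gradients for a GPR posterior
# mean, approximating the path integral that the paper evaluates in closed form.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def integrated_gradients(predict, x, baseline, n_steps=64, eps=1e-5):
    """Riemann-sum approximation of integrated gradients for a scalar model.

    predict  : callable mapping an (n, d) array to an (n,) array of outputs
    x        : (d,) input to explain
    baseline : (d,) reference input; attributions explain predict(x) - predict(baseline)
    """
    x = np.asarray(x, dtype=float)
    baseline = np.asarray(baseline, dtype=float)
    d = x.size
    # Points along the straight-line path from the baseline to x (midpoint rule)
    alphas = (np.arange(n_steps) + 0.5) / n_steps
    path = baseline + alphas[:, None] * (x - baseline)        # (n_steps, d)
    # Central finite-difference gradient of the model at each path point
    grads = np.empty((n_steps, d))
    for i in range(d):
        shift = np.zeros(d)
        shift[i] = eps
        grads[:, i] = (predict(path + shift) - predict(path - shift)) / (2 * eps)
    # Average gradient along the path, scaled by the input difference
    return (x - baseline) * grads.mean(axis=0)

# Toy example: fit a GPR model and attribute one prediction to its two inputs
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(100, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.standard_normal(100)

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2).fit(X, y)
posterior_mean = lambda Z: gpr.predict(Z)

x_test = np.array([1.0, -1.0])
baseline = np.zeros(2)
attr = integrated_gradients(posterior_mean, x_test, baseline)
print("attributions:", attr)
# Completeness check: attributions should sum to f(x) - f(baseline)
print("sum:", attr.sum(),
      "vs", posterior_mean(x_test[None])[0] - posterior_mean(baseline[None])[0])
```

Because the posterior mean of a GPR model with a smooth kernel is a weighted sum of kernel evaluations, its path integral admits the closed-form treatment described above; this numerical version is only a reference point for what those expressions compute.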