Abstract: An ongoing challenge in Artificial Intelligence (AI) is the difficulty of interpreting sophisticated machine learning models, whose ever-increasing complexity makes them hard for humans to understand, trust and thus accept. The lack, if not complete absence, of interpretability in these so-called black-box models can lead to serious economic and ethical consequences, thereby hindering the development and deployment of AI in wider fields, particularly in critical and regulated applications. The building services industry is one such highly regulated domain, requiring transparency and decision-making processes that can be understood and trusted by humans. To this end, the design and implementation of autonomous Heating, Ventilation and Air Conditioning (HVAC) systems that automatically, yet interpretably, optimise energy efficiency and room thermal comfort is of topical interest. This work therefore presents an interpretable machine learning model for predicting room temperature (RT) in non-domestic buildings, with the aim of optimising the use of the installed HVAC system. We demonstrate experimentally that the proposed model can accurately forecast room temperatures eight hours ahead in real time by taking into account historical RT information, together with additional environmental and time-series features. An enhanced feature engineering process is conducted based on the results of an Exploratory Data Analysis. Furthermore, beyond the commonly used interpretable machine learning techniques, we propose a Permutation Feature-based Frequency Response Analysis (PF-FRA) method for quantifying the contributions of the different predictors in the frequency domain. Based on the generated reason codes, we find that the historical RT feature is the dominant factor with the greatest impact on the model's predictions.
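As an informal illustration only (not the authors' implementation), the PF-FRA idea described above can be sketched as follows: permute one predictor, re-run the model, and compare the frequency spectra of the prediction series before and after the permutation. All names, the scikit-learn-style `predict` interface and the unit sampling interval below are assumptions made for the sake of the example.

```python
import numpy as np

def pf_fra_contribution(model, X, feature_idx, rng=None):
    """Hypothetical sketch: per-frequency change in the prediction spectrum
    caused by permuting a single feature column of X.

    Assumes `model` exposes a scikit-learn-style `.predict(X)` and that the
    rows of the 2-D array `X` are time-ordered samples (assumed setup).
    """
    rng = np.random.default_rng(rng)
    baseline = model.predict(X)

    # Break the chosen feature's relationship with the target by shuffling it.
    X_perm = X.copy()
    rng.shuffle(X_perm[:, feature_idx])
    permuted = model.predict(X_perm)

    # One-sided magnitude spectra of the two prediction series.
    spec_base = np.abs(np.fft.rfft(baseline))
    spec_perm = np.abs(np.fft.rfft(permuted))
    freqs = np.fft.rfftfreq(len(baseline), d=1.0)  # d = sampling interval, assumed 1 time step

    # Large absolute differences indicate frequency bands where the feature matters.
    return freqs, spec_base - spec_perm
```

In this reading, a feature such as historical RT that dominates the low-frequency (slowly varying) part of the spectrum would show large differences at the low end of the returned frequency axis, which is consistent with the kind of frequency-domain attribution the abstract describes.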