Abstract: Data pre-processing is a significant step in machine learning that improves model performance and decreases running time. It may include handling missing values, detecting and removing outliers, data augmentation, dimensionality reduction, data normalization, and controlling for confounding variables. Although these steps are found to improve the accuracy of the model, they can hinder its explainability if they are not carefully considered, especially in medicine. They may block new findings when missing-value handling and outlier removal are implemented inappropriately. In addition, they may make the model unfair toward some of the groups about which it makes decisions. Moreover, normalization turns the features unitless and clinically meaningless, and consequently not explainable. This paper discusses the common data pre-processing steps in machine learning and their impact on the explainability and interpretability of the model. Finally, the paper discusses possible solutions that improve the performance of the model without decreasing its explainability.
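To make the normalization point concrete, below is a minimal sketch with synthetic data (the variables, values, and dataset are hypothetical, not from the paper) showing how z-score standardization strips a linear model's coefficient of its clinical units:

```python
# Minimal sketch (hypothetical data): how z-score normalization strips
# clinical units from a linear model's coefficients.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
age_years = rng.uniform(30, 80, 200)                         # feature in years
systolic_bp = 90 + 0.6 * age_years + rng.normal(0, 5, 200)   # target in mmHg

X = age_years.reshape(-1, 1)
raw_model = LinearRegression().fit(X, systolic_bp)
# Clinically readable: mmHg of systolic BP per additional year of age.
print("raw coefficient:", raw_model.coef_[0])

X_scaled = StandardScaler().fit_transform(X)
scaled_model = LinearRegression().fit(X_scaled, systolic_bp)
# Unitless: mmHg per standard deviation of age -- clinically opaque
# unless the scaler's mean and scale are reported alongside it.
print("scaled coefficient:", scaled_model.coef_[0])
```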
Abstract: Explainable artificial intelligence (XAI) is a set of tools and algorithms that are applied to, or embedded in, machine learning models in order to understand and interpret them. They are recommended especially for complex or advanced models, such as deep neural networks, because those models are not interpretable from a human point of view. On the other hand, simple models such as linear regression are easy to implement, have lower computational complexity, and produce output that is easy to visualize. A common notion in the literature is that simple models such as linear regression are considered "white boxes" because they are more interpretable and easier to understand. This is based on the idea that linear regression models have several favorable properties, including showing the effect of each feature in the model and whether it contributes positively or negatively to the model output. Moreover, the uncertainty of the model can be measured or estimated using confidence intervals. However, we argue that this perception is not accurate and that linear regression models are neither easy to interpret nor easy to understand when common XAI metrics and the challenges they may face are considered. These include linearity, local explanation, multicollinearity, covariates, normalization, uncertainty, feature contribution, and fairness. Consequently, we recommend that the so-called simple models be treated equally to complex models when it comes to explainability and interpretability.
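As an illustration of two of the listed challenges, uncertainty and multicollinearity, here is a minimal sketch on synthetic data (all names and values are hypothetical, not from the paper): the coefficient confidence intervals that make linear regression look like a "white box" widen sharply once features are nearly collinear.

```python
# Minimal sketch (hypothetical data): 95% confidence intervals for linear
# regression coefficients, and how multicollinearity inflates them.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
y = 2.0 * x1 + rng.normal(0.0, 1.0, n)       # y truly depends on x1 only

x2_indep = rng.normal(size=n)                # unrelated second feature
x2_collin = x1 + rng.normal(0.0, 0.05, n)    # nearly collinear with x1

for label, x2 in [("independent x2", x2_indep), ("collinear x2", x2_collin)]:
    X = sm.add_constant(np.column_stack([x1, x2]))
    fit = sm.OLS(y, X).fit()
    # Rows 1 and 2 of conf_int() are the intervals for x1 and x2; under
    # collinearity they widen sharply, so the coefficients lose the stable
    # "white box" reading the abstract questions.
    print(label, "->\n", fit.conf_int()[1:])
```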