Abstract: Explainable Artificial Intelligence (XAI) plays an important role in improving the transparency and reliability of complex machine learning models, especially in critical domains such as cybersecurity. Heuristic interpretation methods such as SHAP and LIME are widely used, but they lack formal guarantees and may produce inconsistent local explanations. To address this gap, a few tools have emerged that rely on formal methods to compute formal explanations. Among these, XReason uses a SAT solver to generate formal instance-level explanations for XGBoost models. In this paper, we extend the XReason tool to support LightGBM models as well as class-level explanations. Additionally, we implement a mechanism to generate and detect adversarial examples in XReason. We evaluate the efficiency and accuracy of our approach on the CICIDS-2017 dataset, a widely used benchmark for network intrusion detection.
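
As a rough sketch of the workflow summarized above, the following Python snippet trains a LightGBM classifier on synthetic stand-in data and marks where a SAT-based formal explainer would be invoked on a single instance. The data layout and the formal_explanation helper are assumptions for illustration only; they are not XReason's actual interface.

    import numpy as np
    import lightgbm as lgb
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 10))                 # stand-in for CICIDS-2017 flow features
    y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # stand-in benign/attack labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    # Train the tree-ensemble model that the explainer will later reason about.
    model = lgb.LGBMClassifier(n_estimators=50, max_depth=4)
    model.fit(X_tr, y_tr)

    instance = X_te[0]
    pred = model.predict(instance.reshape(1, -1))[0]

    # Hypothetical step: encode the trained ensemble and the instance as a SAT query
    # and extract a minimal subset of features that formally entails the prediction.
    # explanation = formal_explanation(model.booster_, instance, pred)

The key difference from heuristic explainers is the last step: instead of estimating feature importance by sampling, the explainer would derive, via the SAT encoding, a feature subset that provably determines the model's output for that instance.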
Abstract: Robotic cell injection is used to automatically deliver substances into a cell and is an integral component of drug development, genetic engineering, and many other areas of cell biology. Traditionally, the functional correctness of these systems is ascertained using paper-and-pencil proofs and computer simulation. However, paper-and-pencil proofs are prone to human error, and simulation provides an incomplete analysis due to its sampling-based nature and the inability of computer-based models to capture continuous behaviors. Model checking has also recently been advocated for the analysis of cell injection systems. However, it requires discretizing the differential equations that model the dynamics of the system and thus likewise compromises the completeness of the analysis. In this paper, we propose to use higher-order-logic theorem proving for modeling and analyzing the dynamical behavior of robotic cell injection systems. The high expressiveness of the underlying logic allows us to capture the continuous details of the model in their true form. The model can then be analyzed using deductive reasoning within the sound core of a proof assistant.
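
As a generic illustration of the kind of statement higher-order logic can express without discretization (a sketch only, not the paper's actual model), a continuous-time requirement can be stated directly over the real-valued trajectory governed by the system's differential equations, with $f$, $u$, $x_{\mathrm{ref}}$, and $\varepsilon$ as placeholder symbols:

\[
\forall t \in \mathbb{R}_{\ge 0}.\;\; \dot{x}(t) = f\big(x(t), u(t)\big) \;\wedge\; x(0) = x_0 \;\Longrightarrow\; \lVert x(t) - x_{\mathrm{ref}}(t) \rVert \le \varepsilon
\]

A proof assistant establishes such an implication by deductive reasoning from the formal definitions of derivatives and norms, rather than by sampling or discretizing the trajectory.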