Abstract: In recent years, machine learning (ML) and neural networks (NNs) have gained widespread use and attention across various domains, particularly in transportation, where they enable autonomy, including emerging flying taxis for urban air mobility (UAM). However, certification concerns have arisen, prompting the development of standardized processes covering the entire ML and NN pipeline. This paper focuses on the inference stage and the requisite hardware, highlighting the challenges associated with IEEE 754 floating-point arithmetic and proposing alternative number representations. By evaluating diverse summation and dot product algorithms, we aim to mitigate issues related to non-associativity. Additionally, our exploration of fixed-point arithmetic reveals its advantages over floating-point methods, demonstrating significant hardware savings. Employing an empirical approach, we determine the bit-width necessary to attain an acceptable level of accuracy, considering the inherent complexity of bit-width optimization.
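The non-associativity issue mentioned above can be illustrated in a few lines. The following sketch (not from the paper; a minimal illustration of the general phenomenon) shows that IEEE 754 double-precision addition depends on evaluation order, and sketches Kahan compensated summation, one well-known summation algorithm of the kind the paper evaluates:

```python
# Floating-point addition is not associative: a large value can absorb
# a small one depending on grouping.
a, b, c = 1e16, -1e16, 1.0
left = (a + b) + c    # cancellation happens first, c survives: 1.0
right = a + (b + c)   # c is absorbed into the large magnitude: 0.0
print(left, right, left == right)

def kahan_sum(xs):
    """Kahan compensated summation: tracks the rounding error of each
    addition in a correction term to reduce accumulated error."""
    total, comp = 0.0, 0.0
    for x in xs:
        y = x - comp          # apply previous correction
        t = total + y
        comp = (t - total) - y  # error lost in the addition above
        total = t
    return total
```

The grouping-dependent result is exactly the kind of reproducibility hazard that complicates certification of NN inference hardware.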
Abstract: Parametrized Quantum Circuits (PQCs) enable a novel approach to machine learning (ML). However, from a computational point of view, they pose a challenge to existing eXplainable AI (xAI) methods. On the one hand, measurements on quantum circuits introduce probabilistic errors that affect the convergence of these methods. On the other hand, the phase space of a quantum circuit expands exponentially with the number of qubits, complicating efforts to execute xAI methods in polynomial time. In this paper, we discuss the performance of established xAI methods, such as Baseline SHAP and Integrated Gradients. Exploiting the internal mechanics of PQCs, we study ways to speed up their computation.
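For reference, Integrated Gradients, one of the xAI methods named above, attributes a model output to input features via the path integral IG_i(x) = (x_i − x'_i) · ∫₀¹ ∂f/∂x_i(x' + α(x − x')) dα from a baseline x'. The sketch below (not from the paper; the toy model and midpoint discretization are illustrative assumptions) approximates this integral numerically and checks the completeness property, that attributions sum to f(x) − f(x'):

```python
import numpy as np

def integrated_gradients(f_grad, x, baseline, steps=64):
    """Midpoint-rule approximation of Integrated Gradients:
    IG_i = (x_i - x'_i) * mean over alpha of grad_i(x' + alpha*(x - x'))."""
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.array([f_grad(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

# Toy differentiable model f(x) = x0^2 + 3*x1 with its analytic gradient.
f = lambda x: x[0] ** 2 + 3 * x[1]
f_grad = lambda x: np.array([2 * x[0], 3.0])

x = np.array([1.0, 2.0])
baseline = np.zeros(2)
attr = integrated_gradients(f_grad, x, baseline)
# Completeness: attr.sum() should match f(x) - f(baseline).
```

When f is the expectation value of a PQC measurement, each gradient evaluation itself is estimated from a finite number of shots, which is where the probabilistic errors discussed above enter.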