Abstract: The application of Deep Neural Networks (DNNs) to a broad variety of tasks demands methods for coping with the complex and opaque nature of these architectures. Performance analysis can be pursued in two ways. On the one hand, model interpretation techniques aim at "opening the box" to assess the relationship between the input, the inner layers, and the output. For example, saliency and attention models exploit knowledge of the architecture to capture the essential regions of the input, i.e., those with the greatest impact on the inference process and on the output. On the other hand, models can be analyzed as "black boxes", e.g., by associating the input samples with extra annotations that do not contribute to model training but can be exploited to characterize the model response. Such performance-driven meta-annotations enable a detailed characterization of performance metrics and errors, helping scientists identify the input features responsible for prediction failures and focus their model improvement efforts. This paper presents a structured survey of the tools that support the "black box" analysis of DNNs and discusses the gaps in current proposals and relevant future directions in this research field.