Abstract: Artificial intelligence (AI) is currently based largely on black-box machine learning models which lack interpretability. The field of eXplainable AI (XAI) strives to address this major concern, being critical in high-stakes areas such as the finance, legal and health sectors. We present an approach to defining AI models and their interpretability based on category theory. For this we employ the notion of a compositional model, which views a model in terms of formal string diagrams that capture its abstract structure together with its concrete implementation. This comprehensive view incorporates deterministic, probabilistic and quantum models. We compare a wide range of AI models as compositional models, including linear and rule-based models, (recurrent) neural networks, transformers, VAEs, and causal and DisCoCirc models. Next we give a definition of an interpretation of a model in terms of its compositional structure, demonstrating how to analyse the interpretability of a model, and using this to clarify common themes in XAI. We find that what makes the standard 'intrinsically interpretable' models so transparent is brought out most clearly diagrammatically. This leads us to the more general notion of compositionally-interpretable (CI) models, which additionally include, for instance, causal, conceptual space, and DisCoCirc models. We then demonstrate the explainability benefits of CI models. Firstly, their compositional structure may allow the computation of other quantities of interest, and may facilitate inference from the model to the modelled phenomenon by matching its structure. Secondly, they allow for diagrammatic explanations of their behaviour, based on influence constraints, diagram surgery and rewrite explanations. Finally, we discuss many future directions for the approach, raising the question of how to learn such meaningfully structured models in practice.
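To make the notion of a compositional model more concrete, here is a minimal, self-contained Python sketch (not taken from the paper; the names Box, compose, SEMANTICS and evaluate are illustrative assumptions). It separates a model's abstract structure, a typed diagram of sequentially composed boxes, from its concrete implementation, an interpretation assigning a function to each box.

```python
# Illustrative sketch only: a "compositional model" as an abstract diagram of
# typed boxes plus a concrete semantics. Plain Python, no external libraries.
from dataclasses import dataclass

@dataclass(frozen=True)
class Box:
    """An abstract generator: a named process with input and output types."""
    name: str
    dom: str
    cod: str

def compose(*boxes: Box) -> list[Box]:
    """Sequential composition f ; g ; ... -- well-typed only if codomains match domains."""
    for f, g in zip(boxes, boxes[1:]):
        assert f.cod == g.dom, f"type mismatch: {f.name} -> {g.name}"
    return list(boxes)

# Abstract structure: a tiny pipeline  embed ; classify
embed = Box("embed", dom="text", cod="vector")
classify = Box("classify", dom="vector", cod="label")
diagram = compose(embed, classify)

# Concrete implementation: an interpretation assigning a function to each box.
SEMANTICS = {
    "embed": lambda s: [float(len(s)), float(s.count(" ") + 1)],
    "classify": lambda v: "long" if v[0] > 10 else "short",
}

def evaluate(diagram, x):
    """Run the diagram by composing the concrete functions in order."""
    for box in diagram:
        x = SEMANTICS[box.name](x)
    return x

print(evaluate(diagram, "hello compositional world"))  # -> "long"
```

In this toy setting, interpretability questions can be phrased at the level of the diagram (which boxes exist, how they are wired) independently of the particular functions chosen for SEMANTICS.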
Abstract: Scientific studies of consciousness rely on objects whose existence is independent of any consciousness. This theoretical assumption leads to the "hard problem" of consciousness. We avoid this problem by assuming consciousness to be fundamental and characterizing its main feature as being other-dependent. We set up a framework which naturally subsumes this other-dependent feature by defining a compact closed category whose morphisms represent conscious processes. These morphisms are compositions of generators, each specified by its relations with other generators, and therefore other-dependent. The framework is general: parameters in the morphisms take values in arbitrary commutative semirings, so that any finite-dimensional system can be treated. Our proposal fits well into a compositional model of consciousness and is an important step forward that addresses both the hard problem of consciousness and the combination problem of (proto-)panpsychism.
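As a toy illustration of how morphisms parameterized by a commutative semiring can cover different kinds of finite-dimensional systems with a single composition rule, consider the following sketch (purely illustrative and not the paper's construction; the Semiring class and the example matrices are assumptions). Morphisms are represented as matrices with entries in a chosen semiring, and sequential composition is matrix multiplication over that semiring.

```python
# Illustrative sketch only: morphisms as matrices over a commutative semiring,
# so Boolean (possibilistic) and probabilistic systems share one composition rule.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass(frozen=True)
class Semiring:
    zero: Any                       # additive unit
    one: Any                        # multiplicative unit
    add: Callable[[Any, Any], Any]  # commutative addition
    mul: Callable[[Any, Any], Any]  # commutative multiplication

BOOL = Semiring(False, True, lambda a, b: a or b, lambda a, b: a and b)
PROB = Semiring(0.0, 1.0, lambda a, b: a + b, lambda a, b: a * b)

def matmul(S: Semiring, A, B):
    """Compose morphisms given as matrices: (A;B)[i][j] = sum_k A[i][k] * B[k][j] in S."""
    n, m, p = len(A), len(B), len(B[0])
    out = [[S.zero for _ in range(p)] for _ in range(n)]
    for i in range(n):
        for k in range(m):
            for j in range(p):
                out[i][j] = S.add(out[i][j], S.mul(A[i][k], B[k][j]))
    return out

# Two generators on a 2-dimensional system, specified only by their entries
# (their "relations"), here over the Boolean semiring:
f = [[True, False],
     [True, True]]
g = [[False, True],
     [True, False]]
print(matmul(BOOL, f, g))  # sequential composition f ; g

# The same rule over the probabilistic semiring composes a state with a stochastic map:
print(matmul(PROB, [[0.5, 0.5]], [[0.2, 0.8], [0.9, 0.1]]))  # -> [[0.55, 0.45]]
```

Changing only the Semiring instance changes the kind of system being modelled, while the compositional structure of the morphisms stays the same.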