Abstract: In this paper, we propose the Evidential Conformal Prediction (ECP) method for image classifiers to generate conformal prediction sets. Our method is built on a non-conformity score function rooted in Evidential Deep Learning (EDL), a method for quantifying model (epistemic) uncertainty in DNN classifiers. We use evidence derived from the logit values of the target labels to compute the components of our non-conformity score function: the heuristic notion of uncertainty in CP, uncertainty surprisal, and expected utility. Our extensive experimental evaluation demonstrates that ECP outperforms three state-of-the-art methods for generating CP sets in terms of set size and adaptivity, while maintaining coverage of the true labels.
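To make the conformal prediction pipeline concrete, the following is a minimal sketch of split-conformal set construction, assuming non-conformity scores have already been computed for a calibration split and for every candidate label of each test image; the ECP-specific score components are not reproduced here, and `conformal_quantile` and `prediction_set` are illustrative helper names rather than the paper's implementation.

```python
import numpy as np

def conformal_quantile(cal_scores, alpha):
    """Finite-sample-corrected (1 - alpha) quantile of the calibration scores."""
    n = len(cal_scores)
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(cal_scores, q_level, method="higher")

def prediction_set(test_scores, q_hat):
    """Include every candidate label whose non-conformity score is at most the threshold."""
    return [np.flatnonzero(row <= q_hat) for row in test_scores]

# Illustrative usage with random scores (real scores would come from the classifier).
rng = np.random.default_rng(0)
cal_scores = rng.random(500)          # score of the true label on calibration data
test_scores = rng.random((3, 10))     # scores of all 10 candidate labels per test image
q_hat = conformal_quantile(cal_scores, alpha=0.1)
sets = prediction_set(test_scores, q_hat)   # one array of label indices per test image
```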
Abstract: Image-to-image translation has gained popularity in the medical field for transforming images from one domain to another. Medical image synthesis via domain transformation is advantageous because it can augment an image dataset where images for a given class are limited. From the learning perspective, this process contributes to the data-oriented robustness of the model by broadening the model's exposure to more diverse visual data and enabling it to learn more generalized features. In the case of neuroimages, generating additional images is advantageous for obtaining unidentifiable medical data and augmenting smaller annotated datasets. This study proposes the development of a CycleGAN model for translating neuroimages from one field strength to another (e.g., 3 Tesla to 1.5 Tesla). This model was compared to a model based on the DCGAN architecture. CycleGAN was able to generate the synthetic and reconstructed images with reasonable accuracy. The mapping function from the source domain (3 Tesla) to the target domain (1.5 Tesla) performed optimally, with an average PSNR of 25.69 $\pm$ 2.49 dB and an MAE of 2106.27 $\pm$ 1218.37.
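As a reference for how the reported image-quality metrics are typically computed, below is a minimal sketch of PSNR and MAE between a real 1.5 Tesla slice and its synthesized counterpart; the array shapes, intensity range, and `data_range` convention are assumptions for illustration, not details taken from the study.

```python
import numpy as np

def psnr(reference, generated, data_range=None):
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    reference = reference.astype(np.float64)
    generated = generated.astype(np.float64)
    if data_range is None:
        data_range = reference.max() - reference.min()  # assumed convention
    mse = np.mean((reference - generated) ** 2)
    return 10.0 * np.log10((data_range ** 2) / mse)

def mae(reference, generated):
    """Mean absolute error between two images, in raw intensity units."""
    return float(np.mean(np.abs(reference.astype(np.float64) - generated.astype(np.float64))))

# Illustrative usage with hypothetical 12-bit MR slices of matching shape.
real_1p5T = np.random.randint(0, 4096, size=(256, 256))
fake_1p5T = np.random.randint(0, 4096, size=(256, 256))
print(f"PSNR: {psnr(real_1p5T, fake_1p5T):.2f} dB, MAE: {mae(real_1p5T, fake_1p5T):.2f}")
```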
Abstract: Precise estimation of predictive uncertainty in deep neural networks is a critical requirement for reliable decision-making in machine learning and statistical modeling, particularly in medical AI. Conformal Prediction (CP) has emerged as a promising framework for representing model uncertainty by providing well-calibrated confidence levels for individual predictions. However, the quantification of model uncertainty in conformal prediction remains an active research area that has yet to be fully addressed. In this paper, we explore state-of-the-art CP methodologies and their theoretical foundations. We propose a probabilistic approach to quantifying the model uncertainty derived from the prediction sets produced by conformal prediction, and we provide certified bounds for the computed uncertainty. By doing so, we allow model uncertainty measured by CP to be compared with that of other uncertainty quantification methods, such as Bayesian approaches (e.g., MC-Dropout and Deep Ensembles) and evidential approaches.
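As a rough illustration of how a prediction set can be turned into a per-example uncertainty value, the sketch below uses normalized set size as a proxy and checks empirical coverage; this is a generic illustration under assumed inputs, not the certified probabilistic estimator proposed in the paper.

```python
import numpy as np

def set_size_uncertainty(pred_sets, n_classes):
    """Generic proxy: map set size to [0, 1] (singleton -> 0, full label set -> 1).

    Illustrative stand-in only, not the paper's certified uncertainty measure.
    """
    sizes = np.array([len(s) for s in pred_sets], dtype=np.float64)
    return (sizes - 1.0) / (n_classes - 1.0)

def empirical_coverage(pred_sets, true_labels):
    """Fraction of examples whose true label is contained in its prediction set."""
    return float(np.mean([y in s for s, y in zip(pred_sets, true_labels)]))

# Illustrative usage with hypothetical prediction sets over 10 classes.
pred_sets = [{3}, {1, 7}, {0, 2, 5, 9}]
true_labels = [3, 7, 4]
u = set_size_uncertainty(pred_sets, n_classes=10)   # [0.0, 0.11, 0.33]
cov = empirical_coverage(pred_sets, true_labels)    # 2/3 here
```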
Abstract: The rapid integration of artificial intelligence across traditional research domains has generated an amalgamation of nomenclature. As cross-discipline teams work together on complex machine learning challenges, finding a consensus on basic definitions in the literature is an even more fundamental problem. As a step in the Delphi process for defining issues with trust and barriers to the adoption of autonomous systems, our study first collected and ranked the top concerns of a panel of international experts with experience working with artificial intelligence, drawn from the fields of engineering, computer science, medicine, aerospace, and defence. This document presents a summary of literature definitions for the nomenclature derived from the expert feedback.
Abstract: In this paper, we provide an approach for deep learning that protects against adversarial examples in image classification networks. The approach relies on two mechanisms: 1) a mechanism that increases robustness at the expense of accuracy, and 2) a mechanism that improves accuracy but does not always increase robustness. We show that combining the two mechanisms can provide protection against adversarial examples while retaining accuracy. We formulate potential attacks on our approach and provide experimental results demonstrating its effectiveness.
Abstract: In this paper, we use game theory to model poisoning attack scenarios. We prove the non-existence of a pure-strategy Nash Equilibrium in the game between attacker and defender. We then propose a mixed extension of our game model and an algorithm to approximate the Nash Equilibrium strategy for the defender. Finally, we experimentally demonstrate the effectiveness of the mixed defence strategy generated by the algorithm.
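As a generic illustration of approximating a mixed-strategy equilibrium in a finite attacker/defender game, the sketch below runs fictitious play on a small zero-sum payoff matrix; the payoff values, the zero-sum assumption, and fictitious play itself are stand-ins for exposition, not the game model or the approximation algorithm proposed in the paper.

```python
import numpy as np

def fictitious_play(payoff, iters=5000):
    """Approximate mixed strategies in a zero-sum attacker/defender matrix game.

    payoff[i, j] is the defender's loss when the attacker plays action i and the
    defender plays action j; the attacker maximises it, the defender minimises it.
    Plain fictitious play, shown only as an illustrative stand-in.
    """
    n_att, n_def = payoff.shape
    att_counts = np.zeros(n_att)
    def_counts = np.zeros(n_def)
    att_counts[0] = def_counts[0] = 1.0
    for _ in range(iters):
        # Each side best-responds to the opponent's empirical mixture so far.
        att_br = np.argmax(payoff @ (def_counts / def_counts.sum()))
        def_br = np.argmin((att_counts / att_counts.sum()) @ payoff)
        att_counts[att_br] += 1.0
        def_counts[def_br] += 1.0
    return att_counts / att_counts.sum(), def_counts / def_counts.sum()

# Illustrative usage on a hypothetical poisoning game (rows: attacks, cols: defences).
payoff = np.array([[0.8, 0.2, 0.5],
                   [0.3, 0.7, 0.4]])
attacker_mix, defender_mix = fictitious_play(payoff)
```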