Cristina Conati

Relevant Irrelevance: Generating Alterfactual Explanations for Image Classifiers

May 08, 2024

Personalizing explanations of AI-driven hints to users' cognitive abilities: an empirical evaluation

Mar 09, 2024

Classification of Alzheimer's Disease with Deep Learning on Eye-tracking Data

Sep 22, 2023

Evaluating the overall sensitivity of saliency-based explanation methods

Jun 21, 2023

GANonymization: A GAN-based Face Anonymization Framework for Preserving Emotional Expressions

May 03, 2023

A Theoretical Framework for AI Models Explainability

Dec 29, 2022

Evaluating the Faithfulness of Saliency-based Explanations for Deep Learning Models for Temporal Colour Constancy

Nov 15, 2022

Cascading Convolutional Temporal Colour Constancy

Jun 15, 2021

A Framework to Counteract Suboptimal User-Behaviors in Exploratory Learning Environments: an Application to MOOCs

Jun 14, 2021

A Neural Architecture for Detecting Confusion in Eye-tracking Data

Mar 13, 2020