
Zana Buçinca

Contrastive Explanations That Anticipate Human Misconceptions Can Improve Human Decision-Making Skills

Oct 05, 2024

Learning Interpretable Fair Representations

Jun 24, 2024

Towards Optimizing Human-Centric Objectives in AI-Assisted Decision-Making With Offline Reinforcement Learning

Mar 09, 2024

Adaptive interventions for both accuracy and time in AI-assisted human decision making

Jun 12, 2023

How Different Groups Prioritize Ethical Values for Responsible AI

May 16, 2022

To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-assisted Decision-making

Feb 19, 2021

Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating Explainable AI Systems

Jan 22, 2020