
David Martens

Would a Large Language Model Pay Extra for a View? Inferring Willingness to Pay from Subjective Choices

Feb 10, 2026

On the Definition and Detection of Cherry-Picking in Counterfactual Explanations

Jan 08, 2026

From What Ifs to Insights: Counterfactuals in Causal Inference vs. Explainable AI

May 19, 2025

Exploring the generalization of LLM truth directions on conversational formats

May 14, 2025

Beware of "Explanations" of AI

Apr 09, 2025

How good is my story? Towards quantitative metrics for evaluating LLM-generated XAI narratives

Dec 13, 2024

GraphXAIN: Narratives to Explain Graph Neural Networks

Nov 04, 2024

Exposing Image Classifier Shortcuts with Counterfactual Frequency (CoF) Tables

May 24, 2024

Beyond Accuracy-Fairness: Stop evaluating bias mitigation methods solely on between-group metrics

Jan 24, 2024

Tell Me a Story! Narrative-Driven XAI with Large Language Models

Sep 29, 2023