Hendrik Schuff

LLM Roleplay: Simulating Human-Chatbot Interaction

Jul 04, 2024

Explaining Pre-Trained Language Models with Attribution Scores: An Analysis in Low-Resource Settings

Mar 08, 2024

How are Prompts Different in Terms of Sensitivity?

Nov 13, 2023

How (Not) to Use Sociodemographic Information for Subjective NLP Tasks

Sep 13, 2023

Neighboring Words Affect Human Interpretation of Saliency Explanations

May 06, 2023

How (Not) To Evaluate Explanation Quality

Oct 13, 2022

Human Interpretation of Saliency-based Explanation Over Text

Jan 27, 2022

Does External Knowledge Help Explainable Natural Language Inference? Automatic Evaluation vs. Human Ratings

Oct 13, 2021

Thought Flow Nets: From Single Predictions to Trains of Model Thought

Jul 26, 2021

F1 is Not Enough! Models and Evaluation Towards User-Centered Explainable Question Answering

Oct 13, 2020