
Joris Baan

Interpreting Predictive Probabilities: Model Confidence or Human Label Variation?

Feb 25, 2024

Uncertainty in Natural Language Generation: From Theory to Applications

Jul 28, 2023

What Comes Next? Evaluating Uncertainty in Neural Text Generators Against Human Production Variability

May 19, 2023

Stop Measuring Calibration When Humans Disagree

Oct 28, 2022

Understanding Multi-Head Attention in Abstractive Summarization

Nov 10, 2019

Do Transformer Attention Heads Provide Transparency in Abstractive Summarization?

Jul 08, 2019

On the Realization of Compositionality in Neural Networks

Jun 06, 2019