
Nitay Calderon

The Alternative Annotator Test for LLM-as-a-Judge: How to Statistically Justify Replacing Human Annotators with LLMs

Jan 19, 2025

Are LLMs Better than Reported? Detecting Label Errors and Mitigating Their Effect on Model Performance

Oct 24, 2024

NL-Eye: Abductive NLI for Images

Oct 03, 2024

On Behalf of the Stakeholders: Trends in NLP Model Interpretability in the Era of LLMs

Jul 27, 2024

The Colorful Future of LLMs: Evaluating and Improving LLMs as Emotional Supporters for Queer Youth

Feb 19, 2024

Faithful Explanations of Black-box NLP Models Using LLM-generated Counterfactuals

Oct 01, 2023

Measuring the Robustness of Natural Language Processing Models to Domain Shifts

May 31, 2023

A Systematic Study of Knowledge Distillation for Natural Language Generation with Pseudo-Target Training

May 03, 2023

A Picture May Be Worth a Thousand Lives: An Interpretable Artificial Intelligence Strategy for Predictions of Suicide Risk from Social Media Images

Feb 19, 2023

A Functional Information Perspective on Model Interpretation

Jun 14, 2022