Katja Filippova

Google Research

Theoretical and Practical Perspectives on what Influence Functions Do
May 26, 2023

Dissecting Recall of Factual Associations in Auto-Regressive Language Models
Apr 28, 2023

Make Every Example Count: On Stability and Utility of Self-Influence for Learning from Noisy NLP Datasets
Feb 27, 2023

Understanding Text Classification Data and Models Using Aggregated Input Salience
Nov 11, 2022

Diagnosing AI Explanation Methods with Folk Concepts of Behavior
Jan 27, 2022

"Will You Find These Shortcuts?" A Protocol for Evaluating the Faithfulness of Input Salience Methods for Text Classification

Add code
Nov 14, 2021
Figure 1 for "Will You Find These Shortcuts?" A Protocol for Evaluating the Faithfulness of Input Salience Methods for Text Classification
Figure 2 for "Will You Find These Shortcuts?" A Protocol for Evaluating the Faithfulness of Input Salience Methods for Text Classification
Figure 3 for "Will You Find These Shortcuts?" A Protocol for Evaluating the Faithfulness of Input Salience Methods for Text Classification
Figure 4 for "Will You Find These Shortcuts?" A Protocol for Evaluating the Faithfulness of Input Salience Methods for Text Classification
Viaarxiv icon

Controlled Hallucinations: Learning to Generate Faithfully from Noisy Data
Oct 12, 2020

The elephant in the interpretability room: Why use attention as explanation when we have saliency methods?
Oct 12, 2020

We Need to Talk About Random Splits
May 01, 2020

Eval all, trust a few, do wrong to none: Comparing sentence generation models
Oct 30, 2018