Martin Tutek

REVS: Unlearning Sensitive Information in Language Models via Rank Editing in the Vocabulary Space

Jun 13, 2024

Code Prompting Elicits Conditional Reasoning Abilities in Text+Code LLMs

Jan 18, 2024

Out-of-Distribution Detection by Leveraging Between-Layer Transformation Smoothness

Oct 04, 2023

CATfOOD: Counterfactual Augmented Training for Improving Out-of-Domain Performance and Calibration

Sep 15, 2023

Easy to Decide, Hard to Agree: Reducing Disagreements Between Saliency Methods

Nov 15, 2022

Staying True to Your Word: Can Attention Become Explanation?

May 19, 2020

Iterative Recursive Attention Model for Interpretable Sequence Classification

Aug 30, 2018