Mohammad Taher Pilehvar

RepMatch: Quantifying Cross-Instance Similarities in Representation Space
Oct 12, 2024

BLEnD: A Benchmark for LLMs on Everyday Knowledge in Diverse Cultures and Languages
Jun 14, 2024

Tell Me Why: Explainable Public Health Fact-Checking with Large Language Models
May 15, 2024

DiFair: A Benchmark for Disentangled Assessment of Gender Knowledge and Bias
Oct 22, 2023

DecompX: Explaining Transformers Decisions by Propagating Token Decomposition
Jun 05, 2023

Guide the Learner: Controlling Product of Experts Debiasing Method Based on Token Attribution Similarities
Feb 06, 2023

An Empirical Study on the Transferability of Transformer Modules in Parameter-Efficient Fine-Tuning
Feb 01, 2023

BERT on a Data Diet: Finding Important Examples by Gradient-Based Pruning
Nov 10, 2022

Looking at the Overlooked: An Analysis on the Word-Overlap Bias in Natural Language Inference
Nov 07, 2022

GlobEnc: Quantifying Global Token Attribution by Incorporating the Whole Encoder Layer in Transformers
May 06, 2022