Yangfeng Ji

Does Differential Privacy Impact Bias in Pretrained NLP Models?

Oct 24, 2024

The Mismeasure of Man and Models: Evaluating Allocational Harms in Large Language Models

Aug 02, 2024

Improve Temporal Awareness of LLMs for Sequential Recommendation

May 05, 2024

Addressing Both Statistical and Causal Gender Fairness in NLP Models

Mar 30, 2024

Blending Reward Functions via Few Expert Demonstrations for Faithful and Accurate Knowledge-Grounded Dialogue Generation

Nov 02, 2023

Secure and Effective Data Appraisal for Machine Learning

Oct 05, 2023

Data Selection for Fine-tuning Large Language Models Using Transferred Shapley Values

Jun 16, 2023

Pre-training Transformers for Knowledge Graph Completion

Mar 28, 2023

Improving Interpretability via Explicit Word Interaction Graph Layer

Feb 03, 2023

Identifying the Source of Vulnerability in Explanation Discrepancy: A Case Study in Neural Text Classification

Dec 10, 2022