Xixiang Lyu

Backdoor Token Unlearning: Exposing and Defending Backdoors in Pretrained Language Models

Jan 05, 2025

Reconstructive Neuron Pruning for Backdoor Defense

May 24, 2023

Anti-Backdoor Learning: Training Clean Models on Poisoned Data

Oct 25, 2021

Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks

Jan 27, 2021