
Shengwei An

UNIT: Backdoor Mitigation via Automated Neural Distribution Tightening

Jul 16, 2024

LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning

Mar 25, 2024

Rapid Optimization for Jailbreaking LLMs via Subconscious Exploitation and Echopraxia

Feb 08, 2024

Elijah: Eliminating Backdoors Injected in Diffusion Models via Distribution Shift

Nov 27, 2023

BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense

Jan 16, 2023

Backdoor Vulnerabilities in Normally Trained Deep Learning Models

Nov 29, 2022

FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning

Oct 23, 2022

Confidence Matters: Inspecting Backdoors in Deep Neural Networks via Distribution Transfer

Aug 13, 2022

DECK: Model Hardening for Defending Pervasive Backdoors

Jun 18, 2022

Constrained Optimization with Dynamic Bound-scaling for Effective NLP Backdoor Defense

Feb 11, 2022