
Guangyu Shen

UNIT: Backdoor Mitigation via Automated Neural Distribution Tightening

Jul 16, 2024

LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning

Mar 25, 2024

Rapid Optimization for Jailbreaking LLMs via Subconscious Exploitation and Echopraxia

Feb 08, 2024

Make Them Spill the Beans! Coercive Knowledge Extraction from LLMs

Dec 08, 2023

Elijah: Eliminating Backdoors Injected in Diffusion Models via Distribution Shift

Nov 27, 2023

ParaFuzz: An Interpretability-Driven Technique for Detecting Poisoned Samples in NLP

Aug 04, 2023

Detecting Backdoors in Pre-trained Encoders

Mar 23, 2023

BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense

Jan 16, 2023

Backdoor Vulnerabilities in Normally Trained Deep Learning Models

Nov 29, 2022

FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning

Oct 23, 2022