Yuwen Pu

CLIBE: Detecting Dynamic Backdoors in Transformer-based NLP Models
Sep 02, 2024

SUB-PLAY: Adversarial Policies against Partially Observed Multi-Agent Reinforcement Learning Systems
Feb 06, 2024

The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness
Jan 25, 2024

MEAOD: Model Extraction Attack against Object Detectors
Dec 22, 2023

Improving the Robustness of Transformer-based Large Language Models with Dynamic Attention
Nov 30, 2023

Facial Data Minimization: Shallow Model as Your Privacy Filter
Oct 24, 2023

TextDefense: Adversarial Text Detection based on Word Importance Entropy
Feb 12, 2023

Hijack Vertical Federated Learning Models with Adversarial Embedding
Dec 01, 2022

All You Need Is Hashing: Defending Against Data Reconstruction Attack in Vertical Federated Learning
Dec 01, 2022

"Is your explanation stable?": A Robustness Evaluation Framework for Feature Attribution

Add code
Sep 05, 2022
Figure 1 for "Is your explanation stable?": A Robustness Evaluation Framework for Feature Attribution
Figure 2 for "Is your explanation stable?": A Robustness Evaluation Framework for Feature Attribution
Figure 3 for "Is your explanation stable?": A Robustness Evaluation Framework for Feature Attribution
Figure 4 for "Is your explanation stable?": A Robustness Evaluation Framework for Feature Attribution
Viaarxiv icon