Changjiang Li

CopyrightMeter: Revisiting Copyright Protection in Text-to-image Models

Nov 20, 2024

RobustKV: Defending Large Language Models against Jailbreak Attacks via KV Eviction

Oct 25, 2024

On the Difficulty of Defending Contrastive Learning against Backdoor Attacks

Dec 14, 2023

Model Extraction Attacks Revisited

Dec 08, 2023

Improving the Robustness of Transformer-based Large Language Models with Dynamic Attention

Nov 30, 2023

IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI

Oct 30, 2023

Defending Pre-trained Language Models as Few-shot Learners against Backdoor Attacks

Sep 23, 2023

On the Security Risks of Knowledge Graph Reasoning

May 03, 2023

Hijack Vertical Federated Learning Models with Adversarial Embedding

Dec 01, 2022

Demystifying Self-supervised Trojan Attacks

Oct 13, 2022