Changjiang Li

CopyrightMeter: Revisiting Copyright Protection in Text-to-image Models
Nov 20, 2024

RobustKV: Defending Large Language Models against Jailbreak Attacks via KV Eviction
Oct 25, 2024

On the Difficulty of Defending Contrastive Learning against Backdoor Attacks
Dec 14, 2023

Model Extraction Attacks Revisited
Dec 08, 2023

Improving the Robustness of Transformer-based Large Language Models with Dynamic Attention
Nov 30, 2023

IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI
Oct 30, 2023

Defending Pre-trained Language Models as Few-shot Learners against Backdoor Attacks
Sep 23, 2023

On the Security Risks of Knowledge Graph Reasoning
May 03, 2023

Hijack Vertical Federated Learning Models with Adversarial Embedding
Dec 01, 2022

Demystifying Self-supervised Trojan Attacks
Oct 13, 2022