Chia-Mu Yu

VP-NTK: Exploring the Benefits of Visual Prompting in Differentially Private Data Synthesis

Mar 20, 2025

Data Poisoning Attacks to Locally Differentially Private Range Query Protocols

Mar 05, 2025

Beyond Natural Language Perplexity: Detecting Dead Code Poisoning in Code Generation Datasets

Feb 28, 2025

Layer-Aware Task Arithmetic: Disentangling Task-Specific and Instruction-Following Knowledge

Feb 27, 2025

BADTV: Unveiling Backdoor Threats in Third-Party Task Vectors

Jan 04, 2025

Prompting the Unseen: Detecting Hidden Backdoors in Black-Box Models

Nov 14, 2024

Information-Theoretical Principled Trade-off between Jailbreakability and Stealthiness on Vision Language Models

Oct 02, 2024

Exploring Robustness of Visual State Space model against Backdoor Attacks

Aug 22, 2024

Defending Against Repetitive-based Backdoor Attacks on Semi-supervised Learning through Lens of Rate-Distortion-Perception Trade-off

Jul 14, 2024

Safe LoRA: the Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models

May 27, 2024