Chia-Mu Yu

Prompting the Unseen: Detecting Hidden Backdoors in Black-Box Models

Nov 14, 2024

Information-Theoretical Principled Trade-off between Jailbreakability and Stealthiness on Vision Language Models

Oct 02, 2024

Exploring Robustness of Visual State Space model against Backdoor Attacks

Aug 22, 2024

Defending Against Repetitive-based Backdoor Attacks on Semi-supervised Learning through Lens of Rate-Distortion-Perception Trade-off

Jul 14, 2024

Safe LoRA: the Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models

May 27, 2024

DiffuseKronA: A Parameter Efficient Fine-tuning Method for Personalized Diffusion Models

Feb 28, 2024

Rethinking Backdoor Attacks on Dataset Distillation: A Kernel Method Perspective

Nov 28, 2023

Ring-A-Bell! How Reliable are Concept Removal Methods for Diffusion Models?

Oct 16, 2023

Exploring the Benefits of Differentially Private Pre-training and Parameter-Efficient Fine-tuning for Table Transformers

Sep 12, 2023

DPAF: Image Synthesis via Differentially Private Aggregation in Forward Phase

Apr 20, 2023