Yu-Lin Tsai

Safe LoRA: the Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models

May 27, 2024

Ring-A-Bell! How Reliable are Concept Removal Methods for Diffusion Models?

Oct 16, 2023

Exploring the Benefits of Visual Prompting in Differential Privacy

Mar 22, 2023

Certified Robustness of Quantum Classifiers against Adversarial Examples through Quantum Noise

Nov 02, 2022

Formalizing Generalization and Robustness of Neural Networks to Weight Perturbations

Mar 03, 2021

Non-Singular Adversarial Robustness of Neural Networks

Feb 23, 2021