Chia-Yi Hsu

Safe LoRA: the Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models

May 27, 2024

Ring-A-Bell! How Reliable are Concept Removal Methods for Diffusion Models?

Oct 16, 2023

DPAF: Image Synthesis via Differentially Private Aggregation in Forward Phase

Apr 20, 2023

CAFE: Catastrophic Data Leakage in Vertical Federated Learning

Nov 02, 2021

Real-World Adversarial Examples involving Makeup Application

Sep 04, 2021

Formalizing Generalization and Robustness of Neural Networks to Weight Perturbations

Mar 03, 2021

Adversarial Examples for Unsupervised Machine Learning Models

Mar 02, 2021

Non-Singular Adversarial Robustness of Neural Networks

Feb 23, 2021

On The Utility of Conditional Generation Based Mutual Information for Characterizing Adversarial Subspaces

Sep 24, 2018