Ashwinee Panda

Lottery Ticket Adaptation: Mitigating Destructive Interference in LLMs
Jun 25, 2024

Safety Alignment Should Be Made More Than Just a Few Tokens Deep
Jun 10, 2024

Teach LLMs to Phish: Stealing Private Information from Language Models
Mar 01, 2024

Private Fine-tuning of Large Language Models with Zeroth-order Optimization
Jan 09, 2024

Visual Adversarial Examples Jailbreak Large Language Models
Jun 22, 2023

Differentially Private Image Classification by Learning Priors from Random Processes
Jun 08, 2023

Differentially Private In-Context Learning
May 02, 2023

DP-RAFT: A Differentially Private Recipe for Accelerated Fine-Tuning
Dec 15, 2022

Neurotoxin: Durable Backdoors in Federated Learning
Jun 12, 2022

SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification
Dec 12, 2021