Ashwinee Panda

LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation

Apr 10, 2025

Using Attention Sinks to Identify and Evaluate Dormant Heads in Pretrained LLMs

Apr 04, 2025

Privacy Auditing of Large Language Models

Mar 09, 2025

Continual Pre-training of MoEs: How robust is your router?

Mar 06, 2025

Gemstones: A Model Suite for Multi-Faceted Scaling Laws

Feb 07, 2025

Refusal Tokens: A Simple Way to Calibrate Refusals in Large Language Models

Dec 09, 2024

Lottery Ticket Adaptation: Mitigating Destructive Interference in LLMs

Jun 25, 2024

Safety Alignment Should Be Made More Than Just a Few Tokens Deep

Jun 10, 2024

Teach LLMs to Phish: Stealing Private Information from Language Models

Mar 01, 2024

Private Fine-tuning of Large Language Models with Zeroth-order Optimization

Jan 09, 2024