
Aditi Raghunathan

Overtrained Language Models Are Harder to Fine-Tune

Mar 24, 2025

Not-Just-Scaling Laws: Towards a Better Understanding of the Downstream Impact of Language Model Design Decisions

Mar 05, 2025

Mitigating Bias in RAG: Controlling the Embedder

Feb 24, 2025

Vulnerability of Text-Matching in ML/AI Conference Reviewer Assignments to Collusions

Dec 09, 2024

Scaling Laws for Precision

Nov 07, 2024

Context-Parametric Inversion: Why Instruction Finetuning May Not Actually Improve Context Reliance

Oct 14, 2024

Adversarial Attacks on Multimodal Agents

Jun 18, 2024

Sharpness-Aware Minimization Enhances Feature Quality via Balanced Learning

May 30, 2024

Why is SAM Robust to Label Noise?

May 06, 2024

Scaling Laws for Data Filtering -- Data Curation cannot be Compute Agnostic

Apr 10, 2024