Aaditya K. Singh

Evaluation data contamination in LLMs: how do we measure it and (when) does it matter?

Nov 06, 2024

Brevity is the soul of wit: Pruning long files for code generation

Jun 29, 2024

Quantifying Variance in Evaluation Benchmarks

Jun 14, 2024

What needs to go right for an induction head? A mechanistic study of in-context learning circuits and their formation

Apr 10, 2024

Tokenization counts: the impact of tokenization on arithmetic in frontier LLMs

Feb 22, 2024

Decoding Data Quality via Synthetic Corruptions: Embedding-guided Pruning of Code Data

Dec 05, 2023

The Transient Nature of Emergent In-Context Learning in Transformers

Nov 15, 2023

Confronting Reward Model Overoptimization with Constrained RLHF

Oct 10, 2023

Know your audience: specializing grounded language models with the game of Dixit

Jun 16, 2022