Sharan Narang

Correlating and Predicting Human Evaluations of Language Models from Natural Language Processing Benchmarks

Feb 24, 2025

Law of the Weakest Link: Cross Capabilities of Large Language Models

Sep 30, 2024

The Llama 3 Herd of Models

Jul 31, 2024

Quantifying Variance in Evaluation Benchmarks

Jun 14, 2024

Effective Long-Context Scaling of Foundation Models

Sep 27, 2023

Llama 2: Open Foundation and Fine-Tuned Chat Models

Jul 19, 2023

A Theory on Adam Instability in Large-Scale Machine Learning

Apr 25, 2023

UniMax: Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining

Apr 18, 2023

Character-Aware Models Improve Visual Text Rendering

Dec 20, 2022

FCM: Forgetful Causal Masking Makes Causal Language Models Better Zero-Shot Learners

Oct 24, 2022