Jeffrey Pennington

Training Language Models on the Knowledge Graph: Insights on Hallucinations and Their Detectability

Aug 14, 2024

Scaling Exponents Across Parameterizations and Optimizers

Jul 08, 2024

4+3 Phases of Compute-Optimal Neural Scaling Laws

May 23, 2024

High dimensional analysis reveals conservative sharpening and a stochastic edge of stability

Apr 30, 2024

Training LLMs over Neurally Compressed Text

Apr 04, 2024

Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models

Dec 22, 2023

Frontier Language Models are not Robust to Adversarial Arithmetic, or "What do I need to say so you agree 2+2=5?"

Nov 15, 2023

Small-scale proxies for large-scale Transformer training instabilities

Sep 25, 2023

Second-order regression models exhibit progressive sharpening to the edge of stability

Oct 10, 2022

Synergy and Symmetry in Deep Learning: Interactions between the Data, Model, and Inference Algorithm

Jul 11, 2022