Adam Scherlis

Understanding Gradient Descent through the Training Jacobian

Dec 09, 2024

Refusal in LLMs is an Affine Function

Nov 13, 2024

Polysemanticity and Capacity in Neural Networks

Oct 04, 2022

Adversarial Training for High-Stakes Reliability

May 04, 2022

The Goldilocks zone: Towards better understanding of neural network loss landscapes

Jul 06, 2018