Peter E. Latham

Optimal Learning Rate Schedule for Balancing Effort and Performance

Jan 12, 2026

Saddle-to-Saddle Dynamics Explains A Simplicity Bias Across Neural Network Architectures

Dec 23, 2025

Training Dynamics of In-Context Learning in Linear Attention

Jan 27, 2025

When Are Bias-Free ReLU Networks Like Linear Networks?

Jun 18, 2024

A Theory of Unimodal Bias in Multimodal Learning

Dec 01, 2023

Powerpropagation: A sparsity inducing weight reparameterisation

Oct 06, 2021

Towards Biologically Plausible Convolutional Networks

Jun 22, 2021

Kernelized information bottleneck leads to biologically plausible 3-factor Hebbian learning in deep networks

Jun 12, 2020