L. F. Abbott

Theory of coupled neuronal-synaptic dynamics

Feb 17, 2023

Dimension of Activity in Random Neural Networks

Aug 07, 2022

The Implicit Bias of Gradient Descent on Generalized Gated Linear Networks

Feb 05, 2022

Input correlations impede suppression of chaos and learning in balanced rate networks

Jan 24, 2022

Credit Assignment Through Broadcasting a Global Error Vector

Jun 08, 2021

Neural population geometry: An approach for understanding biological and artificial neural networks

Apr 17, 2021

Training dynamically balanced excitatory-inhibitory networks

Dec 29, 2018

Feedback alignment in deep convolutional networks

Dec 12, 2018

full-FORCE: A Target-Based Method for Training Recurrent Networks

Oct 09, 2017

Balanced Excitation and Inhibition are Required for High-Capacity, Noise-Robust Neuronal Selectivity

May 03, 2017