
Johanni Brea

Should Under-parameterized Student Networks Copy or Average Teacher Weights?

Nov 03, 2023

Expand-and-Cluster: Exact Parameter Recovery of Neural Networks

Apr 25, 2023

MLPGradientFlow: going with the flow of multilayer perceptrons (and finding minima fast and accurately)

Jan 25, 2023

A taxonomy of surprise definitions

Sep 02, 2022

Kernel Memory Networks: A Unifying Framework for Memory Modeling

Aug 19, 2022

Neural NID Rules

Feb 12, 2022

Fitting summary statistics of neural data with a differentiable spiking network simulator

Jun 18, 2021

Geometry of the Loss Landscape in Overparameterized Neural Networks: Symmetries and Invariances

May 25, 2021

An Approximate Bayesian Approach to Surprise-Based Learning

Jul 05, 2019

Weight-space symmetry in deep networks gives rise to permutation saddles, connected by equal-loss valleys across the loss landscape

Jul 05, 2019