Stefano Spigler

How isotropic kernels learn simple invariants

Jun 29, 2020

Disentangling feature and lazy learning in deep neural networks: an empirical study

Jun 19, 2019

Asymptotic learning curves of kernel methods: empirical data v.s. Teacher-Student paradigm

Jun 06, 2019

Scaling description of generalization with number of parameters in deep learning

Jan 18, 2019

A jamming transition from under- to over-parametrization affects loss landscape and generalization

Oct 22, 2018

The jamming transition as a paradigm to understand the loss landscape of deep neural networks

Oct 03, 2018