Chris Mingard

Visualising Feature Learning in Deep Neural Networks by Diagonalizing the Forward Feature Map

Oct 05, 2024

Exploiting the equivalence between quantum neural networks and perceptrons

Jul 05, 2024

Do deep neural networks have an inbuilt Occam's razor?

Apr 13, 2023

Automatic Gradient Descent: Deep Learning without Hyperparameters

Apr 11, 2023

The Equilibrium Hypothesis: Rethinking implicit regularization in Deep Neural Networks

Oct 22, 2021

Is SGD a Bayesian sampler? Well, almost

Jun 26, 2020

Neural networks are a priori biased towards Boolean functions with low entropy

Sep 29, 2019