Tomohiro Hayase

PanoTree: Autonomous Photo-Spot Explorer in Virtual Reality Scenes

May 27, 2024

MLP-Mixer as a Wide and Sparse MLP

Jun 02, 2023

Understanding Gradient Regularization in Deep Learning: Efficient Finite-Difference Computation and Implicit Bias

Oct 06, 2022

Asymptotic Freeness of Layerwise Jacobians Caused by Invariance of Multilayer Perceptron: The Haar Orthogonal Case

Apr 11, 2021

Layer-Wise Interpretation of Deep Neural Networks Using Identity Initialization

Feb 26, 2021

Selective Forgetting of Deep Networks at a Finer Level than Samples

Dec 31, 2020

The Spectrum of Fisher Information of Deep Networks Achieving Dynamical Isometry

Jul 07, 2020

Almost Surely Asymptotic Freeness for Jacobian Spectrum of Deep Network

Aug 22, 2019

Cauchy noise loss for stochastic optimization of random matrix models via free deterministic equivalents

Aug 05, 2018