Achraf Bahamou

Layer-wise Adaptive Step-Sizes for Stochastic First-Order Methods for Deep Learning

May 23, 2023

A Mini-Block Natural Gradient Method for Deep Neural Networks

Feb 16, 2022

Practical Quasi-Newton Methods for Training Deep Neural Networks

Jun 16, 2020

Stochastic Flows and Geometric Optimization on the Orthogonal Group

Mar 30, 2020

A Dynamic Sampling Adaptive-SGD Method for Machine Learning

Dec 31, 2019