Jinlan Liu

Convergence of Adam for Non-convex Objectives: Relaxed Hyperparameters and Non-ergodic Case
Jul 20, 2023

UAdam: Unified Adam-Type Algorithmic Framework for Non-Convex Stochastic Optimization
May 09, 2023

Last-iterate convergence analysis of stochastic momentum methods for neural networks
May 30, 2022

Scaling transition from momentum stochastic gradient descent to plain stochastic gradient descent
Jun 12, 2021

Decreasing scaling transition from adaptive gradient descent to stochastic gradient descent
Jun 12, 2021