
Guanghui Lan

Value Mirror Descent for Reinforcement Learning

Apr 07, 2026

Stochastic Auto-conditioned Fast Gradient Methods with Optimal Rates

Apr 07, 2026

Actor-Accelerated Policy Dual Averaging for Reinforcement Learning in Continuous Action Spaces

Mar 10, 2026

One-Sided Matrix Completion from Ultra-Sparse Samples

Jan 18, 2026

Global Solutions to Non-Convex Functional Constrained Problems with Hidden Convexity

Nov 13, 2025

Can SGD Handle Heavy-Tailed Noise?

Aug 06, 2025

Projected gradient methods for nonconvex and stochastic optimization: new complexities and auto-conditioned stepsizes

Dec 18, 2024

Auto-conditioned primal-dual hybrid gradient method and alternating direction method of multipliers

Oct 02, 2024

Strongly-Polynomial Time and Validation Analysis of Policy Gradient Methods

Sep 28, 2024

A simple uniformly optimal method without line search for convex optimization

Oct 27, 2023