Vasileios Charisopoulos

Solving Inverse Problems with Deep Linear Neural Networks: Global Convergence Guarantees for Gradient Descent with Weight Decay

Feb 21, 2025

Faster Adaptive Optimization via Expected Gradient Outer Product Reparameterization

Feb 03, 2025

Robust and differentially private stochastic linear bandits

Apr 23, 2023

Communication-efficient distributed eigenspace estimation with arbitrary node failures

May 31, 2022

Communication-efficient distributed eigenspace estimation

Sep 05, 2020

Entrywise convergence of iterative methods for eigenproblems

Feb 19, 2020

Stochastic algorithms with geometric step decay converge linearly on sharp functions

Jul 22, 2019

Low-rank matrix recovery with composite optimization: good conditioning and rapid convergence

Apr 22, 2019

Composite optimization for robust blind deconvolution

Jan 18, 2019

A Tropical Approach to Neural Networks with Piecewise Linear Activations

May 22, 2018