Eduard Gorbunov

Methods with Local Steps and Random Reshuffling for Generally Smooth Non-Convex Federated Optimization

Dec 03, 2024

Initialization using Update Approximation is a Silver Bullet for Extremely Efficient Low-Rank Fine-Tuning

Nov 29, 2024

Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum

Oct 22, 2024

Low-Resource Machine Translation through the Lens of Personalized Federated Learning

Jun 18, 2024

Gradient Clipping Improves AdaGrad when the Noise Is Heavy-Tailed

Jun 06, 2024

Remove that Square Root: A New Efficient Scale-Invariant Version of AdaGrad

Mar 05, 2024

Federated Learning Can Find Friends That Are Beneficial

Feb 14, 2024

Byzantine Robustness and Partial Participation Can Be Achieved Simultaneously: Just Clip Gradient Differences

Nov 23, 2023

Byzantine-Tolerant Methods for Distributed Variational Inequalities

Nov 08, 2023

Breaking the Heavy-Tailed Noise Barrier in Stochastic Optimization Problems

Nov 07, 2023